X (formerly known as Twitter) could easily be a casualty of this war
Here's the introduction to my AI class - hope this contributes to the discussion:
Randy Pausch, author of The Last Lecture and Professor of Computer Science, Human-Computer Interaction, and Design at Carnegie Mellon University, said the most important thing you are going to hear in this class.
“If I could only give three words of advice, they would be, 'Tell the truth.'”
“If I got three more words, I'd add, 'All the time.'”
Why would I begin a class about Artificial Intelligence with a quote about Truth?
Because telling the truth is what will separate you from the AI bogeyman.
Never trust AI. It will lie to you without reservation. It does not even know when it is lying. AI is like an obsequious servant that tells you what you want to hear.
The problem with making the creation of misinformation illegal is that it assumes we can definitively determine what is and is not misinformation.
What will actually happen with any such law is the persecution of people who say things that people disagree with. This isn't something you can ban and still have any semblance of freedom of speech; the freedom to dissent is core to freedom of speech.
Just think about how Trump acts, or how DeSantis acts, and think about what would happen if it were legal for them to persecute people for spreading "misinformation" that is, in fact, true. The same goes for socialists, who routinely spread misinformation and then claim that anyone who points out they're lying is the one spreading misinformation.
There is no way to create a law like this that is remotely compatible with freedom of speech. Slander and libel are specific enough to be actionable, but ordinary news reporting is usually neither slanderous nor libelous.
Who decides what disinformation is? What if they're wrong?
What if instead of using the power of the state to punish alleged AI-powered disinformation, we instead encourage critical thinking and skepticism? After all, you don't need AI to create disinformation. People have been doing it on their own for all of human history.
If a sea of disinformation becomes reality (and it seems it will), and if AI is not capable of fighting it (and it seems unlikely it can), the information world will indeed become a sea of garbage.
What happens then is that we might see the return of 'curated information', a.k.a. 'serious journalism', where people start to pay for checked information. The simplest form would be sites that pay for an 'OK' stamp of approval. In the end, the fact that everybody can become a publisher might slowly disappear again.
Can you define disinformation?
I don't think that any article that is concerned about the 'Firehose of Falsehoods' should be quoting the highly politicised Wikipedia as a reliable source... 😉
Will the readers of RT and Sputnik go to the CounterCloud site to check their version?
Dear Gary, love your newsletter and have been reading for a while now. In your last point you suggest that we need AI smart enough to detect false information. We at Factiverse are working on exactly that. We started our research in 2016 at the University of Stavanger, Norway, and have for many years trained our ML models on curated, certified, and trustworthy data. We recently launched our first product on Product Hunt - an AI editor, or BS detector if you use ChatGPT to generate content. You can copy and paste your text, and we will identify sentences that are controversial and automatically search Google and Bing for evidence. We will show you which sources dispute and which support your arguments according to our data. We would love for you and your readers to try it and give honest feedback - we have sophisticated patented tech but are still figuring out product-market fit :) https://www.producthunt.com/products/factiverse-editor
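The workflow this comment describes - split a text into sentences, flag potentially controversial claims, then gather supporting and disputing sources - can be sketched roughly as follows. This is a toy illustration, not Factiverse's actual system: the sentence splitter is naive, the "controversial" check is a keyword stand-in for a trained model, and the search step is an injected callable rather than a real Google/Bing client.

```python
import re

def split_sentences(text: str) -> list[str]:
    # Naive sentence splitter; a real system would use a trained model.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def looks_controversial(sentence: str) -> bool:
    # Hypothetical stand-in for a claim-detection model: flag sentences
    # containing strong, checkable-sounding markers.
    markers = ("always", "never", "proves", "caused", "cure")
    return any(m in sentence.lower() for m in markers)

def check_text(text: str, search_fn) -> list[dict]:
    # For each flagged sentence, gather evidence via the injected search
    # function (in a real pipeline, a web-search API client).
    results = []
    for sentence in split_sentences(text):
        if looks_controversial(sentence):
            results.append({"claim": sentence, "evidence": search_fn(sentence)})
    return results
```

Here `search_fn` would wrap a real search API; in the sketch it can be any callable that takes a query string and returns a list of sources.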
The problem with punishing people for wholesale disinformation is that you then need to know who decides what is right and what is wrong. Such laws could easily be twisted to start punishing actual science publication, for instance, and lead to a pretty dystopian future.
I love your work, Gary! To be honest, right now spam, trash, and cheating are probably the number one application of AI/LLMs.
very 'antihomeostatic', as Norbert Wiener said
Social media sites and even browsers themselves have the tools to spot most bots (based on behavior) and bot written posts (based on content structure). Imagine if suspected bots had their profiles flagged on social, or the web page for an article you’re reading gets a top line banner that your web browser is 90% certain this content is AI generated. That would be a big and doable step in the right direction heading into 2024!
This features prominently in my next novel. But they are 1000x more accurate with brainwave 🧠 data.
Precision phishing scams,
Automated contra memes,
& transcranial magnetic stimulation,
lock a programmer into doing the bidding of an algorithm.
Yay to censorship!!! Yay to the criminalization of speech!!! Nay to a world where governments and large corporations and the powerful in general have to deal with pesky counter-narratives!!! That's not at all threatening to democracy...
I would hope that most adults realize that we can't build truth into AI, because we don't know what the truth is. (I guess we could just use Mechanical Turk and call it a day. Or maybe I can get some investors for Mechanical Delphi: it will just call Mechanical Turk, but it will be super truthy.)
I would also hope that you realize that you are proposing a blatantly unconstitutional mass criminalization of speech.
I fail to see why the problem you describe makes this necessary, even if I didn't believe in free speech. The contention is that because 'fake' news can be automatically generated, we need to increase censorship. How does this meaningfully differ from last month, when 'fake' news could be human-generated (and of course automatically generated as well - but we are in freak-out mode here)? You could obviously generate a lot more and do it more quickly, but the limitation is dissemination, not generation. I am not sure that I could effectively roll out a million stories per minute from a Twitter account, or what human would be able to read them.
Ultimately people will just have to decide what to trust based on the source. Much like we have been doing for millennia.
The best "AI documentary" I've seen was Chimp Empire on Netflix, which never mentioned AI even once. This excellent four-hour film takes you into a tribe of chimps in the Ugandan jungle.
The main takeaway of the film for me was that humans are just apes taken to the next level. The similarity between their behaviors and ours is truly remarkable.
As an example, chimps are obsessed with territory, and routinely fight with other tribes of chimps to defend their own territory and try to expand it. Chimps fight in a manner reminiscent of a bar brawl, whereas we humans fight for territory with tanks, missiles, bombers, etc. Our motivations are still very ape-like, but we use better tools to pursue the same agendas. For example, there is no fundamental difference between a chimp territory battle and the war in Ukraine, other than the tools being used.
We should expect AI to inherit our basic nature, just as we inherited the basic nature of our ape ancestors. AI is just chimps => humans taken to the next level. And so some of what AI does will be beautiful, and some of it will be horrific.
What I'm getting at is that the "enshittification" being discussed here arises from an evolutionary progression that was installed in our DNA millions of years before we were even human. We can probably tweak this phenomenon a bit, but we aren't going to fix it. As an example, we've managed to tweak human violence a bit with religions and laws etc., but we're nowhere near fixing it, and are unlikely to ever do so. It's just too deep within us to remove.
Just as our ape ancestors had no choice about humans evolving from them, and then coming to dominate them, I suspect the same will be true of AI and us. I used to argue against AI, an activity I now see as arising from a wishful-thinking fantasy that we have a choice in the matter. The story of AI didn't begin in recent decades; it began millions of years ago.
The chimps in Chimp Empire lived in a lush natural jungle. And then we humans came along and started building filthy junk-pile concrete jungles on top of the natural jungles. We humans enshittify everything we touch. And so, as our child, AI is going to do the same thing, just on a larger scale.
The progression from apes to humans to AI is not evolution creating "advancement". It's evolution progressing through a process of degradation. It's like when you throw the core of the apple you just ate on to the ground. Within minutes the apple core starts to degrade, and then gradually over a period of time it vanishes.
It's like our aging. You're either going to die young, or grow old. You can tweak this reality a bit, but you can't fix it. And so the rational act is to make peace with the inevitable, and try to enjoy the process.
You start your article with a quotation from TheDebrief.org, a source you acknowledge being unfamiliar with. Isn't this how it happens? With all the sources you are familiar with, why use one that you are not?