
Here's the introduction to my AI class - hope this contributes to the discussion:

Randy Pausch, author of The Last Lecture and Professor of Computer Science, Human-Computer Interaction, and Design at Carnegie Mellon University, said the most important thing you are going to hear in this class:

“If I could only give three words of advice, they would be, 'Tell the truth.'”

“If I got three more words, I'd add, 'All the time.'”

Why would I begin a class about Artificial Intelligence with a quote about Truth?

Because telling the truth is what will separate you from the AI boogeyman.

Never trust AI. It will lie to you without reservation. It does not even know when it is lying. AI is like an obsequious servant that tells you what you want to hear.


The problem with making the "creation of misinformation" illegal is that you are assuming we can definitively determine what is and is not misinformation.

What will actually happen with any such law is the persecution of people who say things that others disagree with. This isn't something you can ban and still have any semblance of freedom of speech; the freedom to dissent is core to freedom of speech.

Just think about how Trump acts, or how DeSantis acts, and imagine what would happen if it were legal for them to persecute people for spreading "misinformation" that is, in fact, true. The same goes for socialists, who routinely spread misinformation and then claim that anyone who points out they're lying is the one spreading misinformation.

There is no way to create a law like this that is remotely compatible with freedom of speech. Slander and libel are specific enough to be actionable, but most news, however misleading, is neither slanderous nor libelous.


Who decides what disinformation is? What if they're wrong?

What if instead of using the power of the state to punish alleged AI-powered disinformation, we instead encourage critical thinking and skepticism? After all, you don't need AI to create disinformation. People have been doing it on their own for all of human history.


If a sea of disinformation becomes reality (and it seems it will), and if AI is not capable of fighting it (and it seems unlikely it can), the information world will indeed become a sea of garbage.

What happens then is that we might see the return of 'curated information', a.k.a. 'serious journalism', where people start to pay for checked information. The simplest form would be sites that pay to get an 'OK' stamp. In the end, the fact that everybody can become a publisher might slowly disappear again.


How is it even possible to contemplate wielding any sword of truth, standing alone against the Hydra of mis/disinformation? The trick is not to attempt to fight the many falsehoods but to fortify the truth; not to go on the offensive but to establish a defensive position. How the hell you do this is anyone's guess.

It's all very well citing 'curated information' or 'serious journalism', but who's doing the curation? Who's to decide what 'serious' journalism is? You're just adding another layer of corruptible 'protection'.

At least with a physical Terminator-style robot that is supposedly going to destroy humanity, you can clearly identify the enemy. Not so this one.


The issue is not truth. Not even science does truth (only logic does, but it is in itself pretty useless). It is about *trust*, not truth.

The way you do this is partly through politics, e.g. by establishing rules, a.k.a. laws, enforceable through an independent judiciary. You could for instance strengthen or expand rules that already exist about information (e.g. on products, libel, copyright, and criminal matters).

Nobody says this is easy, but we might for instance create rules for information based on 'reach': the more reach your information really has, the more trustworthy that information has to be.
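A toy sketch of what such reach-based tiers could look like; the thresholds and duties below are invented purely for illustration, not drawn from any actual law or proposal:

```python
# Hypothetical reach tiers: the wider a message travels, the stricter
# the trustworthiness obligations. All values are invented examples.
REACH_TIERS = [
    (1_000, "no special obligation"),
    (100_000, "must label paid or automated content"),
    (10_000_000, "must support source attribution and corrections"),
]

def obligations(reach: int) -> str:
    """Return the (hypothetical) duty attached to a given audience size."""
    for ceiling, duty in REACH_TIERS:
        if reach <= ceiling:
            return duty
    return "full audit and verification requirements"

for audience in (500, 50_000, 2_000_000, 50_000_000):
    print(f"{audience:>10,} reach -> {obligations(audience)}")
```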

The hardest problem is that stupid 'free speech absolutists' (e.g. Musk) and other extremist individualists get in the way of societal rules and norms, and they have a lot of power (money) to influence the population.


"Stupid" free speech absolutists? Use politics to control who/what people can/cannot say? Whose politics would that be then? One can't deny Musk being a snake oil salesman as long as people are made aware. This issue is not actually about AI it's about how to control for people uttering nonsense against universal truths. And by control, I don't mean denying them having their say, but limiting the influence of their nonsense. That's really all you can do. Controlling what people are "allowed" to say is a very dark road to go down. So then it comes back to identifying what's true and what isn't true. And by 'true' I mean that which allows a society to continue to function and progress. That 'truth' is one that is determined by current consensus. In that sense it is 'universal'. As long as a truth remains universal it has utility and vice-versa, and in this way it has at least some built-in defense mechanism. So yeah, it's easier to make a case for a defence of the truth than fighting it's numerous enemies, aided and abetted by an amplifying AI. What does an aiding defense of the truth look like? That's the $64,000 question.


Can you define disinformation?


I don't think that any article that is concerned about the 'Firehose of Falsehoods' should be quoting the highly politicised Wikipedia as a reliable source... 😉


well, Wikipedia is itself a firehose of falsehoods... :)


Will the readers of RT and Sputnik go to the CounterCloud site to check their version?


BBC: Bad science, AI used to target kids on YouTube

https://youtu.be/ojjn9T_fuUw

Now large language models (LLMs) and generative AI can compete with junk science for children's minds in the next generation, and for your market and products.


Dear Gary, love your newsletter and have been reading for a while now. In your last point you suggest that we need AI that is smart enough to detect false information. We at Factiverse are working on exactly that. We started our research in 2016 at the University of Stavanger, Norway, and have for many years trained our ML models on curated, certified, and trustworthy data. We recently launched our first product on Product Hunt: an AI editor, or BS detector if you use ChatGPT to generate content. You can copy-paste your text, and we will identify sentences that are controversial and automatically search Google and Bing for evidence. We will show you which sources dispute and which support your arguments according to our data. We would love for you and your readers to try it and give honest feedback. We have sophisticated patented tech but are still figuring out the product-market fit :) https://www.producthunt.com/products/factiverse-editor
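In spirit, the pipeline looks something like the sketch below; the function names and the crude keyword heuristic are invented stand-ins for our actual models and search integration, just to show the shape of the flow:

```python
import re

def split_sentences(text: str) -> list[str]:
    # Naive splitter; a real system would use a proper NLP tokenizer.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def score_checkworthiness(sentence: str) -> float:
    # Stand-in for a trained classifier that estimates how controversial
    # or checkable a sentence is; here, a crude keyword heuristic.
    cues = ("always", "never", "proves", "causes", "%")
    return min(1.0, 0.2 + 0.2 * sum(cue in sentence.lower() for cue in cues))

def check_text(text: str, threshold: float = 0.4) -> list[dict]:
    flagged = []
    for sentence in split_sentences(text):
        score = score_checkworthiness(sentence)
        if score >= threshold:
            flagged.append({
                "sentence": sentence,
                "score": score,
                # A real system would now query search engines and run
                # stance detection to find supporting/disputing sources.
                "evidence": "TODO: search + stance detection",
            })
    return flagged

print(check_text("Coffee always causes cancer. I like mornings."))
```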


The problem with punishing people for wholesale disinformation is that you then need to decide who gets to say what is right and what is wrong. Laws could easily be twisted to start punishing actual science publication, for instance, and lead to a pretty dystopian future.


I love your work Gary! To be honest, right now spam, trash, and cheating are probably the number one application of AI/LLMs.


very 'antihomeostatic', as Norbert Wiener said


Social media sites, and even browsers themselves, have the tools to spot most bots (based on behavior) and bot-written posts (based on content structure). Imagine if suspected bots had their profiles flagged on social media, or if the web page for an article you're reading got a top-line banner saying your browser is 90% certain the content is AI-generated. That would be a big and doable step in the right direction heading into 2024!
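A minimal sketch of that banner logic, assuming a hypothetical detect_ai_probability classifier; no such standard browser API exists today, and real AI-text detectors are notoriously unreliable:

```python
AI_BANNER_THRESHOLD = 0.90  # the "90% certain" figure from above

def detect_ai_probability(page_text: str) -> float:
    # Placeholder: a real detector would score features like perplexity,
    # burstiness, and structural regularity. Hard-coded so the sketch runs.
    return 0.93

def banner_for(page_text: str) -> str | None:
    p = detect_ai_probability(page_text)
    if p >= AI_BANNER_THRESHOLD:
        return f"⚠ Your browser estimates this content is AI-generated ({p:.0%} confidence)."
    return None  # below threshold: show no banner

print(banner_for("Some article text..."))
```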


This is a fight (the technical one) you're bound to lose, just as IT security tooling has not removed the threat (the bad stuff evolves too). Furthermore, people generally don't consume information rationally enough for such a banner to have an effect (if only because there will also be useful AI-generated stuff).


I mean, I know that the tech companies won't do it. But they *should* do it. Imo we should try to fight disinformation and keep trying to fight it.


Yes, we should. But legal instruments are probably much more efficient than technical ones (though we probably need both). The problem is that those who make money or power from this will fight to keep their money/power, so implementing legal instruments is hard if not impossible to do.


Success would include both legal & tech instruments! Honestly, neither has a chance without a big public campaign with substantial mass involvement. And that involves coalition work with advocacy groups, labor unions & policy orgs.


This features prominently in my next novel, but there the attacks are 1000x more accurate, thanks to brainwave 🧠 data.

Precision phishing scams,

automated contra memes,

& transcranial magnetic stimulation

lock a programmer into doing the bidding of an algorithm.


Yay to censorship!!! Yay to the criminalization of speech!!! Nay to a world where governments and large corporations and the powerful in general have to deal with pesky counter-narratives!!! That's not at all threatening to democracy...

I would hope that most adults realize that we can't build truth into AI because we don't know what the truth is. (I guess we could just use Mechanical Turk and call it a day :). Or maybe I can get some investors for Mechanical Delphi: it will just call Mechanical Turk, but it will be super truthy.)

I would also hope that you realize that you are proposing a blatantly unconstitutional mass criminalization of speech.

I fail to see why the problem you describe makes this necessary, even if I didn't believe in free speech. The contention is that because 'fake' news can be automatically generated, we need to increase censorship. How does this meaningfully differ from last month, when 'fake' news could be human-generated (and of course automatically generated as well, but we are in freak-out mode here)? You could obviously generate a lot more, and more quickly, but the limitation is dissemination, not generation. I am not sure I could effectively roll out a million stories per minute from a Twitter account, or what human would be able to read them.
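A quick back-of-envelope check of that point; every number below is an illustrative assumption, not a measurement:

```python
# Even if generation were free, human reading capacity caps consumption.
stories_per_minute = 1_000_000   # hypothetical generation rate
words_per_story = 500            # assumed average story length
reading_speed_wpm = 250          # rough average adult reading speed

minutes_of_reading = stories_per_minute * words_per_story / reading_speed_wpm
years = minutes_of_reading / (60 * 24 * 365)
print(f"One minute of output ≈ {years:,.1f} years of reading for one person.")
```

In other words, a single minute of machine output would take one reader roughly four years to get through; the bottleneck really is dissemination and attention, not generation.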

Ultimately people will just have to decide what to trust based on the source. Much like we have been doing for millennia.


We already know how people come to trust something: (1) if they get it from a source they consider 'close' (and social media hacks into that, because those influencers feel 'close', and those fellow commenters do as well), and (2) if they hear it often (hello, social media attention algorithms...). But you are right that we might get back to trusting *sources* (e.g. serious journalism, or science; both not perfect, but the best we have). Still, even that is much less powerful than (1) and (2), as they are the core of how humans come to convictions.


The best "AI documentary" I've seen was Chimp Empire on Netflix, which never mentions AI even once. This excellent four-hour film takes you into a tribe of chimps in the Ugandan jungle.

The main takeaway of the film for me was that humans are just apes taken to the next level. The similarity between their behaviors and ours is truly remarkable.

As an example, chimps are obsessed with territory, and routinely fight with other tribes of chimps to defend their own territory and to expand it. Chimps fight in a manner that reminds one of a bar brawl, whereas we humans fight for territory with tanks, missiles, bombers, etc. Our motivations are still very ape-like, but we use better tools to pursue the same agendas: there is no fundamental difference between a chimp territory battle and the war in Ukraine, other than the tools being used.

We should expect AI to inherit our basic nature, just as we inherited the basic nature of our ape ancestors. AI is just chimps => humans taken to the next level. And so some of what AI does will be beautiful, and some of it will be horrific.

What I'm getting at is that the "enshittification" being discussed here arises from an evolutionary progression that was installed in our DNA millions of years before we were even human. We can probably tweak this phenomenon a bit, but we aren't going to fix it. As an example, we've managed to tweak human violence a bit with religions and laws etc., but we're nowhere near fixing it, and are unlikely to ever do so. It's just too deep within us to remove.

Just as our ape ancestors had no choice about humans evolving from them, and then coming to dominate them, I suspect the same will be true of AI and us. I used to argue against AI, an activity I now see as arising from a wishful-thinking fantasy that we have a choice in the matter. The story of AI didn't begin in recent decades; it began millions of years ago.

The chimps in Chimp Empire lived in a lush natural jungle. And then we humans came along and started building filthy junk-pile concrete jungles on top of the natural ones. We humans enshittify everything we touch. And so, as our child, AI is going to do the same thing, just on a larger scale.

The progression from apes to humans to AI is not evolution creating "advancement"; it's evolution progressing through a process of degradation. It's like when you throw the core of the apple you just ate onto the ground: within minutes the apple core starts to degrade, and then gradually, over a period of time, it vanishes.

It's like our aging. You're either going to die young, or grow old. You can tweak this reality a bit, but you can't fix it. And so the rational act is to make peace with the inevitable, and try to enjoy the process.
