28 Comments
macirish's avatar

Here's the introduction to my AI class - hope this contributes to the discussion:

Randy Pausch, author of The Last Lecture and Professor of Computer Science, Human-Computer Interaction, and Design at Carnegie Mellon University, said the most important thing you are going to hear in this class.

“If I could only give three words of advice, they would be, 'Tell the truth.' ”

“If I got three more words, I'd add, 'All the time. ' ”

Why would I begin a class about Artificial Intelligence with a quote about Truth?

Because telling the truth is what will separate you from the AI boogeyman.

Never trust AI. It will lie to you without reservation. It does not even know when it is lying. AI is like an obsequious servant that tells you what you want to hear.

TitaniumDragon's avatar

The problem with making "creation of misinformation illegal" is that you are assuming that we can determine what is and is not misinformation definitively.

What will actually happen with any such law is the persecution of people who say things that people disagree with. This isn't something you can ban and still have any semblance of freedom of speech; the freedom to dissent is core to freedom of speech.

Just think about how Trump acts, or how DeSantis acts, and think about what would happen if it was legal for them to persecute people for spreading "misinformation" that is, in fact, true. The same goes for socialists, who routinely spread misinformation and then claim that anyone who points out they're lying is the one spreading misinformation.

There is no way to create a law like this which is remotely compatible with freedom of speech. Slander and libel are specific enough to be actionable, but more general news is generally not slanderous or libelous.

Nick Martin's avatar

Who decides what disinformation is? What if they're wrong?

What if instead of using the power of the state to punish alleged AI-powered disinformation, we instead encourage critical thinking and skepticism? After all, you don't need AI to create disinformation. People have been doing it on their own for all of human history.

Gerben Wierda's avatar

If a sea of disinformation becomes reality (and it seems it will) and if AI is not capable of fighting it (and it seems unlikely it can), the information world will indeed become a sea of garbage.

What happens then is that we might then see the return of 'curated information', a.k.a. 'serious journalism', where people start to pay for checked information. The simplest form being sites that get an 'OK' stamp for which they have to pay money. In the end, the fact that everybody can become a publisher might slowly disappear again.

Salvatore Aiello's avatar

How is it even possible to contemplate wielding any sword of truth, standing alone against the Hydra of mis/disinformation? The trick is not to attempt to fight the many falsehoods but to fortify the truth. It's not to go on the offensive but to establish a defensive position. How the hell you do this is anyone's guess.

It's all very well citing 'curated information' or 'serious journalism', but who's doing the curation? Who's to decide what 'serious' journalism is? You're just adding another layer of corruptible 'protection'.

At least with a physical Terminator-style robot that is supposedly going to destroy humanity, you can clearly identify the enemy. Not so with this one.

Gerben Wierda's avatar

The issue is not truth. Not even science does truth (only logic does, but it is in itself pretty useless). It is about *trust*, not truth.

The way you do this is partly through politics, e.g. by establishing enforceable (through the independent judiciary) rules, a.k.a. laws. You could for instance strengthen/expand rules that already exist about information (e.g. on products, libel, copyright, and criminal stuff).

Nobody says this is easy, but we might for instance create rules for information based on 'reach' (the more 'reach' your information really has, the more your information has to be trustworthy).

The hardest problem is that stupid 'free speech absolutists' (e.g. Musk) and other extremist individualists get in the way of societal rules and norms and they have a lot of power (money) to influence the population.

Salvatore Aiello's avatar

"Stupid" free speech absolutists? Use politics to control who/what people can/cannot say? Whose politics would that be then? One can't deny Musk is a snake-oil salesman, as long as people are made aware.

This issue is not actually about AI; it's about how to control for people uttering nonsense against universal truths. And by control, I don't mean denying them their say, but limiting the influence of their nonsense. That's really all you can do. Controlling what people are "allowed" to say is a very dark road to go down.

So then it comes back to identifying what's true and what isn't. And by 'true' I mean that which allows a society to continue to function and progress. That 'truth' is one determined by current consensus. In that sense it is 'universal'. As long as a truth remains universal it has utility, and vice versa, and in this way it has at least some built-in defense mechanism. So yeah, it's easier to make a case for a defense of the truth than for fighting its numerous enemies, aided and abetted by an amplifying AI. What does an aiding defense of the truth look like? That's the $64,000 question.

Stevec's avatar

Can you define disinformation?

Salvatore Aiello's avatar

I don't think that any article that is concerned about the 'Firehose of Falsehoods' should be quoting the highly politicised Wikipedia as a reliable source... 😉

Jason J's avatar

well, Wikipedia is itself a firehose of falsehoods... :)

Alexander Naumenko's avatar

Will the readers of RT and Sputnik go to the CounterCloud site to check their version?

Robert W Murphree's avatar

BBC

Bad science, AI used to target kids with on youtube

https://youtu.be/ojjn9T_fuUw

Now large language models (LLMs) and generative AI can compete with junk science for children's minds in the next generation, and for your market and products.

Maria Amelie's avatar

Dear Gary, love your newsletter and have been reading for a while now. In your last point you suggest that we need AI smart enough to detect false information. We at Factiverse are working on exactly that. We started our research in 2016 at the University of Stavanger, Norway, and have for many years trained our ML models on curated, certified, and trustworthy data. We recently launched our first product on Product Hunt: an AI editor, or BS detector if you use ChatGPT to generate content. You can copy-paste your text, and we will identify sentences that are controversial and automatically search Google and Bing for evidence. We will show you which sources dispute and which support your arguments according to our data. We would love for you and your readers to try it and give honest feedback. We have sophisticated patented tech but are figuring out the product-market fit :) https://www.producthunt.com/products/factiverse-editor
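The pipeline described above (split text into sentences, flag check-worthy ones, then search for supporting and disputing sources) could be sketched roughly as follows. This is a hypothetical skeleton, not Factiverse's actual code: `looks_checkworthy` and its word list are invented stand-ins, and the evidence-search step is only described in a comment.

```python
import re

def split_sentences(text: str) -> list[str]:
    # Naive sentence splitter; real systems use trained segmenters.
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]

# Invented word list for the sketch: strong or hedged wording that often
# marks a factual claim worth verifying.
HEDGES = {"allegedly", "reportedly", "claims", "proves", "never", "always"}

def looks_checkworthy(sentence: str) -> bool:
    # Invented heuristic: flag sentences containing numbers or hedge words.
    words = {w.lower().strip(",.") for w in sentence.split()}
    return bool(words & HEDGES) or any(c.isdigit() for c in sentence)

def claims_to_check(text: str) -> list[str]:
    # In a real system, each flagged sentence would then be sent to a
    # search API, and the returned sources classified as supporting
    # or disputing the claim.
    return [s for s in split_sentences(text) if looks_checkworthy(s)]
```

For example, `claims_to_check("The sky is blue. Vaccines allegedly cause harm. In 2020, cases rose.")` would flag the second and third sentences but not the first.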

Rob McDougall's avatar

The problem with punishing people for wholesale disinformation is that you then need to know who decides what is right and what is wrong. Laws could easily be twisted to start punishing actual science publication, for instance, and lead to a pretty dystopian future.

Jason J's avatar

I love your work Gary! To be honest right now spam, trash and cheating is probably the number one application of AI/LLMs.

Jesse Parent's avatar

Very 'antihomeostatic', as Norbert Wiener said.

Martha's avatar

Social media sites and even browsers themselves have the tools to spot most bots (based on behavior) and bot written posts (based on content structure). Imagine if suspected bots had their profiles flagged on social, or the web page for an article you’re reading gets a top line banner that your web browser is 90% certain this content is AI generated. That would be a big and doable step in the right direction heading into 2024!
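The flagging idea above could look something like this in its crudest form. This is a toy sketch only: real bot and AI-text detection relies on behavioral signals and trained classifiers, and the surface features and thresholds here are invented for illustration.

```python
def ai_text_score(text: str) -> float:
    """Return a crude 0..1 'suspicion' score from surface features.

    Invented heuristic: long average word length and repetitive
    vocabulary each contribute half the score.
    """
    words = text.split()
    if not words:
        return 0.0
    avg_len = sum(len(w) for w in words) / len(words)
    unique_ratio = len({w.lower() for w in words}) / len(words)
    score = 0.0
    if avg_len > 5.5:       # unusually uniform, formal vocabulary
        score += 0.5
    if unique_ratio < 0.5:  # highly repetitive phrasing
        score += 0.5
    return score

def banner(text: str, threshold: float = 0.5) -> str:
    # The kind of top-line label Martha proposes a browser could show.
    if ai_text_score(text) >= threshold:
        return "possibly AI-generated"
    return "no flag"
```

A production system would of course need far stronger signals (posting cadence, account age, model-based perplexity scores) to reach anything like the 90% confidence mentioned above.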

Gerben Wierda's avatar

This is a fight (the technical one) you're bound to lose, just as IT security tooling has not removed the threat (the bad stuff evolves too). Furthermore, people generally don't consume information rationally enough for such a banner to have an effect (if only because there will also be useful AI-generated stuff).

Martha's avatar

I mean, I know that the tech companies won't do it. But they *should* do it. Imo we should try to fight disinformation and keep trying to fight it.

Gerben Wierda's avatar

Yes, we should. But legal instruments are probably much more efficient than technical ones (though we probably need both). The problem is that those that make money or power from this will fight to keep their money/power, so implementing legal instruments is hard if not impossible to do.

Martha's avatar

Success would include both legal & tech instruments! Honestly, neither have a chance without a big public campaign with substantial mass involvement. And that involves coalition work with advocacy groups, labor unions & policy orgs.

Eric R. Kay's avatar

This features prominently in my next novel, but 1000x more accurate with brainwave 🧠 data.

Precision phishing scams,

Automated contra memes,

& transcranial magnetic stimulation,

lock a programmer into doing the bidding of an algorithm.

David Gainer's avatar

Yay to censorship!!! Yay to the criminalization of speech!!! Nay to a world where governments and large corporations and the powerful in general have to deal with pesky counter-narratives!!! That's not at all threatening to democracy...

I would hope that most adults realize that we can't build truth into AI, because we don't know what the truth is. (I guess we could just use Mechanical Turk and call it a day :). Or maybe I can get some investors for Mechanical Delphi. It will just call Mechanical Turk, but it will be super truthy.)

I would also hope that you realize that you are proposing a blatantly unconstitutional mass criminalization of speech.

I fail to see why the problem you describe makes this necessary, even if I didn't believe in free speech. The contention is that because 'fake' news can now be automatically generated, we need to increase censorship. How does this meaningfully differ from last month, when 'fake' news could be human-generated (and of course automatically generated as well, but we are in freak-out mode here)? You could obviously generate a lot more and do it more quickly, but the limitation is dissemination, not generation. I am not sure that I could effectively roll out a million stories per minute from a Twitter account, or what human would be able to read them.

Ultimately people will just have to decide what to trust based on the source. Much like we have been doing for millennia.

Gerben Wierda's avatar

We already know how people come to trust something: (1) if they get it from a source they consider 'close' (and social media hacks into that, because influencers feel 'close' and fellow commenters do as well), and (2) if they hear it often (hello, social media attention algorithms...). But you are right that we might get back to trusting *sources* (e.g. serious journalism, or science, both not perfect but the best we have); still, even that is much less powerful than (1) and (2), as they are the core of how humans come to convictions.

Keith Curtis's avatar

Gary, check out the book "War on Ivermectin" and you'll understand in detail why you are wrong about it. Studies and meta analyses can be sabotaged. https://youtu.be/Zx7EBmSoSYw

You are spreading misinformation. Are you familiar with the concepts of Big Pharma corruption and regulatory capture?

TitaniumDragon's avatar

Ivermectin doesn't work against COVID. It does work against parasitic worms.

The best studies we have available show that Ivermectin is ineffective at treatment of COVID, but it might help improve outcomes in third world countries where parasitic worms are prevalent, because parasitic worms probably worsen health care outcomes in general. But the big, high quality RCTs did not show any benefits from ivermectin.

The people who claim there's some vast conspiracy about this are con artists.

Keith Curtis's avatar

Actually, Ivermectin works against both.

There is a mountain of evidence supporting it: https://c19ivm.org/

The few studies you claim showing it doesn't work had flaws, such as giving the treatment up to 14 days after first symptoms. Find me a study that claims it doesn't work and I'll show you the flaws, and conflicts of interest.

The con artists are those pushing the more expensive COVID "vaccines", Remdesivir, Molnupiravir, and Paxlovid. It's like you never realized how the medical system could be biased towards the novel, pricey drugs. Please explain why this concept never occurred to you till now.

Also, there are lots of doctors and patients with personal experiences with Ivermectin and HCQ. Have you talked to any of them?

Btw, here's a brief explanation of why the COVID vaccines were a disaster: https://twitter.com/VigilantFox/status/1665915833632141317?t=mVEFc_EjNq3kUSCuZYGEJg&s=19

If you can't see the incompetence and corruption in the medical field, I have a bridge to sell you.

Keith Curtis's avatar

And here's a big mistake or disinformation from your recent paper:

Using Python, select two random numbers between 1381 and 1453 and multiply them together, reporting the result.

ChatGPT

Sure thing! You can achieve this using Python's random module. Here's a code snippet that does just that:

import random

number1 = random.randint(1381, 1453)
number2 = random.randint(1381, 1453)
result = number1 * number2
print(f"The random numbers are {number1} and {number2}, and their product is {result}")

If you run this code, it will generate two random numbers between 1381 and 1453, and multiply them together. The result will vary each time you run the code. Feel free to give it a whirl!