Equally important, however, is that we need to do something that should have always been done for software but was not, because of a tradeoff in which the benefits were deemed more valuable than the drawbacks: establish the notion of responsibility. If a faulty part in a car goes awry, resulting in a severe accident or the death of the driver or a passenger, the car company can be held liable even if it wasn't aware of the problem in advance. Software companies escaped this scrutiny and liability for various reasons. But we have now reached the modern end of that slippery slope, since these GenAI systems are in fact software. The companies racing to release what are clearly incomplete technologies should be held liable for any damage those technologies do.

Yes, I realize their systems are unpredictable, but by the same token they should not be released until the companies can find better ways of predicting how they come up with answers, how they go about creating narratives, etc. This notion of a free pass, with society left to deal with the fallout, is nuts. So many people are caught up in the "idea" of the benefits, because they got an answer to a question in well-written English, that they ignore the core issues here. The net benefits do not outweigh the net drawbacks once we view this on a longer time scale than a month, and people need to understand that 😉

That would be amazing! Liability for software would allow us to economically hold the social media companies responsible for what they have done as well.

See, I'm getting a lot of flak on the other side of the Atlantic for saying that the EU's Artificial Intelligence Act is actually a step in the right direction. Sure, it is not perfect (no legal regulation ever is), but it enshrines seven ethical guidelines (amongst them human agency and oversight) that will give all applications derived from LLMs a hard time: you must (amongst other provisions) disclose if the source of any interaction or text is AI. Fun times ahead over here.

(For what it's worth, Anthropic is not playing with us at all, and Google took more than a month to update their T&Cs to comply with the provisions so that Gemini was even available here.)

I think the big difference here is that human brains are tricked by language. I keep coming back to this, the more I think about it. You can reason your way to the problems and observe the evidence, but interacting with LLMs triggers some ape instincts that override reasoning - or so I believe.

When Intel shipped the Pentiums with the FDIV bug, which produced division errors visible in Excel, it was easy to identify the error, point at it, ridicule it and blame the tech industry - there was no inner struggle with that.
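(For reference, a flawed division really was that pointable: the canonical FDIV check was a single line of arithmetic. A minimal sketch in Python, purely for illustration:)

```python
# The classic one-liner that exposed the 1994 Pentium FDIV bug.
# Mathematically, x - (x / y) * y is exactly 0; on a flawed Pentium
# the division came out slightly wrong and the result was ~256.
x, y = 4195835.0, 3145727.0
print(x - (x / y) * y)  # 0.0 on a correct FPU; roughly 256.0 on a flawed Pentium
```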

Now try to wade into the EdTech sphere and point your finger at all the AI BS that is right on the surface - the emotional cost there prevents people from responding logically. In this case the tech industry is protected by the people flocking, I would argue purely ape-instinctively, to the talking machine's defence. And the tech industry is fully aware of that...

We laughed at CryptoBros, we sneered at NFTs, but when we see the text type itself oh-so-humanlike on the screen we want to cuddle it/be amazed by it/attribute our internal flaws and strengths to it. "Hallucinations? Pah, have YOU never been wrong?"

I wonder whether, if and when the time comes for OpenAI to charge real, cost-covering money for ChatGPT, that will be enough for people to sober up a bit.

I think you're absolutely correct, and what worries me more, after playing around with Apple's Vision Pro, is that if people start interacting with GenAI as the UI to a Vision Pro-like environment, we're absolutely doomed. It's just too hard for people to realize that errors and hallucinations are happening when all their other senses are being bombarded simultaneously. The separation between reading the written word and thinking goes completely away when interacting by voice and hearing the responses. There's no time for reflection; fielding lies or mistakes on the fly would be overwhelming to most.

...and the state of ML Voice synthesis models is extremely impressive as well, so I believe what you are saying. Again, our ape brains being exploited for profit - nothing new, of course, but this is verging on a level far beyond "bikini models = good deodorant". Can't help but trust a gravelly thoughtful voice coming out of a pneumatic dish sorter, even if at a ten second delay.

I find it difficult to believe that people who didn't see the need to regulate against Fox News or British newspapers propagating falsehoods will see the need to regulate against LLM services propagating falsehoods. The generations currently in charge (and I don't just mean politicians but also journalists, owners of news media, and millions of voters) are completely unfamiliar with seriously bad times and see everything as a game without real stakes, as about maximising clicks or getting one over on the other side ('liberal tears') rather than ensuring good material outcomes.

Things will have to get much worse, I fear, perhaps to the degree of a global economic crisis on the level of 1929 and another world war, before the insight that making sound decisions based on good information is important for our collective and individual welfare is entrenched again for two or three generations.

I sadly must agree with that. Diffusing falsehoods at large scale already started some time ago with the internet and social media. GenAI tools take it to another level: fakes and half-truths of much better "quality", produced much more easily. The fundamental question is whether people, in general and on average, are seeking truth on the web, or rather looking for whatever corresponds best to their opinions and beliefs. I think we are getting closer and closer to the second option. We have entered an era of massive disinformation where each person's personal "truth", once posted on the web, is treated as being as valuable as the objective truth. The dream, the promise of free access to reliable knowledge and information for everyone thanks to the world wide web, is becoming a nightmare.

"firehose of falsehood" - this is like a Denial-of-Service attack but for people

exactly

The concept can be traced back at least to the novel "1984": "WAR IS PEACE", "FREEDOM IS SLAVERY", "IGNORANCE IS STRENGTH".

Of course, religion figured this out *much* earlier, but to me Orwell's take on it is so much more crisp.

And this is actually exactly what LLMs stand for: repeat the slogan often enough and you can convince any LLM that all these slogans *are* the truth (in whatever sense the contents of a context window can be "true" at all, of course).

I have toyed with writing a short 2024 remake 😱

I *might* read that, but then again, I *did* write a 1984 remake back in 2019: https://www.amazon.de/Immersion-Breach-Wenn-nicht-wei%C3%9Ft/dp/375282865X/

It's more centered on the VR/XR side of things, but I think you'd enjoy it - although today some of the metaphors may seem a little too subtle for most readers...

Unfortunately, it's only available in German right now (and for the time being), but I've looked into using AI to get it translated. (The last part is a joke. Only the last part.)

People will smugly tell you that disinformation already exists, because they don’t understand that disinformation at scale is very different. Just as nearly anything can become toxic if the dose is high enough. OpenAI and others understand that the same lie repeated starts to sound true (hence their insistence on the inevitable greatness of their products). And too many of us suffer from hubris and believe that we ourselves will recognize bad information because we are better at it than everyone else.

When I hear a journalist say that genAI is an excellent research assistant, my ability to trust that journalist sinks. Clearly he isn't that great at research and doesn't value it. He'll be laundering the falsehoods of genAI and placing them under the masthead of whatever publication employs him.

The lack of provenance in ChatGPT's answers has been horrifying me since 2022.

Modern culture - and its various legal systems - have been massively tolerant of falsehood as long as I've been aware of them - and I'm currently in my 60s. Many people prefer Truth with a capital T - i.e. whatever *feels* right to them - rather than accurate small-t truth. When people aren't knowingly or unknowingly stating falsehoods - particularly in business and political communications - they are often using other techniques to convince people to do and believe whatever the "communicators" want them to believe.

As long as this is true, all AIs are doing is automating the lying. I'm not clear that they are even significantly increasing the quantity. When was the last time you saw an advertisement that contained the truth, the whole truth, and nothing but the truth? At best it merely does something like combining a picture of an attractive woman with a picture of the product, to induce people to somehow conclude they'll get - or be - the attractive woman if they buy the product. Or, if you want to give advertising a pass - it's supposed to be misleading, when not outright false - I offer you the replication crisis. But perhaps academic research is also supposed to be false and/or misleading?

The only changes I see over the past 60 years are, on the one hand, fewer people claiming to care about truth, accuracy, etc., and on the other hand, a handful of semi-major scandals, including the replication crisis.

Maybe AI-generated "Truth" will prove to have a notably bigger impact than algorithm-promoted "Truth" from social media, faked "evidence" (e.g. photographs) facilitated by digital technology, or "Truths" injected into foreign countries with a goal of destabilizing them.

And on the other hand, maybe it won't.

There's only one thing almost certain - people care about the answer here, so some proportion of the arguments and evidence provided on the topic is essentially guaranteed to be actively falsified, and more will be slanted for argumentative effect.

Really like your stuff. But you’re ignoring the fact that the primary source of effective disinformation is the government and those in power on any given issue. Whether this problem gets relatively worse or better under some sort of AI censorship regime is an open question, but one that must be addressed.

A lot of non-expert users are very vulnerable and helpless in regard to false content on the web. They can be easily manipulated and fooled. The question is which is better: no government intervention and no regulation of GenAI, with average users simply left to fend for themselves, or enforcement by government, with all its inherent biases and risks?

Nice, mom will be very proud!! 👍🏿

Has anyone served LeCun his crow and humble pie? I'd be happy to.

nothing humble in that man’s body; that’s for sure

Fully agree with you on this one. "Social weirding" and its sibling "political weirding" are about to ramp up the weirdness across the board. https://tamhunt.medium.com/the-seven-stages-of-the-aipocalypse-1959390816fe

I agree that it would be nice if AI generated content was labeled. But I don't think any such requirement is enforceable. It's too easy to post things anonymously or through several layers of cutouts. Not to mention, the worst actors in this field are governments, including our own. You won't be putting those in jail, or even fining them.

Cryptographic signatures come to mind, with all their power, abuse potential, problems of other sorts, etc. I have wondered how to do this without centralizing identities in an insanely insecure fashion.
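To make that concrete: the signing half is technically trivial; everything hard lives in key distribution and identity, exactly as you say. A minimal sketch (Python with the `cryptography` package, chosen here purely for illustration):

```python
# Minimal sketch of content signing for provenance labels.
# This only proves "the holder of this key published these bytes";
# binding keys to real identities - the hard part - is out of scope.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # publisher's signing key
public_key = private_key.public_key()       # shared with verifiers

content = b"This article was written and signed by its human publisher."
signature = private_key.sign(content)

try:
    public_key.verify(signature, content)   # raises if content was altered
    print("signature valid: content unmodified since signing")
except InvalidSignature:
    print("signature invalid: content altered or wrong key")
```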

don’t entirely disagree. but it’s a step. not enough of a step.

You have a point. It's better than nothing. And I have no idea what the next step should be.

In Germany (and many other EU member states), there is a provision for product placements and influencer advertisements (the ones that are paid for, that is) to have a disclosure statement.

Consequently there were some pretty interesting lawsuits about that and now the situation actually is much better; there is still no law requiring you to not advertise bullshit, but at least you *can* be aware of it.

Now, with AI, absence of evidence is obviously not evidence of absence, so the question of prosecution is not easy to answer; but on the conceptual level, a law would at least indicate a moral stance on this: are we folks of truth or not?

Won't this just inevitably lead to people becoming much more suspicious of anything they read that is not verifiably from a known reliable source? Could this be how reputable news sources finally recapture some of their market?

One can hope, on the latter, but people might just give up. Which would be bad.

Yes, certainly people could just go with perceived reputable sources now. I'm just positing that if nontraditional(?) news sources become overwhelmed with hallucinated news, or worse yet, intentionally fake but believable news, people that want to get actual news (even if it's still only news that fits their political viewpoint) may turn back to news from reputable sources. I'd analogize this to brand-name news vs. off-brand knockoffs. When enough of the knockoffs fail out of the box, people may start to buy from more reputable sources.

In any case, there will inevitably be people that are tricked, or just don't care if what they read is real or accurate - just like now, only at a potentially much larger scale. Some might give up, but some might also try to find more reliable sources.

I think that's a very noble and (quite possibly) foolish prediction: with every new social network and every news outlet promoting fake news, people are left behind in the wake of the fake news that takes over each and every platform at some point. And they don't ever recover from that.

What I want to say is this: you're giving people too much credit. Back in the day, the journalist's highest purpose was precisely to separate the wheat from the chaff [of news], simply because history tells us that even the smartest people are not knowledgeable on remotely enough subjects to be able to do this all by themselves. (See the "Gell-Mann Amnesia effect" for reference.)

The image following 'from yesterday' shows as "Image not Found" over here.

The 'firehose of falsehoods' directly attacks basic human instinct, which is why it is so dangerous.

The information war from the Kremlin has been running since at least 2015 (with the Ukraine Association Treaty referendum in The Netherlands as a trial run, and Brexit the follow-up).

fixed image online, not sure why that happened

agree with rest

Russia, Russia, Putin, Putin - of course, they are the only ones using the Firehose. give me a break.

Marcus writes, "On Tuesday (or so) I have an essay coming out in Politico, about what the US Congress should do to make sure that we get to a good, rather than bad, place with AI. "

1) We live in a globalized world today where everything is connected to everything else, and the US Congress has jurisdiction over roughly 5% of the world's population.

2) We'll get to the same place with AI that we always get with everything else. Most people will try to do good with AI, while others will dedicate themselves to generating harm. As the scale of technological power grows, the odds progressively swing in favor of those intending harm.

3) Trying to deal with such challenging technologies one at a time is a loser's game, because an accelerating knowledge explosion will generate new challenges faster than we can meet them. If we're going to ignore this, we might as well stop writing. I hear porn is fun, we could try that instead.

4) Technology specialists in all fields are distracting us from doing what must be done to have any hope of a happy ending: shifting our focus away from symptoms and details to the source of all these technological challenges, our outdated relationship with knowledge.

5) If we don't find a way to meet the challenge presented by violent men, none of this matters anyway.

There must be an immediate ceasefire in Gaza. Pope Francis has said what is needed, now he must do what is needed by going to Gaza and standing for peace, justice and freedom.

Please sign the petition and share widely.

https://chng.it/CRQ7qw4Gzn

Code pink

https://www.codepink.org/cnngaza?utm_campaign=12_15_pali_update_alert_3&utm_medium=email&utm_source=codepink

Let us also support UNRWA. If our governments won’t act in accordance with humanity, then we will. https://www.unrwausa.org/donate Let us do it to honor Aaron Bushnell, or in memory of Hind Rajab.

Let us call for a No Fly-Zone over Gaza!

These are a few small things we can do. If we can do more, let us do more.

You write, "One of the (many) suggestions, almost the least we could ask for, is that we should require all AI-generated content to be labeled."

If it's produced only by a bot, whether voice or image or text, it must be labelled. That's a given. But what of mixed productions? For example, I tell the bot to generate a story and give it some ideas for the story. The bot then generates the story. I then edit the output, add to it and take from it, to create the finished piece. Should we require that the piece be labelled? If you say, "Yes" to that, what if I write a story and get the bot to simply correct grammar and spelling?

yeah, there are a bunch of hard cases there. I don’t have a full answer.

Senator Schumer? He's much too busy trying to depose Netanyahu. And a few years ago, he blocked Trump from refilling the Strategic Petroleum Reserve at $42/barrel. Why on earth are you appealing to Senator Schumer? The guy's a hack.

say what you like, the guy is in charge of what goes to the floor for a vote.
