62 Comments
Apr 6 · Liked by Gary Marcus

Equally important, however, is that we do something that should always have been done for software but never was, because the benefits of skipping it were judged to outweigh the drawbacks: establish responsibility. If a faulty part in a car goes awry, resulting in a severe accident or the death of the driver or a passenger, the car company can be held liable even if it wasn't aware of the problem in advance. Software companies escaped this scrutiny and liability for various reasons. But we are now well down that slippery slope, because these GenAI systems are, in fact, software. The companies racing to release what are clearly incomplete technologies should be held liable for any damage they cause. Yes, I realize their systems are unpredictable, but by the same token those systems should not be released until the companies find better ways of predicting how they arrive at answers, how they go about creating narratives, and so on. This notion of a free pass, with society left to deal with the fallout, is nuts. So many people are taken with the “idea” of the benefits, because they got an answer to a question in well-written English, that they ignore the core issues here. The net benefits do not outweigh the net drawbacks once we view this on a time scale longer than a month, and people need to understand that 😉

Apr 7 · Liked by Gary Marcus

I find it difficult to believe that people who didn't see the need to regulate against Fox News or British newspapers propagating falsehoods will see the need to regulate against LLM services propagating falsehoods. The generations currently in charge (and I don't just mean politicians but also journalists, owners of news media, and millions of voters) are completely unfamiliar with seriously bad times and see everything as a game without real stakes, as about maximising clicks or getting one over on the other side ('liberal tears') rather than ensuring good material outcomes.

Things will have to get much worse, I fear, perhaps to the degree of a global economic crisis on the level of 1929 and another world war, before the insight that making sound decisions based on good information is important for our collective and individual welfare is entrenched again for two or three generations.


"firehose of falsehood" - this is like a Denial-of-Service attack but for people

Apr 6 · Liked by Gary Marcus

People will smugly tell you that disinformation already exists, because they don’t understand that disinformation at scale is very different, just as nearly anything can become toxic if the dose is high enough. OpenAI and others understand that the same lie repeated starts to sound true (hence their insistence on the inevitable greatness of their products). And too many of us suffer from hubris, believing that we ourselves will recognize bad information because we are better at spotting it than everyone else.

When I hear a journalist say that genAI is an excellent research assistant, my ability to trust that journalist sinks. Clearly he isn’t that great at research and doesn’t value it. He’ll be laundering the falsehoods of genAI and placing them under the masthead of whatever publication employs him.

The lack of provenance in ChatGPT’s answers has been horrifying me since 2022.


Really like your stuff. But you’re ignoring the fact that the primary source of effective disinformation is the government and those in power on any given issue. Whether this problem gets relatively worse or better under some sort of AI censorship regime is an open question, but one that must be addressed.

Apr 7 · Liked by Gary Marcus

Nice, mom will be very proud!! 👍🏿


Has anyone served LeCun his crow and humble pie? I'd be happy to.

Apr 6 · Liked by Gary Marcus

Fully agree with you on this one. "Social weirding" and its sibling "political weirding" are about to ramp up the weirdness across the board. https://tamhunt.medium.com/the-seven-stages-of-the-aipocalypse-1959390816fe


I agree that it would be nice if AI-generated content were labeled. But I don't think any such requirement is enforceable. It's too easy to post things anonymously or through several layers of cutouts. Not to mention, the worst actors in this field are governments, including our own. You won't be putting those in jail, or even fining them.


Won't this just inevitably lead to people becoming much more suspicious of anything they read that is not verifiably from a known reliable source? Could this be how reputable news sources finally recapture some of their market?


The image following 'from yesterday' shows as "Image not Found" over here.

The 'firehose of falsehoods' directly attacks basic human instinct, which is why it is so dangerous.

The information war from the Kremlin has been running since at least 2015 (with the Ukraine Association Treaty referendum in The Netherlands as a trial run, and Brexit the follow-up).


Russia, Russia, Putin, Putin - of course, they are the only ones using the Firehose. Give me a break.


Marcus writes, "On Tuesday (or so) I have an essay coming out in Politico, about what the US Congress should do to make sure that we get to a good, rather than bad, place with AI. "

1) We live in a globalized world today where everything is connected to everything else, and the US Congress has jurisdiction over roughly 5% of the world's population.

2) We'll get to the same place with AI that we always get with everything else. Most people will try to do good with AI, while others will dedicate themselves to generating harm. As the scale of technological power grows, the odds progressively swing in favor of those intending harm.

3) Trying to deal with such challenging technologies one at a time is a loser's game, because an accelerating knowledge explosion will generate new challenges faster than we can meet them. If we're going to ignore this, we might as well stop writing. I hear porn is fun; we could try that instead.

4) Technology specialists in all fields are distracting us from doing what must be done to have any hope of a happy ending: shifting our focus away from symptoms and details and toward the source of all these technological challenges, our outdated relationship with knowledge.

5) If we don't find a way to meet the challenge presented by violent men, none of this matters anyway.


There must be an immediate ceasefire in Gaza. Pope Francis has said what is needed, now he must do what is needed by going to Gaza and standing for peace, justice and freedom.

Please sign the petition and share widely.

https://chng.it/CRQ7qw4Gzn

Code Pink

https://www.codepink.org/cnngaza?utm_campaign=12_15_pali_update_alert_3&utm_medium=email&utm_source=codepink

Let us also support UNRWA. If our governments won’t act in accordance with humanity, then we will. https://www.unrwausa.org/donate Let us do it to honor Aaron Bushnell, or in memory of Hind Rajab.

Let us call for a No Fly-Zone over Gaza!

These are a few small things we can do. If we can do more, let us do more.


You write, "One of the (many) suggestions, almost the least we could ask for, is that we should require all AI-generated content to be labeled."

If it's produced only by a bot, whether voice or image or text, it must be labelled. That's a given. But what of mixed productions? For example, I tell the bot to generate a story and give it some ideas for the story. The bot then generates the story. I then edit the output, add to it and take from it, to create the finished piece. Should we require that the piece be labelled? If you say, "Yes" to that, what if I write a story and get the bot to simply correct grammar and spelling?


Senator Schumer? He's much too busy trying to depose Netanyahu. And a few years ago, he blocked Trump from refilling the Strategic Petroleum Reserve at $42/barrel. Why on earth are you appealing to Senator Schumer? The guy's a hack.
