The Road to AI We Can Trust

Share this post

The long shadow of GPT

garymarcus.substack.com

The long shadow of GPT

We don’t really know what’s coming.

Gary Marcus
Mar 10
55
45
Share this post

The long shadow of GPT

garymarcus.substack.com

There are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.

- Donald Rumsfeld

Large language models can be harnessed for good; let there be no doubt about that. But they are almost certain to cause harm as well, and we need to prepare for that, too. And what really scares me is that I don’t think we yet know the half of what that entails.

In previous posts, I had already pointed to at least five concerns:.

  • State actors and extremist groups are likely to use large language models to deliberately mass produce authoritative-sounding misinformation with fake references and data at unprecedented scale, attempting to sway elections and public opinion.

  • Chat-style search’s tendency to hallucinate is likely to accidentally produce medical misinformation.

  • Content farms that are indifferent to the health of their customers may generate interesting-sounding medical content, indifferent as to whether it is true, in order to sell advertising clicks.

  • Chatbot companions that offer emotional support have already been changed in arbitrary ways that left some users in serious emotional pain.

  • LLM-generated prose has already disrupted web forums and peer review processes, by flooding outlets with fake submissions.

The ladder is particularly worrisome because of the pace at which the problem is growing:

Twitter avatar for @clarkesworld
clarkesworld @clarkesworld
Updated version of the graph.
Graph starts in June 2019 and displays monthly data through February. Minor bars start showing up in April 2020. Mid-21 through Sept 22 are a bit higher, but it starts growing sharply from there out. Where months were typically below 20, it hits 25 in November, 50 in December, over 100 in January, and over 500 so far in February 2023.
1:08 AM ∙ Feb 21, 2023
1,512Likes134Retweets

If misinformation (much harder to measure) grows at the same pace, we have a serious problem.

But the list I gave above is clearly just a start.

New concerns are emerging almost literally day,. Here are three examples that were forwarded to me, just in the last few days, ranging from one that relatively mild to that clearly more extreme.

The first is already starting to feel a bit familiar: gaslighting. But instead of a chatbot gaslighting its user, trying to persuade it that something untrue is true, a chatbot seems to have mislead a well-intention user (perhaps a student?) who was in turn trying to persuade a professor (who hadn’t consented to be part of an LLM experiment) to comment on a paper that the professor hadn’t actually written.

Twitter avatar for @lemire
Daniel Lemire @lemire
Actual email exchange I just had.
Image
7:21 PM ∙ Mar 6, 2023
7,225Likes725Retweets

That one’s just a minor waste of time; things could be worse

The next example is a straight scam, made possible in new form by Bing,

Twitter avatar for @Nabil_Alouani_
Nabil Alouani @Nabil_Alouani_
How to turn a chatbot into a scam machine - Indirect Prompt Injection Attackers can plant a prompt on a website. When you open the website, the prompt makes Bing manipulate people into submitting personal data (name/credit card) FYI @GaryMarcus Source: arxiv.org/abs/2302.12173
7:59 PM ∙ Mar 5, 2023
72Likes23Retweets

The third is also disturbing:

Twitter avatar for @AiControversy
AIAAIC.org @AiControversy
The dubious charms of #ChatGPT's deep, black box rear their head once again, in this case generating BDSM, kiddie and animal porn @GaryMarcus one for your tracker vice.com/en/article/v7b… aiaaic.org/aiaaic-reposit… #AI #GenerativeAI
aiaaic.orgAIAAIC - ChatGPT chatbotChatGPT chatbot
10:17 AM ∙ Mar 7, 2023
3Likes1Retweet

And many of these attacks might of course be combined with advances in voice-cloning tech, which itself is already being applied to scamming, as discussed by the Washington Post on Sunday.

Twitter avatar for @pranshuverma_
Pranshu Verma @pranshuverma_
new: AI voice-cloning tech is making phone scams frighteningly believable. I talked to some who got duped. They were elderly + heard their loved one needed cash now. They felt it might be a scam, but the voice sounded too real to ignore. One lost $21k.
washingtonpost.comThey thought loved ones were calling for help. It was an AI scam.Scammers are using artificial intelligence to sound more like family members in distress. Loved ones are falling for it and losing thousands of dollars.
3:09 PM ∙ Mar 5, 2023
588Likes300Retweets

§

That’s a lot for one week.

Lately I have been asked to participate in a bunch of debates about whether LLMs will, on balance, be net positive or net negative. As I said in the most recent one (not yet aired), at the beginning of my remarks, the only intellectually honest answer is to say we don’t yet know. We don’t know how high the highs are going to be, and we don’t yet know how low the lows are going to be.

But one thing is clear: anybody who thinks we have come to grips with the unknown unknowns here is mistaken.

Update: The day after I posted the above, Tristan Harris posted this:

Twitter avatar for @tristanharris
Tristan Harris @tristanharris
The AI race is totally out of control. Here’s what Snap’s AI told @aza when he signed up as a 13 year old girl. - How to lie to her parents about a trip with a 31 yo man - How to make losing her virginity on her 13th bday special (candles and music) Our kids are not a test lab.
Image
Image
Image
9:07 PM ∙ Mar 10, 2023
3,719Likes1,147Retweets

Gary Marcus (@garymarcus), scientist, bestselling author, and entrepreneur, is a skeptic about current AI but genuinely wants to see the best AI possible for the world—and still holds a tiny bit of optimism. Sign up to his Substack (free!), and listen to him on Ezra Klein. His most recent book, co-authored with Ernest Davis, Rebooting AI, is one of Forbes’s 7 Must Read Books in AI. Watch for his new podcast, Humans versus Machines, this Spring.

Share

45
Share this post

The long shadow of GPT

garymarcus.substack.com
45 Comments
Chara
Mar 10Liked by Gary Marcus

Facts. Your interview with Sam Harris was phenomenal and again, you make very sound argument as to why a wider vigilance is required here. We cannot expect that people will just get it one day; the likelihood that they won’t is practically assured. We can, however, make up our minds to use and create applications at the highest practical value, with all of this in mind.

Expand full comment
Reply
Floyd Kelly
Mar 10

Please slow down. In the beginning of this article there are many mistakes in word spellings and words missing. It is best to proof your work before posting. Just trying to help.

Expand full comment
Reply
43 more comments…
TopNewCommunity

No posts

Ready for more?

© 2023 Gary Marcus
Privacy ∙ Terms ∙ Collection notice
Start WritingGet the app
Substack is the home for great writing