The Road to AI We Can Trust

This Week in AI Doublespeak

A few more words about bullshit

Gary Marcus
Mar 18

Wikipedia’s definition of Doublespeak

Has tech become the new politics?

Here’s some first-class AI doublespeak, straight from the Ministry of Damage Control, mostly from just this past week:

CNBC Headline

Except when those wrong answers are not.

Geoff Brumfiel @gbrumfiel
Ok, I mean this is pretty incredible. GPT-4 has invented a news story about Iran concealing a giant nuclear reactor at it's main research site in Arak. It also claims I wrote the story! If Iran DID conceal a reactor, that would be major news and trigger all kinds of alarm.
7:21 PM ∙ Mar 15, 2023

Very useful. Very cool.

§

Pay no attention to the messes we make, says OpenAI¹:

This is an example of the tu quoque fallacy, a special form of distraction by ad hominem argument (March 15, at TechCrunch). Would you use a calculator that makes mistakes?

§

And by the way, no need to worry, says OpenAI, because our models can reason:

Interview with ABC News, March 16, mischaracterizing what GPT does

True, LLMs don’t (just) memorize, and true, their models make for lousy databases (for a database, a hallucination is an outright failure). But if the definition of reasoning is to derive valid conclusions from known facts, GPT-4 frequently falls short there, too.

Leon Palafox @leonpalafox
This is wrong, very very wrong. The rates impacted the bank, not the startup and there has yet to be a bankruptcy ⁦@GaryMarcus⁩
2:11 PM ∙ Mar 12, 2023

So-called “reasoning” by free association, even constrained by a giant database, isn’t really reasoning:

François Chollet @fchollet
So far all evidence that LLMs can perform few-shot reasoning on novel problems seems to boil down to "LLMs store patterns they can reapply to new inputs", i.e. it works for problems that follow a structure the model has seen before, but doesn't work on new problems.
9:11 AM ∙ Dec 23, 2022

§

Here’s another form of Orwellian posturing. Microsoft’s website tells you this:

“We are committed to making sure AI systems are developed responsibly and in ways that warrant people’s trust.” (Screenshot from this morning.)

But their actions speak differently.

Platformer: “Microsoft just laid off one of its responsible AI teams”
Microsoft laid off its entire ethics and society team within the artificial intelligence organization as part of recent layoffs that affected 10,000 employees across the company, Platformer has learned. The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design at a ti…
Zoë Schiffer and Casey Newton

§

Once upon a time (February 27, to be specific) OpenAI promised to take good care of us little people:

Nowadays, what we get instead is the old “It’s not our fault; we warned you things might go wrong” excuse:

Fabio Chiusi @fabiochiusi
“I'm particularly worried that these models could be used for large-scale disinformation," said the creator of one of the models that can be used for large-scale disinformation
abcnews.go.com: “OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: ‘A little bit scared of this’”
The CEO behind the company that created ChatGPT believes artificial intelligence will reshape society as we know it, but admits there are extraordinary risks.
7:47 AM ∙ Mar 17, 2023

Nabil Alouani nails what’s really going on:

Nabil Alouani @Nabil_Alouani_
@GaryMarcus OpenAI: "We're worried about disinformation." Also OpenAI: "We released the perfect tool to generate endless fake news and propaganda. We don't have any significant way to identify AI-generated bullshit. Oh and we won't disclose anything about how our models work. Good luck!"
12:14 PM ∙ Mar 17, 2023

Note that the same logic applies to essentially every potential harm OpenAI recently warned of. None are solved, and we are told nothing about how the models work.

Some of the many risks of GPT-4, but now in more believable, more persuasive form

Good luck, humans!

Gary Marcus (@garymarcus), scientist, bestselling author, and entrepreneur, is deeply concerned about current AI but genuinely hopeful that we might do better.

Watch for his new podcast, Humans versus Machines, this Spring.

¹ The line attributed to Greg Brockman (“it’s not perfect, but neither are you.”) does not appear in the actual news story (aside from the headline), but does appear in the live demo from which the news story was drawn.

41 Comments
TheOtherKC · Mar 18

Funny how the risk posed by AI is exactly high enough for them to carefully guard their methods, but never so high that they have to refrain from releasing potentially profitable products.

BadCat · Mar 18

* " . . . we create a reasoning engine, not a fact database…” Gaslighting.

* Brockman's "neither are you" - extraordinary comment.

Anymore, big profile actors often seem like manure spreaders, whether in politics or business.
