The Road to AI We Can Trust

A Few Words About Bullshit

How MetaAI’s Galactica just jumped the shark

Gary Marcus
Nov 16, 2022
“What I find is that it's a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It's as if you took a lot of very good food and some dog excrement and blended it all up so that you can't possibly figure out what's good or bad.”

– Douglas Hofstadter

MetaAI has got a new AI system—trained on a hardcore diet of science, no less—and Yann LeCun is really, really proud of it:

Yann LeCun (@ylecun), Nov 15, 2022:

“A Large Language Model trained on scientific papers. Type a text and galactica.ai will generate a paper with relevant references, formulas, and everything. Amazing work by @MetaAI / @paperswithcode”

Quoting Papers with Code (@paperswithcode):

“🪐 Introducing Galactica. A large language model for science. Can summarize academic literature, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more. Explore and get weights: https://t.co/jKEP8S7Yfl https://t.co/niXmKjSlXW”

Sounds great! I can’t wait to see the fawning New York Times story tomorrow morning.

But…wait…well, um, how do I put this politely? It prevaricates. A lot.

Just like every other large language model I have seen. And, to be honest, it’s kind of scary seeing an LLM confabulate math and science. High school students will love it, and use it to fool and intimidate (some of) their teachers. The rest of us should be terrified.

Perhaps the first to point this out, earlier this evening, was David Chapman. Click through to find out what Galactica fabricated about bears in space!

David Chapman (@Meaningness), Nov 15, 2022:

“🤖 Meta (= Facebook) announced a new "language model" today, trained on millions of scientific papers. Judging from examples in the HN discussion, it's hilariously bad. Language models should model language, not "knowledge." news.ycombinator.com/item?id=336112…”

Minutes after I noticed Chapman’s post, my friend Andrew Sundstrom began flooding me with a stream of examples of his own, too good for me not to share (with his permission):

Pitch perfect and utterly bogus imitations of science and math, presented as the real thing. (More examples: https://cs.nyu.edu/~davise/papers/ExperimentWithGalactica.html)

Is this really what AI has come to, automatically mixing reality with bullshit so finely we can no longer recognize the difference?

34 Comments
Rebel Science
Nov 16, 2022 · Liked by Gary Marcus

No one disputes the fact that Yann LeCun is a praiseworthy deep learning pioneer and expert. But, in my opinion, LeCun's fixation on DL as the cure for everything is one of the worst things to have happened to AGI research.

Deep learning has absolutely nothing to do with intelligence as we observe it in humans and animals. Why? Because it is inherently incapable of effectively generalizing. Objective function optimization (the gradient learning mechanism that LeCun is married to) is the opposite of generalization. This is not a problem that can be fixed with add-ons. It's a fundamental flaw in DL that makes it irrelevant to AGI.

Generalization is the key to context-bound intelligence. My advice to LeCun is this: Please leave AGI to other more qualified people.

Walid Saba (writes The Science of NLU)
Nov 16, 2022 · Liked by Gary Marcus

The LLM charade continues... hopefully not for long.
