27 Comments
Feb 2, 2023 · edited Feb 2, 2023 · Liked by Gary Marcus

Epic summary, Gary --- and lots of gold nuggets! I, for one, am glad that your views are finally being heard, despite all the gaslighting over the last three or four years and the wishful thinking still rampant now. Please keep speaking up and bring us more realistic, clear-headed views on AI, because we need them to make real progress.


I have been *stunned* at LeCun, perhaps naively. Through pure chance, I encountered theorists who made me extremely skeptical of the relationship between these types of agents and what was historically meant by "AI" and now by "AGI"; later, working with ML made me even more skeptical. To see people I once considered opponents in this debate silently switching sides has been shocking to me, but probably shouldn't have been (any more than seeing founders trumpet their current fixations and enterprises before later pretending they never bought the hype is).

Love your work, of course!


In my unprofessional, long-time hobbyist opinion, it does feel like GPT has been refined, not improved. That is to say, its strengths have been made stronger. It is a bullshitter, but it has become an excellent, entertaining bullshitter, and there are use cases for excellent bullshitting. I should know; I use those very use cases!

Yet those weaknesses endure, fundamentally limiting the system to cases where a high rate of failure is acceptable, even amusing, and human oversight is ever present. With each refinement, as reliably as the tides, waves of awe and utopian and dystopian prognostication rise, crest, and recede. Reality sets in and the world, under no obligation to follow the tropes of science fiction, remains fundamentally unchanged.


Intelligence is... integrity.


My hypothesis is that LeCun is a large language model.


Gary, my take is that Yann has a new, much more expansive model built on the deep-learning paradigm (many units connected via gradient learning in a long chain). This new model has short-term memory, and lots of other parts that a simple deep net lacks. When he looks at his new fancy model, he sees the things that the pure deep model could not do. So NOW he is saying: LLM != thinking.

Still, I am taken with his new model. It is a very broad sketch, and it will take many years to really make it work, but I think it will behave very differently from a thinking perspective.

And of course he is a full professor. We should just not expect him to acknowledge ANYTHING. :-)


"GPT hasn’t really changed, either."

This is wrong. There was a massive change between GPT-2 and GPT-3: GPT-3 writes proper English, while GPT-2 still had lots of grammatical disfluencies, and its semantic problems were at the sentence level.

GPT-3 has literally solved the problem of grammatical output. I know your whole point is that grammatical output alone doesn't do anything, but that doesn't change the fact that this is a huge step forward. It's an advance over the chess and Go supremacy, because natural languages are evolved systems, not designed systems. AFAIK, GPT-3 was the first software that could properly mimic an evolved, biological system.

It's not AGI (and we'll probably never get anything people recognise as AGI); but it is something.


I’m really curious about where ChatGPT is getting its information.

I ran an informal experiment the other day to test the accuracy of the chatbot, and the results were baffling. I picked 15 old, public-domain stories that I've read in the past few years and asked the chatbot to write summaries of them. All these stories have been available for free in their entirety online for years, at sites like Project Gutenberg and American Literature Online. (If I remember correctly, they were all added before 2021, the cutoff year for the chatbot's training data.) The point is that the chatbot should have had access to all the original texts.

I found that it did a pretty good job with stories written by famous authors like D.H. Lawrence and Joseph Conrad. It produced summaries that were accurate, clear, and concise. If I hadn't known about ChatGPT, I would have assumed that they were written by a professional book reviewer or librarian. For lesser-known authors like Wilkie Collins or E.F. Benson, it did poorly. Sometimes the summaries had a tangential relation to the original stories, but for many of them, the chatbot simply fabricated bizarre scenarios that had nothing to do with the original texts—not even close. (Some were surreal and laugh-out-loud funny.) This leads me to believe that the chatbot is drawing on other people's commentary about the texts, not the *texts themselves*.

What should I make of that? Has anyone else had a similar experience?
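One crude way to automate the eyeball comparison I did by hand: check whether the proper names in a summary actually occur in the source text. This is just a sketch of the idea; `name_overlap` is a hypothetical helper I made up, and the story snippets below are invented mini-examples, not the actual stories I tested.

```python
import re

def name_overlap(summary: str, source: str) -> float:
    """Fraction of capitalized tokens in the summary that also occur in
    the source text -- a crude proxy for 'this summary is about this story'."""
    names = set(re.findall(r"\b[A-Z][a-z]+\b", summary))
    if not names:
        return 0.0
    source_names = set(re.findall(r"\b[A-Z][a-z]+\b", source))
    return len(names & source_names) / len(names)

# Invented mini-example: a faithful summary scores high, a fabricated one low.
source = "Gabriel Conroy attends a party in Dublin with his wife Gretta."
faithful = "Gabriel and Gretta attend a party in Dublin."
fabricated = "Captain Blake sails to Jamaica hunting pirates."

print(name_overlap(faithful, source))    # 1.0
print(name_overlap(fabricated, source))  # 0.0
```

A metric like this would obviously miss paraphrased summaries and reward name-dropping, but it would at least flag the wholesale fabrications I saw for the lesser-known authors.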


Misleading, again. Read it for yourself. LeCun is not defending LLMs. He is pointing to his research paper, which has nothing to do with LLMs and which, according to him, shows that machines can learn from data.

See the paper: "Tracking the World State with Recurrent Entity Networks". https://arxiv.org/abs/1612.03969

At this stage, the question is whether you actually understand how LLMs relate to other architectures, or whether, for you, anything labeled "neural net" is the same thing.


All quite standard human reaction to controversy about technology. We could all stand to be more humble in recognizing our limitations, generous in granting others recognition of their achievements, and civil in our debates. Alas, we would lose both power and popularity.


Just read an article on the different types of gaslighting, and LeCun's tweets (knowingly or unknowingly - who knows) cover about half of them; the other half includes things like "religious gaslighting" 😅 That's one of the reasons the Twitterverse is not for me!

Feb 2, 2023 · edited Feb 2, 2023

I am reminded of Hanlon's razor, "never attribute to malice what you can attribute to stupidity," and of Bonhoeffer's words on stupidity versus malice (Bonhoeffer wrote, for instance, "There are human beings who are of remarkably agile intellect yet stupid"). I've noticed that the people with the naive stories about AI generally tend to believe them. That was probably true of LeCun as well. He is now, one hopes, being educated by reality, a reality that some were already aware of.

See https://www.linkedin.com/pulse/stupidity-versus-malice-gerben-wierda/, which relates Bonhoeffer's observations to what Stanislas Dehaene has so brilliantly uncovered and written about human intelligence.

"Against stupidity we are defenseless. Neither protests nor the use of force accomplish anything here; reasons fall on deaf ears; facts that contradict one's prejudgment simply need not be believed; in such moments the stupid person even becomes critical, and when facts are irrefutable they are just pushed aside as inconsequential, as incidental. In all this the stupid person, in contrast to the malicious one, is utterly self-satisfied and, being easily irritated, becomes dangerous by going on the attack." — Bonhoeffer

Feb 2, 2023 · edited Feb 2, 2023

Yep. For students of (the history of) AI, the truly frustrating thing is the constant repetition of this pattern. I have also observed that the people who work in this area don't always gaslight as much as the loud voices that float to the top.

Aside: I learned from a friend once: "there are many ways to rise to the top; one of these is being a lightweight." (Originally in Dutch, "one of these is by lacking weight," since "weight" carries the double meaning of physical weight and intellectual weight in that sentence.)

“Never agree with the aforementioned critics but start mimicking their approach.” — this happened to Dreyfus. His critique was privately listened to and publicly scorned.

"Observe how aforementioned critics gain even more relevance and popularity for being right." — this never happened to Dreyfus. And it doesn't sound right for any reasonable value of "popularity". Which critic who (correctly) corrected the fairy tales we *like* to hear has *ever* become popular? I suspect this is unavoidable for psychological reasons: the brain craves reinforcement, and the critics by definition come late to the game.


AI will always be garbage intelligence. All the WEFers in Davos are going to be replaced with chatbots that no one listens to.
