55 Comments

I'd be very interested to know some of the thinking and underlying observations that caused Yann to change his mind on these issues. It can be very difficult for people to publicly reverse themselves on almost anything, even more so something so close to their area of expertise, and possibly so intertwined with their scientific reputation.

Jun 4 · edited Jun 4 · Liked by Gary Marcus

Having grown up between a physics lab (my father's) and a computer (my mother was a ...), and then been a spectator to decades of knock-down, drag-out scientific brawling, the biggest research question I was left with was... 'what makes smart people so stupid?'

Kathryn Myronuk does a good talk on 'the failure modes of experts'.

Surely someone has done some interesting research on that topic.

I took a course in Futures Studies to get more information about how this is addressed. The Santa Fe Institute and RAND have both done projects on the topic over the years. It also shows up in psychology, under cognitive blind spots.

I've also been learning how the US Army approaches information gathering under conditions of uncertainty, particularly with emerging technologies (and with the help of emerging technologies). I think this is practiced at an institute in LA; I forget the exact name (Creative Computation? Design? Problem Solving? Games?).

It might be helpful if Yann would point out where you still disagree today.

Jun 4 · Liked by Gary Marcus

Great post, and by far my favorite part is where you end on such a positive note with a call to action to put aside egos and join forces! I respect both you and Yann greatly as scientists, and while I think it's important to set the record straight with respect to the attribution of positions that folks have adopted over time (the irony given the recent Musk vs. Le Cun tête-à-tête re: scientific attribution is not lost on me!), I am much more interested in how we can all move forward to develop AI together. I am excited by Yann's latest agenda on model-based AI (JEPA), as well as hearing Demis' focus on planning and memory in recent talks that he's given. It's time we moved beyond LLMs.

Jun 4 · edited Jun 4 · Liked by Gary Marcus

One of the things I noticed back when I was in academia is that egos become so hardened that it’s like talking to an AI. It's a kind of self-blindness, like in a machine. In fact, you could program a chatbot to simulate some of these professors, since the ego is a mechanical process that can be predicted by probabilities… just set them up in a simulated colloquium and watch them do battle. :))

Not that there’s any of that going on here today…
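
For fun, a toy sketch of that thought (every name and phrase below is invented): two "professor" bots whose rebuttals are drawn from fixed phrase lists by weighted random choice, set loose in a simulated colloquium where neither updates on the other.

```python
import random

# Invented stock rebuttals; each professor's "ego" is just a weighted phrase list
PHRASES = {
    "Prof. A": [("As I showed in my 1987 paper...", 0.5),
                ("That's a strawman of my position.", 0.3),
                ("With respect, you've misread the literature.", 0.2)],
    "Prof. B": [("Your framework simply doesn't scale.", 0.4),
                ("We settled this debate decades ago.", 0.4),
                ("Interesting, but orthogonal to my point.", 0.2)],
}

def reply(speaker: str) -> str:
    # Pick a rebuttal with probability proportional to its weight
    phrases, weights = zip(*PHRASES[speaker])
    return random.choices(phrases, weights=weights, k=1)[0]

# Simulated colloquium: they take turns, talking entirely past each other
for turn in range(4):
    for prof in PHRASES:
        print(f"{prof}: {reply(prof)}")
```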

However brilliant a scientist LeCun may be, his personality comes off as one that uses a combination of hype-riding and contrarianism to position himself as Superior To Everyone Else. I personally know one specimen like that: he's an incel who's profoundly unhappy with himself and can't even admit it, due to the risk of the ego injury it would cause. Needless to say, neither of these specimens impresses me.

He's an incel? Is that even true? If it is, how is that appropriate?

OP is speaking of their friend in that sentence.

I see. However, it still seems like an inappropriate thing to say and it poisons the well.

Jun 4 · Liked by Gary Marcus

Excellent review, once again, of the state of play. The fact that this is the nth time around this track suggests, to me, that it won’t be the last. Why not? The old adage suggests itself: it is hard to convince someone whose paycheck depends on believing X not to believe X. This extends to corporate “people” as well as the people that make up these corporate “people.” At the very least, every real change of mind will have to be presented as a novel discovery, so that earlier critics whose views are tacitly adopted gain no credibility. After all, we will revisit this track again and don't want anyone pre-doubting the point of it all when we make our (n+1)th revolution around it.

Jun 4 · edited Jun 4 · Liked by Gary Marcus

"For a successful technology, reality must take precedence over public relations, for nature cannot be fooled", Richard Feynman, Report of the Presidential Commission on the Space Shuttle Challenger Accident — Appendix F, 1986, https://www.e-education.psu.edu/files/meteo361/file/nasa_report.pdf

author

such a great quote

Doesn't LeCun's paycheck depend at least partly on continuing to make improvements? He has an incentive to acknowledge the limitations of current approaches -- as he is now doing. It's not like he's a government bureaucrat whose job depends on stopping people from doing things.

Yes, partly. It also depends on keeping the hype ship afloat. This is not the first time the AI hype cycle has been with us. The trick has always been to ride the cycle hard until the problems endemic to the enterprise (those Gary returns to again and again) surface yet again; then, first, deny these are real problems this time; second, present these problems as research topics on the way to being solved; third, admit that the prospective solutions are defective; and fourth, jump to the next big thing to restart the cycle. The step between 3 and 4 involves laying claim to the insights of others concerning the problems slowing down the trumpeted this-time-is-different progress. LeCun seems to be at this stage, which suggests the hype cycle has peaked. If so, it will have done so in record time.

Excellent response! I am finding that people do not understand how critical data is. In GeoAI especially, with remote sensing / EO / satellite data, the industry gets excited because they run models on some free data and think the result is good. AI does not magically make data smarter; often it is the opposite. 🤷🏻‍♀️
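
A toy illustration of that point (all numbers made up): if a chunk of the "free" training labels are simply wrong, even a model that fits its training data perfectly is capped by label quality; the model inherits the data's flaws rather than repairing them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground truth for 10,000 "pixels" (stand-in for, say, two land-cover classes)
y_true = rng.integers(0, 2, 10_000)

# "Free" training labels: 30% of them are flipped, i.e. wrong
flip = rng.random(y_true.size) < 0.30
y_train = np.where(flip, 1 - y_true, y_true)

# Best case: a model that reproduces its training labels exactly
y_pred = y_train

# Accuracy against reality is capped at the label quality, ~0.70
print(f"accuracy vs. ground truth: {(y_pred == y_true).mean():.2f}")
```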

Jun 4 · Liked by Gary Marcus

> but at the time, back in 2015, you thought big deep nets were all you need, writing in Nature in 2015 that “big activity vectors, big weight matrices and scalar non-linearities … perform the type of fast ‘intuitive’ inference that underpins effortless commonsense reasoning” — dismissing then any need for specific efforts devoted to commonsense, physical reasoning or world models.

This! They were relying too much on a wrong (naive) understanding of higher mental functions...

I've been watching AI from the sidelines for lo these many years, starting with Blocks World, and I saw firsthand what Google was able to do with the massive amounts of data they have.

Concurrently, I've also seen "AI researchers" (I put that in quotes deliberately) debate endlessly about how to represent knowledge and learning and, generally speaking, come up empty. Once ML had a billion pieces of data to learn from, things went pretty fast, until they hit the wall we're now at. That's my 50,000-foot view.

Jun 4 · Liked by Gary Marcus

As my old postdoc advisor (a theoretical physicist turned synthetic biologist) once lectured me, there are two types of models: physics-based ones that rely on first principles, and statistical models that do not. I found this annoying at the time, since he was having me work on ML/statistical modeling, but he was correct. While LLMs can be more generalizable than his mathematical models, his models are motivated by actual physical understanding. LLMs will never ever be able to reason in such a way...
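
A minimal sketch of that distinction (the scenario and numbers are invented, using plain NumPy): a first-principles model of free fall extrapolates correctly beyond the data it was checked against, while a purely statistical fit to the same observations has no reason to generalize outside its training range.

```python
import numpy as np
from numpy.polynomial import Polynomial

g = 9.81  # m/s^2: known physics, not learned from data

# Simulated observations: free-fall distances for t in [0, 2] s, with noise
rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 2.0, 20)
d_obs = 0.5 * g * t_obs**2 + rng.normal(0.0, 0.05, t_obs.size)

# First-principles model: d = (1/2) g t^2, derived from mechanics, zero fitting
def physics_model(t):
    return 0.5 * g * t**2

# Statistical model: a degree-9 polynomial fit with no physical structure
stat_model = Polynomial.fit(t_obs, d_obs, deg=9)

for t in (1.0, 5.0):  # 1.0 s lies inside the data; 5.0 s is far outside it
    print(f"t={t:.0f}s  physics={physics_model(t):8.1f}  fit={stat_model(t):8.1f}")
# Near the data both agree (~4.9 m); extrapolated to t=5 s, only the physics
# model stays near the true 122.6 m, while the fit can land almost anywhere.
```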

Jun 5 · Liked by Gary Marcus

Gary, super enlightening; one minor point: the final two links (AlphaGeometry and Meta’s Cicero) both resolve to the same post... about Cicero. I could not find anything in your Substack about AlphaGeometry other than a single mention on March 10, 2022, in "Deep Learning is Hitting a Wall"; tho' other stacks have written about it. Thx if there is more commentary on it ICIMI, but not a priority.

author
Jun 5 · edited Jun 5

I never wrote about AlphaGeometry, but may yet; will fix the AG link

author

(AG link is fine in the online version, now)

Great thx. Yes, that's the first place I went to; alas, I'm growing a bit addicted to *your* takes on these things, vs. the originator's 😏

Jun 5 · Liked by Gary Marcus

Good luck, Gary, but I won't hold my breath.

Jun 4 · Liked by Gary Marcus

$$$.

Jun 4 · Liked by Gary Marcus

Courageous, useful. Thank you.

(Sighing) It could have saved us all a lot of argumentation if LeCun et al. had read Fodor, Jerry A., and Zenon W. Pylyshyn, 1988, “Connectionism and Cognitive Architecture: A Critical Analysis”, Cognition, 28(1–2): 3–71. doi:10.1016/0010-0277(88)90031-5

I should still have my copy of the book version. It was assigned reading for a CogSci class as part of my grad work in philosophy at USC. Still extremely relevant. (Maybe it was a collection that focused on their ideas.)

Hi Gary, your link to AlphaGeometry goes to the Cicero article.
