60 Comments
Jul 31 · Liked by Gary Marcus

"The fact that the GenAI bubble is apparently bursting sooner than expected may soon free up resources for other approaches."

I fear that these AI winters don't really work that way, and for good reason. From an investor's point of view, these "other approaches" must make their own believable promises to deliver huge paybacks. And, because they'll still be considered AI, investors will be loath to spend more money on them for quite a while. Also, the alternatives (e.g., neurosymbolic AI) will most likely be considered long-term speculation, yet another thing investors don't want to hear.

This is why LeCun and other pioneers of GenAI labeled it 'Deep Learning': the term AI had been so tainted.

Maybe they can call them "statistical prediction models". Sounds nice and boring and maybe like a thing that really exists :)

Alternatively, "Stochastic parrot" has a nice exotic ring to it. You can buy one off the perch.

Actually, I thought that many people were using that term or something similar.

Agreed, investors will be burned for a while and see through relabeling like "Deep Learning".

Jul 31 · Liked by Gary Marcus

"The fact that the GenAI bubble is apparently bursting sooner than expected may soon free up resources for other approaches, e.g., into neurosymbolic AI" - I truly hope so!

This is just the beginning. The over-hype had to die once reality set in! As long as people can still see, that is. And that is something I am concerned about... can folks still see?

No one is making money. Yet no one dares to drop out of the race as long as there is still hope for improvement and money to burn. The winner is still Nvidia.

Until the tide comes in and everyone realizes they're sitting on excess compute capacity and won't need to expand again for a while, at which point NVDA will likely tank.

While undoubtedly the biggest winner in all of this, NVDA's price assumes continued strong *growth* in compute demand, not merely continued demand.

Gary, I agree with you, this is excellent news. Let the natural selection begin. If AI is to be transformative, the sooner we find the “10,000 ways that won’t work”, the better.

(Warning: sarcasm and mockery ahead…)

Well, Anita and investors need not worry: in 15 years Amy Hood, CIOs, or whoever will be replaced by AIs and robotics tech, and the investors can all go play golf in Hawaii... Isn’t that what is implied, and worth waiting for, when the companies and the economy are all on autopilot, AGI even replacing many researchers, so we can watch it all spew out a vast cornucopia of wealth and products and services and new knowledge and technology and science and social systems – the culminating flowering of The Information Age? (The Machine Age has already happened with politicians, apparently… 😂).

Regarding the next Great White Hope of neurosymbolic AI: mark my words, it will be merely another delay in facing up to the underlying philosophical issues (about intelligence and its link with consciousness, etc.) that almost no one is willing, or interested, to look at – or probably is even aware of, apparently…

Can't come soon enough.

I love Gary, but where I depart is that I don't care whether or not neurosymbolic AI supplants deep learning. I don't even know what "AI" means anymore; it seems to refer either to statistical prediction models or to imaginary future tech. How about we just go back to identifying the concrete tasks that we want computers to perform, and *then* developing the tools for the tasks?

If people wanna call those tools "AI" then fine, whatever. But the "product first, application second" experiment we've been doing for the past year has delivered little more than naive tech executives shoving chatbots into existing products and making them worse. Apparently their next idea is empowering the chatbots to make actionable decisions, which of course will be a disaster. All they can do with AI is offer solutions to nonexistent problems, like how painfully difficult it is to put things on your calendar yourself, or find readily available information with a normal search engine, or set a fucking alarm clock app using your fingers.

What, have you not been yearning to engage in conversation with your alarm clock app? AI can make that dream a reality!

There are no intelligent machines, deep network or neurosymbolic or otherwise. Not today, not in a year, not in a decade, maybe not ever. Best of luck to all those who pursue this dream, but it's a job for academics, not Microsoft and Google. Expecting today's machines to do things that require intelligence will only produce disappointment.

The Chinese navy should be building its own intelligent supercomputers to grab Taiwan (a Chinese Manhattan Project), but it isn't;

Chinese "Ai" is a bunch of frivolous products for dumb consumer kids

The Ivy League has tens of billions of dollars in endowments but doesn't build its own intelligent supercomputers, because it doesn't need hype cycles and stock-market bubbles; big tech needs that.

To your point: 2.6 trillion USD wiped out since Jul 10 across NVDA, TSLA, MSFT, AMZN, GOOGL, AAPL, and META.

Source: https://www.linkedin.com/posts/jasonbelldata_tonight-activity-7224318448938426368-R4v9?utm_source=share&utm_medium=member_ios

My only disappointment is not buying put options when I had the chance :)

Sorry Gary, but investors are going to be too burned to “free up money for other approaches” no matter how promising they may be. It's going to be another 20-30 year AI winter 😞

Jul 31 · edited Jul 31

Will this actually affect the VCs and their allies at all?

Do the VCs have some reason to worry that deep-pocketed investors will start doing proper analysis and due diligence on future investments after getting burned dozens of times on crap like FTX, Theranos, self-driving, blockchain, etc.?

If I understand things correctly, the VCs and their associates get a nice skim off the top of all investor money whether things flop or not.

I'm sure they'd PREFER to get a Google-level unicorn, but a few to several percent off the top of billions from various wealth funds is a nice consolation prize.

Aren't we just going to see them use the money they gained from the current tech bubble to start pumping up the 'next big thing'?

My rudimentary understanding is that if something flops, an investor will have lost money they put in. Not sure what they would be skimming off the top of.

And they will be less inclined to invest in a similar company in the future because they don’t want to lose more money.

I suspect it's for the same reason people keep buying lottery tickets. Plus which, a lot of the money seems to come from various managed funds where the person writing the check isn't actually gambling with their own money.

And aren't there tax shenanigans they can do when they lose everything on a bad investment?

Microsoft Chief Financial Officer Amy “Lightyear” Hood said Microsoft's extraordinary investment in building and leasing data centers to support artificial intelligence would pay off “to infinity and beyond.”

HA! Anecdotal evidence of course, but I use it every day for various writing tasks and have even built 2 GPT assistants. My $21.00 (with tax) subscription to ChatGPT is well worth it. I would imagine the issue is a lag between the deployment of products and in-house training. I just train myself, but most people have a life!

Bubbles do have some inertia, as dotcom and blockchain did. I would not be surprised if human inertia keeps this one alive for a few years. But then again, "prediction is hard, especially about the future" (Niels Bohr, I think). Or "he who predicts lies, even if he is telling the truth" (Arab proverb). So I'm probably wrong.

You do indeed need an element geared to trustworthiness. Logic (symbolic stuff) has that in spades, but on its own it is as problematic as GenAI (see AI history). The marriage of the two isn't a walk in the park, if it isn't the 'hard problem' in disguise. Is there any reasonable proposal for how to actually do this?

So neurosymbolic might be as problematic an avenue as scaling digital neural nets.
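To make the "how would you actually do this" question concrete, here is a toy sketch (Python, purely illustrative) of one often-discussed pattern, generate-and-verify: a statistical model proposes candidates and a symbolic checker accepts only those that satisfy hard constraints. The propose() stub is a hypothetical stand-in for any generative model, not a claim about how real systems are built.

```python
# Toy sketch of the generate-and-verify pattern (illustrative only).
# propose() is a hypothetical stand-in for a statistical/generative model;
# verify() is the symbolic side: it checks a claimed sum exactly.

from typing import Iterable, Optional


def propose(query: str) -> Iterable[str]:
    """Hypothetical generative component: emits candidate answers, some wrong."""
    return ["2 + 2 = 5", "2 + 2 = 4"]


def verify(candidate: str) -> bool:
    """Symbolic component: accept only candidates whose arithmetic checks out."""
    try:
        expr, result = candidate.split("=")
        a, b = (int(tok) for tok in expr.split("+"))
        return a + b == int(result)
    except ValueError:
        return False


def answer(query: str) -> Optional[str]:
    # Fluency comes from the proposer; trustworthiness comes from the verifier.
    for candidate in propose(query):
        if verify(candidate):
            return candidate
    return None


print(answer("What is 2 + 2?"))  # -> 2 + 2 = 4
```

The hard part the comment points to is exactly what this toy glosses over: writing a verify() that covers open-ended language rather than toy arithmetic.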

I always thought “GenAI” was spelled a little too much like “Genie”…

I keep seeing people try to use GenAI to complete tasks that could be done more accurately with a simple for loop. The bubble bursting would free up resources not only for other avenues of AI research but also for more useful software development in general.
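For illustration, a minimal sketch of the kind of task I mean (the ticket list and keyword below are invented for the example): a plain loop gives an exact, reproducible count, with no prompt and no risk of a made-up number.

```python
# Hypothetical example: count support tickets that mention a keyword.
# The sort of job sometimes handed to GenAI even though a simple loop
# does it exactly and reproducibly.

tickets = [
    "Cannot log in after password reset",
    "Billing page times out",
    "Password reset email never arrives",
]

keyword = "password"
count = 0
for ticket in tickets:
    # Case-insensitive substring check; deterministic and auditable.
    if keyword in ticket.lower():
        count += 1

print(f"Tickets mentioning '{keyword}': {count}")  # -> 2
```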
