27 Comments
Feb 18, 2023 · edited Feb 18, 2023 · Liked by Gary Marcus

Reminds me of the story in P.T. Barnum's autobiography of launching a "lottery" to get rid of a bunch of old bottles and blackened tinware. The intrigue served as a distraction. "The tickets went like wildfire," he wrote. "Customers did not stop to consider the nature of the prizes." Bing's "P.T. Barnum" play took advantage of the huge hype wave around ChatGPT, and then Google allowed itself to be pressured into playing the game, too.


It is unavoidable that this will crash, I think. When that happens (as it has happened before), the world will switch, again, from optimism to pessimism. These are both shallow (so typically human) reactions. What we need, though, is realism, which is not particularly easy to establish given how human minds work.


And while the optimism wanes, plenty more meaningful (and limited) applications of big data and deep nets will be introduced, and people won't think of them as AI, continuing the long trend of moving goalposts.


Haha. I've always felt that we've been in an AI winter since the beginning of the field in the 1950s and spring never came. The AI gods have decreed that, unless humanity figures out how to build an intelligent robot that can walk into an unfamiliar kitchen and fix a breakfast of bacon and eggs, orange juice and coffee, the AI winter will continue.

As my French friend is fond of saying, "Merde! When is spring coming?"


Funny; I woke up to your morning post saying the same thing, and I lived through the second AI winter of the late '80s. But too many dollar bets might make this a shallow "trough of disillusionment".


Along with the failure to meet expectations, there is too much money riding on this bet. That may mean any small improvement gets carefully guarded by the big players for "market advantage", which will slow down research publication and accelerate the AI winter: https://aboutailatam.substack.com/p/ai-winter-is-coming


Too much money also attracts charlatans and fraudsters who tend to make big claims that never pan out. It happened in the self-driving car industry. Billions have been wasted and many more will continue to be wasted.


In part because there is _some_ success. The cars _can_ drive themselves in _limited_ ways! The waste comes from overextended expectations and sunk costs. But it isn't as if zero progress has been made; we are instead talking about the boundaries of applicability. Car manufacturers selling a parallel-parking feature are a few steps removed from full self-driving ... but that is also something of value that people will pay money for. And so investment will continue, and so will the hype, as long as incremental improvements keep coming out.


I suspect that they will find useful applications for LLMs, so perhaps not a winter. A cold snap, perhaps? An adjustment period, for sure.


Agree. AI is not a "one-size-fits-all" technology. Some specific and well-bounded use cases are needed first.


You mean like using Watson in medicine?

(Watson failed miserably at medicine and was sold for scrap.)

Somewhat seriously, the NN model (a particular subset of SIMD computations), while having absolutely nothing to do with neurons (hint: it takes an enormous NN to simulate a single real neuron), is a mathematically and computationally interesting beast whose actual properties should be investigated from mathematical and computational standpoints. But the idea that because you used an SIMD computation to find possibly interesting phases in multi-metal alloys (a recent paper in Science), you were doing "AI" or using "AI techniques" is silly, obnoxious, and bad science.
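The "SIMD computation" characterization is concrete: the forward pass of a dense layer is just one data-parallel matrix multiply plus an elementwise nonlinearity. A minimal NumPy sketch (sizes and names are purely illustrative, not from the comment):

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_layer(x, W, b):
    # One vectorized, data-parallel operation over the whole batch:
    # SIMD-style arithmetic, nothing resembling biological neuron dynamics.
    return np.maximum(0.0, x @ W + b)  # ReLU(x W + b)

# Hypothetical sizes, chosen only for illustration.
batch, d_in, d_out = 4, 8, 3
x = rng.standard_normal((batch, d_in))
W = rng.standard_normal((d_in, d_out))
b = np.zeros(d_out)

print(dense_layer(x, W, b).shape)  # (4, 3)
```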


You are right, Watson was indeed a failure, with many lessons to bear in mind.

However, my comment relates to a specific use-case approach, such as the FDA's Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices list. It contains a list of AI/ML-enabled medical devices marketed in the United States, offered as a resource to the public about these devices and the FDA's work in this area. (There are other specific use cases in other areas.)

https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices

Development and use of AI applications could perhaps be a step-by-step process, as has historically been the case for science and technology: a path full of errors, failures, and successes. Does that smell a little like an AI winter? Surely. The challenge is what paradigm change is needed to overcome this, and who, or which company, will have the insight to find it.


Replied on Twitter this time because my eye roll at Mr. Goebel's comment was just too big for Substack.


I, too, smell frost in the air.


Only in the comments section of Gary Marcus' Substack do I get to play the part of relative AI optimist.

For all the limitations and weaknesses of LLMs, I expect development and refinement of their strengths to continue: basic coding assistance, carrying out dull and simple coding tasks, writing form letters, summarizing inputs, simple pastiche, and riffing on ideas and themes. Those with higher ambitions than LLMs can satisfy will have to focus their work elsewhere.

author

ha ha. i don't disagree. but that perhaps doesn't merit the price tag


For reasoning and explanations, please see www.executable-english.net

author

someone flagged a similar post from you as spam; i let one go but i will start taking them down if you persist


Siri, remind me of this in 6 months.


Do these marketing people on Twitter really think we are stupid?

I certainly believe there is an AI spring in which ChatGPT replaces those marketing puppets; apart from that, we still have the same artificial stupidity as always.


What is the compelling benefit of AI that justifies creating what could turn out to be yet another existential threat? If you wish to be a critic, go for the throat. :-)


Hi there,

I have been reading your posts about the problems with current approaches to AI, and I appreciated your explanations. However, I think it would be more fruitful if you actually contributed to solving at least some of the issues rather than just pointing out the problems. If not you, then who?


To think that they are separate/different is wishful thinking. Constructing meaning from point-cloud or camera data, word-sequence data, etc. involves more than mere symbol-shoving. Faking intelligence isn't the same as being intelligent.


Damage control has already started. From Roose's article:

In an interview on Wednesday, Kevin Scott, Microsoft's chief technology officer, characterized my chat with Bing as "part of the learning process," as it readies its A.I. for wider release.

"This is exactly the sort of conversation we need to be having, and I'm glad it's happening out in the open," he said. "These are things that would be impossible to discover in the lab."
