204 Comments
Mar 31 · edited Mar 31 · Liked by Gary Marcus

What still isn't clear to most people is that with GenAI

(1) useful memorisation and unacceptable training data leakage are technically the same thing

(2) creativity and 'hallucinations' are technically the same thing

They are the same. We just stick two different labels on them based on whether we like or want the result.


As somebody who has been working in computer security for as long as there has been such a field, I can give you a rough idea of how bad the situation is. Think back to the basic elements of the WWII Ultra effort. There was one target, the Enigma cipher system. Breaking it gave its adversaries *everything.* Consider the amount of time and effort the adversaries put into achieving that break.

Now we have three targets: the OpenAI facility, the Gemini facility, and the Claude facility, plus the upcoming Stargate colossus. These are being constructed by organizations who have demonstrated no appreciation for the magnitude of the threats they face and no sympathy whatever for the direct and indirect costs required to respond to such a threat [1]. They are and will be the worst combination of soft target and (if they succeed in attracting enough business to be profitable) valuable target that has ever existed. Meditate on that and then consider the potential adversaries and examine the efforts those entities have mounted in the past in this area.

The true existential risk of GenAI is that it will succeed in being accepted, and by doing so will become essential. If its providers have not already been penetrated they soon will be, and that will be catastrophic for us in the way that Ultra was catastrophic for the Germans. Not through a single, massive event, but by operating at a constant disadvantage, one encounter after another, until ultimate defeat and collapse.

1. See: https://arstechnica.com/security/2024/03/thousands-of-servers-hacked-in-ongoing-attack-targeting-ray-ai-framework/


On point, as always. There’s without a doubt a growing asymmetry between the size of the investments and the returns generated. Getting real business value from this technology, given its current state, is far from a walk in the park. In some ways it continues to be a solution looking for a problem.

Mar 31 · Liked by Gary Marcus

I'm curious how much of that $3B in revenue comes from other AI companies spending money to make calls to GenAI APIs as their core business. Most of these companies are also making negligible amounts of revenue and propped up solely by the bubble.

Mar 31 · Liked by Gary Marcus

Apart from the hustlers and frauds, what primarily drives the hype are the legions upon legions of people who rebut every criticism and every observation of a limitation with "it will only get better". There is this quasi-religious, ahistorical belief everywhere that things only ever get better, that there are no diminishing returns or structural constraints. And I don't see how that belief will ever go away; even in the face of a burst Gen AI bubble, they will just move on to the next thing, just as they moved on from blockchain and NFTs to Gen AI in the first place, because this belief system is cultish and part of their identity, and not evidence-based.


Even Perplexity's "answer engine", which DOES have the capability to look up the most current info, still hallucinates.


I was talking to an HR person who says they find it useful for writing reports. They let an LLM write a draft, then go back over the result and modify it as needed. They find this considerably faster than starting from scratch.

I guess the point is that it can be useful, but we should not expect miracles. However, who makes trillion-dollar investments for anything short of a miracle?

Apr 1 · Liked by Gary Marcus

Reminds me of the “blockchain-all-the-things” hype of 2016-2020, except the GenAI demos are way cooler 😆


I find it fascinating how capitalism actually blocks the development of AGI. As soon as there's an MVP, in the form of GenAI, the whole market simply hypes it unrelentingly and forgets all about investing the big bucks needed to overcome the actual problems of real intelligence. Fascinating to watch.

Mar 31 · Liked by Gary Marcus

"Thus far, the closest thing to a killer app is probably coding assistance, but the income there isn’t massive enough to cover costs of the chips, legal etc."

I was writing out this very sentence in my mind before reading your paragraph.

Now, this is not to brag about my own anticipation of your point (only *very* tangentially), but rather to reinforce that there is actually no money AT ALL in coding assistance:

Downloading StarCoder (or something similar) from Hugging Face takes literally minutes, and depending on how much UI you need, deployment is another 30 minutes or so. If you have a GPU with 8+ GB of VRAM (and chances are, if you are an *actual* coder, you have *more* VRAM than that), it draws about 80 watts when generating, and it won't be generating all the time, so the cost is substantially less than 80 W × 8 h/day × 200 days ≈ 128 kWh per year, which is roughly $20 in the US. (You have the GPU anyway, so don't factor it in.)
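The back-of-the-envelope estimate above, written out as a sketch (the wattage, hours, and days are the comment's figures; the ~$0.16/kWh electricity rate is an assumed US average):

```python
# Upper-bound electricity cost of running a local coding model on a GPU
# you already own, per the figures in the comment above.
GPU_DRAW_W = 80          # watts while generating (worst case: always generating)
HOURS_PER_DAY = 8        # working hours per day
DAYS_PER_YEAR = 200      # working days per year
RATE_USD_PER_KWH = 0.16  # assumed average US rate

energy_kwh = GPU_DRAW_W * HOURS_PER_DAY * DAYS_PER_YEAR / 1000
cost_usd = energy_kwh * RATE_USD_PER_KWH

print(f"~{energy_kwh:.0f} kWh/year, about ${cost_usd:.0f}")
# → ~128 kWh/year, about $20
```

Since the GPU idles between generations, the real figure is well below this ceiling.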

No, coding assistance is not profitable, and that's not even counting that these things make costly mistakes (*duh*, I know...).


Was just recently thinking about how the GenAI hype feels similar to the hype around crypto a few years ago. I'm glad that example was brought up.

Apr 4 · Liked by Gary Marcus

Loved this article. As a builder, I have a lot of the same concerns. Many people are less euphoric as they experience the reality of the models' shortcomings and the insane costs as they jam more and more tokens into their prompts while their product evolves and usage grows. Yet I am not seeing people throw their hands up in the air. Yes, the economics have to sort themselves out, but I still believe that if you don't develop corporate and personal expertise in AI, starting now, you will get left behind, and I am talking about being left so far off the back that it hurts, BIG TIME: no job, competitors outselling you and achieving a lower cost structure, and so on.

I was guessing that venture would have backed out more than it has. https://news.crunchbase.com/venture/monthly-global-funding-recap-february-2024/

Total deals are dropping, but the total amount invested has stayed somewhat flat since the hype started. This may be because venture has to deploy its capital, and this is still the most interesting place to do so, even with the risks.

Overall a good article, but I would temper it a bit. Glad I could help!


While I realize that, taken directly, the following is a false equivalence, this entire LLM fascination reminds me of string theory, but with even more money sunk in. Great hopes that this will be the "unifying" foundation of AGI, yet with scant evidence thus far to back up that assertion; just a lot of pretty equations and clever parlor tricks.


Generative AI, and neural networks in general, don’t deal with referents, and they never truly will. Try asking one to give you a picture of a room with NO elephants in it and see what happens. They don’t deal with concepts; they only pretend to, as is all machines can do. https://towardsdatascience.com/artificial-consciousness-is-impossible-c1b2ab0bdc46


To those who've drunk the Kool-Aid, nothing will convince them otherwise. Any student of Moravec's Paradox should by now see that LLMs, indeed GenAI as a model, lack the foundations needed to distinguish nonsense from reality. The root of the problem is that GenAI never learns how to reason at all; it just scrapes up selected patterns of reasoning that are already present in the vast corpus of predigested human thought provided in its training set. It has no introspection, no ongoing interaction with a lived reality that every human uses to decide whether something is objectively true or just fantasy. GenAIs are the expert systems of this new century; ask Doug Lenat how that worked out.


Reminds me of the “AI Winter” of the mid 1980s.
