70 Comments
Mar 30 · Liked by Gary Marcus

"A $100B LLM is still an LLM"

100% this. There have been no significant paradigm shifts that move us away from hallucinations and similar problems with LLMs.

History repeats itself. During the previous cycle, in the '80s and '90s, big money was spent on infrastructure and research programs, at that time by governments: Japan's Fifth Generation computer project and the US Strategic Computing Initiative. Just like today, they didn't have a clear strategy back then; the hope was that AI would somehow "emerge" :)

Mar 30 · edited Mar 30

Blind faith in scaling laws that they don't understand plus the promise of owning the means of production is an irresistible combination. Losing their $100B is the only way that they'll learn.

Mar 30 · Liked by Gary Marcus

A bad thing about this prospective plan is the signal it might send to competitors and companies that are trying (in vain?) to capitalize on LLMs, AI, or whatever the name. Money that companies could spend on useful endeavors will probably be squandered. A technology that will likely cost jobs (whether implemented successfully or not), that treats humans as externalities, and that will likely not result in a more stable food supply, quality of life for the masses, and so on, sounds to me like a shortsighted investment driven by greed, narcissism, and perhaps idiocy, resulting in a positive feedback loop that won't end well, I'm afraid. But if it can happen, it will. These companies have deep enough coffers to steer and manipulate public opinion and to buy or nudge politicians.

Just think about the public transit system all that money could have built…

Mar 30 · Liked by Gary Marcus

"Presumably the operating idea here is that (a) $100B will equal AGI and (b) that customers will pay literally anything for AGI. I doubt either premise is correct." It just struck me how important that second part is. If AGI amounts to 'another human,' then its value is equivalent to that of 'another human,' and for humans we already know 'pay anything' is nonsense.

Apr 1 · edited Apr 1 · Liked by Gary Marcus

And if an octillion parameter LLM in 2031 ruins lives in a similar manner to the tragedy of subpostmasters and subpostmistresses (“Mr Bates vs The Post Office”) - in this case, the fault is LLM hallucination - will there be outrage?

Or will critics, this time, because the sunk cost is so deep, have their mouths taped up, their bodies broken and their families and lives torn asunder by a sophisticated army of lawyers, corrupt politicians, troll farms and other thugs (maybe literal thugs, and autonomous machines)?

Time to cash in on this next blockbuster movie idea… does anyone have James Cameron’s number?! 😂

The whole thing with AI seems like pure hype. No one is investing the kind of money that would be needed to really rip apart the brain and work out how intelligence emerges. Why bother when you can churn out another cruddy LLM and just hype it TF?

Capitalism and market forces killed the search for AGI as soon as a minimum viable fake became available.

Quite right. This 'single approach' (token/pixel statistics) is going to fail as a route to AGI.

Taking OpenAI's own last published scaling numbers (for GPT-3) and using the Winogrande benchmark (which contains some real understanding challenges), I have calculated (quick and dirty) from those numbers that we may need something on the order of ~3,500,000,000,000,000 *times* as much compute as we currently have. See https://ea.rna.nl/2024/02/13/will-sam-altmans-7-trillion-ai-plan-rescue-ai/ : the effect of scaling is something like the log of the log of (the log of) size. Going from a $10B to a $100B investment is going to do nothing.
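To make the commenter's point concrete (this is a toy illustration, not their actual calculation): if benchmark accuracy grows only logarithmically with scale, closing even a modest accuracy gap requires multiplying scale by an astronomical factor. The coefficients `a` and `b` below are invented for illustration and are not OpenAI's published numbers.

```python
import math

# Toy logarithmic scaling law: accuracy = a + b * ln(scale).
# Coefficients are made up for illustration; they are NOT
# OpenAI's published GPT-3 numbers.
def accuracy(scale, a=0.50, b=0.02):
    return a + b * math.log(scale)

# Invert the toy law: what scale reaches a target accuracy?
def scale_needed(target, a=0.50, b=0.02):
    return math.exp((target - a) / b)

# Even with these gentle made-up numbers, going from 50% to 95%
# accuracy multiplies the required scale by e**22.5, roughly 6e9.
factor = scale_needed(0.95) / scale_needed(accuracy(1.0))
```

With a law closer to log-of-log(-of-log), as the comment suggests, the required multiple explodes far faster still, which is the point: linear increases in spending buy vanishing benchmark gains.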

Mar 30 · edited Mar 30

Looking at self-driving cars through the lens of investor ROI is short-sighted, especially coming from somebody who cares as much about societal well-being as you do.

Self-driving cars are hard. Harder than any of the hype predicted. But you have offered no workable ideas for what could have been done differently.

The progress in self-driving cars is very good. Think of them as the James Webb Space Telescope: it will get done, it will be good, and it will be very much worth it. And investors will get their money. The patient ones.

hi gary; forget about all the ‘hopehype’ and write your ‘positive’ essays. we all need them!

Gary, I'd suggest a bit more research on self-driving before you write more. Tesla currently requires a human in the loop, as do almost all companies except the few that have obtained permits from the CPUC in California and some other jurisdictions to operate without human drivers. Tesla is pursuing a far more difficult non-sandboxed approach, and that long and arduous effort is about to pay off: v12 FSD has just been released and is, by all accounts, 99% full self-driving.

We can expect permits for no human supervision in the next year or two, at which point the robotaxi revolution will begin in earnest. While I am strongly opposed to AGI, I support narrow AI applications like FSD as being, in general, a benefit to the environment, human safety, and perhaps even productivity (or at least good sleep), though of course we will see a massive amount of job loss even from this narrow type of AI being rolled out globally in the coming decade.

I don't know where to post this.

I wondered if anyone remembered this historical example of AI fakes from the 1980s, and a reference to it.

In the 1980s, the Nobel Prize-winning economist Herbert A. Simon wrote an AI program named Newton that was able to curve-fit the motion of the planets to an inverse-square law. Simon concluded that his program was as smart as the human physicist Isaac Newton. Some physicists remarked that Newton didn't invent the inverse-square law of attraction; he invented mass. Does anyone remember this AI program?

Apr 2 · edited Apr 2

I agree that customers wouldn’t pay *literally* anything for AGI, but they’d probably pay an awful lot. If you could replace any remote worker in your company - from a software tester up to an executive - with a bot that never sleeps and never takes a vacation, what would that be worth?

I have no idea what the cost to OpenAI would be to run a single instance of an AI worker, but if it’s less than (to make up a number) $100,000 a year, I’m guessing they can find enough customers to make a nice profit.

If, that is, it’s actually AGI.
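As a back-of-envelope sketch of that bet (every figure below is made up for illustration): the business works whenever the cost of serving one 'AI worker' sits well below what a customer will pay for it, though even healthy per-seat margins are slow to recoup a $100B build-out.

```python
# All figures are hypothetical, chosen only to illustrate the margin math.
price_per_year = 100_000   # hypothetical price per AI-worker seat
cost_per_year = 30_000     # hypothetical compute/serving cost per seat
seats = 10_000             # hypothetical number of customers

annual_profit = seats * (price_per_year - cost_per_year)
# 10,000 seats at a $70,000 margin is $700M/year -- a nice profit,
# but on its own more than a century away from paying back $100B.
```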

Colorless green ideas sleep furiously.
