Much agreed. Premature commitment is what locks a species down the wrong evolutionary path. “Our collective future” should be decided and built by “our collective”, not by any single man with a heroism complex. We’ve got a few too many of those in history…

I agonized over how directly to say that…

Very well said and it is truly amazing that Altman doesn’t see that. The level of arrogance is astounding.

As I said in another comment, it also shows that Altman doesn't really understand how machine learning works.

Text is not shallow when it comes to programming. But the irony is that the success of LLMs at programming is not a triumph of machine learning; it is due to the quality of the programs they were trained on, which were engineered with good old software engineering methods.

I agree with you that it is shallow in this sense. LLMs cannot be more reliable and trustworthy than the data they are trained on. The fact that LLMs have emerged as powerful instruments for programming is a testament to the half-century-long collective effort of the software engineering community to create robust open-source software.

It is always odd to see somebody's free speech being defended by telling others they shouldn't use their own free speech to criticise them. That's not how that works. It is doubly not how that works when the person being criticised is extremely rich, powerful, and well connected and therefore has outsized influence on our collective decision-making. That is precisely when they need to be held to higher standards.

No, it isn't. $7T does actually decide the fate of the species to a significant extent.

This would be true if what Altman did didn't have huge negative externalities for the rest of us. As things stand, it is not Altman's "own thing".

"Altman likely understands machine learning just fine." Are you sure that Altman understands that getting stuck in a local optimum is a negative externality in the economic sense?

"People look at OpenAI's foray into LLM and assume there's nothing else those people know. LLM is low-hanging fruit. There's more to come."

Is that an evidence-based statement, or a faith-based statement?

If I understand you correctly, your attitude is borrowed from the Wild West entrepreneurship that served us well in the 19th century, when we could still ignore negative externalities.

"The figure of 7 trillion is nonsensical. The system is self-correcting." Agreed. Luckily we are part of how the system self-corrects.

What was once the AI dream of creating a near-utopia for all mankind is rapidly turning into a low-hanging-fruit-driven gold rush to control the means of production (human-level AGI), as ~200 territory-based tribes (countries) and ~300 million owner/employee-based tribes (profit-motivated companies) all compete against each other in their own short-term self-interest, seemingly oblivious to any consequent long-term harm to the human species as a whole.

really sad

It is. The future of all mankind for all eternity literally depends on people like us never giving up trying to make a difference, regardless of how impossible it seems.

it's not even a gold rush, it's a snake oil rush

Keep going, Gary... you've got him right where you want him: panicking! The bright light of skeptical insight is on him.

In the final analysis, Altman will fail, even if given $7T, because he just doesn't have the knowledge required to deliver on his promises. For me, the real worry is the level of societal damage (likely to be at global scale) that he will inevitably leave in his wake.

As any CS graduate would (or should) know, brute-force approaches inevitably fail when faced with a problem of exponential complexity. What OpenAI has done is simply reach the point on the exponential curve where the required resources shoot steeply upwards: enough to spark public attention, but not enough for real use. That is why Altman needs the $7 trillion, to keep up with the exponent, and he is hoping to find fools who do not understand this.
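
To make the exponential-cost point concrete, here is a toy sketch with entirely invented numbers (the $1M base cost and the 10x-per-level multiplier are illustrative assumptions, not measurements of any real system):

```python
# Toy sketch of brute-force scaling against an exponential cost curve.
# All numbers are invented for illustration; nothing here is measured
# from any real model. Assume each extra "capability level" costs 10x
# as much as the one before it.

base_cost = 1e6        # hypothetical cost of level 1, in dollars
growth_factor = 10     # hypothetical cost multiplier per level

for level in range(1, 9):
    cost = base_cost * growth_factor ** (level - 1)
    print(f"level {level}: ~${cost:,.0f}")

# Level 8 already costs ~$10T on these made-up numbers: when the cost
# curve is exponential, even a $7T budget only buys a few more steps.
```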

Simply stated, intelligence has three scalable dimensions: (1) "inventiveness", i.e. how good the underlying problem-solving algorithms are (e.g. induction, deduction, abduction; NB neural nets + gradient descent = induction); (2) knowledge/information, which drives the problem-solving algorithms towards solutions; and (3) physical resources, most notably time, energy, and compute.

If you're (A) genuinely trying to develop AGI in the best long-term interests of the human species, then you're willing to spend as long as it takes to safely develop all of (1)-(3) as far as it is possible to go, while at the same time minimising to the maximum extent possible the societally painful effects of such a profound transition. This, however, requires actual knowledge of AGI, as well as a lack of self-interest, which Altman/OpenAI et al. clearly do not have. If instead you're (B) in a self-interested race (together with all the other self-interested AI labs in the world) to reach the pot of gold at the end of the AGI rainbow, then you're highly motivated to follow the low-hanging fruit, i.e. at each iteration you take the easiest possible path. Altman/OpenAI et al. are all clearly (B) rather than (A), despite any claims to the contrary.

If you're an AI lab with lots of $$$, then by far the easiest of the "dimensions of intelligence" (1)-(3) is (3): any moron can simply buy compute; no actual knowledge or depth of understanding of AGI is required. After (3), the next easiest dimension is (2), e.g. scraping low-quality data from the interweb (copyright, privacy, and intellectual property be damned!). This leaves (1), which, in AGI R&D terms, is the hardest dimension to master. For the last ~20 years, and certainly the last ~10, the obvious, easiest-way-to-get-quick-results choice for (1) has been neural nets, which has inexorably led the AI labs from fully connected NNs to CNNs to RNNs to transformers to LLMs. And so here we are.

But the large AI labs have now hit a wall in respect of (3), compute, because they've basically used up the entire world's supply of chip/semiconductor capacity, and they've hit a similar wall in respect of (2), because they've now scraped all the world's easily obtainable data. Rather than address the HARD problem, i.e. better AGI algorithms for (1) than mere NNs/LLMs, the labs have tried to extend (2) by synthesising additional low-quality data from the easily scraped low-quality data they already have, and Altman's genius idea now seems to be to further extend (3) by building $7T of new semiconductor capacity (owned by him, of course...). Basically, ANYTHING rather than address the actual, fundamental problem, i.e. new algorithms for (1), because (a) that's hard, (b) it would force them all to admit (to their investors etc.) that the current NN/LLM-based approach is fundamentally flawed, and (c) they would all be back at square one.
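
For readers unfamiliar with the "neural nets + gradient descent = induction" shorthand, here is a minimal sketch using a toy linear model in place of a real network, with invented data: gradient descent infers a general rule from a finite set of noisy examples, which is induction in the classic sense.

```python
import numpy as np

# Minimal sketch of "neural nets + gradient descent = induction":
# infer a general rule (here y = 2x + 1, chosen arbitrarily) from a
# finite set of noisy examples. A toy linear model stands in for a
# real neural network.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 2 * x + 1 + rng.normal(0, 0.05, 100)   # hidden rule plus noise

w, b = 0.0, 0.0                            # parameters to be learned
lr = 0.1
for _ in range(500):
    err = (w * x + b) - y                  # prediction error
    w -= lr * 2 * np.mean(err * x)         # gradient of MSE w.r.t. w
    b -= lr * 2 * np.mean(err)             # gradient of MSE w.r.t. b

print(f"induced rule: y = {w:.2f}x + {b:.2f}")   # ~ y = 2.00x + 1.00
```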

7 trillion dollars is a sum so vast, the human mind really struggles to grasp the extent of it. There is no way it could possibly be an efficient allocation of resources for a technology that is still in its infancy, and whose outputs are unexplainable (when they are not simply regurgitation of copyrighted material) and uncontrollable.

If you want a significant fraction of the GDP of the planet and you have no good plan for spending it, you don't want to build AI; you want to build a personal empire.

2023 global GDP was probably Altman's starting point. So he's asking for ~7% of that.
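
The arithmetic checks out, assuming nominal 2023 global GDP of roughly $105 trillion (the approximate IMF-range figure; treat it as an assumption here):

```python
# Back-of-envelope check of the "~7%" claim. Assumption: nominal 2023
# global GDP of roughly $105T (approximate IMF-range estimate).
global_gdp_2023 = 105e12   # US dollars, rough estimate
altman_ask = 7e12          # the reported $7T figure

print(f"share of global GDP: {altman_ask / global_gdp_2023:.1%}")   # -> 6.7%
```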

It's about the total mineral wealth of the Congo. Presumably Altman is fine with mobilizing millions of child slaves in Africa to dig up the remaining rocks and use them to make a slightly better chatbot, while also pushing global warming to 3 degrees by 2100 in the process and preventing those minerals from being used for anything else that could be of some use.

This is par for the course for plutocrat delusions, I guess.

After two and a half decades of "trust me, bro, it'll be great" and it not, in fact, being great, but actually TERRIBLE (enshittification, walled gardens, neutering of computers in favor of locked-down phone interfaces, social media and its breaking of society), I will NOT, in fact, trust you, bro.

I have no more benefit of the doubt to give these guys. Whatever Altman actually wrote on Twitter got translated into my brain as "shut up just long enough so I can get away with this swindle, please."

I find Altman's recent statements rather like Andreessen's techno-optimist manifesto: high on energy and positivity, but nothing really underneath.

Thank you Gary Marcus! You are a voice of reason!

And again I am extremely puzzled why somebody who tweets this kind of stuff isn't immediately intellectually discredited. How was this not the parody account?

Very few people are actually in the "we" implied by Altman's "our collective future."

So, here's the question: Altman's attempt at a $7T raise seems a bit extreme, even for him. The same with Hinton's recent hallucinatory diatribe against you. Sutskever's been saying some weird things as well (http://tinyurl.com/9dm3fn6r). Are things on the Great Rush to AGI falling behind schedule? Are these guys getting just a bit worried and expressing it by doubling down?

I wouldn't be surprised if generative AI plateaued now... but the next acceleration will come. So it is important to prepare for that.

Bumper sticker:

Are you grinding for Sam yet?

Who would have thought that in the 21st century humanity would again have to fight false prophets, as in the Middle Ages? "Follow me and I will save you!" is a tried-and-true strategy for engineering public influence and using it for one's own benefit. The self-driving car industry tried the same tactic with its claim that it was saving lives, and that anyone who opposed it was therefore a murderer.

Would love to see you and Altman on a debate stage.

Someone offered to host one at Davos, but he declined.

I support your questions. Some thoughts:

1) For the famed invisible hand to work, progress should not be too fast. Even if progress is good, that doesn't mean that more progress, faster, is better. Machine learners should know this: it is important not to get stuck in local optima (see the sketch after this list).

2) The collapse of civilizations is a familiar event in human history. What are the chances that our civilization will collapse, and when? (Judging from the effort our industrial leaders spend on building private bunkers on remote islands, the probability must be quite high.) Shouldn't Altman spend some money on this interesting question?

3) While I agree that AI has the potential to solve some problems, wouldn't it be a more rational approach to make a list of all problems, prioritize them, and start from the top?
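
To illustrate the local-optimum point in 1), here is a minimal sketch on an invented one-dimensional function (it models nothing real): two gradient-descent runs that differ only in their starting point settle in different minima, and the one that starts on the "wrong" side gets locked into the inferior solution.

```python
# Minimal sketch of getting stuck in a local optimum. Invented toy
# function: f(x) = x**4 - 3*x**2 + x, with a global minimum near
# x = -1.30 and a shallower local minimum near x = +1.13.

def grad(x):
    return 4 * x**3 - 6 * x + 1   # derivative of f

for x0 in (-2.0, 2.0):            # two different starting points
    x = x0
    for _ in range(1000):
        x -= 0.01 * grad(x)       # plain gradient descent, fixed step
    print(f"start {x0:+.1f} -> converged near x = {x:+.3f}")

# The run started at +2.0 ends in the inferior local minimum: greedy,
# fast progress can permanently lock in a suboptimal outcome, which is
# the analogy for rushing AI development.
```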

Poke the bear.

_Everybody_ is nibbling at OpenAI's heels. I'm not sure how it can keep up in this sort of innovation arms race. https://sites.google.com/view/genie-2024

Though I _am_ looking forward to a text-to-meal generator. Hopefully there are no hallucinations there; I wouldn't want to be poisoned by my morning croissant 😂

Not as pretty as Sora, but in some ways more interesting.
