Much agreed. Premature commitment is what locks a species down the wrong evolutionary path. “Our collective future” should be decided and built by “our collective”, not any single man with a heroism complex. We’ve got a few too many of those in history…
I agonized over how directly to say that…
Very well said and it is truly amazing that Altman doesn’t see that. The level of arrogance is astounding.
As I said in another comment, it also shows that Altman doesn't really understand how machine learning works.
Text is not shallow when it comes to programming. But the irony is that the success of LLMs for programming is not a triumph of machine learning but rather due to the quality (engineered with good old software engineering methods) of the programs they were trained on.
Sorry, I beg to differ. Generative AI makes lots of mistakes when it comes to enterprise application code. I gave up expecting it to be a productivity boost after a year of countless prompt-engineering efforts (many late nights trying to get the prompts to work, finishing up at 3 or 4 am).
https://www.linkedin.com/posts/simonay_java-springboot-generativeai-activity-7142800733707395072-qnBX?utm_source=share&utm_medium=member_desktop
https://www.linkedin.com/posts/simonay_chatgpt-githubcopilot-softwaredevelopment-activity-7135915397664411648-7DUm?utm_source=share&utm_medium=member_desktop
https://www.linkedin.com/posts/simonay_softwaredevelopers-devopsengineers-sres-activity-7123775368456519681-QIq4?utm_source=share&utm_medium=member_desktop
I agree with you that it is shallow in this sense. LLMs cannot be more reliable and trustworthy than the data they are trained on. The fact that LLMs have emerged as powerful instruments for programming is a testament to the half-century-long collective effort of the software engineering community to create robust open-source software.
Altman will not stop at text. This is just a very early step.
Nobody is committing the species to anything. Altman is playing his little game, and is fully entitled to. Do your own thing.
It is always odd to see somebody's free speech being defended by telling others they shouldn't use their own free speech to criticise them. That's not how that works. It is doubly not how that works when the person being criticised is extremely rich, powerful, and well connected and therefore has outsized influence on our collective decision-making. That is precisely when they need to be held to higher standards.
Sure, but claiming that Altman is deciding the fate of the species is a poor kind of argument.
No it isn't. $7T does actually decide the fate of the species to a significant extent.
This assumes Altman gets the $7T and uses it wisely, which is not possible, as the chip demand is much, much smaller than that.
This would be true if what Altman did didn't have huge negative externalities for the rest of us. As things stand, it is not Altman's "own thing".
Any time somebody does something you don't agree with, saying that person takes decisions on behalf of the "species" is a gross exaggeration.
Also, Altman likely understands machine learning just fine. And he either takes a calculated risk here, or it is all for publicity.
People look at OpenAI's foray into LLMs and assume there's nothing else those people know. LLMs are low-hanging fruit. There's more to come.
"Altman likely understands machine learning just fine." Are you sure that Altman understands that getting stuck in a local optimum is a negative externality in the economic sense?
OpenAI is far from stuck in a local minimum. They, Microsoft, and Google all see the current LLM-based approaches as a way of automating work, and there is a lot of demand for that.
Assistants will get better as they get used more, and there are many techniques that can be used to improve their reliability. The field is moving rather fast.
"People look at OpenAI's foray into LLM and assume there's nothing else those people know. LLM is low-hanging fruit. There's more to come."
Is that an evidence-based statement, or a faith-based statement?
LLMs are good at statistical prediction, which can be error-prone. I believe we will see a lot more work on validation, and on agents that can use tools, run simulations, and work iteratively in order to solve problems reliably.
If I understand you correctly, your attitude is borrowed from the Wild West entrepreneurship that served us well in the 19th century, when we could still ignore negative externalities.
I am all for following proper societal rules. But Altman is not taking decisions on behalf of the species. He's an entrepreneur looking for funding. The figure of 7 trillion is nonsensical. The system is self-correcting.
He has a point, though, that we'll need more hardware. Also, having all our eggs in the Nvidia/TSMC basket, so to speak, is not a good thing.
"The figure of 7 trillion is nonsensical. The system is self-correcting." Agreed. Luckily we are part of how the system self-corrects.
What was once the AI dream of creating a near-utopia for all mankind is rapidly turning into a low-hanging-fruit-driven gold rush to control the means of production (human-level AGI), as ~200 territory-based tribes (countries) and ~300 million owner/employee-based tribes (profit-motivated companies) all compete against each other in their own short-term self-interest, seemingly oblivious to any consequent long-term harm to the human species as a whole.
really sad
It is. The future of all mankind for all eternity literally depends on people like us never giving up trying to make a difference, regardless of how impossible it seems.
it's not even a gold rush, it's a snake oil rush
Keep going Gary...you got him right where you want him...panicking! The bright light of skeptical insight is on him.
In the final analysis, Altman will fail, even if given $7T, because he just doesn't have the knowledge required to deliver on his promises. For me, the real worry is the level of societal damage (likely to be at global scale) that he will inevitably leave in his wake.
Current systems are only the beginning. There are many neat ideas out there. We hit a critical mass recently, in terms of data, techniques, resources, and business potential.
As any CS graduate would (or should) know, brute-force approaches inevitably fail when faced with an exponential-complexity problem. What you've done is simply reach the point where the exponent turns steeply upwards in terms of required resources: enough to spark public attention, but not enough for real use. That is why Altman needs the $7 trillion, to keep up with the exponent, and he is hoping to find fools who do not understand this.
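To put toy numbers on that point: if solving a problem of size n by brute force takes on the order of 2^n operations, even absurd budget increases buy only a few more units of n. A minimal sketch, where the ops-per-dollar figure is purely illustrative (not a real hardware estimate):

```python
# Illustrative only: how little extra problem size an exponential
# search buys per extra dollar. OPS_PER_DOLLAR is a made-up figure;
# the shape of the result is the point, not the absolute numbers.
import math

OPS_PER_DOLLAR = 1e12  # hypothetical operations purchasable per dollar

for budget in (1e6, 1e9, 1e12, 7e12):  # $1M, $1B, $1T, $7T
    ops = budget * OPS_PER_DOLLAR
    n = math.log2(ops)  # largest n for which 2**n ops are affordable
    print(f"${budget:>16,.0f} -> max brute-forceable n ~ {n:.1f}")
```

Under these assumptions, going from $1M to $7T, a seven-million-fold increase in spend, raises the solvable n only from about 60 to about 83.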
Simply stated, intelligence has three scalable dimensions: (1) "inventiveness", i.e. how good the underlying problem-solving algorithms are (e.g. induction, deduction, abduction; NB neural nets + gradient descent = induction); (2) knowledge/information, which drives the problem-solving algorithms towards solutions; and (3) physical resources, most notably time, energy, and compute.

If you're (A) genuinely trying to develop AGI in the best long-term interests of the human species, then you're willing to spend as long as it takes to safely develop all of (1)-(3) as far as it's possible to go, while at the same time minimising to the maximum extent possible the societally painful effects of such a profound transition - however, this of course requires actual knowledge of AGI, as well as a lack of self-interest, which Altman/OpenAI et al clearly do not have. If instead you're (B) in a self-interested race (together with all the other self-interested AI labs in the world) to reach the pot of gold at the end of the AGI rainbow, then you're highly motivated to follow the low-hanging fruit, i.e. at each iteration you take the easiest possible path. Altman/OpenAI et al are all clearly (B) rather than (A), despite any claims to the contrary.

If you're an AI lab with lots of $$$, then by far the easiest of the "dimensions of intelligence" (1)-(3) is (3): any moron can simply buy compute; no actual knowledge or depth of understanding of AGI is required. After (3), the next easiest dimension is (2), e.g. scraping low-quality data from the interweb (copyright, privacy, and intellectual property be damned!). This leaves (1), which, in AGI R&D terms, is the hardest dimension to master. For the last ~20 years, and certainly the last ~10, the obvious, easiest-way-to-get-quick-results choice for (1) has been neural nets, which has inexorably led the AI labs from fully connected NNs to CNNs to RNNs to transformers to LLMs - and so here we are.

But the large AI labs have now hit a wall in respect of (3), i.e. compute, because they've basically used up the entire world's supply of chip/semiconductor capacity, and they've hit a similar wall in respect of (2), because they've now scraped all the world's easily obtainable data. Rather than address the HARD problem, i.e. better AGI algorithms for (1) than mere NNs/LLMs, the AI labs have tried to extend (2) by synthesising additional low-quality data from the easily scraped low-quality data they already have, and Altman's genius idea now seems to be to further extend (3) by building $7T of new semiconductor capacity (owned by him, of course...) - basically ANYTHING rather than address the actual, fundamental problem, i.e. new algorithms for (1), because (a) that's hard, (b) it would force them all to admit (to their investors etc.) that their current NN/LLM-based approach is fundamentally flawed, and (c) they would all be back at square one.
Yes, indeed, the amount of data needed increases with the complexity of the problem. LLMs are best used to generate hypotheses. Those should then be offloaded to external tools for validation and in-depth analysis.
To clarify, it is well understood that people do not solve problems by just writing out the answer. There needs to be a plan: steps, checks, refinements, dead ends, starting over.
What companies like OpenAI need is a virtuous loop, where their assistants are adopted, and then they can invest more effort into verification and simulators, with the LLM being the "idea producer".
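For concreteness, a minimal sketch of that loop; `propose` and `verify` are hypothetical stand-ins, not any real API (a real system would put an LLM behind `propose` and a test suite, type checker, or simulator behind `verify`):

```python
# Minimal sketch of the "virtuous loop": the LLM proposes, external
# tools verify. Both functions here are hypothetical stand-ins.
import random
from typing import Optional, Tuple

def propose(problem: str, feedback: Optional[str]) -> str:
    # Stand-in for an LLM call that drafts a candidate solution,
    # optionally revising it in light of verifier feedback.
    return f"candidate-{random.randint(0, 9)} for {problem}"

def verify(candidate: str) -> Tuple[bool, str]:
    # Stand-in for a trustworthy external check (tests, simulator,
    # solver). Reliability comes from here, not from the model.
    ok = candidate.startswith("candidate-7")
    return ok, "" if ok else "failed checks, try again"

def solve(problem: str, max_iters: int = 20) -> Optional[str]:
    feedback = None
    for _ in range(max_iters):
        candidate = propose(problem, feedback)
        ok, feedback = verify(candidate)
        if ok:
            return candidate
    return None  # report failure honestly instead of guessing

print(solve("toy problem"))
```

The design point is that the pass/fail signal comes from outside the model, so the loop's reliability is bounded by the verifier rather than by the LLM's error rate.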
7 trillion dollars is a sum so vast that the human mind really struggles to grasp the extent of it. There is no way it could possibly be an efficient allocation of resources for a technology that is still in its infancy, and whose outputs are unexplainable (when they are not simply regurgitations of copyrighted material) and uncontrollable.
If you want a significant fraction of the GDP of the planet and you have no good plan for spending it, you don't want to build AI; you want to build a personal empire.
2023 global GDP (roughly $100 trillion) was probably Altman's starting point. So he's asking for ~7% of that.
It's about equal to the total mineral wealth of the Congo. Presumably Altman is fine with mobilizing millions of child slaves in Africa to dig up the remaining rocks and use them to make a slightly better chatbot, whilst also pushing global warming to 3 degrees by 2100 in the process and preventing any of those minerals from being used on anything else that could be of some use.
This is on par for plutocrat delusions I guess.
After two and a half decades of "trust me, bro, it'll be great" and it not, in fact, being great, but actually TERRIBLE (enshittification, walled gardens, neutering of computers in favor of locked-down phone interfaces, social media and its breaking of society), I will NOT, in fact, trust you, bro.
I have no more benefit of the doubt to give these guys. Whatever Altman actually wrote on Twitter got translated into my brain as "shut up just long enough so I can get away with this swindle, please."
I find Altman's recent statements rather like Andreessen's techno-optimist manifesto: high on energy and positivity, but nothing really underneath.
Thank you Gary Marcus! You are a voice of reason!
And again I am extremely puzzled why somebody who tweets this kind of stuff isn't immediately intellectually discredited. How was this not the parody account?
Very few people are actually in the "we" implied by Altman's "our collective future."
So, here's the question: Altman's attempt at a $7T raise seems a bit extreme, even for him. The same with Hinton's recent hallucinatory diatribe against you. Sutskever's been saying some weird things as well (http://tinyurl.com/9dm3fn6r). Are things on the Great Rush to AGI falling behind schedule? Are these guys getting just a bit worried and expressing it by doubling down?
Kinda feels that way
https://x.com/garymarcus/status/1756809605861249391?s=61
I wouldn't be surprised if generative AI plateaued now ... but the next acceleration will come. So it is important to prepare for that.
Bumper sticker:
Are you grinding for Sam yet?
Who would have thought that in the 21st century humanity would have to fight false prophets again, as in the Middle Ages. "Follow me and I will save you!" is a tried-and-true strategy for engineering public influence and using it for one's own benefit. The self-driving car industry tried the same tactic with its claim that it was saving lives and that therefore anyone who opposed it was a murderer.
Would love to see you and Altman on a debate stage.
Someone offered to host one at Davos, but he declined.
I support your questions. Some thoughts:
1) For the famed invisible hand to work, progress should not be too fast. Even if progress is good, it doesn't mean that more progress, faster, is better. Machine learners should know this: it is important not to get stuck in local optima (a minimal sketch follows this list).
2) Collapse of civilizations is a familiar event in human history. What are the chances that our civilization will collapse, and when? (Judging from the effort our industrial leaders spend on building private bunkers on remote islands, the probability must be quite high.) Shouldn't Altman spend some money on this interesting question?
3) While I agree that AI has the potential to solve some problems, wouldn't it be a more rational approach to make a list of all problems, prioritize them, and start from the top?
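On point 1, a minimal sketch of what "stuck in a local optimum" means, using plain gradient descent on a toy non-convex function (all numbers illustrative):

```python
# Toy non-convex function with two basins: a local minimum near x = +1
# and a better, global minimum near x = -1. Plain gradient descent
# ends up in whichever basin it starts in.
def f(x):
    return (x * x - 1) ** 2 + 0.3 * x

def grad(x):
    return 4 * x * (x * x - 1) + 0.3

def descend(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * grad(x)  # standard gradient-descent update
    return x

for start in (2.0, -2.0):
    x = descend(start)
    print(f"start={start:+.1f} -> x={x:+.3f}, f(x)={f(x):+.3f}")
```

Starting at x = +2 the optimizer settles at f ≈ +0.29; starting at x = -2 it finds f ≈ -0.31. It cannot see the better basin from where it stands, which is exactly the worry about rushing the whole field down one path.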
At minimum, more people need to be reading Vaclav Smil, the data-driven empiricist. Smil recognizes that human civilization and the sustenance of the human species are first and foremost reliant on material-realm concerns and priorities, not algorithms or digital phantasms conjured by imaginative flights of fancy.
https://www.google.com/search?q=vaclav+smil+archive.org
Poke the bear.
_Everybody_ is nibbling at OpenAI's heels. I'm not sure how it can continue on with this sort of innovation arms race. https://sites.google.com/view/genie-2024
Though I _am_ looking forward to a text-to-meal generator. Hopefully there are no hallucinations there; I wouldn't want to be poisoned by my morning croissant😂
Not as pretty as Sora, but in some ways more interesting.