The absence of a label on the Y axis maybe tells you all you need to know.
Nonsense. “Victoria” is clearly the label on the Y axis on that graph.
It indicates the relative number of Victorias working at OpenAI when each new version of GPT was released.
It’s a good thing Sam Altman just got a $6.6 billion infusion from investors because the number of Victorias at OpenAI is obviously increasing exponentially and a lot of cash will be needed for all their salaries and benefits.
In the long run, OpenAI will be Victorias
But are there enough Victorias???
To continue the WeWork comparison, Altman is more like Adam Neumann than Holmes. They have both described themselves as gods (though Neumann did this more literally) and see themselves as bringing about a new age for humanity even though their businesses are essentially disappointing and unsustainable. Coincidentally, the suggestion that they are somehow superior beings is even in their names: alt man, neu man. Pretty sure they both look in the mirror and see an Übermensch.
Mind blown with alt-man … good one
Sam Altman gives strong con-man vibes to me, and it wouldn’t surprise me to see him in a cell next to SBF in the future.
Same. His ridiculous "Worldcoin" venture (now just "World", cos why risk underselling something?) is a straight-up grift, window-dressed with the usual "our-vision-for-the-future-of-humankind" stuff, that catches attention from law enforcement everywhere it goes. They're in trouble in Kenya, South Korea, France, Hong Kong, Brazil, Spain... and yet Altman keeps pushing his supposed grand vision to scan all our eyeballs so that we can have access to his crypto-UBI after our jobs are made redundant by superintelligent machines.
The whole thing belongs in a Douglas Adams novel. It'll probably just fizzle out after a while, but there are certainly scenarios where he gets himself into Serious Trouble.
I went to the Elizabeth Holmes trial for one day (one morning, actually, since it was boring). I don't think either is the correct analogy. Maybe Sematech? Enormous publicity, modest results.
https://www.csis.org/analysis/implementing-chips-act-sematechs-lessons-national-semiconductor-technology-center
The shared theme is fomo and investors who will make money whether it succeeds or flops.
When you've gone through every possible resource in private equity, you just IPO, hype it up to the skies, and as the general public pours money in (pension funds too), the investors and equity owners start to exit, leaving the crowd holding the bag a couple of years later.
Never seen this before?
I'm not saying there are no Facebooks among IPOs, so on average, yeah, it works out.
> Why was Theranos so believable? Medicine needs to look in the mirror
I worked at a diagnostic lab at the time, and the scientific/clinical staff (especially the lab director) were *extremely* skeptical of Theranos. And that was at a venture-backed startup whose executives often engaged in too much hype themselves.
I think Theranos was convincing to some rich people who gave them a ton of money and not particularly convincing to actual experts. Why the former didn't consult the latter is kinda baffling.
With OpenAI, they clearly sell a useful product, though after the 501c3 debacle, I think I'll probably switch my subscription to Anthropic. It seems to me like they might be more like Uber: very little moat so the cost of entry is low, playing fast and loose with laws...
Theranos never had a product and had no transformative effect on its field. So I think we should be a bit more charitable to OpenAI.
Great post.
OpenAI is probably more like WorldCom, which hyped the "internet doubles every 18 months" trope (note the similarity). It was ultimately a disaster for investors as it resorted to outright financial fraud to cover up its inability to match its own hype, but it left a very valuable legacy in the form of an expansive fiber network that brought connectivity costs down and enabled the internet. I wouldn't be surprised if OpenAI is net beneficial by catalyzing nextgen ML.
I wouldn't waste time commiserating about wasting immense resources and a half-decade in vain: in a non-centrally planned system, innovations always come with "wasteful" overspending and overshoots and lots of lies and hype. See railroads in the 1880s, canals prior to that, or just look back at the dot-com boom. This time isn't different. Arguably the overshooting and hype allows companies that are being realistic to receive funding they otherwise wouldn't.
The counterfactual of a lack of OpenAI's hype and "waste" is a world where AI research would be slower, Mistral, LLaMA and other open weights models would not exist, and Anthropic (or similar companies) and hundreds of good researchers would not have enough money to develop the field further.
Long-time reader. While I obviously agree with Gary about the exaggerated hype, the hallucinations, the large-scale plagiarism, the dubious future returns on investment, and so on, the comparison with Theranos is wildly exaggerated. I pay for and use ChatGPT every day for coding help, carefully, and fully aware of its shortcomings. So do millions of other people. Theranos was based on fraud and never had a working product!
Sam’s weird obsession with Scarlett Johansson and her voice, as if he were a former nerd proving he’s now a master of the universe, was the first sign…
I'm not sure WeWork or Theranos is the right category to think about. It's perhaps closer to kayfabe or what Eric Weinstein called Gated Institutionalized Narrative. No matter what any "detractor" says, the mainstream rushes ahead, likely over the cliff. "Within the next ten years we will have fusion energy. The next larger particle accelerator will finally help us unlock all the secrets of the universe. If not we simply build a larger one." Did we say AGI? What we really meant was probably perhaps in a sense more like Artificial General Quasi Intelligence, actually more like AGQx, because let's be honest, what is intelligence anyway...
(Mainstream) media is only or primarily interested in getting attention at any cost; thus they are parasitic and derivative, fed by the marketing machinery of companies hoping to produce these products, consulting companies feeding off them, VC firms, and so on ad nauseam. Promising the next big thing in AI sells better than being cautious. The coffers of the aforementioned organizations are full enough, also for lobbying, buying, sorry, I meant influencing, politicians whose main drive is the will to power. (They can get this power either way: by supporting it, or later, after it fails, by punishing the mishpocheh.) So I would rather see this from a dysfunctional-ecosystem point of view: it may take some damage but it will carry on. As the guy says towards the end of Taken: "Please understand... it was all business. It wasn't personal."
The NZZ published an article today about potentially wrong numbers in conservation efforts. It was based on two papers in Nature, one about "80% loss of biodiversity" (doi: https://doi.org/10.1038/d41586-024-02811-w) and, in my words, the questionable use of data and statistics in another (https://doi.org/10.1038/s41467-024-49070-x). It looks like those benefiting from the alarmism don't like the articles and carry on saying the numbers aren't wrong, just a little under-researched.
I think you're right about the AGI narrative. I just saw a comment on another Marcus article claiming that o1 is already AGI, just at a young stage of maturity. So regardless of whether AGI is here, "AGI is here" appears to be here. And since no one can define it and no one knows how to identify it, what's to stop Sam Altman from just calling whatever OpenAI's best model is a year or two from now "AGI"? If he's willing to wave around bar exam and math olympiad scores today, I doubt he'll have any qualms simply declaring victory tomorrow.
The only downside is that this would likely snuff out the perpetual "just wait till you see what it does next year!" sales pitch. Whatever ends up being labeled AGI is guaranteed to feel anticlimactic relative to the sci-fi fever-dream expectations OpenAI and the like have been building up.
Maybe this is the way to officially mark the current hype cycle's conclusion?
I like Eric's analyses a lot in general. However, if you listen to him recently (his last appearance on the Modern Wisdom podcast, for instance), he seems genuinely astounded by OpenAI's accomplishments and isn't viewing them through a cyclical analytic lens at all. If I recall correctly, he's talking about how this stuff should probably be developed in secret like the Manhattan Project. He also seems to be quite friendly with Sam.
Your thesis will die by a thousand cuts, inflicted by the people who go ahead and actually build stuff with the OpenAI API. You're betting on the wrong horse, or in this case, no horse at all.
Homer Simpson-does-AI charts are everywhere. These charts have no scale, no data, no definitions. The only thing they have in common is a concave-upward rising curve labeled "AI". Just this morning I ran into another variation of a Simpson chart, except this one had colors and included a flat line hugging the X-axis labeled "humans".
There are some really great points made here but...I'm unclear on what it is that they're "failing to deliver." Is it that generative AI is not going to be AGI as soon as they say it will? Is it that the ChatGPT releases and their capabilities are misleading or, as with the Theranos analogy, an actual (and unlawful, and unethical) fabrication? Is it that generative AI itself is a fabrication? I've seen a lot of these "don't believe the hype" takes and I tend to agree to a point—basically, that there's extraordinary and potentially very dangerous exaggeration about what will happen with AI and/or how quickly it will happen. But I can't figure out if "the hype" has just become a catch-all for not agreeing with something and thereby taking every negative story as fact and every positive story as lies...or if there's something more explicit and measurable behind it.
This is a really good point. OpenAI makes products that many people find useful for many things. I think it's the overselling and wacky sci-fi prognosticating that earn them the "hype" label. But just last night I used GPT-4o to help me reorganize a dataset; it saved me a good chunk of time that I would otherwise have spent struggling through finicky coding syntax.
If OpenAI would just pitch the applications their products are actually useful for, describe their products as working the way they actually work, and acknowledge the real-world harms of AI rather than imaginary future-world harms, I'd probably see them as a trustworthy company.
I am still waiting for the first self-driving AI vacuum cleaner; next, AI porn in every American home instead of fitness, and an AI Dead Sea Mud IPO with a $40 billion valuation.
If I could have a Roomba-like carpet cleaner that cleans up after my dogs and cats when I'm not home, my quality of life would increase dramatically.
And yes and no. On one hand, the hype is similar to the dot-com bubble, though I've read that even though many people and businesses got wiped out back then, a lot of infrastructure was laid down, actual "cables in the ground" that people still use today.
In this AI cycle of "let's train limitlessly big LLMs," money is also flowing heavily into infrastructure: data centers, software, and apparently even nuclear power plants are on the table now! At the end of the day, the money makes it into Nvidia's pockets, engineers' salaries, AWS, etc. So it's not like they are building airplanes that no one will ever fly.
Is it good or not? Hard to say.
Affordable gaming rigs for all in three years?