Maybe it’s too soon to declare victory on my August 12th bet that ChatGPT will turn out to be a dud, but I will tell you one thing: it’s not obvious I am wrong.
Five observations, eleven days later:
Axios just reported that a lot of companies are struggling to actually deploy generative AI, “due to high costs and confusion”. No deployment, no revenue. Quoting further, “Nearly 70% of respondents to the S&P Global survey said they have at least one AI project in production... but about half of those companies (31% of respondents) are still in pilot or proof-of-concept stage, outnumbering those who have reached enterprise scale with an AI project (28% of total respondents).” And of course not every enterprise-scale experiment will be successful.
The backlash has begun. Earlier today, the cultural critic Ted Gioia was even harsher than I was, in a new essay entitled “Ugly Numbers from Microsoft and ChatGPT Reveal that AI Demand is Already Shrinking”, subtitled “The only areas where AI is flourishing are shamming, spamming & scamming”. A few days earlier, Financial Times columnist John Thornhill quoted this very newsletter, concluding with a zinger: “Doubtless, Marcus will also be proved right that much of the corporate money thrown at the technology will be wasted and most start-ups will fail. But who knows what new stuff will be invented and endure? That is why God invented bubbles.” Ouch!
Some of the problems we’ve known about for a long time appear to persist. Google’s latest LLM “includes Hitler, Stalin and Mussolini on a list of ‘greatest’ leaders and Hitler also makes its list of ‘most effective leaders’”, according to a news report yesterday.
OpenAI has been sued a lot for a lot of reasons, but earlier this week Ars Technica reported that the latest potential lawsuit — this time from the New York Times — could “force OpenAI to wipe ChatGPT and start over”, with OpenAI potentially “fined up to $150,000 for each piece of infringing content”: serious money even for OpenAI, and a potential challenge to the economics of the whole enterprise. This could be particularly devastating because Large Language Models aren’t like classical databases, in which individual pieces of data can be removed at will; if any content is removed, the entire model must (so far as I understand it) be retrained, at great expense.
Meanwhile, I am pretty sure Chris Christie wasn’t aiming to be kind to Vivek Ramaswamy tonight at the Republican debate when he likened Ramaswamy to “a guy who sounds like ChatGPT”.
In less than a year, ChatGPT has gone from being mistaken for AGI to being the butt of a joke, and an insulting shorthand for robotic, incoherent, unreliable, and untrustworthy.
And the financial challenges are starting to mount up.
It ain’t over yet, but what a stunning reversal of fortune.
Gary Marcus hopes you will listen to his eight-part podcast Humans versus Machines; the final episode, on regulating AI, featuring Alondra Nelson and Brian Merchant, with archival footage from Sam Altman and Senators Hawley, Blumenthal and Kennedy, drops Tuesday.
I enjoy following your blog, but it generally feels like you are cynical and would not change your mind even if given ample evidence that undermines your views. Also, just for fun, here is ChatGPT’s reply to your blog post: Here are four counterarguments to the points made in the blog post:
1. **Struggles in Early Adoption Do Not Equate to Long-Term Failure**:
- The fact that many companies are struggling to deploy generative AI is not uncommon for any transformative technology in its early stages. Remember, the early days of the internet, cloud computing, and even e-commerce faced similar adoption hurdles. Challenges around cost and confusion can be temporary and often decrease as the technology matures and becomes more widely understood and accessible.
2. **Backlash and Criticism Can Lead to Improvement**:
- Every groundbreaking technology faces criticism. However, it's important to differentiate between constructive criticism, which can lead to improvement and iteration, and general skepticism. Moreover, linking AI’s future to a few negative headlines might be myopic. Just as ChatGPT and similar models have their detractors, they also have a vast number of supporters and users who find value in them.
3. **Missteps and Controversies Do Not Undermine the Entire Potential of AI**:
- The issue regarding Google’s LLM pointing out controversial figures as "greatest" leaders is a flaw, but it's crucial to separate the limitations of one model from the vast potential of the technology as a whole. AI models can and will be improved over time, and the emphasis should be on progress and refinement.
4. **Legal Issues and Economic Challenges are Part of Tech Evolution**:
- Many transformative technologies face legal challenges, especially in their early stages. This isn’t unique to AI. These challenges can lead to improved guidelines and practices for the industry. Furthermore, the mention of potential lawsuits is speculative. Even if OpenAI faces challenges, this does not mean that the entire field of generative AI will be rendered obsolete.
Lastly, on a broader note, technology's real value is often realized in the long run. Immediate setbacks or challenges do not necessarily predict a technology's long-term viability or success.
Elegant, eloquent, right on!