The Rise and Fall of ChatGPT?
August has not been kind to generative AI
Maybe it’s too soon to declare victory on my August 12th bet that ChatGPT will turn out to be a dud, but I will tell you one thing: it’s not obvious I am wrong.
Five observations, eleven days later:
Axios just reported that a lot of companies are struggling to actually deploy generative AI, “due to high costs and confusion”. No deployment, no revenue. Quoting further, “Nearly 70% of respondents to the S&P Global survey said they have at least one AI project in production... but about half of those companies (31% of respondents) are still in pilot or proof-of-concept stage, outnumbering those who have reached enterprise scale with an AI project (28% of total respondents).” And of course not every enterprise-scale experiment will be successful.
The backlash has begun. Earlier today, the cultural critic Ted Gioia was even harsher than I was, in a new essay entitled “Ugly Numbers from Microsoft and ChatGPT Reveal that AI Demand is Already Shrinking”, subtitled “The only areas where AI is flourishing are shamming, spamming & scamming”. A few days earlier, Financial Times columnist John Thornhill quoted this very newsletter, concluding with a zinger: “Doubtless, Marcus will also be proved right that much of the corporate money thrown at the technology will be wasted and most start-ups will fail. But who knows what new stuff will be invented and endure? That is why God invented bubbles.” Ouch!
Some of the problems we’ve known about for a long time appear to persist. Google’s latest LLM “includes Hitler, Stalin and Mussolini on a list of ‘greatest’ leaders and Hitler also makes its list of ‘most effective leaders’”, according to a news report yesterday.
OpenAI has been sued a lot for a lot of reasons, but earlier this week ArsTechnica reported that the latest potential lawsuit — this time from the New York Times — could “force OpenAI to wipe ChatGPT and start over”, with OpenAI potentially “fined up to $150,000 for each piece of infringing content”, serious money even for OpenAI, and a potential challenge to the economics of the whole enterprise. This could be particularly devastating because Large Language Models aren’t like classical databases in which individual pieces of data can be removed at will; if any content is removed, the entire model must (so far as I understand it) be retrained, at great expense.
Meanwhile, I am pretty sure Chris Christie wasn’t aiming to be kind to Vivek Ramaswamy tonight at the Republican debate when he likened Ramaswamy to “a guy who sounds like ChatGPT”.
In less than a year, ChatGPT has gone from being mistaken for AGI to being the butt of a joke, and an insulting shorthand for robotic, incoherent, unreliable, and untrustworthy.
And the financial challenges are starting to mount up.
It ain’t over yet, but what a stunning reversal of fortune.
Gary Marcus hopes you will listen to his eight-part podcast Humans versus Machines; the final episode, on regulating AI, featuring Alondra Nelson and Brian Merchant, with archival footage from Sam Altman and Senators Hawley, Blumenthal and Kennedy, drops Tuesday.
If you enjoy these posts, consider becoming a free or paid subscriber.