The bad news: AI is going pretty much as I expected
I wish the news were better
A brief update:
Some battles no longer need fighting. The notion on which this Substack was founded in May 2022 — that LLM scaling would not bring us to AGI — has gone mainstream.
Even Palantir’s CEO, Alex Karp, has apparently gone full Gary Marcus:
Perhaps I also no longer need to warn people that the idea that coding would disappear soon is bullshit. Anyone remember this prediction from Anthropic CEO Dario Amodei back in March? Some journalists I could name seemed to take it seriously at the time. (I didn’t, instead calling it fantasy.)
It was. Even the generally pro-industry The Information now sees that:
The concerns about potential copyright violations that Reid Southen and I expressed here and, at greater length, in IEEE Spectrum in January 2024 continue, almost two years later, to be a problem.
Disney cited our work extensively in their June lawsuit, premised on exactly these concerns. Yesterday, Warner filed a similar suit (also drawing heavily on my work with Southen).
As far as I know, the problem of wildly derivative outputs has not gone away. (All the replies we got on Twitter at the time about how the problem could be easily rectified were, as usual, nonsense.)
Meanwhile, more darkly, the Jurassic Park moment of mass-produced AI-generated misinformation that I projected in December 2022 has arrived.
If there is any true exponential nowadays, I fear it is this: the rapid doubling of AI-generated misinformation.
Sad also to say that my December 2022 predictions of deaths by chatbot have now also been confirmed, more than once. (Then there is also the correlated issue of AI-induced delusions, which I did not foresee.)
My darkest projections, though, were about the rise of techno-fascism. You can hear Garry Kasparov and me discuss them in a podcast that just dropped today. (There is also a transcript at the same link.)
Here’s a snippet:
I am truly sorry to say that so much seems to be playing out exactly as I anticipated.
Let us all hope that we can learn something from the mistakes of the last few years, and find a new path — to an AI that better serves humanity.

I think all computer science (and journalism) degrees (bachelors, masters, phd) should require a history of computer science course. This would solve so many problems.
The same shit happens every cycle. So many people just lie. Even the most basic understanding of the field’s history would make such lies obvious. Everyone should have known, at the time it was said, that Dario was lying. Dario knew what he was saying had a 0% chance of occurring.
It is infuriating.
I just wish the current state of AI hadn’t been so obvious from the start. From the beginning of the current trend, it was entirely foreseeable that we’d end up exactly where we are now.