Erdosgate
“extraordinary claims require extraordinary evidence” — or at least they used to?
OpenAI’s Sebastien Bubeck (first author, earlier, on the oversold paper Sparks of Artificial General Intelligence, which dubiously alleged that GPT-4 “could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system”) dropped a HUGE claim on Friday:
Solving a whole bunch of unsolved Erdös problems (a famous set of mathematical conjectures) would indeed be a big deal.
A lot of people were excited; 100,000 people viewed his post on X.
Alas, “found a solution” didn’t mean what people thought it did. People imagined that the system had discovered original solutions to “open problems.” All that really happened was that GPT-5 (oversimplifying slightly) crawled the web for solutions to already-solved problems.
Within hours, the math and AI communities revolted:
Sir Demis Hassabis called it “embarrassing”.
The next day, Bubeck tried to backtrack, deleting the original tweet and claiming he was misunderstood:
Yeah, right. I don’t know anybody who believes his walk-back.
A friend emailed me, “It’s sort of like when you tell your girlfriend that you’ve “figured out” a problem when you just googled it.”
§
I would hope that the whole thing would be seen as a kind of teachable moment. Some people (I won’t name the guilty) were extremely quick to take Bubeck at his word. But why? The claim would have been extraordinary, and should have been vetted closely. I smell a really big dose of people believing what they want to believe.
All of this gave me a bad case of deja vu, back to 2019, when OpenAI claimed that they had a robot that had “solved” the Rubik’s cube. That was kind of the beginning of the end of my relationship with them, because when I probed, I found that the claim of a “solution” was pretty misleading, as I summed up in a tweet, and they refused to correct their misleading presentation:
Some things never change.
Update: Sebastien Bubeck wrote a long, detailed tweet explaining his perspective, which begins “My posts last week created a lot of unnecessary confusion*, so today I would like to do a deep dive on one example to explain why I was so excited. In short, it’s not about AIs discovering new results on their own, but rather how tools like GPT-5 can help researchers navigate, connect, and understand our existing body of knowledge in ways that were never possible before (or at least much much more time consuming)” and ends “When I said in the October 11 tweet that “it solved [a problem] by realizing that it had actually been solved 20 years ago”, this was obviously meant as tongue-in-cheek. However, I now recognize that this moment calls for a more serious tone.” You can read his full post here.