Something’s in the air — I think it is Saudi money.
My stream yesterday was absolutely filled with weirdly precise AI butt facts. Butt facts, in case you don’t know the term, are random bits of knowledge that a speaker pulls out of their, well, … you get it.
Here are some that I saw yesterday, in roughly the order that I heard them, starting with a wildly improbable prediction from Elon Musk, master of wildly improbable predictions:
At the annual Future Investment Initiative in Riyadh, Saudi Arabia, Elon Musk said, “I think by 2040 probably there are more humanoid robots than there are people”. Fat chance. As I told an outlet called Decrypt that asked me for comment, “Elon has a track record of overoptimistic predictions about AI, and this one is no different….There are only about 1.5 billion cars on the road; many people can’t afford one or don’t see the need. The same will be true for humanoid robots, and we aren’t going to see six humanoid robots for every car anytime soon. … Roomba, the best-selling consumer robot of all time, sells for a few hundred dollars and has sold around 50 million units. It’s just fantasy to imagine selling 200 times as many humanoid robots in the nearish term when nobody knows how to build a single safe, reliable, generally useful humanoid right now, at any price.”
At the same venue, Elon Musk said confidently (after acknowledging that the future is not fully knowable), “I feel comfortable saying that AI is getting 10 times better each year” – never specifying any measure that supports this claim (is OpenAI’s o1 100 times better than GPT-4? Certainly not by their own data), nor pointing to any source. He then went on to extrapolate that it would be 10,000 times better in 4 years, never considering potential bottlenecks around data or compute, nor the inherent problems with LLMs. From all this he concluded, “I think it will be able to do anything that any human can do, possibly within the next year or two.” My offer to bet him a million dollars on this nonsense stands.
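(The arithmetic behind the extrapolation, for what it’s worth, is simple compounding: 10× per year for four years is 10 × 10 × 10 × 10 = 10,000×. The multiplication is fine; it’s the premise, a reliable tenfold improvement per year on some unspecified measure, that is doing all the work.)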
Famed investor Masayoshi Son confidently told the crowd that “Artificial Super Intelligence,” which he asserted would be 10,000 times smarter than humans, would come in 2035. “10,000 times smarter than humans”? What does that even mean? It’s a made-up round number, with no real meaning; pure hype. (And how do his numbers square with Elon’s? Oh, never mind.)
As I put it on X last night, quantitative vibes are still just vibes.
§
An hour later, I saw that on the same day, at the same event in Saudi Arabia, Son, riffing on the 10,000x number he had introduced a moment earlier, had actually achieved a trifecta of weirdly precise statements about things that are impossible to know with precision, all in one go, forecasting that it would require 400 gigawatts, 200 million chips, and $9 trillion in capital.
§
Any serious scientist or engineer knows you can’t possibly predict the future with that kind of precision, especially when there are so many unknowns. For robots, for example, we don’t know the cost of materials, where battery science will be in 2040, how good the software will be, etc.
Even if you believed that scaling was infinite, that we wouldn’t run out of useful data, etc., you still wouldn’t know exactly how efficient future models would be, in terms of energy and chips. There is just no way that we can predict all of this out to 2040 with any certainty.
When Elon told us in October 2019 that there would be a fleet of a million robotaxis a year later, and didn’t show his work, nobody should have believed him for an instant. Nor should anyone take anything that he or Son said yesterday seriously.
§
In my opinion the audience of investors at the forum was being lied to: they were being assured that everything in the field of AI is under control and well plotted out, when in fact science is hard and the field is in its infancy.
But that’s OK; I doubt the investors mind. If they can find a plausible story in which to invest a huge amount of other people’s money, they make huge fees (typically 2% of whatever is invested), immediately.
Sure, those investments may not pan out (pity for whoever put up the capital), but the investors still get paid, a lot. That’s why so many investors working with other people’s money love plausible stories with big numbers. 2% of a $9 trillion investment is a heck of a lot more than 2% of a $1 million investment. Basic math. (Side bonus: GenAI uses a massive amount of power, to the point of keeping fossil fuels in vogue, and that’s good for oil-rich countries, even if GenAI never becomes reliable.)
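(To make the basic math concrete, using Son’s own figure: 2% of $9 trillion is $180 billion in fees, while 2% of $1 million is $20,000. Same percentage, wildly different payday.)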
To sweeten it all, they get free press for all these outlandish claims, and that drives up the whole thing.
At least for a while.
As my friend Phil Libin points out, these weirdly specific numbers are also weirdly vague on upsides. What good is a 10,000x smarter machine, exactly? What do we even want a fleet of 10 billion humanoid robots for?

Gary Marcus gets a lot of hostile pushback these days for his dark predictions about GenAI, but expects to die laughing.