I love this so much because you’re pointing out so perfectly the self-referential delusion that has befallen the field and the financial system that supports it. Thanks for calling it out - consistently and honestly.
Perhaps most important is your last point, in parentheses. The irresponsibility is so profound it's hard to conceive.
The upsides question is the biggie here: sure, the CEO of NVIDIA thinks we need to invest $9 trillion in AI compute. But to what end? Somewhere, I think, Douglas Adams is having a chuckle to himself about all this. He saw it so clearly, and so early.
The genuine suffering that irresponsible statements like these cause in the general public is deeply frustrating. ("deeply frustrating" is not strong enough but I'm trying to be less mad on the internet.)
Appreciate the good work of the sanity checks as always. Putting folks who would make statements like this on stage makes me strongly question the credibility of the venue.
Son forecast that "it would require 400 gigawatts, 200 million chips, and $9 trillion capital."
He neglected to mention the most important requirements : the flux capacitor and the DeLorean.
actually laughed out loud
Great Scott!
For perspective, total United States electrical generating capacity is about 1,300 gigawatts (if my numbers are correct), so 400 gigawatts would be roughly a third of it.
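A quick back-of-the-envelope sketch of Son's numbers against that capacity figure (the 1,300 GW value is approximate, and "per chip" here is just the forecast's capital divided by its chip count, not a real price):

```python
# Back-of-the-envelope scale check on Son's forecast
# (400 GW, 200 million chips, $9 trillion, per the quote above).
son_gw = 400              # gigawatts of AI compute in the forecast
us_capacity_gw = 1300     # approximate total US generating capacity

share = son_gw / us_capacity_gw
print(f"{share:.0%} of total US generating capacity")  # ~31%

capital_usd = 9e12        # $9 trillion
chips = 200e6             # 200 million chips
print(f"${capital_usd / chips:,.0f} of capital per chip")  # $45,000
```

Even as a crude ratio, the forecast implies standing up a third of today's US grid just for AI compute.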
"The issue is not artificial intelligence, it is *human* intelligence, specifically the limitations of which keeps such nonsense alive." [repeat ad nauseam]
Hard for my pessimistic brain not to immediately think of the tech waste and wobbly manufacturing ethics at play in creating whatever hardware comes along with "humanoid robots." It feels like we're valuing "progress" over the actual human lives impacted by mining the precious metals needed to produce most products, and over the environmental fallout of all of it, from production to disposal. Suffice it to say all of that exists already, but the lack of any expressed need or desire for "humanoid robots" makes the situation feel more dire to me.
Moving Elon and Sam to Mars would be a propitious start.
There are two types of naysayers. Those who don’t think something can be done and those who don’t think it (or the approach to accomplishing it) is a good idea.
And on a directly related note, there is a difference between being smart and being wise (and not in the “wise”cracks on X sense😊)
And my suggestion to move Elon and Sam to Mars should at least make it clear that I’m not a doubting Thomas on that issue.
Sooner rather than later.
Moon is too close
I'm glad I was sitting down (on my human posterior) for this one. What do you mean Elon said something outlandish?? Why, he'd never—
Self-interested con-artists will say anything to get gullible people to give them what they want.
"Elon Musk, master of wildly improbable predictions." To be fair he is also the master of wildly improbable accomplishments such as dropping the cost of launching things into space by 90% (so far).
The real fallacy (which I see often) is in assuming that because some are true, all must be. My sense is that he understands rockets and manufacturing better than AI.
Rockets are also better defined: they go up and, he insists, they come back down. We have known how to make rockets since the 1940s; Elon got a team of people to solve that problem really well. With AI, we don't know what the problem is, and he is selling the solution.
My read of "simple problems with lots of resources" is that the system is overfitting to known solutions rather than actually solving problems. I wonder what you mean about Waymo's progress? They have more remote drivers than cars in the fleet... They got a system to recognise straight lines and more or less reliably read road signs (including when they are printed on a t-shirt). The system ultimately still has no idea about momentum, anticipation and preemptive situational awareness, or people trying to be funny by putting STOP on their clothes. It will have better reaction times to sudden stops, I'll give you that.
“the system is over fitting to known solutions“
As the mathematician John von Neumann is purported to have said:
"With four parameters I can fit an elephant, with five I can make him wiggle his trunk, and with 175 billion I can make him hallucinate like an LLM."
Incidentally, who needs billions of parameters when just one (LSD) would do the trick?
Incidentally, you gotta love the way technicians (as Freeman Dyson would have referred to them) describe their faulty “engineering”, with terminology like “phantom braking“ and “hallucination”.
My apologies, it was Cruise. 1.5 drivers per vehicle. Gary described it a year ago here: https://garymarcus.substack.com/p/could-cruise-be-the-theranos-of-ai
I do not understand, and this could be my shortcoming, how waymo could be uniquely different/better with a very similar approach to the problem.
Conceptually, my scepticism lies in the fundamental approach to solving the problem: I can drive because my lizard brain understands physics, continuity of objects, momentum, and the behaviour of other primates with comparable lizard brains. I have an inherently self-consistent model of the world that lets me draw conclusions from very limited input data: two low-resolution, narrow field-of-view eyeballs stuck in a skull inside the cabin.
I am more than certain that this instance of Waymo collective stupidity (and it was Waymo this time) had to be resolved by remote drivers:
https://youtu.be/7b_GtLcdUXM?si=NunfBpZgYYgKKhfE
The original video is more amusing, but I'm struggling to find it right now.
Bottom line: I think that not only are we far from good self-driving, but also that our current approach is insufficient to get us there. Bigger "how to human" problems have to be solved first, in my opinion.
But his initial success was in software design--long before he got into rockets!
Every age gets the oligarchs it deserves. The Athenians had their ολιγαρχία, the Romans had theirs. The Gilded Age had Carnegie, Gould, the Rockefellers. They weren't nice people but they weren't moral simpletons, either.
We have Musk. Sad, but true.
Wise skepticism, as always from you :-)
And re the ability to be precise with AGI goals, good luck:
The Frustrating Quest to Define AGI (2024): https://curriculumredesign.org/wp-content/uploads/The-Frustrating-Quest-to-Define-AGI-1.pdf — this paper discusses whether Artificial General Intelligence (AGI) can ever be defined properly by reviewing the various approaches, identifying their validity, and proposing alternatives.
The gap between rhetoric and reality is close to unsustainable. These made-up numbers only underscore how big the problem has become.
Something has to give.
For all the improbable predictions above, I have the following quote:
“Never let the truth get in the way of a good story.” - Mark Twain
Smart people say remarkably stupid things sometimes. They must realize that the actual trend that we are seeing is that AI needs exponential data and compute to show linear returns on arbitrary benchmarks. My best guess: they know it's untenable, but they can't stop dancing because then the music stops.
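The "exponential compute for linear returns" pattern is just a logarithmic scaling curve; a toy illustration (every number here is hypothetical, chosen only to show the shape of the claim):

```python
import math

# Toy logarithmic scaling law: each 10x increase in compute
# buys the same additive gain on a benchmark score.
# All constants are illustrative, not fitted to any real model.
BASE_FLOPS = 1e21
for compute in [1e21, 1e22, 1e23, 1e24]:
    score = 10 * math.log10(compute / BASE_FLOPS) + 50
    print(f"{compute:.0e} FLOPs -> score {score:.0f}")
```

Under a curve like this, going from a score of 50 to 80 costs a thousandfold more compute, which is the untenable trend the comment describes.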
I also suspect they know it's nonsense. But there's a lot of Saudi money at stake so these folks are prepared to exaggerate, flatter and outright lie. Musk is no exception. He's got the moral intelligence of a horny 13 year old.
WHY IS THERE THIS HEADLONG RUSH INTO NEW & UNPROVEN TECHNOLOGIES?
That also use horrendous amounts of energy and natural resources‼️
WHY?
This is the best explanation I have seen (short answer is $$$):
https://www.wheresyoured.at/tss/
AI butt facts . . . not sure I like the name, but yeah, there are some wild predictions out there.
If you take a step back, though, and look at the progress in AI since GPT-4, what do you think about the predictions inherent in the subtext of things like the 6-month pause on AI training suggested over a year and a half ago? Or the predictions of AI progress inherent in a meaningfully high near-term P(doom)? I understand that these predictions are probabilistic, and strongly contingent at that, but what conclusions (if any) can be drawn from the progress in the year and a half since GPT-4 was released? How should priors be updated? What real progress, over what time frame, would indicate continued exponential growth versus saturation?
“Every prediction is an operation on the past.”
― Norbert Wiener (1894-1964)