I love this so much because you’re pointing out so perfectly the self-referential delusion that has befallen the field and the financial system that supports it. Thanks for calling it out - consistently and honestly.
Perhaps most important is your last point, in parentheses. The irresponsibility is so profound, it's hard to conceive.
Sam Altman, Elon Musk, et al., might be decent or even great engineers, but they are clearly also marketers trying to strike while the iron is hot. I never met a good entrepreneur who wasn't also totally full of s!@# half the time.
Altman is not an engineer at all; Elon arguably is.
certainly not decent and mostly not engineers.
Some of them are half-full of shit all the time!
The upsides question is the biggie here: sure, the CEO of NVIDIA thinks we need to invest $9 trillion in AI compute. But to what end? Somewhere I think Douglas Adams is having a chuckle to himself about all this stuff. He saw it so clearly and early.
The genuine suffering that irresponsible statements like these cause in the general public is deeply frustrating. ("deeply frustrating" is not strong enough but I'm trying to be less mad on the internet.)
Appreciate the good work of the sanity checks as always. Putting folks who would make statements like this on stage makes me strongly question the credibility of the venue.
🤣🤣🤣🫠
Son forecast that “it would require 400 gigawatts, 200 million chips, and $9 trillion capital”
He neglected to mention the most important requirements : the flux capacitor and the DeLorean.
actually laughed out loud
Great Scott!
For perspective, the current total United States electrical generating capacity is about 1,300 gigawatts (if my math is correct).
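A back-of-the-envelope check on those figures, using only the numbers quoted above (the per-chip figure naively assumes all of the capital goes to chips, which it would not; this is a sketch, not a cost model):

```python
# Rough scale check on Son's forecast, using only the numbers quoted above.
# Purely illustrative arithmetic, not a cost model.
power_gw = 400            # forecast AI compute power draw, in gigawatts
us_capacity_gw = 1300     # approximate total US generating capacity, in gigawatts
chips = 200_000_000       # forecast chip count
capital_usd = 9e12        # forecast capital, in dollars

print(f"share of US generating capacity: {power_gw / us_capacity_gw:.0%}")  # ~31%
print(f"capital per chip (naive): ${capital_usd / chips:,.0f}")             # ~$45,000
```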
"The issue is not artificial intelligence, it is *human* intelligence, specifically the limitations of which keeps such nonsense alive." [repeat ad nauseam]
Hard for my pessimistic brain not to immediately think of the tech waste and wobbly manufacturing ethics at play in creating whatever hardware comes along with “humanoid robots.” Feels like we’re valuing “progress” over the actual human lives impacted by mining the precious metals needed to produce most products, and the environmental fallout of all of it from production to disposal. Suffice it to say that all of this exists currently, but the lack of any need or desire expressed for “humanoid robots” makes the situation feel more dire to me.
Those points about resources are excellent reasons to move that sort of thing to Mars, asteroids, and the moon.
Moving Elon and Sam to Mars would be a propitious start.
There are two types of naysayers. Those who don’t think something can be done and those who don’t think it (or the approach to accomplishing it) is a good idea.
And on a directly related note, there is a difference between being smart and being wise (and not in the “wise”cracks on X sense😊)
And my suggestion to move Elon and Sam to Mars should at least make it clear that I’m not a doubting Thomas on that issue.
Sooner rather than later.
I have great respect for Elon, and hope he continues to do well. I talked to him shortly after he got back from his African vacation when he caught malaria 20 years ago or so. Didn't suffer nay-sayers then, and doesn't now. He is very bright, a bit aspie, and he accomplishes great things. He does listen if you just cut to the chase and explain things. His Achilles heel is that he's very smart, smart enough to hack through almost anything, but it gives him the confidence to jump into things that a bit of domain knowledge would be very helpful for.
He would respond to me saying, "Yeah, but then I wouldn't be able to see everything with fresh eyes." He likes to go back to first principles.
Take another look. He changes his mind. What you see is not fixed.
And he loves Mariachi. :-D
Moon is too close
I'm glad I was sitting down (on my human posterior) for this one. What do you mean Elon said something outlandish?? Why, he'd never—
Self-interested con-artists will say anything to get gullible people to give them what they want.
"Elon Musk, master of wildly improbable predictions." To be fair he is also the master of wildly improbable accomplishments such as dropping the cost of launching things into space by 90% (so far).
The real fallacy (which I see often) is in assuming that because some are true, all must be. My sense is that he understands rockets and manufacturing better than AI.
Elon's career has been a lot of learning by collision. He's famous for tolerating it in rocket development. I remember the billion dollars he spent on the giant liquid oxygen tank. He abandoned it and never looked back.
Why?
A. Carbon fiber has a horrible failure mode: it fatigues invisibly (without special instruments), and the fatigue is not repairable. The OceanGate submersible crew found this out. It seems perfect until it disintegrates.
B. Carbon fiber itself gets stronger at ultra-cold temperatures. But the epoxy that holds it together doesn't necessarily.
C. In space, one of the components of the solar wind is oxygen nuclei. Those will penetrate and steal electrons. Carbon is a good moderator. As that builds up in a 3-6 month transit to Mars, when will it become enough to spark a burn of the carbon? If that starts, liquid O2 is not going to stop the burn.
Elon also tried a major factory automation project in Fremont without reading Shigeo Shingo or any of the Japan Management Association translations by Productivity Press. So he did it wrong. And learned, but I don't know if he even knows about those books.
Elon will eventually learn the issues with AI. He will be really angry that he was lied to by AI people. (Unless he's the one with the rose-colored glasses on and they have tried to tell him.)
Rockets are also better defined. They go up and, he insists, they come back down. We have known how to make rockets since the 1940s. Elon got a team of people to solve that problem really well. With AI, we don't know what the problem is, and he is selling the solution.
AI is getting to the point where, with a lot of resources, the simpler problems are solvable. Waymo's progress is very encouraging. But it takes good engineering and no fixed ideas, so I am not sure Musk himself can pull it off. He did stumble with Tesla's FSD.
My read of the "simple problems with lots of resources" is that the system is overfitting to known solutions rather than actually problem-solving. I wonder what you mean about Waymo's progress? They have more remote drivers than cars in the fleet... They got a system to recognise straight lines and more or less reliably read road signs (including when they are printed on a t-shirt). The system ultimately still has no idea about momentum, anticipation and preemptive situational awareness, or people trying to be funny by putting STOP on their clothes. It will have better reaction times to sudden stops, I'll give you that.
“the system is over fitting to known solutions“
As the mathematician John von Neumann is purported to have said:
“With four parameters I can fit an elephant, with five I can make him wiggle his trunk, and with 175 billion, I can make him hallucinate like an LLM.”
Incidentally, who needs billions of parameters when just one (LSD) would do the trick?
Incidentally, you gotta love the way technicians (as Freeman Dyson would have referred to them) describe their faulty “engineering”, with terminology like “phantom braking“ and “hallucination”.
"They have more remote drivers than cars in the fleet"
I would like a citation for that.
At the scale Waymo is working at now, adding 1 million miles each week, and given that one needs an instant reaction to any road situation, that suggests they are doing things right. You can't fake it at that level.
Not saying it is all in the bag. There have been videos that show that the machine does have pre-emptive situational awareness, such as anticipating a biker showing up from behind a truck.
Overall, they do well and will get better.
My apologies, it was Cruise. 1.5 drivers per vehicle. Gary described it a year ago here: https://garymarcus.substack.com/p/could-cruise-be-the-theranos-of-ai
I do not understand, and this could be my shortcoming, how Waymo could be uniquely different/better with a very similar approach to the problem.
Conceptually, my scepticism lies in the fundamental approach to solving the problem: I can drive because my lizard brain understands physics, continuity of objects, momentum, and the behaviour of other primates with comparable lizard brains. I have an inherently self-consistent model of the world that lets me draw conclusions from very limited input data: two low-resolution, narrow-FOV eyeballs stuck in a skull inside the cabin.
I am more than certain that this instance of Waymo collective stupidity (and it was Waymo this time) had to be resolved by remote drivers:
https://youtu.be/7b_GtLcdUXM?si=NunfBpZgYYgKKhfE
The original video is more amusing, but I'm struggling to find it immediately right now.
Bottom line, I think that we are not only far from good self-driving, but also that our current approach is insufficient to get us there. Bigger "how to human" problems have to be solved first, in my opinion.
Elon is using LLM tech with multiple sensors and some "other" software for driving assistance.
My belief is that he hasn't yet dug into how LLMs work or into their architectural issues. He has been seduced by AI people, because an LLM will do things in the early stages that are amazing. If you assume that linear or exponential improvement should follow from that: wow. Except AI responses look to me more like a log power law on errors, with a non-zero asymptote.
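To illustrate the shape being described, here is a minimal numerical sketch, assuming an error curve of the form floor + a·N^(−b); the parameters are invented purely for illustration and are not fitted to any real benchmark:

```python
# Illustrative only: a power-law error curve with a non-zero asymptote,
# err(N) = floor + a * N**(-b). Parameters are invented for the sketch,
# not fitted to any real system or benchmark.
floor, a, b = 0.05, 1.0, 0.3

def err(n: float) -> float:
    return floor + a * n ** (-b)

for n in (1e3, 1e6, 1e9, 1e12):
    print(f"N = {n:.0e}   error = {err(n):.3f}")
# Each extra factor of 1000 in N shaves off less error,
# and no amount of N gets below the 5% floor.
```

Early on, such a curve drops fast enough to look like steady improvement; the flattening only shows up later.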
Every age gets the oligarchs it deserves. The Athenians had their ολιγαρχία (oligarchy), the Romans had theirs. The Gilded Age had Carnegie, Gould, the Rockefellers. They weren't nice people, but they weren't moral simpletons, either.
We have Musk. Sad, but true.
But his initial success was in software design--long before he got into rockets!
Wise skepticism, as always from you :-)
And re the ability to be precise with AGI goals, good luck:
The Frustrating Quest to Define AGI (2024) https://curriculumredesign.org/wp-content/uploads/The-Frustrating-Quest-to-Define-AGI-1.pdf This paper discusses whether Artificial General Intelligence (AGI) can ever be defined properly by reviewing the various approaches, identifying their validity, and proposing alternatives.
The gap between rhetoric and reality is close to unsustainable. These made-up numbers only underscore how big the problem has become.
Something has to give.
For all the improbable predictions above, I have the following quote:
“Never let the truth get in the way of a good story.” - Mark Twain
Having done the rounds with venture capital... they want and expect hype. They want to see insane dedication. Sam Altman fits that bill.
A truth. Most people in venture capital are not smart. They are there for the same reason Willie Sutton robbed banks. They can make a good living by lying and swindling. The dunces are lucky sometimes, but it's rare. There is a good reason why all of the earnings in VC are in the top 25%.
Most of those earnings are in the top 5% of VC firms. There are a few strategies that work.
1. Shoot fish in a crowded barrel.
2. Triage your seed investments and pour money into the ones that are good and have managers like Elon. (But people can, and have, faked that for a while using cocaine up the nose. Better VCs check out and check up on what their entrepreneurs are doing, surreptitiously even.)
3. Invest in what you know, and carefully nurture your companies - all of them. This requires people at the VC firm to actually work. It requires understanding the stages your investments will go through, and showing up to make sure it goes well and your people have the resources they need, when they need them. (For instance, a common gap in biotech/pharma is lack of knowledge about regulatory matters. So have staff who do know them, and haul your entrepreneurs through that process for their product. Kicking and screaming if necessary.)
That's the basics of how to make money in VC.
Raising money from limited partners (LPs) is something else. Those guys are virtually clueless, and quite impressed with themselves. So... we see spectacles like this one. The Saudis are used to it. I heard through the grapevine that if they find out you're actually scamming them, it's quite hard to find a hiding place. Think Khashoggi, except you're a nobody. Not advised. But hype? Yeah. So you have people like Elon and Son, who have proven they can deliver things, make the big hype speeches. They won't be hunted down if things don't work exactly as stated.
Make some sense of it?
Gary. You would be safe giving a talk there if someone would let you in. They would probably appreciate it.
Smart people say remarkably stupid things sometimes. They must realize that the actual trend that we are seeing is that AI needs exponential data and compute to show linear returns on arbitrary benchmarks. My best guess: they know it's untenable, but they can't stop dancing because then the music stops.
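A tiny sketch of the "exponential data and compute for linear returns" point, assuming (purely for illustration) that benchmark error falls as a power law in compute: each further halving of the error then costs the same multiplicative jump in compute, so equal-looking gains compound into exponentially growing resource demands.

```python
# Illustrative only: if error ~ C**(-b), then halving the error always costs
# the same multiplicative increase in compute C, so repeated equal gains
# require exponentially growing resources. The exponent b is invented.
b = 0.3
k = 2 ** (1 / b)  # solve (k * C)**(-b) == 0.5 * C**(-b) for k
print(f"compute multiplier per halving of error: ~{k:.0f}x")  # ~10x, compounding
```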
I also suspect they know it's nonsense. But there's a lot of Saudi money at stake, so these folks are prepared to exaggerate, flatter and outright lie. Musk is no exception. He's got the moral intelligence of a horny 13-year-old.
WHY IS THERE THIS HEADLONG RUSH INTO NEW & UNPROVEN TECHNOLOGIES?
That also use horrendous amounts of energy and natural resources‼️
WHY?
This is the best explanation I have seen (short answer is $$$):
https://www.wheresyoured.at/tss/