By any reasonable account, the worst investment in the history of AI has to be the more than $100 billion that has been invested in “driverless” cars. There may be a payoff someday, but thus far there has not been much of one. By many accounts Waymo rides are still a net loss on a per-ride basis, and the number of competitors left in the full-autonomy game has dwindled. Waymo runs in a few extremely well-mapped cities with mostly very good weather, but to my knowledge it has never been tested in places with bad weather, poorly mapped roads, alternative driving patterns, and so on. The generality of Waymo’s approach is very much still in question, and I am not sure anyone else is still seriously in the full-autonomy game. (Tesla, for example, always requires a human in the loop.) Most of those who invested in driverless cars lost money, unless they invested early and cashed out before patience began to run out.
"A $100B LLM is still an LLM"
100% this. There have been no significant paradigm shifts that move us away from hallucinations and similar problems with LLMs.
History repeats itself again. During the previous cycle, in the 1980s and '90s, big money was spent on infrastructure and research programs, at that time by governments: Japan's Fifth Generation computer project and the US Strategic Computing Initiative. Just like today, they didn't have a clear strategy back then; the hope was that AI would somehow "emerge" :)
Blind faith in scaling laws that they don't understand plus the promise of owning the means of production is an irresistible combination. Losing their $100B is the only way that they'll learn.
One bad thing about this plan is the signal it might send to competitors and companies that are trying (in vain?) to capitalize on LLMs, AI, or whatever we call it. Money that companies could spend on useful endeavors will probably be squandered. A technology that will likely cost jobs (whether implemented successfully or not), that treats humans as externalities, and that will likely not deliver a more stable food supply or a better quality of life for the masses sounds to me like a shortsighted investment driven by greed, narcissism, and perhaps idiocy, resulting in a positive feedback loop that I'm afraid won't end well. But if it can happen, it will. These companies have the deep coffers to steer and manipulate public opinion and to buy or nudge politicians.
Just think about the public transit system all that money could have built.
"Presumably the operating idea here is that (a) $100B will equal AGI and (b) that customers will pay literally anything for AGI. I doubt either premise is correct." It just struck me how important that second part is. If AGI amounts to 'another human,' then its value is equivalent to that of another human, and we already know that 'pay anything' for another human is nonsense.
And if an octillion-parameter LLM in 2031 ruins lives in a manner similar to the tragedy of the subpostmasters and subpostmistresses (“Mr Bates vs The Post Office”), this time with LLM hallucination at fault, will there be outrage?
Or will critics, this time, because the sunk cost is so deep, have their mouths taped up, their bodies broken and their families and lives torn asunder by a sophisticated army of lawyers, corrupt politicians, troll farms and other thugs (maybe literal thugs, and autonomous machines)?
Time to cash in on this next blockbuster movie idea… does anyone have James Cameron’s number?! 😂
The whole thing with AI seems like pure hype. No one is investing the kind of money that would be needed to really rip apart the brain and work out how intelligence emerges. Why bother when you can churn out another cruddy LLM and just hype it TF?
Capitalism and market forces killed the search for AGI as soon as a minimum viable fake became available.
Quite right. This 'single approach' (token/pixel statistics) is going to fail regarding AGI.
Taking OpenAI's own last published scaling numbers (for GPT-3) and using the Winogrande benchmark (some real understanding challenges), I have calculated, quick and dirty, that we may need something on the order of ~3,500,000,000,000,000 *times* as much compute as we currently have. See https://ea.rna.nl/2024/02/13/will-sam-altmans-7-trillion-ai-plan-rescue-ai/. The effect of scaling is something like the log of the log of (the log of) size. Going from a $10B to a $100B investment is going to do nothing.
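To see why another 10x of spend barely moves a curve like that, here is a toy sketch. The functional form and every constant in it are made-up assumptions for illustration, not OpenAI's published figures; the point is only the shape of a double-log curve:

```python
import math

# Toy model: suppose benchmark accuracy grows like a + b * log(log(compute)).
# The constants a and b and the compute figures are illustrative assumptions,
# not real scaling-law fits.
def toy_accuracy(compute, a=0.5, b=0.05):
    return a + b * math.log(math.log(compute))

base = toy_accuracy(1e23)      # hypothetical "current" training compute
tenfold = toy_accuracy(1e24)   # ten times that compute
print(f"gain from 10x compute: {tenfold - base:.4f}")  # ≈ 0.0021
```

At these made-up constants, multiplying compute by ten buys roughly a fifth of a percentage point. Closing a double-digit accuracy gap on a curve like this is what drives the astronomical multipliers above.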
Looking at self-driving cars through the lens of investor ROI is short-sighted, especially coming from somebody who cares as much about societal well-being as you do.
Self-driving cars are hard. Harder than anyone in the hype cycle predicted. But you offer no workable ideas for what could have been done differently.
The progress in self-driving cars is very good. Think of them as the James Webb telescope. It will get done. It will be good. It will be very much worth it. And investors will get their money. The patient ones.
hi gary; forget about all the ‘hopehype’ and write your ‘positive’ essays. we all need them!
Gary, I'd suggest a bit more research on self-driving before you write more. Tesla currently requires a human in the loop, as do almost all companies except the few that have obtained permits from the CPUC in California and some other jurisdictions to operate without human drivers. Tesla is pursuing a far more difficult non-sandboxed approach, and that effort, while long and arduous, is about to pay off: v12 FSD has just been released and is, by all accounts, 99% of the way to full self-driving.
We can expect permits for operation without human supervision in the next year or two, at which point the robotaxi revolution will begin in earnest. While I am strongly opposed to AGI, I support narrow AI applications like FSD as being, in general, a benefit to the environment, human safety, and perhaps even productivity (or at least good sleep). Of course, we will also see a massive amount of job loss even from this narrow type of AI as it is rolled out globally in the coming decade.
I don't know where to post this.
I wondered if anyone remembered this historical example of AI Fakes from the 1980s and a reference to it.
In the 1980s the Nobel Prize-winning economist Herbert A. Simon wrote an AI program, named Newton, that was able to curve-fit the motion of the planets to an inverse-square law. Simon concluded that his program was as smart as the physicist Isaac Newton. Some physicists remarked that Newton didn't invent the inverse-square law of attraction; he invented mass. Does anyone remember this AI program?
I agree that customers wouldn’t pay *literally* anything for AGI, but they’d probably pay an awful lot. If you could replace any remote worker in your company - from a software tester up to an executive - with a bot that never sleeps and never takes a vacation, what would that be worth?
I have no idea what the cost to OpenAI would be to run a single instance of an AI worker, but if it’s less than (to make up a number) $100,000 a year, I’m guessing they can find enough customers to make a nice profit.
If, that is, it’s actually AGI.
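The back-of-the-envelope above can be sketched out. Every number here beyond the thread's own $100,000 guess is a made-up assumption for illustration, not a real OpenAI figure:

```python
# Hypothetical AI-worker economics; all figures are illustrative assumptions.
run_cost_per_year = 100_000   # the thread's made-up annual cost per AI-worker instance
price_per_year = 150_000      # assumed price, undercutting a loaded human salary
margin = price_per_year - run_cost_per_year

# Seats needed to recoup, say, a $100B build-out over ten years:
capex = 100_000_000_000
years_to_recoup = 10
seats_needed = capex / (margin * years_to_recoup)
print(f"seats needed: {seats_needed:,.0f}")  # 200,000 at these numbers
```

At these assumed numbers, a couple hundred thousand corporate "seats" would cover the build-out, which is why the "nice profit" claim is at least arithmetically plausible, conditional on the product actually replacing a worker.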
LLMs still can't plan. https://www.linkedin.com/feed/update/urn:li:activity:7180185792835731457/
Colorless green ideas sleep furiously.