So we've got Masayoshi Son, who lost over $50 billion in the dot-com crash. This should be understood as: "this man has no fucking clue about the future, he just gets hyped easily and throws money like shit at a wall." SoftBank's market cap is around $100 billion.
Then we've got Sam Altman, who, c'mon, we all know is a cunning liar whose whole shtick is that his face always looks concerned, which tricks idiots into thinking he's NOT one of the bad guys exploiting humanity. OpenAI is valued at around $150 billion and had an operating loss of $5 billion in 2024, projected to grow to something like $30 billion by 2028.
Finally, Larry Ellison of Oracle, market cap $400 billion. Maybe the only credible person in the room; I'm not sure, I don't know much about him.
These three are supposed to invest $400 billion in America. These are the men who, we are told, have the trust of investors to gather that money. On the one hand, it's ludicrous and laughable. On the other hand, investors have piled so much into AI on this hype train that I think it just might work. It's pathetic. We should be building homes.
Yep. Too big to fail. And for every pitfall that AI might possess, or will keep having in the future, it's very hard to imagine a world without ChatGPT.
Masayoshi Son, who lost over $50 billion in the dot-com crash - so, a perfect pairing for Sam Altman. These men know how to set money on fire. They should get Michael Saylor involved as well.
"Neither is known for absolute candor" is a very generous way of putting it.
read, watched, listened to everything currently available. there is good news and bad news.
Good News: that kind of money will have secondary effects that will be positive as the G-2 battle for the next step-function of: (1) power projection, (2) wealth (mostly displacement). it's "table stakes" for the USA to push its odds at dominance out another few decades. given a (2023) GDP of $27T, it's a fair ticket price to continue the ride ;>
Bad News: IF $500B is possible, the USA will be lucky IF 50% of that spend makes it to the anticipated goal. so much money has a funny way of getting lost, and there's no Luca Pacioli on the payroll keeping these books.
Lastly, it's the wrong problem from the start. IF you want to be "G1" you need to lead in: ENERGY and INFORMATION. this sort-of hits the second mark, but the path(s) that OpenAI and others in the LLM space are taking are so wrong that, in the words of the great Wolfgang Pauli, "it's not even wrong."
My hope is, a few researchers will get some bread-crumbs from this bakery of wealth and, using constraint-based reasoning, come up with something so radical for, say, $100M that it obsoletes the other $499,900M wasted on "buggy whips" and graft while something new/new emerges (biological computing or weird zero-point energy). this announcement, with these players, represents more the end of an era than the beginning.
speaking of a $100 million investment relative to a $500 billion investment (leaving the more exotic technologies (bio computing, zero-point) off the table for the moment), curious to know your thoughts on startup CEO Will Bryk's AI-based search engine solution for enterprise, called Exa (exa_ai).
by radical, I mean some amazing practical solutions are going to come out of the media frenzy for much less money invested
i use Exa. brilliant work, team. nice API-first focus. big fan. would back off the "RAG" lean and develop a new term as (imho) the need for RAG is inversely proportional to the quality of your corpus (AI) in terms of drift, focus, etc. i understand the need, but have "always" (short-term) considered RAG more a sign of weakness than of strength, but that's just my AGI confirmation bias voice talking ;> Exa team are rock-stars and i'm surprised they have not been snapped up already. thanks!
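for anyone following along who hasn't touched RAG: the pattern is just retrieve-then-prompt. a minimal sketch in Python - toy in-memory corpus and bag-of-words scoring of my own invention, nothing to do with Exa's actual API:

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve the
# corpus passages most similar to the query, then prepend them to the
# prompt. Toy word-overlap scoring stands in for a real embedding model.
from collections import Counter

CORPUS = [
    "Exa is an API-first search engine aimed at AI applications.",
    "RAG grounds a language model by retrieving relevant passages.",
    "Corpus quality drives retrieval quality: garbage in, garbage out.",
]

def score(query: str, passage: str) -> int:
    """Count overlapping words between query and passage (toy relevance)."""
    q = Counter(query.lower().split())
    p = Counter(passage.lower().split())
    return sum((q & p).values())  # multiset intersection, then total count

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages with the highest overlap score."""
    return sorted(CORPUS, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble the augmented prompt an LLM would receive."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("why does corpus quality matter for RAG retrieval"))
```

the point about corpus quality falls straight out of this shape: the generation step can only be as grounded as whatever `retrieve` hands it.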
In terms of "not snapped up already", you're assuming the Exa team would be interested in selling.
In fact, it's my experience that not every Founder wants to be bought, Ken
More than a few want to see how far they can take it themselves - without having a bigger org calling the shots, or interfering with direction.
Then there's the whole earn-out period, where you're still working like a nutter, but the business is no longer yours and you have to jump through hoops over a multi-year period to unlock the full amount that you sold the business for...
100% agree. having said that, they've raised $22.1M (Series A), and by the time you reach Series C it's often less the team's decision. earn-outs suck (hard)! the core reason to sell is to get on to your next, greater idea; the core reason not to sell is that you have no more ;>
Honestly, if I were working with the Exa Founding Team at the mo', I'd be saying great Series A raise, but let's now focus on consolidation, delivery, and organic growth.
Don't chase any more funding because you think you should, or because someone else tells you to.
Future-proof the business, and focus on nailing Strategy, Structure and Leadership (the Three Pillars of Growth™️)
Excellent point about future proofing AI businesses, Carri - what a great reminder, thank you! 👍🏾
i'm more of the "Seven Pillars of Wisdom" - T. E. Lawrence kind ;>
Great to hear your take, thanks!
For me, the main takeaway is that there is nothing new here. SoftBank had already made verbal commitments to invest $100B, and increasing that number is really just words. Larry talked about some medical-records work that Oracle is apparently doing, which is also not new. I did think it was hilarious that he highlighted the development of a cancer vaccine given the RFK Jr skepticism. And finally, Sam did not appear to know very much about the healthcare applications of LLMs. It seemed like someone asked him to do this five minutes before he went on stage.
I suspect that the healthcare use case focus was a last-minute change. The problem is that some of the most promising advances are not foundation model/LLM-based (think AlphaFold and all of the rapidly improving image models).
I am not a big Elon fan, but he was correct in his take. There is no evidence that this team has anywhere near the kind of financing they implied they have.
I can't make up my mind whether we are watching the food fight scene from "Animal House" or the massive pie fight in "The Great Race" (for those who haven't seen it: https://www.youtube.com/watch?v=Y4Q7hZcx_iw).
My first time watching that hilarious scene - thank you Fred!🤣
Or the duel scene in Highlander...
"Shoot him! Shoot him now, Sir!"
I was kind of hoping we'd hit peak AI BS in 2024. Apparently not. I just can't watch any more. All these insanely greedy people. All that money wasted on dead end tech. What's happened to AI?
The world has gone crazy with this LLM/GenAI garbage - the US, China, the EU, and whoever the next wannabe is. It is really just a clown show now.
Oh, haven't you heard? The really amazing AI that will change the world for the better is on track to show up next year.
You don't want to be so short-sighted as to judge the field on technology that actually exists, do you? Think about all the wonderful imaginary technology that's just around the corner!
Word has it that robotaxis driven by AGI are just around the coroner.
I fail to see how a plagiarizing chatbot is ever going to be worth half a trillion dollars. I also haven't yet seen how it benefits humanity more than it turns our brains into mush and pollutes the internet with slop. The 100,000 jobs figure is a bald-faced lie just to blow some air up the orange one's ass. And Masayoshi Son has a terrible track record as an investor - this is the same guy who compared himself to Jesus and whose fund lost $32B 😂
https://www.businessinsider.com/softbank-ceo-likened-to-jesus-suffers-32-billion-fund-loss-2023-5
Pretty straightforward: the hyperscalers are already spending real money - $250B. Even IF, and that's a big if, SoftBank can provide $100B in financing, where are the 100,000 jobs promised? Shouldn't we be seeing them already? And shouldn't AI already be benefitting all humanity in some measurable way, rather than through intangible assertions?
I'm still not clear on what bad thing is supposed to happen if we "lose the AI race" to China. Will they produce cooler spam than us? Derail more existing industries with their ability to shove more chatbots where they don't belong? Dedicate a greater portion of their national energy consumption to a machine that performs the parlor trick of generating statistical brute-force solutions to logic puzzles via a bazillion-step chain of matrix multiplications?
Nah, it's probably something about them getting AGI first, isn't it?
We will not lose the race to torch truckloads of hundred dollar bills, that’s for sure.
Exactly. I've been wondering the same. AGI/ASI is really nowhere in sight along this current trajectory so they're fighting a war over who can produce the most AI slop? Ridiculous reason to burn the planet faster!
cAItfight
Satya Nadella was asked in an interview about it, and didn't really answer other than to say that Microsoft will invest $80B this year. It wasn't clear whether any of that would go toward this.
It's a pity money is the only thing they understand. But money will not get them to reliable intelligence if everything is centered around LLMs/GenAI. This makes me think of that scene from Indiana Jones: Raiders of the Lost Ark - "They're digging in the wrong place." https://www.youtube.com/watch?v=Pk-B0s0jOwE
When you have a pack of wolves, there can be only one alpha.
Pass the popcorn.
Neuron had a great newsletter yesterday about the cheap and fast and open source LLM out of China. If real - game-changer.
DeepSeek created much of the training data for their open-weight models by using existing frontier LLMs. Thus DeepSeek is a fast follower, but this approach will not necessarily advance beyond the capabilities of the existing frontier LLMs.
By open-sourcing DeepSeek, they potentially dramatically increase the number of smart folks who now have the same tools as the proprietary companies. It will be very interesting to see what comes of this.
Furthermore, DeepSeek LLMs may be the most powerful for their respective sizes that can be run on certain local computers, as opposed to using a hosted service.
Very likely indeed the investments are a lot more gradual than publicly stated.
I don't think AI leaders drink their own Kool-Aid. The competition is intense and the stakes are high. Being bold and calculated with risks is usually rewarded. Being reckless is not.
AI leaders may know they're full of shit, but at this stage they can't do anything else but continue the charade.
I think you are vastly underestimating the heads of Google, etc. The dot-com bubble and bust worked out well for Google, Amazon, etc. It worked badly for the greedy, foolish leaders, and badly for Microsoft, which missed the boat.
These are people who are very good at making strategic bets and managing risk. There is risk, of course.
There is such a thing as being a hostage of your own success. OpenAI started as a non-profit, and what it achieved as a non-profit was truly remarkable. Who would have thought that a gaming PC could write high-school essays on any topic so smoothly? (I'm not being sarcastic; it is an impressive breakthrough.) However, high-school essays (produced at a very high cost) have zero economic value. These companies pushed the technology to commercialization far too early, made ridiculous promises, inflated the market with hot air, and now there's no way back to sanity.
I think you are underestimating the power of what a step-by-step reasoning agent can do, if it can check its own work.
That's how AlphaGo solved Go, and we are getting close to an "AlphaMath" solving math (within, say, 2 years).
I surely understand that no magic trick exists. Everything that agents will do they will have to be taught in painstaking detail. There's no "emergence". But we have the resources and the market to make it work.
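The generate-and-verify loop I mean is easy to sketch. A toy Python example of my own (not anyone's actual system): a proposer guesses candidates, and an independent checker accepts only the ones that actually verify.

```python
# Toy "check your own work" loop: a proposer generates candidate answers
# and an independent verifier accepts only those that actually check out.
# Proposals here are brute-force guesses; the point is that the verifier,
# not the proposer, decides what counts as solved.

def propose(lo: int = -10, hi: int = 10):
    """Generate candidate integer roots of x^2 - 5x + 6 (mostly wrong)."""
    yield from range(lo, hi + 1)

def verify(x: int) -> bool:
    """Check a candidate by substitution -- cheap and independent."""
    return x * x - 5 * x + 6 == 0

def solve() -> list[int]:
    """Keep only the candidates the verifier accepts."""
    return [x for x in propose() if verify(x)]

print(solve())  # [2, 3]
```

That separation is the whole trick: the proposer can be noisy and wrong, but as long as verification is reliable and cheaper than solving, the loop converges on checked answers. It's the same shape as AlphaGo's policy proposals being vetted by search and evaluation.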
Given the relationship between OpenAI and Microsoft, is it something with OpenAI that caused Microsoft to abandon phase 2 of its data center in Wisconsin? They cited “… evaluate scope and recent changes in technology.”
Everything is always continuously evaluated and recalibrated. Not just business plans, but also AI methods.