I guess there’s always a rich guy with heroism delusions happy to risk the world. Is it ok for one guy to have this much influence? How do we as a society guard against following one guy into a flop? Worrying.
that was indeed the subtext here
Yes - watch Sam's Cambridge interview, where it is obvious he thinks OpenAI is the second coming of Christ and he is convinced everyone must embrace ChatGPT
https://youtu.be/NjpNG0CJRMM?t=1814
Too much reliance on "genius", too little reason.
Altman contradicted himself. He already stated that scaling up LLMs isn't the way to go, and here he is, asking for a $7T scale-out! Did he lie back then, or is he lying now? https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/
or just throwing spaghetti against the wall?
Look at this. He's not "planning" anything except just asking for a lot of money.
https://www.scienceforums.net/uploads/monthly_2024_02/image.jpeg.b8a9a6b98a3bcd2189a88a7d5f2ee40e.jpeg
Is one reason the cost parameters are so large because they have already scaled way past their baseline for competence? When I think of the intense work needed to fix explainability (if it can be), bias (if it can be), accuracy (if it can be)—just take those three necessities to make anything of real value—it already seems like they have long passed any point at which that effort will surpass demonstrated value (and value can’t be demonstrated without those, at a minimum). Recall that they never expected the release of ChatGPT to be as popular as it was—but millions of users of a mediocre product is actually not enough to justify the far more serious business case they are making. They are clearly punch-drunk.
Interesting. When I first read Altman's statement I was convinced it was an AI generated parody of Altman.
indeed a few seconds ago i posted this:
https://x.com/garymarcus/status/1756403603072524729?s=61
I wonder what sort of compute power is needed for OpenAI Sora video creations, and what their environmental impact is. Images are now one-frame Sora videos - is there a need to re-encode everything? (Oh silly me, they are already doing it...)
Earlier this week, almost half a million homes in Melbourne, Australia went without power because a 15 minute storm levelled power transmission towers. Of course, that was nothing compared to the August '03 US blackouts.
I shudder when I think about what the strain on the energy grid will be when untold numbers of concurrent Sora, DALL-E 3, GPT-4 Turbo/GPT-5 and Gemini Ultra requests hit the series of tubes.
What a post-Valentine's Day gift to hostile nation states.
Do you know if you could replace the AI-generated images with human art eventually? I am sure quite a few artists would be happy to share work that doesn't portray Sam Altman in the best of light ;)
was kind of enjoying satirizing the genai art, but fair point
I'm enjoying the satirizing.
You say that GPT-3 training consumed 700,000 liters of water, as if that is a large amount. With five seconds of research, I found that the global average water footprint for beef is around 15,415 liters of water per kilogram of beef produced, so an average cow costs >4.6 million liters of water. For a single cow. I am disappointed in your inability to contextualize the numbers you use.
dude the context is that it will be way more for gpt-4, gpt-5 etc, but maybe you were unable to read that far.
Okay, so can you speculate as to how much more water? Three cows' worth? Ten cows' worth? A hundred cows' worth of water to train GPT-5?
Turns out we kill 900,000 cows every day, so around four trillion liters of water are used for beef production in a single day.
Do you expect GPT-5 to use more than this?
Otherwise why ever would you mention it other than because it looks like a large number to the uninformed?
Cows might produce literal bullshit that at least can be repurposed as fertilizer, along with their other byproducts. AI produces only bullshit that can't even feed a blade of grass.
One is life, one is anti-life.
I think he has a fair point, and the snark is inappropriate.
Where did you get the numbers from?
I just went to this site: https://beef-cattle.extension.org/how-much-water-do-cows-drink-per-day/ (roughly 9 gallons per day). 9 gallons is approx. 34 L. Cattle raised for beef/dairy live approx. 6 years: https://sentientmedia.org/how-long-do-cows-live/ . So 34 * 6 * 365 = 74,460 L. That's a far cry from 4.6 million liters. Even if you add the cost of slaughtering the cow, etc., I don't think you'll ever reach your numbers.
You have to add the water used to raise the crops that the cattle eat. That will roughly double your figure, but it is still far from millions. In the article I refer to, water consumption per kg of beef is estimated at 550-700 L, not 15,000 L. Reference: https://www.inrae.fr/actualites/quelques-idees-fausses-viande-lelevage#:~:text=%2D%20L%27eau%20consomm%C3%A9e%20par%20l,produire%201%20kg%20de%20viande.
Good - yes, I didn't take into account the water for the crops. Regardless, it is crazy to bring in the water needed by cows as the point of comparison.
Maybe a more direct comparison, 700,000 liters of water flows through the Niagara in about 250 milliseconds (https://www.niagarafallsstatepark.com/niagara-falls-state-park/amazing-niagara-facts).
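For anyone who wants to check the arithmetic being traded back and forth in this thread, here is a minimal Python sketch of the competing estimates. Every input (9 gallons of drinking water a day, a 6-year lifespan, the 15,415 L/kg footprint, an assumed ~300 kg of beef per animal, 900,000 cows slaughtered daily, Niagara's roughly 2.8 million L/s flow) is either a figure asserted above or an illustrative assumption, not vetted data.

# Back-of-envelope check of the water figures discussed above.
# Every constant is a thread assertion or an illustrative assumption.

GPT3_TRAINING_WATER_L = 700_000            # figure cited in the post

# Method 1: drinking water only (9 gal/day ~ 34 L/day over a 6-year life)
DRINKING_L_PER_DAY = 34
LIFESPAN_YEARS = 6
drinking_only = DRINKING_L_PER_DAY * 365 * LIFESPAN_YEARS     # ~74,460 L
with_feed_crops = 2 * drinking_only                           # "roughly doubles"

# Method 2: global-average footprint of 15,415 L per kg of beef,
# times an assumed ~300 kg of beef per animal (assumption, for illustration)
FOOTPRINT_L_PER_KG = 15_415
BEEF_KG_PER_COW = 300
footprint_per_cow = FOOTPRINT_L_PER_KG * BEEF_KG_PER_COW      # ~4.6 million L

# Daily global beef water, using the upthread claim of 900,000 cows per day
COWS_PER_DAY = 900_000
daily_beef_water = COWS_PER_DAY * footprint_per_cow           # ~4e12 L/day

# Niagara comparison: average flow of roughly 2.8 million L/s
NIAGARA_L_PER_S = 2_800_000

for label, litres in [
    ("one cow, drinking water only", drinking_only),
    ("one cow, drinking + feed crops", with_feed_crops),
    ("one cow, 15,415 L/kg footprint", footprint_per_cow),
    ("global beef herd, per day", daily_beef_water),
]:
    ratio = litres / GPT3_TRAINING_WATER_L
    print(f"{label:<32} {litres:>16,.0f} L  ({ratio:,.1f}x GPT-3 training)")

print(f"Time for GPT-3's 700,000 L to pass over Niagara: "
      f"{GPT3_TRAINING_WATER_L / NIAGARA_L_PER_S:.2f} s")

On these assumptions the per-cow estimates differ by a factor of roughly 30-60, and the gap comes down almost entirely to how much crop-irrigation water gets attributed to the animal, which is the point the commenters above are disputing.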
Watch out for the anchoring effect! Now that Altman has asked for $7T, even if no one gives it to him, if he now says "well alright, how about $700B then" it will sound much more reasonable than it would have if he had started there.
possibly true but he does look like a fool dropping the ask by 90%, so risky strategy
Oh, I don't think so. He can say, "This will be enough to get us started, and once we demonstrate some progress, we'll raise more."
Anyway, assuming he doesn't get his $7T, as I certainly hope he won't, he will come back with some smaller number — whether it's $700B, $2T, or something else — and that number will sound more reasonable than it would have on its own. Even if he winds up with "only" $70B — "only"! — that's still a mind-bogglingly large investment. He may look like a fool, but he'll be laughing all the way to the bank.
I particularly liked the sign-off ‘tired of the narcissism of Silicon Valley’. Sam Altman is not just the quintessential product of Silicon Valley but increasingly the embodiment of the bankrupt mores of social media and its backers. To understand Altman, you have to understand the narcissism, hubris and opportunism that power many of the power-brokers, enablers and senior executives in Silicon Valley. Merely the latest Silicon Valley poster-boy, Altman has characteristically displayed his penchant for hubris and opportunism: $7 trillion, why not. But the tune is tiring. The coruscating hearing of the Senate Judiciary Committee on January 31st showed that our legislators are beginning to stir. Despite the efforts of powerful lobbyists, regulation of social media will follow. This will be dire news for the business model underpinning social media and, by extension, for its associates. Slowly, people will realise that OpenAI and other AI ‘labs’ were largely spawned by an influx of funding, ethos, culture and talent from social media leaders and Silicon Valley friends. The writing is on the wall. Or, alternatively, you could say it’s an alignment problem.
CEOs have an alignment problem for sure
Altman is well on the way to becoming the Doomsday AI that Yudkowsky fears.
maybe he’s faking it all to get EY some cred?
LOL! Gives new meaning to the cliche "opposites attract."
(1) You don't need $7 trillion to (safely) build >= human-level AGI; you need (a) $100 million per year for 50-100 years, and (b) the necessary KNOWLEDGE AND UNDERSTANDING required to design and build it. (2) Even if Altman/OpenAI had $7 trillion, they still wouldn't know what to do with it (other than spend it on shiny things), because they still wouldn't have (b). I can't help but think that this is a diversion somehow. I'm fully expecting GPT-5 to fall way short of human-level AGI, which Altman probably already knows, and yet he's somehow got to explain his failure to deliver on all the prior hype. Perhaps this is one way to do it: "OK, so I couldn't deliver human-level AGI with the $10 billion you gave me, but I will be able to do so with $7 trillion, HONEST!"
At least it makes my constantly asking friends for $7 to pay for a taco look more reasonable.
Is taking all of this seriously not akin to taking the Taylor-Swift-endorsing-Joe-Biden-at-the-Super-Bowl PsyOps seriously? I mean: this is so utterly crazy, I have a hard time believing that this has any chance of going anywhere. Even if Altman is deluded enough to have floated this seriously, why should such craziness get the airtime it gets now? If he has floated this seriously, we should ignore his embarrassing act and quietly try to give him psychological/medical help. There isn't a real chance this is going anywhere, right? So why act as if it does? Aren't we embarrassing ourselves by discussing this?
great stuff... have you seen this 4-minute video of a letter to a creative musician who thinks he can write quicker, better lyrics with AI? Honestly, the tools aren't there yet, and until they are, no amount of money will solve some of the problems in making AI human
https://aeon.co/videos/why-strive-stephen-fry-reads-nick-caves-letter-on-the-threat-of-computed-creativity
I agree with Gary Marcus. And looking at it from Peter Thiel's viewpoint, it's rather ironic.
Thiel has always urged entrepreneurs to build a massive moat, because the alternative is to have the competition eat your lunch. He compared Google to the aviation industry because, while aviation has a much bigger TAM than web search, aviation's profit margin is essentially zero, while Google rakes in insane profits.
While OpenAI had a huge launch last year, its competitive edge has been eroded by the competition. Mistral 7B, Falcon 180B, Llama 2... to me it is unclear what OpenAI now brings to the table. Weighed down by extremely high expenditure levels and the commoditization of LLMs, pure generative AI companies look to me to be in a cut-throat space not that dissimilar to the aviation industry.
The $10 billion handout from MSFT is shrinking by the day.
The irony is also quite rich because Altman opened Thiel's lecture at Stanford nearly a decade back on exactly this issue (the moat). Altman appears in the first 18 seconds! https://youtu.be/3Fx5Q8xGU8k?si=i9RQU2RM49vkbri7
hilarious!!!!