49 Comments

I guess there’s always a rich guy with heroism delusions happy to risk the world. Is it OK for one guy to have this much influence? How do we as a society guard against following one guy into a flop? Worrying.

Too much reliance on "genius", too little reason.

Feb 10 · Liked by Gary Marcus

I think if he just waited a bit longer for the hype to level out and to see where the actual use cases of these products fall, he could get more people to take this seriously. But then again, he probably couldn’t ask for 7 trillion dollars if he waits for AI discourse to become rational…

In terms of cash-grab strategy, I’ll be flabbergasted if this works. If he wants to squeeze more personal wealth out of the AI hype train, he should ask for smaller amounts of money for specific projects, not just put it out into the ether that he’d like $7 trillion… this hurts his credibility.

Altman contradicted himself. He already stated that scaling up LLMs isn't the way to go, and here he is, asking for a $7T scale-out! Did he lie back then, or is he lying now? https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/

Feb 10 · Liked by Gary Marcus

Is one reason the cost parameters are so large that they have already scaled way past their baseline for competence? When I think of the intense work needed to fix explainability (if it can be), bias (if it can be), and accuracy (if it can be)— just those three necessities for making anything of real value—it already seems like they have long passed the point at which demonstrated value could justify that effort (and value can’t be demonstrated without those, at a minimum). Recall that they never expected the release of ChatGPT to be as popular as it was—but millions of users of a mediocre product are actually not enough to justify the far more serious business case they are making. They are clearly punch-drunk.

Feb 10 · Liked by Gary Marcus

Interesting. When I first read Altman's statement, I was convinced it was an AI-generated parody of Altman.

Feb 16 · Liked by Gary Marcus

I wonder what sort of compute power OpenAI Sora video creations need, and what environmental impact they have. Images are now one-frame Sora videos - is there a need to re-encode everything? (Oh silly me, they are already doing it...)

Earlier this week, almost half a million homes in Melbourne, Australia went without power because a 15 minute storm levelled power transmission towers. Of course, that was nothing compared to the August '03 US blackouts.

I shudder when I think about what the strain on the energy grid will be when untold numbers of concurrent Sora, DALL-E 3, GPT-4 Turbo/5 and Gemini Ultra requests hit the series of tubes.

What a post-Valentine's Day gift to hostile nation states.

Feb 10 · Liked by Gary Marcus

Do you know if you could replace the AI-generated images with human art eventually? I am sure quite a few artists would be happy to share work that doesn't portray Sam Altman in the best of light ;)

You say that GPT-3 training consumed 700,000 liters of water, as if that is a large amount. With five seconds of research, I found that the global average water footprint for beef is around 15,415 liters of water per kilogram of beef produced, so an average cow costs >4.6 million liters of water. For a single cow. I am disappointed in your inability to contextualize the numbers you use.
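
To sanity-check the arithmetic in the comment above, here is a minimal sketch; the ~300 kg of beef per cow is an assumed yield figure, not something stated in the comment:

```python
# Rough sanity check of the water-footprint comparison above.
WATER_PER_KG_BEEF_L = 15_415      # liters per kg of beef, figure quoted in the comment
BEEF_YIELD_PER_COW_KG = 300       # ASSUMPTION: approximate beef yield of one cow
GPT3_TRAINING_WATER_L = 700_000   # liters, figure quoted in the comment

water_per_cow_l = WATER_PER_KG_BEEF_L * BEEF_YIELD_PER_COW_KG
print(f"Water per cow: {water_per_cow_l:,} L")   # 4,624,500 L, i.e. >4.6 million liters
print(f"GPT-3 training / one cow: {GPT3_TRAINING_WATER_L / water_per_cow_l:.2f}")  # ~0.15
```

Under that assumed yield, the 700,000-liter figure works out to roughly 15% of the water footprint of a single cow.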

Watch out for the anchoring effect! Now that Altman has asked for $7T, even if no one gives it to him, if he now says "well alright, how about $700B then" it will sound much more reasonable than it would have if he had started there.

I particularly liked the sign-off ‘tired of the narcissism of Silicon Valley’. Sam Altman is not just the quintessential product of Silicon Valley but increasingly the embodiment of the bankrupt mores of social media and its backers. To understand Altman, you have to understand the narcissism, hubris and opportunism that power many of the power-brokers, enablers and senior executives in Silicon Valley. Merely the latest Silicon Valley poster-boy, Altman has characteristically displayed his penchant for hubris and opportunism: $7 trillion, why not? But the tune is tiring. The coruscating hearing of the Senate Judiciary Committee on January 31st showed that our legislators are beginning to stir. Despite the efforts of powerful lobbyists, regulation of social media will follow. This will be dire news for the business model underpinning social media and, by extension, for its associates. Slowly, people will realise that OpenAI and other AI ‘labs’ were largely spawned by an influx of funding, ethos, culture and talent from social media leaders and Silicon Valley friends. The writing is on the wall. Or, alternatively, you could say it’s an alignment problem.

Altman is well on the way to becoming the Doomsday AI that Yudkowsky fears.

(1) You don't need $7 trillion to (safely) build >= human-level AGI; you need (a) $100 million per year for 50-100 years, and (b) the necessary KNOWLEDGE AND UNDERSTANDING required to design and build it. (2) Even if Altman/OpenAI had $7 trillion, they still wouldn't know what to do with it (other than spend it on shiny things), because they still wouldn't have (b). I can't help but think that this is a diversion somehow. I'm fully expecting GPT-5 to fall way short of human-level AGI, which Altman probably already knows, and yet he's somehow got to explain his failure to deliver on all the prior hype. Perhaps this is one way to do it: "OK, so I couldn't deliver human-level AGI with the $10 billion you gave me, but I will be able to do so with $7 trillion, HONEST!"
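
For scale, a minimal sketch comparing the budget proposed in the comment above with the reported ask, using only the figures mentioned there:

```python
# Compare the commenter's proposed research budget with the reported $7 trillion ask.
ANNUAL_BUDGET_USD = 100e6            # $100 million per year, per the comment
YEARS_LOW, YEARS_HIGH = 50, 100      # time horizon given in the comment
REPORTED_ASK_USD = 7e12              # $7 trillion

total_low = ANNUAL_BUDGET_USD * YEARS_LOW    # $5 billion
total_high = ANNUAL_BUDGET_USD * YEARS_HIGH  # $10 billion
print(f"Proposed budget: ${total_low:,.0f} to ${total_high:,.0f}")
print(f"Reported ask is {REPORTED_ASK_USD / total_high:,.0f}x to "
      f"{REPORTED_ASK_USD / total_low:,.0f}x larger")
# -> roughly 700x to 1,400x the commenter's 50-100 year budget
```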

At least it makes my constantly asking friends for $7 to pay for a taco look more reasonable.

Is taking all of this seriously not akin to taking the Taylor-Swift-endorsing-Joe-Biden-at-the-Super-Bowl PsyOps seriously? I mean: this is so utterly crazy, I have a hard time believing that this has any chance of going anywhere. Even if Altman is deluded enough to have floated this seriously, why should such craziness get the airtime it gets now? If he has floated this seriously, we should ignore his embarrassing act and quietly try to give him psychological/medical help. There isn't a real chance this is going anywhere, right? So why act as if it does? Aren't we embarrassing ourselves by discussing this?

Great stuff... have you seen this 4-minute video of a letter to a creative musician who thinks he can write quicker, better lyrics with AI? Honestly, the tools aren't there yet, and until they are, no money will solve some of the problems in making AI human.

https://aeon.co/videos/why-strive-stephen-fry-reads-nick-caves-letter-on-the-threat-of-computed-creativity
