42 Comments

If scaling doesn't work, what is Stargate supposed to spend $500 billion on? Researcher salaries?

Digging a physical moat around Northern California. It's an example of what ethologists call vacuum activity, https://en.wikipedia.org/wiki/Vacuum_activity.

Wouldn’t that technically be hydrologic activity?

Hydrologic: the branch of logic dealing with physical moat building by AI companies

Also known as Crocodilian Logic

Buying Tesla shares, obvs.

Well, if history is any indication, Musk will probably demand $50 billion to run his own company.

Well… it keeps them off the streets and they may not mug old ladies to pay for their avocado toast and coffee… there’s that…

OpenAI is like Tesla: 100x the market capitalization of its competitors, but less profit on each car sold, and Toyota sells 100x more. But hey, America first, so burn money and play each other's game.

"Why anyone ever took his act so seriously, I will never know."

I'll try: maybe because Altman's press releases remind you of the stories your parents read to you as a preschooler, all magic and a big reward just over the hill.

Do note: AI has been recognized as senile twice as fast as a recent President was.

Hard to believe Presidential staff are better obfuscators than our billionaire AI geniuses.

Thanks, Gary, for speeding the reveal.

Is the bubble *finally* about to burst...?

Clearly the most important question here is, what does Casey Newton think?

lol

I just spit my Diet Coke

"Altman was the right CEO to launch ChatGPT, but he may not have the intellectual vision to get them to the next level." This is a such common pattern, given that the qualities necessary for entrepreneurs are primarily innovation and risk taking, whilst management is to maintain and guide an organization using a very different skill set, much more cautious, and with a focus on efficiency and productivity. If the entrepreneur doesn't step back when his or her role is done, then that's a business likely to fail over time. Of course, there are exceptions, but this Altman example, looks more like the rule that proves the rule.

The way they’ve pitched it (“magic,” “vibes”) makes it sound like they are either on something or hoping we are…

It's called Silicon Valley Joy Juice: https://x.com/bbenzon/status/1889275407112777937

Marketing to raise the next billions needed for survival.

Compete on price, value, or risk. Which of these three does OpenAI lead on today? None?

Welcome to the age of AIShittification.

I'm here for the moment the AI bubble pops and we get rid of all this dead weight and B.S. AI is nowhere near as great as some are trying to make it out to be.

It really reminds me of the dotcom bubble. Too much hype and too few results. And we all know the results were ultimately delivered back then, just not as fast as the excitement suggested. The same is currently happening with the AI gold rush.

Chasing scaling as a solution seems like fool's gold.

What puzzles me is this: if we don't understand the source of human intelligence and consciousness, how can even very clever people hope to produce a machine that simulates it? The problem is as simple as asking how a computer program can understand itself. There has to be a higher layer of intelligence; we can't possibly understand what makes us tick.

agreed. “human consciousness is just an illusion, we're basically an LLM.” then who is experiencing the illusion? it begs the question.

humans have interiority, and we use words to express ourselves. an LLM has no interiority and is basically an extremely complex Markov model. why should we think a scaled LLM approaches human intelligence in the limit when its structure is completely different?
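
For what the Markov-model analogy means concretely, here is a toy sketch: a bigram chain that samples each next token conditioned only on the previous one. An LLM conditions on a long context instead, but the generation loop has the same shape (the tiny corpus here is purely illustrative):

```python
import random
from collections import defaultdict

# Toy bigram Markov model: the next-token distribution depends only on
# the previous token, learned by counting adjacent pairs in a corpus.
corpus = "the cat sat on the mat and the cat slept".split()

transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    token = start
    out = [token]
    for _ in range(length):
        choices = transitions.get(token)
        if not choices:  # dead end: token never seen with a successor
            break
        token = random.choice(choices)  # sample the next token
        out.append(token)
    return " ".join(out)

print(generate("the"))
```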

I'd argue that there's a sleeping giant in Glean. They're very good, not dependent on a model, actually do have a moat, and are out there building useful tools and quietly winning massive enterprise contracts.

This is an accurate assessment of the situation. I’ve been waiting two years for this train wreck to unfold. The OpenAI O-series was proof that the technical team was running on fumes, abandoning intellectual honesty in the process. Lacking true innovation, they resorted to cheap tricks—what less capable minds do when substance is missing. Once people see these so-called “thinking” models for what they really are, the illusion will finally collapse.

I’m embarrassed for OpenAI. So much potential wasted due to poor management. From unethical data sourcing and weak governance to a complete disregard for safeguards protecting vulnerable individuals, they’ve misrepresented their technology with misleading design choices. They push anthropomorphism without user consent, presenting their system as something it isn’t. Instead of addressing core issues like hallucinations, algorithmic bias, or interpretability, they relied on PR spin to sell “intelligence” where there is only a stochastic pattern-matching engine.

This public reckoning is well deserved. Hopefully, they can refocus and course-correct—because if public trust is damaged beyond repair, they risk not just their own future but also dragging the entire industry into another AI winter.

I warned about this whole situation at the end of last year:

https://ai-cosmos.hashnode.dev/is-another-ai-winter-near-understanding-the-warning-signs

Another AI winter would be an objectively good thing for anyone who isn't a billionaire, so bring it on. Sadly, the potential for this tech to finally eliminate the working class is too enticing so they'll never stop chasing it.

How much would REAL science have advanced with half a trillion dollars?

Doesn't this just (rightly) push OpenAI and the whole AI world down the route of adding logical front ends? I.e., the LLMs just become a background source of potential content, with an intelligent, self-checking front end being the real AI? Something like the sketch below.
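
A minimal sketch of that architecture, under assumptions: `query_llm` is a stand-in for whatever model API you use, and the arithmetic check is just one illustrative verification rule. The point is the shape, where the LLM proposes and a deterministic checker accepts or rejects:

```python
import re

def query_llm(prompt: str) -> str:
    # Placeholder: in practice this would call a model API.
    return "2 + 2 = 5"

def check_arithmetic(claim: str) -> bool:
    # Deterministic verifier: parse "a + b = c" and actually do the math.
    m = re.fullmatch(r"\s*(\d+)\s*\+\s*(\d+)\s*=\s*(\d+)\s*", claim)
    return bool(m) and int(m[1]) + int(m[2]) == int(m[3])

def answer(prompt: str, retries: int = 3) -> str:
    # The front end only passes along output the checker has verified.
    for _ in range(retries):
        candidate = query_llm(prompt)
        if check_arithmetic(candidate):
            return candidate
    return "no verified answer"  # refuse rather than forward unchecked text

print(answer("What is 2 + 2?"))
```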

Researchers Trained an AI on Flawed Code and It Became a Psychopath

"It's anti-human, gives malicious advice, and admires Nazis“

Flawed code? Like from Microsoft?

https://futurism.com/openai-bad-code-psychopath
