If scaling doesn't work, what is Stargate supposed to spend $500 billion on? - researcher salaries?
Digging a physical moat around Northern California. It's an example of what ethologists call vacuum activity, https://en.wikipedia.org/wiki/Vacuum_activity.
Wouldn’t that technically be hydrologic activity?
Hydrologic: the branch of logic dealing with physical moat building by AI companies
Also known as Crocodilian Logic
Buying Tesla shares, obvs.
Well, if history is any indication, Musk will probably demand $50 billion to run his own company.
Well… it keeps them off the streets and they may not mug old ladies to pay for their avocado toast and coffee… there’s that…
OpenAI is like Tesla: 100x the market capitalization of the competitors, but less profit on each car sold, and Toyota sells 100x more. But hey, America first, so burn money and play each other's game.
"Why anyone ever took his act so seriously, I will never know."
I'll try - maybe because Altman's press releases remind you of the stories your parents read to you as a preschooler - magical, big reward just over the hill.
Do note: AI has been recognized as senile twice as fast as a recent President.
Hard to believe Presidential staff are better obfuscators than our billionaire AI geniuses.
Thanks, Gary, for speeding the reveal.
Is the bubble *finally* about to burst...?
Clearly the most important question here is, what does Casey Newton think?
lol
I just spit my Diet Coke
"Altman was the right CEO to launch ChatGPT, but he may not have the intellectual vision to get them to the next level." This is such a common pattern: the qualities an entrepreneur needs are primarily innovation and risk-taking, whilst management exists to maintain and guide an organization using a very different skill set, much more cautious, with a focus on efficiency and productivity. If the entrepreneur doesn't step back when his or her role is done, that's a business likely to fail over time. Of course there are exceptions, but this Altman example looks more like the rule than the exception.
The way they’ve pitched it - “magic” “vibes” - makes it sound like they are either on something or hoping we are….
It's called Silicon Valley Joy Juice: https://x.com/bbenzon/status/1889275407112777937
Marketing for the next billions for survival
Compete on price, value, or risk. Which of these three does OpenAI lead on today? None?
Welcome to the age of AIShittification.
I'm here for the moment the AI bubble pops and we get rid of all this dead weight and B.S. AI is nowhere near as great as some are trying to make it out to be.
It really reminds me of the dotcom bubble: too much hype and too few results. And we all know the results were ultimately delivered, but not nearly as fast as the excitement suggested. The same is now happening with the AI gold rush.
Chasing scaling as a solution seems to be like fool's gold.
What puzzles me is this: if we don't understand the source of human intelligence and consciousness, how can even very clever people hope to produce a machine that simulates it? The problem is akin to asking how a computer program can understand itself. Without some higher layer of intelligence, we can't possibly understand what makes ourselves tick.
Agreed. "Human consciousness is just an illusion; we're basically an LLM." Then who is experiencing the illusion? It begs the question.
Humans have interiority; we use words to express ourselves. An LLM has no interiority and is basically an extremely complex Markov model. Why should we think a scaled LLM approaches human intelligence in the limit when its structure is completely different?
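The Markov-model comparison can be made concrete with a toy order-1 chain. A minimal sketch (the names `train` and `generate` are illustrative, and a real LLM conditions on long contexts with learned weights, not one-token counts):

```python
# Toy order-1 Markov text model: each next word is sampled from the
# words observed to follow the current word in the training text.
# Illustrates the structural comparison only, not an actual LLM.
import random
from collections import defaultdict

def train(tokens):
    # Record, for each token, the list of tokens seen immediately after it.
    table = defaultdict(list)
    for prev, nxt in zip(tokens, tokens[1:]):
        table[prev].append(nxt)
    return table

def generate(table, start, n, seed=0):
    # Walk the chain up to n steps, sampling uniformly from observed followers.
    random.seed(seed)
    out = [start]
    for _ in range(n):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the model predicts the next word and the model repeats".split()
print(generate(train(corpus), "the", 6))
```

Scaling this up means bigger tables (or, in an LLM, learned continuous representations), but the commenter's point is that the generative structure stays the same: predict the next token from the preceding ones.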
I'd argue that there's a sleeping giant in Glean. They're very good, not dependent on a model, actually do have a moat, and are out there building useful tools and quietly winning massive enterprise contracts.
This is an accurate assessment of the situation. I’ve been waiting two years for this train wreck to unfold. The OpenAI O-series was proof that the technical team was running on fumes, abandoning intellectual honesty in the process. Lacking true innovation, they resorted to cheap tricks—what less capable minds do when substance is missing. Once people see these so-called “thinking” models for what they really are, the illusion will finally collapse.
I’m embarrassed for OpenAI. So much potential wasted due to poor management. From unethical data sourcing and weak governance to a complete disregard for safeguards protecting vulnerable individuals, they’ve misrepresented their technology with misleading design choices. They push anthropomorphism without user consent, presenting their system as something it isn’t. Instead of addressing core issues like hallucinations, algorithmic bias, or interpretability, they relied on PR spin to sell “intelligence” where there is only a stochastic pattern-matching engine.
This public reckoning is well deserved. Hopefully, they can refocus and course-correct—because if public trust is damaged beyond repair, they risk not just their own future but also dragging the entire industry into another AI winter.
I warned about this whole situation end of last year:
https://ai-cosmos.hashnode.dev/is-another-ai-winter-near-understanding-the-warning-signs
Another AI winter would be an objectively good thing for anyone who isn't a billionaire, so bring it on. Sadly, the potential for this tech to finally eliminate the working class is too enticing, so they'll never stop chasing it.
How much would REAL science have advanced with half a trillion dollars?
This just in: "AI.com Is for Sale. Asking Price? $100 Million" https://www.theinformation.com/articles/ai-com-is-for-sale-asking-price-100-million
How much for AI.bom?
Doesn't this just (rightly) push OpenAI and the whole AI world down the route of adding logical front ends? I.e., the LLMs just become a background source of potential content, with an intelligent, self-checking front end being the real AI?
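That front-end idea can be sketched as a propose-then-verify loop. A minimal illustration in which `llm_propose` and `check` are hypothetical stand-ins (no real LLM API is called; the checker simply re-evaluates claimed arithmetic):

```python
# Sketch of an LLM with a self-checking front end: the model proposes,
# a deterministic verifier checks, and failed candidates are retried.
def llm_propose(question, attempt):
    # Hypothetical stand-in for an LLM call; returns a candidate answer.
    candidates = {0: "2 + 2 = 5", 1: "2 + 2 = 4"}
    return candidates.get(attempt, "no answer")

def check(answer):
    # Deterministic verifier: re-evaluate the claimed arithmetic.
    # (eval on untrusted input is unsafe; fine for this toy sketch only.)
    try:
        lhs, rhs = answer.split("=")
        return eval(lhs) == int(rhs)
    except Exception:
        return False

def answer_with_verification(question, max_attempts=3):
    # Accept the first candidate that passes the checker, else give up.
    for attempt in range(max_attempts):
        candidate = llm_propose(question, attempt)
        if check(candidate):
            return candidate
    return None

print(answer_with_verification("what is 2 + 2?"))  # "2 + 2 = 4"
```

The design point is that the checker needs no intelligence of its own for domains with cheap verification (arithmetic, code that compiles, schema-valid output); the open question in the comment is whether such a front end can be "intelligent" enough for domains where checking is as hard as answering.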
Researchers Trained an AI on Flawed Code and It Became a Psychopath
"It's anti-human, gives malicious advice, and admires Nazis"
Flawed code? Like from Microsoft?
https://futurism.com/openai-bad-code-psychopath