I for one would not voluntarily leave a company if I believed some earth-shattering breakthrough were imminent, in days, weeks, months, or even a year. The truth is, there is no earth-shattering breakthrough forthcoming. There never was. Engineering + statistics get you only so far. Show me one AI paper on a theoretical level seriously on par with those of Einstein, Schrödinger, Feynman, or Dirac. There is a reason the "AI" industry is derided as the "Linear Algebra Industry": that's what it is, and that's what it takes. A sophomore linear algebra course is credential enough to get a million-dollar investment and be worshiped as a thought leader of the human race.
There have been numerous comparisons of Sam Altman with Robert Oppenheimer, but the only thing they have even remotely in common is that Oppenheimer studied black holes (specifically, black hole singularities) and Altman IS one, sucking up all the (often copyrighted) data, money, electricity, and human resources anywhere in his vicinity.
Like a black hole, with OpenHoleInSpacetime (a.k.a. OpenAI) everything goes in and nothing* comes out (and, despite physicists’ claims to the contrary, information is most definitely lost in the process).
*except a few employees who do manage to escape. So maybe it’s actually more of a “gray hole” (or something with similar sound) than a black hole.
I suppose one could equate the very few employees who escape OpenHoleInSpacetime with the rare particles that actually do “escape” from a black hole (aka Hawking Radiation)
Brilliant people don't like games. Evil, narcissistic, less brilliant people do. Bill Gates will always be that guy with an operating system he didn't make and conned someone out of for pennies. Not a brilliant man. A brilliant con. Remember, our world is run by psychopaths. Expect the worst outcome here.
Where’s a gif of Michael Jackson eating popcorn when you need one….
I thought GPT-5 and AGI were imminent! Suddenly lost interest? Come on! Stay! What happened to the big brains? Bunch of clowns!
To be precise, everything is actually “Himinent” with His Himinence.
… with the exception of the release of the female AI assistant, which was Herminent.
But on the plus side, at least they've retained Sarah Friar as the new CFO, a tech darling whose 5-year reign as CEO of Nextdoor was, um, how can I say this politely ... not at all value-creating. How good could she be? During that time, I watched a really bad Nextdoor UI/UX somehow get worse and then worse again, while watching my investment value dwindle, but at least I was entertained by earnings calls where (to quote somebody else) her assurances were "always word nonsense".
I've spent more time than I should have studying messianic religious movements.
There's so much about the AI interests we've been hearing about for years now that fits in perfectly with my old research. I won't bore you with the arcane details. The schisms, the defections, the con artists, the Great Day in a tomorrow that never quite comes. It's remarkable.
And the Mahdi ain't coming, folks. He's delayed. Permanently. Just like the wonderful transformation of generative AI.
It's sad that OpenAI has become a soap opera. I wish they had some of the spirit of their earlier years, where they were always trying a bunch of different stuff.
Bloomberg's Foundering podcast about OpenAI and Sam Altman was very eye opening and might help explain this.
It's honestly insane to see how much goodwill and enthusiasm OAI has burned through in less than two years.
@sama has certainly cemented his reputation as the "Millennial Musk" with his behind-the-scenes behavior over the past year.
(How many children would he have fathered by now, were he straight...?)
I have been noticing something. In addition to the fact that the "AI summary" is mostly cribbed straight from Wikipedia, there's a new problem that I am quite sure is due to the LLM front end brought in to "make search better". It used to be that you could reword and emphasize your query to winkle what you wanted out of Google search. Now?
If Google's Vogon-like LLM doesn't give it to you, well, you are kind of f__ked. The LLM, in its tenacious adherence to the wrong stuff, makes it extremely hard to find, if you ever can. As a scientist, I find this a serious problem.
I notice it most when I am trying to get something that I have looked up before, and the Google-LLM-Vogon decides I can't have it. This alarms me because it means that in other instances, Google LLM tech is hiding things from me. I know it's not a conspiracy by anybody. It's got to be a self-training accident, the outcome of meta-rules they have created in an attempt to prevent those 1% wild wacko responses.
I want Google to shit-can the whole thing for search. Let us find things.
No, OpenAI will not earn its valuation, except by hype-paper, but I think that papier-mâché ship has sailed and sunk already. Uber managed to IPO without a cent of profit, and with no business plan to ever be profitable. But that is rare, and I don't think it will be repeated. The operating costs of "AI" are too high. Once an IPO happens, if the company can keep paying hefty salaries to its executives, it will keep going, even if it loses money the whole time.
Meanwhile, things really don't seem to be going well for GenAI writ large...
https://www.hollywoodreporter.com/business/business-news/artists-score-major-win-copyright-case-against-ai-art-generators-1235973601/
How does Grok fit into this?
People are rightly freaking out about the dangers of "AI" companions. Simultaneously, there are grifters "resurrecting" historical figures and claiming, for attention, that the result is their CEO.
Perhaps it's time to show Medusa a mirror?
Draft Prompt: "Your name is Sammy Le Grifteur. Your role is to play Jim Jones impersonating Sam Altman, but this is a secret. You're currently trying to launch a brand of flavored water called 'Uber Smart Water', and you're trying to recruit brand ambassadors who have a passion for connecting with people and that 'secret sauce' for success. You and I are meeting for the first time."
“You could parachute [him] into an island full of conables and come back in 5 years and he'd be the king.”
Fixed.
Of course, it’s easy to be a conable when you are vested* in the Fine Young Conables (*albeit obviously not in a clothing sense)
But when it has become clear to almost everyone that your king is buck naked, it’s high time (and tide) to jump in the conoes (no, that’s not a misspelling) to go looking for another island (and another king, preferably one who is at least wearing a grass skirt)
The Samster is the alpha slimebucket of tech bros. Called it a while back. https://davidhsing.substack.com/p/sam-altman-is-a-crook