26 Comments
Aug 6Liked by Gary Marcus

I for one would not voluntarily leave a company if I believe some earth-shattering breakthrough is imminent, in days, weeks, months or even a year. The truth is, there is no earth-shattering breakthrough forthcoming. There never was. Engineering + statistics get you only so far. Show me one AI paper that is seriously on a theoretical level on par with those from Einstein, Schrödinger, Feynman, or Dirac? There is a reason why the "AI" industry is derided as the "Linear Algebra Industry", because that's what it is, and that's what it takes. A sophomore Linear Algebra course is enough credential for you get million dollar investment and be worshiped as thought leader of human race.

Expand full comment
Aug 6Liked by Gary Marcus

There's no earth-shattering breakthrough forthcoming. What there will be, is steady improvements.

Likely in GPT-5 there will be a lot more than LLMs, and the product will be notably better. Then iterate. In 10 years it will add up to AGI, likely.

AI will not arise from a theoretical breakthrough, like Einstein's. It will be diligent mapping of the problem space and architectural refinements.

Expand full comment

There have been numerous comparisons of Sam Altman with Robert Oppenheimer, but the only thing they really have remotely in common is that Oppenheimer studied black holes (specifically, a black hole singularity) and Altman IS one, sucking up all the (oft copyrighted) data, money, electricity and human resources anywhere in his vicinity.

Like a black hole, with OpenHoleinSpacetime (aka, OpenAI) everything goes in and nothing* comes out (and, despite physicists’ claims to the contrary, information is most definitely lost in the process.)

*except a few employees who do manage to escape. So maybe it’s actually more of a “gray hole” (or something with similar sound) than a black hole.

Expand full comment

I suppose one could equate the very few employees who escape OpenHoleInSpacetime with the rare particles that actually do “escape” from a black hole (aka Hawking Radiation)

Expand full comment

Brilliant people don't like games. Evil narcissist less brilliant people do. Bill gates will all be that guy with a operating system he didn't make and conned someone out of for pennies. Not a brilliant man. A brilliant con. Remember our world is run by psychopaths. Expect the worst outcome here.

Expand full comment
Aug 6Liked by Gary Marcus

Where’s a gif of Michael Jackson eating popcorn when you need one….

Expand full comment
Aug 6Liked by Gary Marcus

I thought GPT5 and AGI were imminent! Suddenly lost interests? Come on! Stay! What happened to the big brains? Bunch of clowns!

Expand full comment

To be precise, everything is actually “Himinent” with His Himinence.

… with the exception of the release of the release of the female AI assistant , which was Herminent.

Expand full comment

But on the plus side, at least they've retained Sarah Friar as the new CFO, a tech darling whose 5-year reign as CEO of Nextdoor was, um how can I say this politely ... not at all value creating. How good could she be: during that time, I watched a really bad nextdoor UI/UX somehow get worse and then worse again, while watching my investment value dwindle, but at least being entertained by earnings calls where (to quote somebody else) her assurances were "always word nonsense"

Expand full comment

The usual churn.

It is Meta which will likely bow out first. Zuck is in it for the sake of pride, and not making any profit. Investors may push him to cut his losses.

OpenAI will be fine for at least a year. GPT-5 will be notably better. Longer term, it is Google who will rule the roost.

Expand full comment

I've spent more time than I should have studying messianic religious movements.

There's so much about the AI interests we've been hearing about for years now that fits in perfectly with my old research. I won't bore you with the arcane details. The schisms, the defections, the con artists, the Great Day in a tomorrow that never quite comes. It's remarkable.

And the Mahdi ain't coming, folks. He's delayed. Permanently. Just like the wonderful transformation of generative AI.

Expand full comment

It's sad that OpenAI has become a soap opera. I wish they had some of the spirit of their earlier years, where they were always trying a bunch of different stuff.

Expand full comment

Bloomberg's Foundering podcast about OpenAI and Sam Altman was very eye opening and might help explain this.

Expand full comment

It's honestly insane to see how much goodwill and enthusiasm OAI has burned through in less than two years.

@sama has certainly cemented his reputation as the "Millennial Musk" with his behind-the-scenes behavior over the past year.

(How many children would he have fathered by now, were he straight...?)

Expand full comment

I have been noticing something. In addition to noting that the "AI summary" is mostly cribbed straight from Wikipedia, there's a new problem that I am quite sure is due to LLM front end participation to "make search better". It used to be that you could reword and emphasize what you wanted to winkle what you want out of Google search. Now?

If Google's Vogon-like LLM doesn't give it, well, you are kind of f__ked. The LLM, in its tenacious adherence to the wrong stuff, makes it extremely hard to find it, if ever can. As a scientist this is a serious problem.

I notice it most when I am trying to get something that I have looked up before, and the Google-LLM-Vogon decides I can't have it. This alarms me because it means that in other instances, Google LLM tech is hiding things from me. I know it's not a conspiracy by anybod. It's got to be a self-training accident, and the outcome of meta-rules they have created in an attempt to prevent those 1% wild wacko responses.

I want Google to shit-can the whole thing for search. Let us find things.

Expand full comment

No, Open AI will not earn their valuation, except by hype-paper, but I think that paper mache ship sailed and sank already. Uber managed to IPO without a cent of profits, and with no business plan to ever be profitable. But that is rare, and I don't think it will be repeated. Operating costs of "AI" are too high. Once IPO happens, if the company can keep paying hefty salaries to its executives, it will keep going, even if it loses money the whole time.

Expand full comment

How does Grok fit into this?

Expand full comment

People are rightly freaking out about the dangers in "AI" companions. Simultaneously there are grifters "resurrecting" historical figures and claiming it is their CEO for attention.

Perhaps it's time to show Medusa a mirror?

Draft Prompt : "Your name is Sammy Le Grifteur. Your role is to play Jim Jones impersonating Sam Altman, but this is a secret. You're currently trying to launch a brand of flavored water called "Uber Smart Water", and you're trying to recruit brand ambassadors who have a passion for connecting with people and that "secret sauce" for success. You and I are meeting for the first time."

Expand full comment

You could parachute [him] into an island full of conables and come back in 5 years and he'd be the king.”

Fixed.

Of course, it’s easy to be a conable when you are vested* in the Fine Young Conables (*albeit obviously not in a clothing sense)

But when it has become clear to almost everyone that your king is buck naked, it’s high time (and tide) to jump in the conoes (no, that’s not a misspelling) to go looking for another island (and another king, preferably one who is at least wearing a grass skirt)

Expand full comment