105 Comments
David Roberts's avatar

"I rarely fan boy but this moment was truly special"

Indeed. Fanboy away, says I. I certainly would have.

TheAISlop's avatar

Thank you for your continued push for better science.

Larry Jewett's avatar

Some science is better than no science.

Black boxes ain't science

Larry Jewett's avatar

And anyone who thinks they are ain't a real scientist

Nathalie Suteau's avatar

You were the only one mixing insights with humour. You may have not noticed it but you were the rock star of the event. It was interesting, refreshing. Congratulations.

Graham Lovelace's avatar

Just to add Gary, you were brilliant too.

Nat Irvin II's avatar

Thanks, Marcus, for the ongoing campaign for sanity.

C. King's avatar

Gary: I've worked through the first comments and panel (five minutes each) and will continue until I get through the whole thing--probably days. But I want to thank you now for enabling access to this wonderful moment in our intellectual history.

Kathleen Weber's avatar

Noah Smith wrote an excellent post this morning on the ramifications of the AI bubble for the economy.

America's future could hinge on whether AI slightly disappoints

If the economy's single pillar goes down, Trump's presidency will be seen as a disaster.

https://www.noahpinion.blog/p/americas-future-could-hinge-on-whether

RCThweatt's avatar

Paul Krugman has been making similar comments on the long-standing pattern of over-investment in new tech, going back at least as far as the railroad-building boom in Great Britain in the 1840s, and on the great difficulty and long delays in fully exploiting new tech: for example, electricity in manufacturing. An entirely different type of building was needed, the large single-story buildings we're familiar with, which enable the easy movement of everything, as opposed to the compact multi-story buildings dictated by the need to distribute steam and mechanical power.

Takeaway seems to be: even if AI eventually delivers, there'll be a bust, just as there's always been.

C. King's avatar

Kathleen Weber: And then there's this in this morning's New York Times where the coal mine canaries are screaming at everyone:

https://www.nytimes.com/2025/10/10/business/first-brands-bankruptcy-wall-street.html?unlocked_article_code=1.tE8.tCGZ.BYdqMLDm5WpE&smid=url-share

Larry Jewett's avatar

From the linked article “There’s just a lot that we don’t know, to put it bluntly,” a lawyer for some of the firm’s creditors told the bankruptcy court this month. The same lawyer called First Brands’ financial structure a “black box.”

"Black boxes" seem to be very popular these days, particularly when it comes to AI technology and to the financial "deals" that finance it.

But what you don't know won't hurt you, right?

C. King's avatar

Larry Jewett: "But what you don't know won't hurt you, right?"

RIGHT. (Cough, cough, gag, gag. Where did all these dead canaries come from?)

Larry Jewett's avatar

As AI-ristotle once said, " a dead can-AI-ry is better than a live one"

Larry Jewett's avatar

Because, as AI-ristotle also noted, "Dead can-AI-ries don't sing"

Dan Wolpert's avatar

Like I said when you first posted about this event, it would have been great to be there. What I am also continually fascinated by, as an expert on consciousness, is how much of this discussion is about the nature of the human mind and the ego process: things that the great spiritual traditions have been talking about, and warning us about, for over 4,000 years. Yet these conversations contain no references to those traditions, and often seem to be reinventing the wheel on these issues, or somehow thinking that they've discovered something 'new' because a PET scan says it's true. Thanks.

Diamantino Almeida's avatar

I believe this vast and rapid investment is driven by fear. Big tech lacks real innovation or imagination, so they need to prove they're still worth investing in. And what better way than with a digital parrot, an artificial imitation they can use to make us doubt our own intelligence and replace it with their poorly designed tools, which they market as "Artificial Intelligence"?

It's all a game, a race to see who can extract the most profit. The fallacy is that to be a winner, you must be bigger: bigger data centers, larger investments, exaggerated lies. And all of this, supposedly, for the sake of humanity.

But how can we trust these platforms and companies when they demonstrate, day after day, how inhumane they can be?

How many traumatized data annotators were harmed in the process of perfecting these models? Were they in the news, while the engineers in their ivory towers rake in million-dollar salaries?

I’m deeply concerned about the manipulation and lies big tech is using to convince us that they’re the best choice to handle AI to take control of our futures and our lives.

Does it make sense to exhaust water resources essential for humans, animals, and plants just to fuel their grandiose, megalomaniacal dreams?

I’m glad that some of us are noticing these issues, and that people like Gary Marcus and many others are helping expose the charade.

I believe it is morally wrong to turn a useful tech like LLMs into a jester. There are AI techs out there saving lives and really improving our lives, but their investment is a fraction of what LLMs are getting.

We should start doubting these companies and cast our vote where it hurts the most: where we spend our time and money.

David P Reed's avatar

"Big Tech" isn't engineers or scientists. In fact, it is a branch of "Big Finance" - venture capital and private equity in particular. Both have access to Other People's Money, and instead of productive investments in products or services, they invest in Promoting More Investment, hoping to maximize returns by selling dreams to speculators and governments.

C. King's avatar

David P Reed: Here in the States, they, and the oil and gas and polluter people, already got hold of Medicare directives and have absconded (via Mr. Vought whom no one but the GOP Congress voted for) with already-appropriated money for scads of other civilized and civilizing funding. And I don't believe for a minute that Social Security is not next up.

Jim Ryan's avatar

Since it is now obvious that we can't scale our way to AI, how long until it has an effect on all the building of data centers? Or will it?

Pramodh Mallipatna's avatar

Interesting, will check out the talk.

That we are building so many data centers without the tech proven at scale is concerning.

Sharing an article on the data center buildout that has links to some videos worth watching:

https://open.substack.com/pub/pramodhmallipatna/p/inside-the-data-center-boom-powering

True to Type with Pollyanna's avatar

Particularly given those data centers' impact on power usage, water usage, lack of FPIC in areas they're being installed, limited consideration of ethics at any level etc etc.

Larry Jewett's avatar

Limited ethics?

I guess if one considers "zero" the limit

Larry Jewett's avatar

The Calculus of AI

slip-AI-ry slope = limit as ethics go to zero of

AI / ethics

C. King's avatar

True to Type: Add to that: how these people have already shown their opportunistic and even predatory intentions, in absconding with others' works, is "telling" about the need for some sort of ethical and political gateway (call it empowered regulations, or whatever). It's not full-tilt terrorism, but it makes a writer afraid to put anything out there. (Publishing houses are bad enough already.)

George Burch's avatar

GPT-5 wrote this from my prompt. Like Gary, I believe new AI models are needed for the quality of Q&A to improve. Turing would understand.

The pattern is striking: scaling LLMs seemed to clear many errors—until it didn’t. The question is *why*. Much of the debate frames this as a flaw in the architecture or scaling approach, but perhaps the problem lies deeper—in data preparation.

What’s missing from LLM training data isn’t just “more text,” but the **human cognitive structure** that organizes information. Compare a corpus of random text to a well-crafted textbook: experts design textbooks hierarchically, with titles, chapters, and subheadings that encode relationships and context. These structural markers—typography, hierarchy, and layout—carry cognitive intent that tokenization tends to strip away.

When the model loses those connections, it also loses the ability to distinguish between homonyms and contextual identities (“Lincoln” the president, the car, or the tunnel). As a result, token embeddings collapse polysemic meanings into proximity, amplifying contextual confusion—sometimes catastrophically, as when similar names cross ethical or factual boundaries.

Early scaling masked these issues by smoothing over sparse data regions, but as models grew, the absence of structural cognition became more apparent. The recent pivot toward *human annotation at scale* can be read as an implicit admission of this gap—a red flag reminiscent of symbolic AI’s own bottleneck around labeled knowledge.

In a sense, the turn to generative AI is less a triumph of scaling than a detour around the unscalable part: *true cognitive structure and human labeling*. There was no detour, just a long road ending at a cliff.
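The "Lincoln" collapse above can be made concrete with a toy sketch (an editorial illustration, not real model embeddings: the 2-D "sense" vectors are hand-made for the example). A single static token vector that averages two distinct senses ends up equally close to both, so context can no longer pull them apart:

```python
# Toy sketch: one static embedding per token averages its senses,
# leaving "lincoln" equidistant from its president and car meanings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hand-made 2-D "sense" directions, purely for illustration.
president_sense = [1.0, 0.0]   # historical-figure direction
car_sense       = [0.0, 1.0]   # automobile direction

# One token, one vector: the senses collapse into their average.
lincoln_token = [(p + c) / 2 for p, c in zip(president_sense, car_sense)]

print(cosine(lincoln_token, president_sense))  # ≈ 0.707
print(cosine(lincoln_token, car_sense))        # ≈ 0.707: equally close to both
```

Contextual models mitigate this by computing a fresh vector per occurrence, but the sketch shows why purely token-level proximity can blur polysemic meanings.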

Larry Jewett's avatar

"token embeddings collapse polysemic meanings into proximity, amplifying contextual confusion—sometimes catastrophically"

In plainspeak, it can result in

"hallucinations", since irrelevant nonsense ends up proximate to meaningful stuff in the distribution.

George Burch's avatar

Yes, thanks. As the prompts pile up sequentially, the chance grows that the collapsed weight on a single word could redirect the output. This is most often noticed when there are multiple entities with the same name in the training text.

BTW, "hallucination," "inference," and similar humanizing terms are Anthroslop.

Larry Jewett's avatar

I like that term, but isn't it spelled "AI-nthroslop"?

Larry Jewett's avatar

Ironically, AI "scientists" are making AI "hallucination" more likely with their perversion of the English language.

Now when you ask an LLM what a hallucination is, it might respond "Hallucination is a tendency of an LLM to tell falsehoods" [like this AI is doing now]

C. King's avatar

To George Burch: I think the financial situation (huge investments, etc.) is just one of the cliffs that some present thought is heading towards. Maybe fortunately, in some sense, as huge losses at this relatively early stage may help redirect efforts along more (shall we say) appropriate intellectual (and wholly personal) pathways. The "look" of recent pictures of Gaza, or, from another time, of English and European cities after WWII, are good metaphors, however, for what (potential) economic "downturns" can do to the lives of real people.

Gordon's avatar

You may enjoy this: "I asked AI image generators to draw me. Here’s what came up."

https://www.gordonhaber.net/i-asked-ai-image-generators-to-draw-me-heres-what-came-up/

C. King's avatar

I'm "attending" the AI conference, a little at a time--thanks again to Gary.

However, has anyone "done the math" in relating (a) the fundamentally philosophical idea that everything is a hallucination (stated/suggested by one of the speakers) with (b) the hope of injecting the ideas of safety and responsibility into the dialogue surrounding the further developments of AI?

Doesn't work for me . . . or for the speaker, for that matter. First, it strikes at the heart of one's notions of both knowledge and reality; second, it's a pretty good argument for adopting carelessness and even nihilism as one's main source of thought and action; and third, it's not what the speaker does and reasonably assumes, even when he decides to put down his pen and visit the men's room.

Larry Jewett's avatar

Everything is a hallucination...

...to an LLM

...and to its fanboyz and fangirlz

Who needs LSD when you have LLM?

Google Man's avatar

Peter Gabriel wants to take a sledge hammer to it. :)

Larry Jewett's avatar

...while he's climbing up on AI's bury hill