66 Comments
Christopher Shinn:

Robotlike humans are likely to find robots humanlike

Scott Burson:

This reads as a tossed-off witty comment, but I think it's actually quite profound.

AM:

Tyler is Exhibit A for why academics should stay in their lane. Once they find themselves right about one or two things, they get high off their own fumes and lean in to every topic and their takes are thereby reputation laundered into plausible, passable analysis that everyone takes seriously. Harms the Discourse, man

Aaron Turner:

I call it the expert-dumbass boundary.

Bruce Olsen:

Most mainstream economists are this way.

George Burch:

Hinton is a case in point for your comment. World-class computer scientists are out of their lane discussing symbolic AI, let alone brain models. All LLMs fail by not including a conceptual layer. Even if they think it can't scale, they must consider a conceptual layer (CL) in setting weights.

http://aicyc.org/2025/04/15/yes-prof-hinton-there-is-a-symbolic-ai/

Alexander Kurz:

One can engage in academic trespassing in a way so that everyone benefits.

Matt Kolbuc:

I'm completely over this cycle of the whole AI revolution, and the more I think about it, the more it infuriates me. By my estimate, Sam Altman and his ilk must have known about the underlying transformer architecture's limitations and flaws by Fall 2022 at the latest, because the rest of us in the public were catching on by Fall 2023.

Even knowing this, Sam Altman continued travelling around the world saying ChatGPT is going to eliminate world poverty, solve all of physics, make us immortal, blah, blah. He has burned through approximately $16 billion so far, received another $40 billion in funding, and is still beating on the same transformer drum with no shame or humility.

AI will only be adopted by businesses once it can prove the same reliability and trustworthiness as a conventional, battle-tested software system, something which I'm now confident no model relying on the transformer architecture will ever achieve. Those in positions of leadership know this, but simply don't care and continue beating the hype drum regardless.

I've worked with some pretty shady people in the past, but gotta admit, he takes the cake. Instilling fear and dread in hundreds of millions while raising $56+ billion, all with technology he knew was faulty and would never work. What happens when investors realize what a scam this is? Are they going to be contacting their legal teams and asking about possible charges of defrauding investors?

Am I wrong in anything I just said, or am I viewing things clearly?

Gerben Wierda:

"Against stupidity we are defenseless. Neither protests nor the use of force accomplish anything here; reasons fall on deaf ears; facts that contradict one’s prejudgment simply need not be believed – in such moments the stupid person even becomes critical – and when facts are irrefutable they are just pushed aside as inconsequential, as incidental. [...] If we want to know how to get the better of stupidity, we must seek to understand its nature. This much is certain, that it is in essence not an intellectual defect but a human one. There are human beings who are of remarkably agile intellect yet stupid, and others who are intellectually quite dull yet anything but stupid. We discover this to our surprise in particular situations. The impression one gains is not so much that stupidity is a congenital defect, but that, under certain circumstances, people are made stupid or that they allow this to happen to them. We note further that people who have isolated themselves from others or who live in solitude manifest this defect less frequently than individuals or groups of people inclined or condemned to sociability. And so it would seem that stupidity is perhaps less a psychological than a sociological problem. It is a particular form of the impact of historical circumstances on human beings, a psychological concomitant of certain external conditions" — Dietrich Bonhoeffer

All these discussions on AGI are hopefully going to help us to come to grips with the limitations of *human* intelligence.

Even your most analytical human professor of analytic philosophy is basically still a gossiping primate whose conclusions and actions come mostly from their mental automation, as we humans have evolved for speed (<1ms — edge of chaos required given how slow neurons are) and efficiency (about 20W between those ears). We need to be a bit careful when applying the adjective 'intelligent' to any human (including myself).

Larry Jewett:

What do you call a chatbot-fabricated quote from Francis Bacon?

Chatburnt Bacon

Robert Keith:

Historians will look back and recount that this generation of humans was easily deluded.

Kenneth Kovar:

yeah but how is this time frame any different???😆

Larry Jewett:

“LLMs remain pointillistic”

Was that supposed to be spelled “pointlesstic”?

Larry Jewett:

As in the “Pointlesstic Forest”

Gary Marcus:

huge Nilsson fan, here

Larry Jewett:

Largemouth Language Models

Kalen:

I keep circling back to the idea that the central (and economically essential) sin of all this is the hype. If these companies just said what their product actually did, it would be basically unobjectionable, but it also might not secure the enormous piles of cash they need to set fire to in order to run their experiments. 'We have a little coding tool that does a sort of noisy compression of a lot of open source code, but it can pull up useful things if you describe what you want. Give it a whirl and see if it's helpful! It's not very good at math or riddles, but it is good at formatting messy lists! Just double check it first!' Cool, I'll check it out, thanks for that. But no, it's that this is all the Second Coming, courtesy of the greed-nerds who also just flamed out on the metaverse, web3, and whatever the hell they were on about before that.

Bruce Olsen:

As a former developer I would devote zero time to a tool that did not generate correct code.

Correctness in that world is pretty much a step function: if it isn't 100%, it may as well be zero.

Kalen:

Well, and there it is- the thing that most everyone has worked out by now is that all of this being occasionally novel and impressive is very different than it being useful. As other commentators like Max Read have observed, the primary utility of LLMs in the actual world thus far has been the entertainment value of poking at an LLM- making pictures that weren't important enough for anyone to make otherwise, being impressed when it is cogent, poking fun when it isn't. But the only people who actually *need* to make shitposting content at scale are the bad guys, and so here we are.

Bruce Olsen:

Wait... are you a bot?

/s/

Simple John:

Gary,

As usual, I laud your reporting on the many sides of the AI disco ball.

I think I can safely recommend a different approach to the people who can see artificial people.

With Cowen, as I do with comments on Trump, I recommend unconditionally praising them. How could they object? It's what you praise them for that stings.

I praise Trump for almost certainly decreasing America's carbon output by 2/3 to 3/4 in the upcoming depression. He will go down in history as having stopped global heating in its tracks. I don't think that's his goal. Maybe it is.

I don't read Cowen, so I can only guess that he could be praised for giving us giggles every time he drools over Sam Altman's genius. Like Trump, maybe Cowen is always going over the top to make us see how ridiculous that is.

Seriously. I'd much rather read your words making fun of AGI than you treating Cowen, et al., as sort of equals that must be confronted on the field of battle.

I remind people that Intelligence requires Goals.

Omit that from your reference frame and of course you can see unicorns.

That's what Cowen does to create his shtick.

Dana F. Blankenhorn:

I hate to say this, because I have respected Mr. Cowen's work, but....

Follow The Money

Even college professors got to eat. And that's getting hard these days.

Robert N Athay:

I think the concern about achieving Artificial *General* Intelligence is basically misguided unless we have a useful (i.e. falsifiable) theory to describe what we mean by intelligence. "I know it when I see it" doesn't help. At the same time, a software system doesn't have to be *intelligent* in order to be a useful tool.

Kenneth Kovar:

Exactly, the definition of intelligence or thinking is a slippery one. Turing realized that it was pointless to anthropomorphize computer intelligence and chose instead to focus on operational tests like the imitation game. I think Turing might well say these systems are truly operationally intelligent; clearly they are convincing a large number of smart people that they have met that standard....

Max Millick:

"Cowen isn’t some 18-year-old kid finding his oats on Twitter"

In fairness, any time someone is named "Tyler", my brain thinks they are a teenager. Do not name your kid Tyler.

Larry Jewett:

Wasn’t o3 the version that OpenAI claimed got 25% of the Frontier Math questions correct?

Must be the frontiers of math have retreated in recent times

Kenneth Kovar:

yeah and I guess we need to retreat with 'em....🤣

G. Retriever:

I have to disagree with you on one point: Cowen is not a professor at a respected university, he's a professor at George Mason.

Bruce Olsen:

I happened to catch part of a Kara Swisher video (I think it was a podcast) and stopped watching when she casually dropped a dismissive aside about you. I think "pest" may have been involved.

That kind of crack speaks volumes about the attacker, and none of it is good.

Swisher's writing propelled her career, but I see less insight than I see "inside baseball".

Cowen has that particular arrogance that only an economist can possess. I think only Krugman avoids that particular flaw (though he still seems to believe savings leads to investment, and not the other way around).

Braeden Mitchell:

Some people are just deluded.
