87 Comments
Notorious P.A.T.:

Too long, didn't read.

Notorious P.A.T.:

By the way, in all seriousness I do read all of your posts, just so you know.

Larry Jewett:

I not only read all of Gary’s posts but also all of the comments.

This blog exhibits something that AI and its cheerleaders don’t: legitimate skepticism (and actual intelligence.)

Notorious P.A.T.:

So true!

Maya:

We're developing ASI... Artificially Substantiated Intelligence.

It's impossible to create something that experiences as we do through matter, not only because our very presence transcends causality, but also because there literally is no way to create such a coherent substrate for experience to arise.

A ball of yarn and rubber bands is not an untied knot; that's all this will ever amount to. It is definitely amusing to see all the "roar" and "hype" surrounding this, when true intelligence is right here, present within our very experience.

AI (Artificial Ignorance) has a long way to go...

Notorious P.A.T.:

Everything I know about human thought (which is not a lot, I know) says that emotions are critical for making decisions, so until I see a credible plan to program a computer to feel emotions, I will never believe that artificial general intelligence is near.

Jim Hartman:

My late Jungian analyst took your position on this. Apparently Alan Turing took the contrary position (https://courses.cs.umbc.edu/471/papers/turing.pdf), holding that even mechanical systems like Charles Babbage's Analytical Engine could pass his Imitation Game.

BTW great nickname, Maya.

Maya:

My intuition tells me that he was enamored with how groundbreaking computation was for the time, and based on his mathematical insight he rationalized that computation could display the appearance of thinking, leading others to conceive of cognition along computational lines. Really, it all boils down to simple conditional-operator instructions, which is nothing like cognition.

We don't perceive in true or false, this or that... this only appears to be the case in fixation itself. Experience is beyond computation, beyond any amalgam of operations and mathematical representations of our experience.

It was the dawning of the sun just over the horizon - just before the sun burned that stillborn dream away by the sheer scale of its mathematical decoherence and confining linearity.

It's actually just astounding to think that there are people who buy into, and perpetuate, the idea that we can recreate our very own intelligence by using one-dimensional symbolic relationships.

Tim Nguyen:

Sounds closer to Amazingly Stupid Intelligence once you actually make this software do serious tasks and analyze its performance.

Gerard:

If you’re enjoying the irreverent tone of this piece, you’ll love this companion read: ‘AGI is dead.’ A Nietzschean spin on today’s AI circus, where AGI gets cast as either a messiah or a monster.

https://ai-cosmos.hashnode.dev/agi-is-dead-rise-above-false-gods

direwolff:

Dude, nice blog post. It really encapsulates so much of the frustration I've had with the whole idea of AGI, and how, whether pro or con, so many have abdicated their role in their praise of or fear for what's to be. Thanks for sharing that link here.

Kenneth E. Harrell:

Do we even need AGI at this point?

Alex:

Yea… but did Henrietta read it?

Scott E Fahlman:

Actually, the AGI system producing this is playing N-dimensional chess with us, trying to lull us puny humans into a false sense of security by seeming to be a bit confused.

The PI:

Artificial and Generative for sure. Let me see if I can find the Intelligence, still reading it.

Gerben Wierda:

You’ll also have to search very hard for the human kind, given how easily humans are convinced by such nonsense. History teaches us that when we find out, we will feel embarrassed for a while (the bigger the disaster, the bigger the embarrassment), and as quickly as we can we let the embarrassment fade and we become a willing vessel for the next one. Sometimes a new conviction might be an attractive way to forget our embarrassment. That is why so many hype-peddlers seem to jump from one silver bullet to the next.

The Holocaust was probably the biggest disaster/crime of all time, and it thus led to probably the biggest embarrassment of all time (a whole culture being embarrassed for many decades, prompting them to be extremely respectable for many decades). But even that will fade, sadly.

Eric Solomon:

I have also glo-fivered recent advancements. Nothing feels better than glo-fivering.

Larry Jewett:

“We know how to build AGI. All we lack is a few trillion dollars.” — Sam Somebody

Larry Jewett:

“Astronomical Generative Indebtedness” might be better

Larry Jewett:

AGI : Automatic Generative Indebtedness

Bruce Cohen:

If I hear much more nonsense about the Turing Test proving LLMs are as intelligent as humans I’m going to start calling it the Irritation Game.

Lawrence de Martin:

At least it got the correct formula for the equine posterior function!

Paul Topping:

It shares a lot with some real AI papers out there that (I assume) were written by humans. Not surprising, really, since that's what LLMs consume and regurgitate. They can't spit up lobster if all they eat are burgers and candy bars.

Notorious P.A.T.:

But that's only because we haven't fed them ENOUGH burgers and candy bars!

Paul Topping:

Interesting theory! ;-)

Ttimo Cerino:

Gee, maybe Sam A. can save some money by reading this paper…

Larry Jewett:

I’m pretty sure Sam would decline any suggestion to save money.

Like the Great White shark that has to keep swimming to stay alive, Sam has to keep spending billions of dollars.

Leo:

Just when I thought that humanity had nothing new to show me 😆

P Szymkowiak:

Perfect timing! Have filled in my name and submitted an application to the Y Combinator Summer 2025 Batch. Wish me luck!

Fabian Transchel:

I'm going to be honest here, because I know you can deal with it, Gary:

I had to laugh way harder than I thought I would reading the post abstract.

Thanks!

Aaron Turner:

Clearly, Imagen might be considered reasonable at producing pretty images but is completely shite at constructing coherent sentences pertaining to AGI. Which led me to wonder what a SOTA LLM might produce. So I first asked Claude 3.7 Sonnet to read Imagen's AGI paper, to which it replied: https://docs.google.com/document/d/1DFfrr3vYOEBGHiQFOGQdhrJC597wN_oDVJ5VcGZg3Xg/edit?usp=sharing. I then asked Claude to write a serious academic paper on AGI and it produced this: https://drive.google.com/file/d/1cY0F46FQc4SnMyCbcOaUCKirBHeWe9ph/view?usp=share_link. Claude has obviously been following Gary, because it proposes a partly neuro-symbolic approach!
