Too long, didn't read.
By the way, in all seriousness I do read all of your posts, just so you know.
I not only read all of Gary’s posts but also all of the comments.
This blog exhibits something that AI and its cheerleaders don’t: legitimate skepticism (and actual intelligence.)
So true!
We're developing ASI... Artificially Substantiated Intelligence.
It's impossible to create something that experiences as we do through matter, not only because our very presence transcends causality, but also because there literally is no way to create such a coherent substrate for experience to arise.
A ball of yarn and rubber bands is not an untied knot; that's what this will always result in. It is definitely amusing to see all the "roar" and "hype" surrounding this, when true intelligence is right here, present within our very experience.
AI (Artificial Ignorance) has a long way to go...
Everything I know about human thought (which is not a lot, I know) says that emotions are critical for making decisions, so until I see a credible plan to program a computer to feel emotions, I will never believe that artificial general intelligence is near.
My late Jungian analyst took your position on this. Apparently Alan Turing took the contrary position (https://courses.cs.umbc.edu/471/papers/turing.pdf), holding that even mechanical systems like Charles Babbage's Analytical Engine could pass his Imitation Game.
BTW great nickname, Maya.
My intuition tells me that he was enamored of how groundbreaking computation was for the time, and based on his mathematical insight he rationalized that computation could display the appearance of thinking, and so bring others to conceive of it along those lines. Really, it all boils down to simple conditional operator instructions, which is nothing like cognition.
We don't perceive in true or false, this or that; this only appears to be the case in fixation itself. Experience is beyond computation, beyond any amalgam of operations and mathematical representations of our experience.
It was the dawning of the sun just over the horizon - just before the sun burned that stillborn dream away by the sheer scale of its mathematical decoherence and confining linearity.
It's actually just astounding to think that there are people who buy into, and perpetuate, the idea that we can recreate our very own intelligence by using one-dimensional symbolic relationships.
Sounds closer to Amazingly Stupid Intelligence once you actually make these software do serious tasks and actually analyze their performance.
If you’re enjoying the irreverent tone of this piece, you’ll love this companion read: ‘AGI is dead.’ A Nietzschean spin on today’s AI circus, where AGI gets cast as either a messiah or a monster.
https://ai-cosmos.hashnode.dev/agi-is-dead-rise-above-false-gods
Dude, nice blog post. It really encapsulates so much of the frustration I've had with the whole idea of AGI, and how, whether pro or con, so many have abdicated their role in their praise or fear of what's to be. Thanks for sharing that link here.
Do we even need AGI at this point?
Yea… but did Henrietta read it?
Actually, the AGI system producing this is playing N-dimensional chess with us, trying to lull us puny humans into a false sense of security by seeming to be a bit confused.
Artificial and Generative for sure. Let me see if I can find the Intelligence, still reading it.
You’ll also have to search very hard for the human kind, given how easily humans are convinced by such nonsense. History teaches us that when we find out, we will feel embarrassed for a while (the bigger the disaster, the bigger the embarrassment), and as quickly as we can we let the embarrassment fade and we become a willing vessel for the next one. Sometimes a new conviction might be an attractive way to forget our embarrassment. That is why so many hype-peddlers seem to jump from one silver bullet to the next.
The Holocaust was probably the biggest disaster/crime of all time, and it thus led to probably the biggest embarrassment of all time (a whole culture being embarrassed for many decades, prompting them to be extremely respectable for many decades). But even that will fade, sadly.
I have also glo-fivered recent advancements. Nothing feels better than glo-fivering.
“We know how to build AGI. All we lack is a few trillion dollars.” — Sam Somebody
“Astronomical Generative Indebtedness” might be better
AGI : Automatic Generative Indebtedness
If I hear much more nonsense about the Turing Test proving LLMs are as intelligent as humans I’m going to start calling it the Irritation Game.
At least it got the correct formula for the equine posterior function!
It shares a lot with some real AI papers out there that (I assume) were written by humans. Not surprising, really, since that's what LLMs consume and regurgitate. They can't spit up lobster if all they eat are burgers and candy bars.
But that's only because we haven't fed them ENOUGH burgers and candy bars!
Interesting theory! ;-)
Gee, maybe Sam A. can save some money by reading this paper…
I’m pretty sure Sam would decline any suggestion to save money.
Like the Great White shark that has to keep swimming to stay alive, Sam has to keep spending billions of dollars.
Just when I thought that humanity had nothing new to show me 😆
Perfect timing! Have filled in my name and submitted an application to the Y Combinator Summer 2025 Batch. Wish me luck!
I'm going to be honest here, because I know you can deal with it, Gary:
I had to laugh way harder than I thought I would reading the post abstract.
Thanks!
Clearly, Imagen might be considered reasonable at producing pretty images but is completely shite at constructing coherent sentences pertaining to AGI. Which led me to think about what a SOTA LLM might produce. So I first asked Claude 3.7 Sonnet to read Imagen's AGI paper, to which it replied: https://docs.google.com/document/d/1DFfrr3vYOEBGHiQFOGQdhrJC597wN_oDVJ5VcGZg3Xg/edit?usp=sharing. I then asked Claude to write a serious academic paper on AGI, and it produced this: https://drive.google.com/file/d/1cY0F46FQc4SnMyCbcOaUCKirBHeWe9ph/view?usp=share_link. Claude has obviously been following Gary, because it proposes a partly neuro-symbolic approach!