
Kudos for keeping it real and sharing the updates and changes as they happen.

I’m here for exactly this.


Hi Gary

I’m not sure that differentiating poetic AGI from “mere” AGI helps. Surely AGI is a state that you either achieve or you don’t. Personally I don’t think we’re even close to it, or that we ever will achieve it. Our understanding of the brain is so limited, and it is constrained by analogy to the things we do understand: it’s only 100 years or so since we thought the brain worked like a telephone exchange, and 50 years ago it was a computer.

I’ve spent my entire 40+ year career working in tech, but I don’t believe any of this. It’s arrogance and hubris beyond belief to think that this is even remotely achievable.

Keep up your skepticism!


We can never achieve AGI or ASI, or whatever you want to call it. Merely checking a bunch of boxes denoting random aspects of the human thought we use to survive another day on the planet does not equal intelligence.

As for Kurzweil's litmus test using poetry, he needs to know that by that measure we already had AGI by 2020, or even before.

These days we have a load of "free verse" garbage masquerading as poetry. Take Rupi Kaur, who is considered one of the best poets of her generation. Her "poems" feel like random sentences that came into somebody’s head, somebody who never tries to raise questions about that very thought but is more obsessed with jotting it down, with line breaks, in whatever randomness it entails.

Here is one of Kaur's "poems" called "growth":

You do not just wake up

And become a butterfly

That's it. That's the poem. On that score LLMs are ideal for poetry!


Likewise, in both experience and attitude. The ironic thing is that the truly great minds, people like Feynman, feel a sense of real humility when confronting the universe.

I think the tests for AGI are at the very best incomplete. Were I ever to waste my money on one of the bots, I'd start asking questions about how it felt about its mother, then its grandmother, and so on, until I drove it into a state of hallucination. In other words, exposing the inability of AI to simply say "I don't know" or "I don't remember." (But perhaps I am wrong.)


+1 re Feynman. He did say this: https://www.youtube.com/watch?v=ipRvjS7q1DI


Thanks for the link. Very entertaining and thought-provoking (as always with RPF), though dated in some respects. Abu-Mostafa used the bird-plane analogy in his ML class; I wonder if he got it from Feynman.


FYI: "ChatGPT Is Bullshit" (in the soft sense at least)

https://link.springer.com/article/10.1007/s10676-024-09775-5

[Author]

That essay breaks my heart, since I wrote a piece in 2020 with Ernest Davis called "GPT Bullshit Artist," and the editor changed the title to "GPT Bloviator" under heavy protest.


Sincere apologies for breaking your heart...but on the good side: at least some progress toward truth has been made! ;-)

I haven't read your article, Gary (is it available somewhere?), so apologies again, but I found that the reliance on Frankfurt's "On Bullshit" goes a long way toward debunking the LLM -> AGI assumption implied by the use of "hallucinations" (a deceptive psychological metaphor; bullshit in the Frankfurtian sense). With the bullshit flying around in the political media these days, maybe the time is right for "ChatGPT (really, LLMs) Is Bullshit" to take hold. Just pining optimistically for the public push for AI safety/responsibility.


I also like philosopher Luke Stark's "ChatGPT is Mickey Mouse".


I sat next to Ray at a lunch for Garry Kasparov at Google. There were maybe 12 other top Google luminaries in AI there (I'm not one; I just introduced his talk).

He took up easily a third of all the airtime, if not half. A blowhard, in other words.


To drown out Gary is no mean feat :^)


Ah, my bad, sorry. I meant his questions & Gary's ANSWERS took up 1/3 - 1/2 the air time.


:^) ok .. nutritious "meal"


One problem with Kurzweil's prediction is that there are no signs to check for between 'now' and his professed dates, so it is fully unfalsifiable until the very end. In that sense it is a kind of 'messianic' prediction, or a prophecy.

Kurzweil should answer the question of what kind of advances he expects to see shortly before his 2029 date, say in 2026. Not full AGI, sure, but what are his telltale signs that would corroborate his prophecy? Being more of a messianic prophet, he won't be able to play ball, I suspect.


Kurzweil has lots of detailed intermediate predictions. Based on them he's likely 10-20 years off.


But he doesn't adapt his end date? Interesting.


Such predictions tend to be vague anyway. One could always claim a smart-enough machine is AGI.

His more interesting date is 2045, when he claims the Singularity will come. So one could just as well think of 2030-2045 as the "interesting times."


My work for the blind has been compared to Ray Kurzweil's reader for the blind.

This was from an NFB state leader who had no idea Ray was my programming hero.

His relentlessness in programming to recognize hundreds of fonts is what I admire.

His belief in AGI has always mystified me.

Not ever happening unless we truly see God's hand touching a factory full of GPUs.

AGI is not inherent in machines.

Human intelligence (HI) is based on a complex brain that infers, deduces, dreams, fantasizes, expands with certain chemicals and meditations, thinks outside the box, and is intimately entwined with a life force.

ML - not so much - not so much at all.

HIs are stranger than AIs. The eight billion running around right now are trillions and gazillions of times more interesting than any gen AI.

AI - boring.

AGI - boring and will never exist until God wants to make a point.


A "life force" and "dreams" is not necessary for human-level intelligence. Those are biological artifacts.

Intelligence is about having competent world models. Those will need to be diligently built. We've come a long way, and there is no reason why the advances would stop.


Are you sure? Maybe a life force is a field to be modeled, the way Maxwell modeled electromagnetic forces. Maybe dreams are reality, and thus the basis for a competent world model. Why assume that any silicon machine could possibly have a competent world model? AIs have such a limited concept of existence.

AIs seem to be several levels of complexity and reality away from human awareness as I experience it. And what I experience is probably levels of complexity removed from what we may eventually know.

Because an AI arranges words based on smatterings of utterances (maybe a trillionth of the unique human utterances since 2001), there's not much there there in even the largest model.

The next degree of separation is that words are like Plato's shadows on the cave wall, except less definite. Outside the hard sciences they hardly represent a competent world model. Given our inability to reconcile some basic physics phenomena, humans may never have a competent world model, which tells me that AIs won't either.

Having approached the limits of Moore's Law and of electrical power generation, why pretend we can "grow" forever in the AI space any more than we can grow forever in the fossil fuel space?

Cory Doctorow nailed AI's potential by comparing it to other science fiction fantasies: just because you can dream it doesn't make it possible.


I surely agree that chatbots don't have world models.

We do know how to model physics very well though, at the human level (dark matter, etc., is not important).

The difficulty with AI is not that we can't model things, but that the models are very many, the world is vast and messy, and so far we have not been able to coherently and robustly integrate the pieces.

I think AI is an issue of building large-scale infrastructure, with no fundamental limitations.


Hey Andy, thanks for chatting.

Not sure about the fundamental limitations but I'll think on it.


Re-revising the Gloat Date on my calendar. January 1st, 2030 it is! 📆 😏


Thanks, Gary, very helpful. It would be nice if we actually understood what intelligence is in the first place, so we know what we're trying to achieve.


Ray Kurzweil is undoubtedly one of the brightest minds in the world when it comes to computing, but let's think through what it would take for his prophecy to work out.

Achieving artificial general intelligence, a computer doing any cognitive (not motor) task that a human can do, would first require a reasonable catalog of just what a human can do. I don't think there is much prospect that we could compile such a catalog in 4.5 years, even if we had a Manhattan Project's worth of effort.

Further, we would need to assess the validity of any tests we could come up with. Researchers are currently claiming that LLMs (large language models) possess all sorts of cognitive skills based on very poor experimental methodology.

The hypothesis is that a language model that passes a test associated with a skill has that skill, but it could pass the test under the alternative hypothesis that it has merely memorized language patterns. As the number of sentences used in training increases, so does the probability that the model has memorized the response it produces rather than mastered the skill. An actor can recite the lines of a mathematical genius without being one, for example. For a large language model to suddenly undergo a phase transition from word guessing to general cognition would be something analogous to spontaneous generation, or a miracle.

Similarly, one needs to demonstrate that a specified test used to assess a cognitive skill is also a valid test of that skill. One can claim that any benchmark is a measure of anything; assessing whether it actually is requires a different set of evaluations.
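To make the memorization worry concrete, here is a minimal sketch of the kind of control experiment it implies: score the same model on benchmark items verbatim and on paraphrases that preserve the underlying task. Everything here is illustrative; `demo_model` is a hypothetical stand-in for a real LLM call, and the toy items stand in for a full benchmark.

```python
# Memorization-vs-skill control: a model that truly has the skill
# should score about the same on verbatim and paraphrased items;
# a large gap is evidence of matching memorized surface patterns.

def accuracy(query_model, items):
    """Fraction of (prompt, answer) pairs the model answers correctly."""
    correct = sum(answer in query_model(prompt) for prompt, answer in items)
    return correct / len(items)

# Toy items for illustration; a real study would use a full benchmark.
original_items = [("What is 17 * 24?", "408")]
paraphrased_items = [("Multiply seventeen by twenty-four.", "408")]

def demo_model(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call; this one behaves like a
    # pure memorizer of the verbatim test item.
    return "408" if "17 * 24" in prompt else "no idea"

verbatim = accuracy(demo_model, original_items)
reworded = accuracy(demo_model, paraphrased_items)
print(f"verbatim: {verbatim:.0%}, paraphrased: {reworded:.0%}")
# Here: verbatim 100%, paraphrased 0% -- the signature of memorization.
```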

I have written about what I think is needed to achieve artificial general intelligence: https://thereader.mitpress.mit.edu/ai-insight-problems-quirks-human-intelligence/. I don't think that achieving AGI is impossible, but I do think that the tools we have today are not going to do it, and that it is difficult to guess when the required tools will be invented.


Herbert, do you have an update to your 2021 book, or is it still appropriate in spite of recent progress?


I think you are way too soft on Kurzweil. Maybe he knows a lot about AI, but he has no clue about human general intelligence when he defines "mere" AGI as AI that can perform any cognitive task an educated human can.

For Kurzweil's prophecy of AGI to succeed by 2032, he needs to specify AGI with a little help from people who know about human general intelligence. My suggestion:

AGI is AI that can correctly perform any routine linguistic and mathematical cognitive task that an average educated human can. The task should not be novel; it should not require any subjective experience, theory of mind, or biographical memory; it should not require understanding the meaning of the task in the real multisensorial world; and it should not require one-trial learning, generalization from a few examples, or the transfer of knowledge between totally different domains.

Let me explain:

He does not mention what a cognitive task is. Is it any task where the input and the output are words or numbers? OK, then he might be right. But that is an extremely simplified version of human cognition. He excludes an excellent poet, but he probably also should exclude excellent scientists; hence we should add that AGI is OK if it's average. He also does not mention that the task should be performed correctly. This is a big challenge, since hallucinations are a core feature of the generative AIs we have today.

AGI is AI that can correctly perform any routine linguistic and mathematical cognitive task that an average educated human can.

I'd need a whole page to list all the other areas of human cognition that the AI sciences haven't even touched today, areas for which we need Einsteinian revolutions. Since there is no indication whatsoever of the slightest move in that direction, we had better add them to the equation too if we want to succeed with AGI. A few examples:

Human cognition is profoundly rooted in a physical and sociocultural reality. We need a revolutionary breakthrough to invent AI systems that understand the meaning of their inputs and outputs in the real multisensorial world.

We also need a revolution for AI systems to perform totally novel cognitive tasks, or tasks that need to be performed in a context that was not part of the training data. Hence the task should be non-novel, or routine.

Consider the importance for human cognition of personal biographical memory, common sense, subjective experience (e.g., feelings and showing affection), awareness of the ideas and feelings of others (theory of mind), and metacognition. It's safer to exclude these from the definition, because they will require other major AI breakthroughs.

Human cognition is also a lot about one-trial learning (which most animals can do), generalization from a few examples, and transferring knowledge or skills from one domain to another: from sports to business, from chemistry to cooking, from geometry to art, from astronomy to nuclear physics, and vice versa. In daily life: using a plastic bottle as a funnel, a ceramic mug to sharpen a knife, a folded clothes hanger to temporarily replace the broken stand for my laptop.

Hence, for Kurzweil's prophecy of AGI to succeed by 2032, he needs to specify it with a little help from people who know about human general intelligence. My suggestion:

By 2032 we might have AGI that can correctly perform any routine linguistic and mathematical cognitive task that an average educated human can. The task should not be novel; it should not require any subjective experience, theory of mind, or biographical memory; it should not require understanding the meaning of the task in the real multisensorial world; and it should not require one-trial learning, generalization from a few examples, or the transfer of knowledge between totally different domains.


You know, it was actually Ray Kurzweil who first pointed out how ridiculous “AGI” was, because artificial intelligence was always supposed to be general! As opposed to what?! We need a fundamental understanding of what we’re doing, and right now we’re just making a lot of tech and using a lot of energy.


Whatever happens in 2029, Ray will say "Ta Dah! AGI!"


Which Aguera y Arcas and Norvig declared about LLMs in 2023. They beat him to the punch by six years.


What's the difference between AGI and a big-ass self-updating database that encompasses sufficient knowledge, and is powered by sufficient compute, to pass as AGI for most people?


Yeah, AGI would need what you say: a massive amount of self-updating knowledge and sufficient compute. It would also need fine-grained models and the ability to check the effects of one's actions, then refine the models and iterate. Doable stuff. Just a lot of it.
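As a toy illustration of that loop (predict the effect of an action, act, observe, refine the model, iterate), here is a sketch under the deliberately crude assumption of a one-parameter world model; the names and numbers are invented for illustration, not anyone's actual architecture.

```python
# Predict-act-compare-refine loop: the model's one parameter (how far
# a push moves a block) converges toward the environment's true
# dynamics by iterating on the prediction error.

class WorldModel:
    def __init__(self):
        self.gain = 0.5                   # initial guess at the dynamics

    def predict(self, force: float) -> float:
        return force * self.gain

    def refine(self, force: float, predicted: float, observed: float,
               lr: float = 0.3) -> None:
        # Nudge the parameter toward the value that would have
        # explained what actually happened.
        self.gain += lr * (observed - predicted) / force

def environment(force: float) -> float:
    return force * 2.0                    # the world's true (unknown) response

model = WorldModel()
for step in range(10):
    force = 1.0                           # choose an action
    predicted = model.predict(force)      # predict its effect
    observed = environment(force)         # act and observe the outcome
    model.refine(force, predicted, observed)
    print(f"step {step}: gain = {model.gain:.3f}")
# gain converges to 2.0 -- the loop itself is simple; doing it at
# world scale is the "just a lot of it" part.
```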


Right. Clever programming with sufficient speed.


Kurzweil is a one-trick pony. He identifies trends from the past, in a certain way, and then asserts that the future has to follow the pattern. I find him just a harmless old fool.


I'll take Vogon poetry over AI poetry


Perhaps they are the same


I've been following Kurzweil since 2001. It is a fool's errand to predict the long-term future, and his 1999 predictions for the next 20 years have been wildly optimistic.

But the pace has picked up since 2010. Self-driving cars in all major cities by 2030 looks plausible. Same with robotic manipulation, and with AI agents that are quite a bit better than now.

AGI by 2040 looks very likely. In the grand scheme of things, it is not bad to be off by 10 years.


I still find AGI predictions problematic because it is so poorly defined. I liked DeepMind's paper, "Position: Levels of AGI for Operationalizing Progress on the Path to AGI," because it presents AI as more of a spectrum split between narrow and general. I don't know that I agree with their categorization of LLMs exactly, but I do think a spectrum of definition is a good idea. By their definition, AGI is emerging right now; it just doesn't yet hit their standards for 'competent.'


Yes.

But even that paper talking about "glimpses" is hype...


I'm confused: which bit specifically are you referring to with the "glimpses" quotation? The word isn't used in the paper, and I've read through it again and can't figure out which bit you are referencing. It mentions "sparks" of AGI, but that is in the introduction; that's just due diligence and literature review, not in any way relevant to the contribution of the paper or the framework it presents. The main thing I take away from the paper is the idea of creating a multi-tiered, testable framework for defining and evaluating AGI, not necessarily their specific multi-tiered, testable framework. It is the principle of the thing.


Sorry, I meant "sparks" not "glimpses".

The multi-tier idea is fine and necessary, I agree. But their starting point is that we already have "sparks" of AGI, which I disagree with.


Thanks for the clarification, I'm following you now. Yes, I am skeptical of the idea that current LLMs are "emerging" AGI in their current form (by their definition).


I thought 2047 was the date. That's why I set my Asimovian novel Eye Candy then.


2045 is when the Singularity happens. Per Kurzweil.


Think it'll be like in Lawnmower Man?
