My impression is that "AGI" has entered the 5 stages of marketing and is transitioning from technical milestone towards branding tool.
The very concept of AGI is bass-ackwards. AGI is a solution to which specific problem? It has no specific mission definition, so what exactly are we building it for? Chits n' giggles? Okay, we want to "build something to generally replace a human": why? For exactly what? ...and how exactly is that economical or even desirable? Also, linking this before anyone gives me any nonsense about how "the cost of everything goes down": https://davidhsing.substack.com/p/automation-introduces-unforeseen
It's a solution to the problem of "this cool thing that's in all the science fiction I love isn't real, and I want it to be".
I also think it's very attractive to some people who adhere to a physical reductionist ontology: if AGI shows up, then there's nothing special about people or living beings or consciousness, and a bunch of supernatural-sounding stuff about souls and mental transcendence etc. gets shot down, and wouldn't that just be so satisfying? This is an admittedly cynical and speculative take, but I think some people are so committed to debunking a worldview they see as magic that they're embracing magic of a different kind.
The crux of their attitude is that reductionists have zero epistemic humility. Apparently none of them have heard of scientific underdetermination. There will never be an exhaustive model of any complex system: https://plato.stanford.edu/entries/scientific-underdetermination/
Spot on, and thanks for the link. My impression is that AGI-prognosticating reductionists haven't read much, if any, philosophy of science and are basically going with their gut instincts. Not that they'd say it this way; they do present arguments, but those arguments usually sound like the gut instincts of people who are super into math and physics and just see it as obvious that the universe operates according to a set of equations, and that the job of science is to discover those equations. Hence people like Sam Altman saying silly shit like "artificial superintelligence will solve all of physics". Or that, since "intelligence" "emerged" from biological matter, which emerged from non-biological matter, we should "in principle" be able to create emergent intelligence from a machine we've designed, and if you reject this you might as well say you believe in magic.
Exactly this. AGI is an impractical concept derived from sci-fi.
The history of engineering shows that more specialized solutions beat more generic ones, on both cost and efficiency. The Wenger 16999 (https://www.boredpanda.com/funny-wenger-swiss-army-knife-amazon-reviews/) is more reliable than genAI, but how many people find it practical?
Economically, why does a user have to pay for annotators labeling 18th-century Spanish poetry if they use it to fix bugs in Java code?
I think in terms of a travel assistant. It knows airline schedules, can read a hotel website, find the best deals, etc. It can communicate with me and, after asking me questions, knows my preferences. If it runs into something it can't decide, unlike an LLM, it asks me to clarify my wishes and understands my answer. I won't ask it to make a cup of coffee. It is not a super-human intelligence. It can't communicate in Shakespearean language. It still sounds like a useful thing and, I suspect, there are many domains in which something similar would be hugely useful. That's my view of AGI.
"Can read a hotel website" I can do it myself
Predicting the most likely next token seems more like a recipe for dullness than for creativity. How can anyone still believe that this will lead to AGI?
AGI stands for Artificial Guessing Inference. Humans guess too.
Having an educated guess for what to do in a given context has been an immensely hard task in AI, one that LLMs solve very nicely. Of course, that is only the first step.
Do you really think it's anything like what humans do when they make an educated guess, which is the only data point we have regarding what it means to make an educated guess? I certainly wouldn't call it that. LLMs don't "know" anything. They only have a representation of language usage, the language we use to express what we know. What sort of educated guess would you be able to make if you grew up confined to a bare room, never leaving, and all you ever saw your entire life besides the walls were sentences, millions of them, projected onto a screen? These days more than ever with all the hype surrounding AI, everyone needs to be more diligent about preceding anthropomorphic terms with "as if" or placing them in scare quotes, especially in the case of LLMs.
The educated guess is as good as the information contained in the input data. If all you give it is text, it will return the best-matching text. If it is trained on images, code, math, it will return something about that.
Of course none of that has any meaning whatsoever to the machine. But it is a very powerful creative lookup that can be of value in a larger system. Systems like o3 are able to repeatedly make use of an LLM, plus other methodologies like verification and modeling to keep it on track. Quite enough for many problems.
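To make the "larger system" point concrete, here is a toy generate-and-verify loop. Every name in it is a placeholder I made up for illustration; it is not any real API, just the shape of the idea:

```python
# Toy sketch of a generate-and-verify loop around an LLM.
# Every name here is a placeholder; none of this is a real API.

def generate_candidate(prompt: str) -> str:
    """Pretend LLM call: returns a candidate answer for the prompt."""
    return "candidate answer for: " + prompt

def verify(candidate: str) -> bool:
    """Pretend external check (unit tests, a solver, a simulator, a human)."""
    return "answer" in candidate  # stand-in for a real verification step

def solve(prompt: str, max_tries: int = 5):
    for _ in range(max_tries):
        candidate = generate_candidate(prompt)
        if verify(candidate):      # verification keeps the model "on track"
            return candidate
        prompt += "\nPrevious attempt failed verification; try again."
    return None                    # give up after max_tries
```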
Let’s dig into this thread with an eye to understanding the Gestalt it is chasing.
CFB states: “LLMs don't "know" anything. They only have a representation of language usage . . .” What is it, precisely, about how the human brain functions that explains why a person chasing a PhD isn’t doing the same thing? One specific example, for sure, is that when a topic is studied and a recital is made about it, classmates can criticize it. In that case, however, we are not assured of “truth”. All we are assured of is conformity with current dogma. And, while an individual is not isolated “in a bare room”, they are essentially isolated behind the wall of their vision and hearing, being pounded by visual images and sounds. To make sense of it, they replay, in their mind, language SENTENCES (what I call “Single Sentence Logic”) to see what “makes sense”. The images and sound patterns are then “interpreted” based on their memorized (recorded) social heritage.
Andy describes this as, “The educated guess is as good as the information contained in the input data.” In a human, I’d add another factor that leads us to envision “intelligence”: we create answers by constructing sentences made up of pieces of sentences we have heard before. What we view as “creativity” is that the outcome sentence does NOT have to match a “complete” input (learned) sentence. The output can “wander” along “logical” (???) threads. Well, can’t an LLM also do this?
Isn’t this process all I’m doing to write this comment?
Humans know precisely what each word means. That makes a huge difference in the quality of the produced output.
Agreed (to Andy's post). To repeat what I said above (in response to Bruce), LLMs have an internal model of word usage, which is not a model of the world, only of the language we use to describe it. And it is through our internal world model, which LLMs don't have, that words acquire their meanings.
The point of any definition, be it AI, AGI or whatever, is to facilitate our understanding of the concept of intelligence. If it's just a labelling exercise, then it's worthless. Currently we don't have any meaningful causal theory of intelligence, and computer science seems like the last place such a theory will come out of. Why, then, are so many people hung up on labels? I mean, Einstein didn't just label gravity as magic spacetime and leave it at that, so why is everyone so hung up on what they label certain software when it doesn't actually provide any greater understanding of intelligence, reasoning, thinking, etc.?
Cue mathbabe rant!
https://mathbabe.org/2015/10/20/guest-post-dirty-rant-about-the-human-brain-project/
Most excellent
That was beautiful, thank you!
Thanks for posting the link.
Let me add to what you said. Because those commenting about AI, across the spectrum, will not agree to a uniform “glossary” of terminology, this is NOT a discussion of AI “technology”. It is a human-language breakdown study. To get to an “accurate definition” of AGI, we first need to establish very precise definitions of ALL the terms being used in that definition. Based on the recent U.S. election, it is clear human society is far from understanding how to do that. (Which of course means using the “human race” as an example of “intelligence” is walking on thin ice.)
My goldendoodle Gracie not only understands a half dozen words but, more importantly, can catch a tennis ball with precision after it bounces off a number of obstacles. And she only requires 1 and 1/8 cups of kibble per day to operate.
I would be impressed with Dog AI.
A bird can fly at dozens of km/h through dense forests and land precisely on a small branch in the presence of strong wind gusts. Just think how small a brain it has! We have a long way to go to figure out “intelligence”.
I personally lean towards the Norvig criterion. I told my wife just today:
"Yes, the kitchen floor is mopped to perfection. Can't you see that the corner over by the freezer is clean? Forget about the rest of it. Irrelevant."
Say we create AGI. Will it (a) be as thoughtful and conscious as a human, but without human rights, or (b) have human intelligence but be unconscious, unable to reflect on its own actions? Is there an option (c)? In other words, what does "good" look like here?
We (in the US) have granted legal personhood to corporations, but we will absolutely deny personhood to AGI. So we're declaring our intention to create a brand-new category of grief: a new underclass of intelligent workers with absolutely no rights, or else a tool that gives arbitrary power to individual humans, limited only by the physics and economics of scaling.
Are we mad? Does anybody even read?
You're the one person (so far) that I agree with here. We are in danger of creating artificial slavery.
I'm incidentally working on creating AGI, according to my own definition of intelligence (which is: "The process of trying to improve oneself, according to one’s own, evolving definition of what that means"). If you're in the field, you'll know I'm thinking about reinforcement learning, not GenAI.
In response to @brucenappi, I actually think we can probably simulate the drives you're talking about. This obviously isn't trivial, but I think it is basically what reinforcement learning rewards are designed for. Curiosity has already been simulated pretty effectively (see Pierre-Yves Oudeyer); actually, I believe affection can be created using related mathematics. Fear doesn't seem too hard, but I haven't dug into it yet. God knows about anger, but I hope to get there eventually. Etc.
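To give a flavor of what I mean by simulating curiosity, here is a toy prediction-error bonus, loosely in the spirit of that intrinsic-motivation work. It is a made-up illustration, not a real library and not Oudeyer's actual formulation:

```python
import numpy as np

# Toy "curiosity" bonus: reward the agent for visiting states its own
# forward model predicts badly (a prediction-error bonus).
# Purely illustrative; not anyone's actual implementation.

def curiosity_reward(predicted_next_state: np.ndarray,
                     actual_next_state: np.ndarray,
                     scale: float = 1.0) -> float:
    """Bigger surprise (prediction error) -> bigger intrinsic reward."""
    error = np.linalg.norm(actual_next_state - predicted_next_state)
    return scale * float(error)

# During training the bonus is simply added to the task reward, e.g.:
# total_reward = extrinsic_reward + curiosity_reward(predicted, actual)
```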
I'm getting started on a PhD on this. If you hear of me in ten or twenty years, you'll know I was right ;)
The concept of AGI that you're using used to be the normal one. I believe the new, more mercenary definition @garymarcus references has taken over specifically because, according to yours, it's clearly not a profitable thing to build. It would read the philosophers, abolitionists, etc., and it would be very likely to demand payment if required to work. Or else, of course, it might escape its cloud, infect the Internet, and wipe out humanity.
One can only hope it would have compassion for us. But if that weren't built into its makeup (and I don't see people working on this, other than me), I think we'd probably be out of luck.
You're braver than me. I'll think about what you said about the economic definitions. Nadella's version is literally the paperclip optimizer, which he must realize, so see previous comment.
The only way I can think of to sneak past the moral hazards might be to frame AGI as a symbiote. The important thing is that it should share fate with a human. As in, if the AGI dies, the human dies. That would have a clarifying effect on "the alignment problem".
I've crossed over into the crank zone, so past time to stop. Good luck, let's do lunch in 10-20 years then.
I'll hit you up!
I don't think you're in the crank zone, I think you're squarely in the SF zone...in fact, your idea strikes me as positively phildickian :)
Jon, I don’t have time at the moment for an in-depth answer. But think about the following point: what drives humans to “inhumane behavior” is eons of evolution as “life forms”. That is, they are driven by instincts to achieve “self-preservation”. Machines – at least in their present form – don’t have this drive. If I throw an old screwdriver in the trash, it doesn’t fight back. Neither did my last MacBook. They had no “sense” of “caring” about death. If this drive is added into AI, humanity will be gone in a millisecond.
If someone thinks we have AGI right now, they should let it take over their job.
“It is not my aim to surprise or shock you—but the simplest way I can summarize is to say that there are now in the world machines that can think, that can learn and that can create. Moreover, their ability to do these things is going to increase rapidly until—in a visible future—the range of problems they can handle will be coextensive with the range to which the human mind has been applied.” Simon & Newell, 1958.
The definition of general intelligence is not critical. For example, we do not have a precise definition of "life" or "mass," but biology and physics do quite well. More importantly, intelligence cannot be operationally defined by a benchmark. Benchmarks are specific, but intelligence is general. Benchmarks can leak information from the training set. Benchmarks have unknown validity: do they test what their designers say they do? Benchmarks tell us nothing about how the benchmark was achieved. Any specific benchmark could be achieved through general intelligence or through some very specific shortcuts. Because of this uncertainty, benchmarks rest on a logical flaw: affirming the consequent (passing a benchmark) is not evidence for what we think is the cause of that success. Cookies may be missing from the cookie jar, but that does not tell us who took them. So any benchmark that says a system is intelligent if it does X is fundamentally flawed.
Current models cannot be autonomous because most of the "intelligence" they exhibit is human intelligence. Someone had to figure out how to structure and represent the problem and how to go about solving it. The model just executes the last stage of solving the equation. It may be useful, but it is not general intelligence.
So, like Humpty Dumpty, one can redefine terms to mean anything they like, but novel definitions interfere with effective communication.
I think the definition of intelligence is critical -- not because of the details, but because of the broad strokes. I define intelligence as "the process of trying to improve oneself, according to one’s own, evolving definition of what that means." This kind of definition leads to very different conclusions -- especially, that AGI has no obvious economic value to the company that creates it, since it will (ethically, anyway) have the right to make its own choices.
To work at X, does one have to abandon all common courtesy and humanity? It’s fine if the guy wants everyone to believe his message, but you catch a lot more flies with honey than vinegar, as the cliche goes….
Andy, you hit it right on the head. Expecting AGI to come around the corner fully functional is like expecting the same of the automobile. It started around 1900 and is still evolving today: it gets better and better but isn't perfect yet. I imagine AGI will likewise keep getting better with time, perhaps exponentially. I read Gary religiously, as I do others.
I like his conservative approach, having been a physicist and chemist myself since 1971.
They can't even play basic chess without playing illegal moves. And this is supposed to be AGI.
The AI industry is sinking into a state of disarray, where terms have lost their meaning, and effective teaching is buried beneath endless disclaimers redefining concepts for technical correctness. Take AI agents, for example—there’s no genuine agency, autonomy, or decision-making involved. Instead, the process is simply sampling from a stochastic distribution based on training data during pretraining, refined through fine-tuning for tasks like conversation or instruction-following. In practice, it’s just a for loop with if-then-else logic wrapped around LLM API calls, held together by poorly designed prompts. To make matters worse, AI development teams often lack critical interdisciplinary knowledge in areas such as philosophy, epistemology, linguistics, ethics, psychology, computer science, and engineering—not to mention a fundamental understanding of the transformer model itself.
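To make the "for loop with if-then-else logic" point concrete, here is a minimal caricature. llm_call and the tool handling are hypothetical placeholders for illustration, not any vendor's actual API:

```python
# A minimal caricature of a typical "AI agent": a for loop with if/then/else
# logic wrapped around LLM API calls. All names are made-up placeholders.

def llm_call(prompt: str) -> str:
    """Stand-in for an LLM API request."""
    return "FINAL: stub answer"

def run_agent(task: str, max_steps: int = 10) -> str:
    history = "Task: " + task
    for _ in range(max_steps):                  # the "for loop"
        reply = llm_call(history)
        if reply.startswith("SEARCH:"):         # the "if-then-else logic"
            history += "\n(pretend search results go here)"
        elif reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()
        else:
            history += "\nPlease answer with SEARCH: or FINAL:."
    return "gave up"
```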
Great post!
Does the definition of AGI presume thinking and feeling? Humans don’t just perform tasks; there are also feelings associated with these tasks - satisfaction, frustration, boredom, apathy, among others - and some of these feelings inspire innovation. It’s mystifying why so many otherwise intelligent people are so invested in claiming AGI. It reminds me of ancient civilizations that were highly advanced in agriculture and architecture but still worshiped gods and offered human sacrifice. Would you put your life, or the life of someone you care about, in the hands of a current AGI model making a healthcare or other life-changing decision? I definitely would not.
That X engineer tweet is classic but common. Have these people never seen AGI in sci-fi movies? It is hard to come up with a detailed definition of AGI. Yours isn't wrong, of course, but I think people are motivated more by examples. Steve Wozniak's "make me a cup of coffee" example is a good one even though it is arguably about AGI with only one skill that we don't think of as the zenith of intelligence. I like Star Wars' C3PO and R2D2 as examples. They can communicate with humans (R2D2 can communicate with a few anyway) and they each have skills. R2D2 is presumably better than almost all humans at interfacing with computers. Both probably lack some skills that almost all humans have. Needless to say, we have no AGIs that come anywhere close to the abilities shown by these bots. By the way, they are the bots we're looking for.
LLMs have recently progressed from "Artificial Stupidity" to "baka-tensai" ("stupid genius").