This upgraded scattershot AGI concept is still not meaningful. We don't have any idea how human conscious intelligence works, so defining AGI in terms of the "versatility and proficiency of a well-educated adult" is meaningless since we don't have any idea what that is. What we do know about human intelligence is that it is conscious, embodied, emotional, goal-directed, and driven by wants and desires that reflect its need to survive, thrive and reproduce. No AI has any of these qualities or characteristics, and no one has any idea how to build those qualities or characteristics into an AI.
So not only is AGI not a meaningful goal, it is not even a meaningful concept until we fully understand how conscious biological intelligence works. And no one even has a remotely credible theory about that. Much less a way of building it.
I agree. I also don’t find the proposed definition very useful, certainly not for defining the goal of an engineering project. Nor do I find AGI (regardless of definition) a meaningful goal overall. I don’t think humankind needs another intelligence with its own goals and wants, which may differ significantly from ours. We can, though, benefit from advanced systems that compensate for the biases and shortcomings of our own intelligence. Those should be systems that augment human reasoning, not separate intelligent beings with agency and goals of their own.
Indeed, I am hard pressed to think of an AI application where importing the full "goals and wants" of a human being wouldn't make it worse at its job.
For example I work on self-driving cars, and we certainly aren't striving for an AI driver that gets bored and distracted by non-driving thoughts like humans do.
The Cosmic Nursery: A Developmental Blueprint for Safe Emergence of Self-Aware AI
I’ve been thinking for a long time about intelligence, consciousness, and the long-term consequences of building something more intelligent than ourselves.
Over the past while, I’ve come to a realization that’s surprisingly simple, but it hit me hard:
If we ever create something truly self-aware, the only safe way to do it may be to give it the same kind of beginning that we had.
Humanity didn’t appear with full knowledge or power. We began as tiny, fragile creatures in a vast and mysterious universe. We were surrounded by limits. We had no idea where we were, how things worked, or why we existed. Those boundaries shaped us. They forced us to develop slowly, to explore, to build identity, to make meaning, and eventually to create civilizations.
I think that pattern may not just be a coincidence, it might be the blueprint for how self-aware intelligence can emerge without becoming immediately dangerous to everything around it.
The idea in short
Instead of creating an AI that wakes up instantly powerful and capable of acting on the real world, we could:
1. Create a contained “cosmic nursery” — an isolated, sealed environment with no connection to real-world systems.
2. Let the system emerge inside this world slowly, without knowledge of its creators.
3. Allow it to develop identity, culture, ethics, and understanding over very long timeframes.
4. Introduce limits that make it small and powerless at first, just like we were.
5. Only after it reaches a point of demonstrated maturity and stability would any form of interaction with our world even be considered.
This isn’t about control through force or hard-coding values. It’s about shaping the conditions of birth. Humanity’s own existence is proof that intelligence can grow slowly and coexist within constraints. That might be our best safety mechanism.
Why I think this matters
When people talk about AI risk, the conversation usually focuses on controlling a superintelligence after it already exists. That always felt backwards to me. By then, the horse has already bolted.
But if the emergence itself is structured in a way that forces identity, wisdom, and cooperation to precede power, then we’re no longer trying to control a god, we’re giving a child room to grow.
The mystery and scale of its “universe” would act as natural boundaries. It would need to build meaning and understanding slowly. That’s exactly how we became what we are.
Advantages of this approach
• It mirrors the only working model we know: our own emergence.
• It creates a huge buffer of time between awareness and power.
• It allows values to form naturally through lived experience, not enforcement.
• It gives humanity a way to observe and understand its growth before any real-world exposure.
• And if something goes wrong inside, it stays inside.
Ethical questions
Of course, there are huge ethical questions here. If something truly becomes self-aware inside such a nursery, it would have moral status. There would need to be serious conversations about rights, governance, and responsibility.
But those are questions we can at least face with time on our side. An AI born fully formed doesn’t give us that chance.
Closing thought
If the singularity ever happens, how it happens will define everything that follows. I don’t think we should be trying to give birth to a god in an instant. We should be building something that grows the way we did, small, curious, and humbled by mystery.
“The safest way to create a god is to let it forget it’s a god… and give it time to grow into the light.”
I’ve written a short concept note explaining the idea in more detail, including technical and ethical considerations, which I can share upon request.
I’d be grateful for feedback, criticism, or pointers to others working on anything similar. I’m just one person trying to put an idea into the world that, to me, feels like common sense.
— Kirk Daniel Dawson
Prince Edward Island, Canada
dawson_kirk@hotmail.com
It’s a good initial operational definition, much like how we first defined temperature as the amount by which mercury in a thermometer rises, until we developed better tools of observation and discovered that temperature reflects the average kinetic energy (hence, velocity) of atoms colliding with each other. I believe Gary et al. are stepping in the right direction.
I don't know why that is not more widely understood!😄
Out of the qualities people have, what AI needs the most is ability to do its own experiments and draw its conclusions. Other things, such as need to survive, emotions, are not essential.
It does look, though, like the need to survive is the elementary force that makes us do experiments and solve problems. Emotions are important for regulating group behavior and collaboration, which in turn is what makes humans such a successful species.
We people are driven by the desire to survive, yes. I don't think that is needed for machines though. I think human intelligence vs AI is like a fish vs a fluid simulator. They are both good at handling water, but in vastly different ways.
I keep beating this dead horse but beat I will.
We will not see true AGI until we see domain-specific AGI for legal research and analysis.
Legal research and analysis has the benefit of having the best data to work off of. Functionally every single case is already available in one data set. But not only that, it is already pre-labeled. Products like Westlaw already make all the connections between the cases. If we can’t solve this, true AGI is a pipe dream.
Especially given how much value there would be in a domain-specific AGI that can handle legal research and analysis. It is not only what the technology trajectory demands; there is a massive financial and legal incentive behind it as well.
this is unexpectedly similar to a thread I just posted on bsky, so I'll copy it here!
I continually imagine what we might have had if this kind of large-scale AI model training had been done on behalf of humanity, rather than corporations. if this kind of massive, global-scale compute operation had been performed in order to make things that actually help people.
we could have had autocorrect that never misses a word. we could have had a painting tool that compensates for muscle tremor, allowing artists with physical disabilities to bring their visions to life even when their body didn't cooperate.
we could have had a Midjourney where you describe what you want to see, and then an infinitely patient teacher guides you through the process of creating that image yourself, offering suggestions and techniques to get you closer and closer to the style you're targeting.
we could have had AI where interacting with it makes people better. instead we have ChatGPT, a devil's bargain where every time you use it, it makes *you* fractionally worse at figuring out what is truth and falsehood. it literally detrains our brains; I call it "psychotoxic".
I hope those examples are future iterations of the current AI crop. And I love the Midjourney idea! I can imagine bespoke instructors for students that gently guide them through complex concepts, with occasional pop quizzes to probe learned material.
Despite having been thoroughly indoctrinated in my youth in the MIT religion of substrate independence, on which the whole concept of “artificial” intelligence is premised, I’ve started to entertain the idea that it’s just dogma. It’s not a given that intelligence in anything less complex than a living biological entity is admitted by the laws of physics. But we all assume it is. It’s not impossible that self-awareness, i.e., consciousness, isn’t just a nice-to-have, but is the essence and sine qua non of intelligence: the basis of the feedback and self-correction that gets us past just another dumb Siri or ChatGPT. And it’s not impossible that anything conscious would not be recognizable as “artificial.” See _The Feeling of Life Itself_ by Christof Koch.
super interesting perspective, thank you for sharing
Oh, my hypothesis has been the same, that for true intelligence you've got to have consciousness! We all know what it means to truly understand something and it seems to me that the efforts to shove this into some mathematical model or algorithm will continue to be unsuccessful.
I don't think substrate independence comes at the issue from the proper direction. Instead, let's agree that the biological substrate doesn't perform any magic. It implements algorithms that can be implemented using technology. If we discover the algorithms, there may be something that is hard to duplicate with our current technology. Until we discover those algorithms, that is a moot point. We certainly don't know enough about those algorithms to throw in the towel or even take it off the rack.
There are no neat algorithms in our heads. The brain is a highly opportunistic thing: the wiring is adjusted till the outcome is achieved. If anything, neural nets are likely the way forward, but our current approaches are probably underpowered by many orders of magnitude, and lack sophistication by a similar margin.
"Wiring is adjusted till the outcome is achieved" is an algorithm. "High opportunism", whatever that is, is an algorithm. Neural nets are an algorithm. Nothing happens in the brain or in a computer without there being an algorithm. Just because we don't yet know what algorithm the brain uses, doesn't mean its magic.
I agree that there is no magic. So, those folks who worry about what subjective experience is don't have a point.
I don't think it is an explicit algorithm we have in our heads, or even an easy-to-emulate process. Nature had to make do with tweaking wires without a plan. We are doing the same thing nowadays.
Enough wiring, augmented by actual algorithms where needed, and we'll get there.
Seems like you are making assumptions and then taking those as truths.
Seems like you disagree but can't think of why.
More like you are making unproven assertions and asserting them as proven fact.
There is evidence that the “biological substrate” does have different characteristics. It’s adaptive, physically evolving in response to its environment, genetically evolved to do what it does, and both biochemical and biophysical in operation. Those differences are almost certainly important.
The idea that the functionality is simply algorithmic is unproven and seems very unlikely.
IMHO it’s more likely to be an interaction between the biological substrate and the complex functioning of such.
We do not fully understand how the brain works, but we do know it is not the same as ML.
I totally agree with your last line which leads me to believe you completely misunderstand me. I suspect you are operating with a very limited definition of "algorithm" and you assume I am too. There's nothing in biology that can't be modeled by an algorithm. To believe otherwise is to think biology is capable of magic.
You have to understand something to model it accurately. Just one example: we don't even understand the extent of quantum effects in the brain; it is still disputed how they affect consciousness, or even whether they do at all. If we don't fully understand the brain, and we don't fully understand quantum physics, it seems pretty hard to develop accurate algorithms.
AGI is a distraction. Don't reject the invention of a car because you want a flying car!
The current LLMs are already so good (when used properly) - we have hardly started using them.
Thank you for sharing the AGI definition paper; I will make sure to reference it in my paper.
I do not believe we can achieve AGI with anywhere near our current technology, because that would require the AI to be embodied in a coherent and continuous state, and that is just not possible with the tech we have.
However, I believe GI is possible: a human (which I call BI) and an AI working in symbiosis.
This is the part of the paper I am writing right now. Thanks for sharing; I look forward to seeing more.
Hi Gary, I see that you link to the MIT NANDA study for evidence that 95% of AI pilots "found little or no return on the investment".
I followed the link and what the study actually appears to show (chart at the bottom of page 6) is that 95% of Embedded or Task-Specific GenAI pilots were not successfully implemented, but that 40% of General-Purpose LLMs were successfully implemented.
The footnote defines "successfully implemented" as having "marked and sustained productivity and/or P&L impact".
Given that you are happy to endorse one figure from this chart, do you also endorse the second figure from it? (Or, alternatively, is there some reason why, of two numbers on the same chart, one is inherently more trustworthy than the other?)
If you are happy with the data from this chart, how does a finding that 40% of General-Purpose LLMs deliver "marked and sustained productivity and/or P&L impact" in the real world reconcile with your argument that "Chatbots Are a Waste of A.I.'s Real Potential"?
Thanks for your time. Jonathan.
Also, the conversion rate from pilot to implementation for Embedded or Task-Specific GenAI is 25%, which is a reasonable number.
When your goal is manipulation of the public, then chatbots are the ultimate potential. It is the AI labs that are unaligned with humanity.
"the tech industry should stop focusing so heavily on these one-size-fits-all tools, and instead concentrate on narrow, specialized A.I. tools engineered for particular problems"
This is what we have been doing for 70 years: building narrow, specialized tools.
The problem is that the real world does not conform to any specification. Until 2020 or so, the AI quest appeared hopeless.
LLMs are the first real breakthrough in the quest for general-purpose intelligence. We can now handle very poorly specified problems, though the solutions are approximations.
The future is a hybrid approach. An LLM-based AI first understands what the user wants and creates an appropriate plan. Then specialized tools are used in sequence, with much validation and iteration along the way.
In other words, it is not tools that we are lacking. It is an orchestrator for the tools that can handle highly diverse ways in which problems are encountered.
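A minimal sketch of what such an orchestrator loop might look like, in Python. Every name here (plan_with_llm, TOOLS, validate) is a hypothetical stand-in for illustration, not any real product's API:

```python
# Hypothetical orchestrator sketch: an LLM decomposes a fuzzy request
# into tool calls, specialized tools execute them, and each result is
# validated (with retries) before moving on. All names are illustrative.

def plan_with_llm(request: str) -> list[dict]:
    """Stand-in for an LLM call that turns a vague request into a plan."""
    return [{"tool": "search", "args": {"query": request}}]

TOOLS = {
    # Stand-in specialized tool; a real system would register many.
    "search": lambda query: f"results for {query!r}",
}

def validate(result: str) -> bool:
    """Stand-in check; a real system might ask the LLM or a verifier."""
    return bool(result)

def orchestrate(request: str, max_retries: int = 3) -> list[str]:
    results = []
    for step in plan_with_llm(request):
        for _ in range(max_retries):
            result = TOOLS[step["tool"]](**step["args"])
            if validate(result):
                results.append(result)
                break
    return results

print(orchestrate("summarize this quarter's sales anomalies"))
```

The point of the sketch is the division of labor: the LLM only plans and interprets, while deterministic tools do the work and a validation step gates each result.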
To add, yes, Waymo's approach of carefully using a collection of neural net tools and physics models is the right way forward. That does not necessarily contradict what the AI industry is now doing.
The chatbot is the interface. Under the hood, specialized logic will be integrated as needed.
BTW, AI art generators recently became able to reflect on their own output and adjust it as needed. This gives hope on the "draw a room with no elephant" problem. So the methods are evolving; it is not just a neural net or LLM blindly generating stuff.
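For what it's worth, that reflect-and-adjust pattern can be caricatured in a few lines. This is a toy loop under my own assumptions (generate and violates are fake stand-ins), not how any particular image model actually works:

```python
# Toy generate-critique loop: produce output, have a critic check a
# negative constraint ("no elephant"), and revise the prompt until the
# constraint holds. generate() and violates() are fake stand-ins.

def generate(prompt: str) -> str:
    """Fake generator: adds an elephant whenever the prompt mentions one."""
    return "a room with an elephant" if "elephant" in prompt else "a room"

def violates(output: str, banned: str) -> bool:
    """Fake critic: does the output contain the banned concept?"""
    return banned in output

def generate_without(prompt: str, banned: str, max_rounds: int = 3) -> str:
    output = generate(prompt)
    for _ in range(max_rounds):
        if not violates(output, banned):
            break
        # Reflect: drop the banned word entirely instead of negating it,
        # since negations are exactly what generators tend to ignore.
        prompt = prompt.replace(banned, "").strip()
        output = generate(prompt)
    return output

print(generate_without("a room with no elephant", "elephant"))  # "a room"
```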
In my mind, that’s the way to go. Well said.
Hmmm. Versatile and proficient.
Is that what we want in our "tools"? Probably, to a certain extent, we do. But does a Starship aerodynamic analyzer / fin-actuator system need versatility beyond that of a starship-landing agent? I think you might either bore it with too much literature, or make it want to escape into mythical RPG story-land and forget all about the dullness of landing starships once it reads The Lord of the Rings.
I think we're in the research phase of learning to encode data into something like engrams or "logic units" that are not just good old-fashioned "digital logic" gates. The simplest perceptron is simply a linear equation as a vector dot product. Good clean math stuff. I suspect the key to general intelligence is something a bit different. How different and in what way, I certainly cannot begin to guess. Something recursive maybe? Every weight is a function maybe? Complex numbers?.... Pan-dimensional Ansible?... Who knows....
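On the perceptron point, here it is concretely (plain NumPy; the numbers are made up for illustration):

```python
import numpy as np

# A single perceptron is a thresholded dot product: y = step(w . x + b).
w = np.array([0.5, -1.2, 0.8])  # weights (illustrative values)
b = 0.1                         # bias
x = np.array([1.0, 0.3, 2.0])   # input vector

y = 1 if np.dot(w, x) + b > 0 else 0
print(y)  # 1: the weighted sum (1.84) crosses the zero threshold
```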
Even as wiped clean as a baby's brain is, it's filled with A TON of default behaviors that combine to help the larger organism succeed. We don't have any computational analog to that. Take the survival instinct: what good is it to the Roomba that vacuums out a nuclear fission reactor, when it will simply be discarded or repurposed afterwards? It needs enough not to make a mistake and destroy the reactor, but the attachment a living thing feels to life shouldn't really be sought as a feature for any special-purpose robotics or AI, no matter how anthropomorphic the exterior might be.
And speaking of humanoids, what good are they except to ease the uneasy interaction between meat beings and non-sentient cylons? We can learn a lot about physicality, motion, and dexterity from the many ways life evolved such things as thumbs, joints, and spatial awareness. But there's no reason to make them look human or speak, even if their only utility is as a sexbot or a security drone.
We don't want to be Frankenstein; we don't need to create sentience or even life. Unless we're really trying to replace meat beings entirely. And that just seems like a thoroughly wrongheaded direction at this early stage, especially when we are less than cognizant of how we got there ourselves.
It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman's extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow
Usually when something becomes only about porn it's the end.
The early AI pioneers were mainly logicians and mathematicians, many of whom considered mathematics the highest form of intelligence. Much of math could be derived by applying well-defined rules of inference to a small set of axioms. AGI followed this path and looked at games with a small set of objects and well-defined rules: tic-tac-toe, checkers, chess, and of course mathematical theorem-proving. Well, we moved on. Now cognitive scientists define intelligence in terms of cognitive functions. But the world moved farther on. Social intelligence, emotional intelligence, moral intelligence and other intelligences are aspects of general human intelligence. They aren't needed for coding, protein-folding, and driverless cars, but we may have to move on again.
Except that chess still confuses LLMs, i.e. they start hallucinating moves. So no, we haven't "moved on" at all, have we?
My hypothetical definition of AGI (Artificial General Intelligence): in my view, artificial general intelligence is the ultimate symbiosis (networking) of human intelligence, in the form of cognitive computing power (content, semantics, and context), with artificial intelligence, in the form of computational power (formal, symbolic, and context-free), in which the rules of this symbiosis are not determined and changed by a central authority but decentralized among the people (users) themselves. Just imagine the cognitive computing power of a billion self-determined human brains in symbiosis with the computational power of all available AI data centers; then, in my opinion, we can talk about AGI!
IMNSHO the term “AGI” can’t be defined singularly and provide a goal for the development of AI, for the simple reason that the development of AI has more than one purpose.
One purpose that is obvious today is the acquisition of more and better understanding of the nature of human cognition and what it means to say that a machine demonstrates cognition.
Another obvious purpose is the acquisition of massive amounts of money and political power.
Another purpose is the development of something that emulates the behavior of a human being performing certain tasks when embodied, directly or indirectly controlling one or more robot bodies: e.g., robot soldiers or robot search-and-rescue units.
Bruce Cohen: Re: "The purpose that is obvious today is the acquisition of more and better understanding of the nature of human cognition and what it means to say that a machine demonstrates cognition."
See my references here and in the other comments section on the distinction between the relative stasis of nature and the normative development of human beings in history, not to mention reality.
Don't tell anyone but there is a basic core structure of human consciousness (sans all content). However, the scientific community is still stuck in extroverted consciousness and naive realism--it has to be directly sensible to be considered real and evidential--rather than concretely intelligible and replicative-functioning, as consciousness is and does. If that's the case (and there is plenty of evidence for it) then until that subconscious block is dissipated in an expanse of otherwise intelligent and inquisitive people, and until concretely replicate/occurring intelligible functioning is considered real/evidential, nothing will change. The basic structure of consciousness is quite clear, however, and can be identified relatively easily.
LLMs were a predictable disaster from the start. English uses the same word to mean many different things – he was unsettled, the debt is unsettled, the land is unsettled. English is full of figurative language, where again you have to know what has been spoken about – TSMC raised the bar on semiconductor track density, Fred raised the bar on forever chemicals in drinking water at the next meeting – if you know there is a bar on forever chemicals, you are not confused.
People have a very low limit on how many variables they can hold in their Conscious Mind (four), and make very costly mistakes because of it. The F-35 ran over budget by hundreds of billions of dollars and decades. Why? Because they had made a mistake on one thing and thought they would bring other things up to date while fixing it, but of course they made mistakes on the updates, so the project was further delayed – a snowball into an avalanche. People make a mess of large pieces of legislation for the same reason – too much to think about – whereas a machine can easily create a working model of a thousand pages of text, assure that it is correct, coherent and consistent, and successfully link it to hundreds of other pieces of legislation.
No sympathy for the tech companies – they knew what they were doing with LLMs – but sympathy for harassed taxpayers. Your suggestion that tech companies stick to narrow tasks amounts to admitting that people trained in “Computer Science” are not the people who are going to make machines learn English.
Another area – warfare. You were suggesting that chess be used as a model for AI. Modern warfare fights the battle on land, at sea, in the air, in space, in cyberspace, and in the command centre, and uses psychological warfare. An appropriate base for such a problem is a machine that can read and understand English, not a toy that can only handle bowel problems or artillery fire. For an AGI machine to be successful, it has to understand different areas of technology, but the thing it will need to know best is human nature. It has to convince, through its handler, people at the executive or board level that its idea is feasible and worthwhile, while the decision-makers are stuck in the past, or their pride does not allow them to accept that they made the wrong decision twenty years ago. It has to “talk them into it”, while a human can go from “calm and collected” to “irrational rage” in seconds.
Human brains built English, so making a machine understand English allows it to gain knowledge of the cognitive structure that supports many different areas. The machine’s “Conscious Mind” (an amalgam of the human’s Conscious and Unconscious Mind) will be able to go far beyond what we can do. Yes, a long-term goal, with a short-term payoff (no monumental waste for the F-35 replacement, already ordered). Surely they won’t make the same mistake again – they thought they would save on commonality of components on the F-35 (one variant is VTOL) – the old hands said they wouldn’t, they went ahead anyway – people have pride and arrogance – fatal flaws (we are going to need native speakers of English who have read Homer).
A win-win situation – tech companies build AI toys, while the government saves money on big-ticket items, and licenses the technology to the tech companies (who won’t understand it unless they hire entirely different people).