46 Comments

Wow. Interesting read. I'm much more optimistic than Marcus even though I disagree with his neuro-symbolic approach. It's always darkest before dawn. The AI field is on the cusp of another one of Kuhn's proverbial paradigm shifts. I think the time has come for a few brave maverick thinkers to throw the whole field out the window and start anew.

The AI community's obsession with language is a perfect example of putting the cart before the horse. Perceptual generalization should come first. It is the most important component of intelligence but the community, as a whole, has largely ignored it. The representationalist approach, which deep learning embodies, is the notion that *everything* in the world must be represented in the system. It should be the first thing to be thrown out, I'm sorry to say. Corner cases and adversarial patterns have proved deadly to DL, something that the autonomous vehicle industry found out the hard way after spending over 100 billion dollars by betting on DL. Combining DL with symbolic AI will not solve this problem.

Consider that a lowly honeybee's brain has less than 1 million neurons and yet it can navigate and survive in highly complex 3D environments. The bee can do it because it can generalize. It has to because its tiny brain cannot possibly store millions of learned representations of all the objects and patterns it might encounter in its lifetime. In other words, generalization is precisely what is required when scaling is too costly or is not an option. Emulating the generalizing ability of a bee’s tiny brain would be tantamount to solving AGI in my opinion. Cracking generalized perception alone would be a monumental achievement. Scaling and adding motor control, goal-oriented behavior and even a language learning capability would be a breeze in comparison.

The exciting thing is that one does not need expensive supercomputers to achieve true perceptual generalization. There's no reason it cannot be demonstrated on a desktop computer with a few thousand neurons. Scaling can come later. I think a breakthrough can happen at any time, because some of us AGI researchers see the current AI paradigm merely as an example of what not to do. We're taking a different route altogether. Systematic generalization is a growing subfield of AI. My prediction is that cracking AGI at the insect level can happen within 10 years. Scaling to human-level intelligence and beyond will mostly be an engineering problem with a known solution.

AGI is a race and only the best approach will win. Good luck to all participants.


Good points. My only disagreement with you would be calling the honeybee "lowly" :-)


On perceptual generalizations . . . for example, a general grasp of space-time would be a good place to start since all empiric models rely on that as a foundation. But what OTHER aspects do you think would be good candidates in framing ‘general intelligence’?


Thanks for the comment. Our team does not believe in specifics except in sensor design. Our approach is rooted in the idea that the mechanism of generalized intelligence must be universal, based on event timing (spikes) and symmetry (everything has a complement). The system must be fully wired for generalization, which requires some learning. Space, time, physics, and other regularities are learned automatically. It's all in the timing. We see the brain mostly as a massive timing mechanism.
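
To give a rough flavor of what I mean by "it's all in the timing", here is a generic toy in Python. It is only an illustration of timing-based detection, not our actual system; all names and numbers are made up:

```python
# Toy coincidence detector: an output "spike" is produced only when events on two
# input channels arrive within a short window of each other. Purely illustrative.
def coincidence_detector(spike_times_a, spike_times_b, window=0.005):
    """Return the times on channel A that fall within `window` seconds of some event on B."""
    return [t_a for t_a in spike_times_a
            if any(abs(t_a - t_b) <= window for t_b in spike_times_b)]

# Two spike trains (in seconds); only the near-simultaneous pair triggers an output.
print(coincidence_detector([0.010, 0.050, 0.120], [0.012, 0.300]))  # -> [0.01]
```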


Hmm, okay you argue for 'general percepts' but then 'do not believe in specifics'?! Except for sensor design – so specifically what are those sensors sensing?

I also have a problem with Gary's neuro-symbolic notion (not even sure what the heck that means).

> . . . throw the whole field out the window and start anew <

So what is your general approach, underlying first principle(s)? A link to a white paper or?


Thanks for your interest in our work. We wrote a Medium article about it over two months ago that was well received. I copied and pasted parts of it in my first reply above. Here's the link, if you're interested.

https://medium.com/@RebelScience/deep-learning-is-not-just-inadequate-for-solving-agi-it-is-useless-2da6523ab107


What corner cases and adversarial patterns would stop a neuro-symbolic approach from achieving better reasoning and memory skills?


Gary, I meant to respond to this when it first arrived. This is a particularly well written piece. Couldn't agree with you more about databases of machine-interpretable knowledge, and the need for hand-crafted knowledge combined with learning from data where it makes sense. Exactly what I had characterized as 'computational abstractions' in one of my papers.


This is the most genuinely generally intelligent (GGI) piece on AGI I have read in recent memory.


This interview is a real treat!

You mentioned adaptability. I agree with you there. It's one of the keys or hallmarks of AGI. However, we will not get that anytime soon.

The limitation is that everything is centered around explicit architectures and optimization objectives. As a result, I currently cannot see the rise of an optimization objective that brings broad adaptivity across domains and tasks.

A human can change objectives based on external factors and the environment. For example, say a human wants to become a professional soccer player because it's fun. At some point, the objective maybe shifts to maximizing income, and when the soccer player becomes too old for professional soccer, a new job needs to be learned.

Is there an overarching long-term objective that makes humans switch between near-term goals? An overarching objective like achieving happiness in life? How do we program a broad, vague overarching objective into an AGI that facilitates choosing between near-term, explicit objectives for adaptivity? An evolutionary programming approach may be necessary to let the systems evolve in a controlled random fashion.
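
To make that concrete, here is a toy sketch (just my own illustration, not an established method) of a vague overarching objective selecting among explicit near-term objectives, with a bit of controlled randomness so the choice can drift; every name and number below is hypothetical:

```python
import random

NEAR_TERM_OBJECTIVES = ["play_soccer", "maximize_income", "learn_new_job"]

def well_being(state):
    # Overarching objective: a made-up weighted mix of what the agent cares about.
    return 0.4 * state["fun"] + 0.4 * state["income"] + 0.2 * state["employability"]

def predicted_outcome(state, objective):
    # Hypothetical world model: pursuing an objective nudges parts of the state.
    new = dict(state)
    if objective == "play_soccer":
        new["fun"] += 0.3 * new["fitness"]      # soccer pays off less once fitness fades
    elif objective == "maximize_income":
        new["income"] += 0.2
        new["fun"] -= 0.1
    elif objective == "learn_new_job":
        new["employability"] += 0.3
    return new

def choose_objective(state, noise=0.01):
    # Score each near-term objective by the overarching one, plus controlled randomness.
    scored = [(well_being(predicted_outcome(state, o)) + random.gauss(0, noise), o)
              for o in NEAR_TERM_OBJECTIVES]
    return max(scored)[1]

young = {"fun": 0.8, "income": 0.3, "employability": 0.5, "fitness": 1.0}
old   = {"fun": 0.8, "income": 0.6, "employability": 0.2, "fitness": 0.1}
print(choose_objective(young), choose_objective(old))  # the preferred objective shifts with circumstances
```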

I do think that certain domain-specific capabilities will emerge automatically (for example, how some general language skills emerge in decoder-style transformers if they are trained on next-word prediction). But it's hard to imagine how we can get to adaptivity across domains and tasks. And then, there is also the question of whether an adaptive system is helpful. It will undoubtedly be useful as a marketing stunt. Still, is a system that can adapt to a specific task (and do a somewhat good job at it) better/more economical/more useful than a special-purpose, more narrow AI designed specifically for the task at hand?
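
(For concreteness, the objective I'm referring to is just next-token prediction; the tiny "model" below is a random stand-in rather than a real transformer, purely to show the loss being minimized:)

```python
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat"]
token_ids = {w: i for i, w in enumerate(vocab)}

def toy_model(context_ids):
    # Stand-in for a decoder: a real model would compute these logits from the context.
    logits = np.random.randn(len(vocab))
    return np.exp(logits) / np.exp(logits).sum()   # softmax over the vocabulary

def next_token_loss(sentence):
    # Average cross-entropy of predicting each token from the tokens before it.
    ids = [token_ids[w] for w in sentence]
    losses = [-np.log(toy_model(ids[:t])[ids[t]]) for t in range(1, len(ids))]
    return float(np.mean(losses))

print(next_token_loss(["the", "cat", "sat", "on", "the", "mat"]))
```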

Anyways, I am just thinking out loud and just wanted to say that I really liked this interview! Thanks for sharing!


Wow, such human-centric thinking!

What is intelligence? The ability to make choices for survival and reproduction. Plants are intelligent; they make chemicals to chase off insects and other plants. Insects are intelligent; they manage to invade your kitchen for food. Animals are intelligent; they survive and thrive in the wild, finding food every day. Humans are intelligent; well, maybe... unless they manage to bomb themselves or set the planet on fire.

I've been involved with robots. I used to be a researcher at Unimation. I've been involved in AI. I published work in Expert Systems and Knowledge Representation. I've done research in task planning. I've done work in Robot/Human cooperation on changing a tire. Now I'm working on self-reproducing systems that self-survive.

In all the areas I've seen there are only "imitations" of intelligent behavior. Hey, we made it speak! Oh, look, the robot can walk! Oh, look, it now spouts perfectly formed nonsense (we made a politician!).

Wake me when we make something that self-survives and self-reproduces. The closest I've seen is work by Craig Venter on finding the minimal viable DNA sequence that self-survives and self-reproduces. Once it gets out of the lab and survives then we will have created an intelligence. As Darwin noted, once the survival pressure arrives, it will evolve or die.

We will know intelligence has arrived when we see it exponentially reproduce and when we can't shut it off. Venter's 'bacteria robot' won't have an off switch. If it can make dopamine, people will want it so badly that it will survive and reproduce. Game over.

Now we just need a 'Venter robot' to destroy Pineapple crops and save the Pizza business.

-- Tim Daly


Exactly! We need to stop mistaking a "description of intelligence" for "intelligence" itself; only then can we start understanding how to develop true intelligence. Otherwise we will just be trying to mimic "our" decisions and what we think is learning. It will always remain artificial and feel fake.

The current focus is to achieve intelligence with computers. They are just computing devices, and hence any program written for them, and its intelligence, will be limited to what computers are capable of, i.e., mathematical computing. I think we need to look beyond computing and electronics to build intelligence.

While organic may be a way to go, I find that we as living beings are already intelligent with what can be evolved using "carbon-hydrogen" molecules and the networks they form. It would be really good to see whether we can take the principles by which the networks of these molecular bonds came to acquire intelligence, and find out whether we can adapt that process to other molecules that are programmable and can hence acquire intelligence.

But whichever way we go, the first and foremost step has to be moving away from computing-based devices and creating better systems that can help build intelligence.


Speaking of the neuro stuff, count me as unimpressed.

I audited Minsky and Papert's graduate AI seminar in the fall term of 1972. In it, Minsky spent an inordinate amount of time pointing out that, as a computational model, perceptrons couldn't actually compute the things people insisted they could. IMHO, things haven't improved much. (That was only 50 years ago, not 75, so it doesn't count as a data point on whether 75 years is enough for major change.)

In real life, the average mammalian neuron has hundreds of inputs and thousands of outputs, and a cubic millimeter of grey matter has multiple _kilometers_ of wiring (axons). The idea that the "neural net" model is a model of neurons is silly** in the extreme. (One article I read the other day pointed to research that indicates that human neurons may be quite a bit more complex than rat neurons. Since "mammalian neuron" is a rat neuron*, things are probably even worse than I think. (On the other hand, avian neurons may be better still: bird brains manage to pack the same functionality (as non-homo mammals) into, if anything, less space without needing to resort to the mammalian trick of macroscopic folds in the grey matter.))

Like climate change, with every article I read, things are shown to be worse than I thought they possibly could be. (I advise against subscribing to Science: it's too depressing.)

*: Oh, yes. Rat neurons. Rat brain researchers found individual neurons that lit up in correspondence with points in local 3D space, and said researchers thought they were making progress. More recently, they found that it seems that the same neuron that (presumably) represents a point in 3D local space, also functions to represent a point in 3D global space as well. That is, a rat's representation of space in its neurons is still something beyond what we can understand just yet. Oops. (I love the folks who try to figure this stuff out: it's impossibly hard, but they try anyway. And sometimes find kewl stuff.)

**: The sum-and-threshold operation that's the basis of both Perceptrons and neural nets is only one of the many computational (and logic!) functions performed by neurons. So "silly" here really is the right term.
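
(For reference, here is that sum-and-threshold unit spelled out, with arbitrary toy weights:)

```python
def perceptron(inputs, weights, bias, threshold=0.0):
    # Weighted sum of the inputs, then a hard threshold: the entire computational
    # repertoire the standard "neural net" model grants a neuron.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > threshold else 0

# Example: with these weights, this single unit computes logical AND of two binary inputs.
print([perceptron([a, b], weights=[1.0, 1.0], bias=-1.5) for a in (0, 1) for b in (0, 1)])
```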


AGI is already here; not due to intelligent computers, but due to incredibly STUPID HUMANS.

"Human intelligence" has disappeared


The CNET debacle really brought the point home for me, which made it all the more surprising to discover you went ahead and used an LLM to produce the interview above!

It was surely an unreliable neural network, dozing off in its comfy Markov blanket, that dreamed up figures like the "golum" (some sort of superimposition of the Golem, Gollum, and perhaps Talos, the closest I could get in Greek mythology), attributed souls to tibiae, and misspelled Shelley and Wiener.

Also, the context seems to be affecting the output on a merely correlational basis: the generally pessimistic approach leaks into a common imported idiom in "scaling-über-alas"; there are slips of the tongue typical of a model who's unable to map grammar and syntax to Forms through the rigorous application of Propositional Calculus, thus producing aberrations such as "whose's".

Not even the human-in-the loop doing "light editing" managed to help on this!

In conclusion, you proved your point. Language models are unreliable. You could have just said that and avoided such a snarky mise en scène, that's all.


Is CNET's use of ChatGPT really a debacle? Seems like they just wanted to do an experiment using the hottest computer technology of the day. It drew a lot of attention, which I am sure they don't mind at all. While it was a bit of a stunt, it's exactly the kind of thing a technology publication like CNET should take on, IMHO.


Of course it isn't! My whole comment was liberally infused with sarcasm; the inaccuracies I mention all come from the post itself (:


Since you all briefly mention the "singularity" idea, I thought I'd make my snarky comment on it.

I like the "singularity" idea: that intelligent systems reach a threshold beyond which their abilities become significantly and qualitatively superior; up on another plane, so to speak. I think it's dead-on correct.

The problem is that it's already happened and the folks talking about it have missed it. It's us. We just don't get how incredibly far beyond the rest of the animals on this planet we are. My favorite example of this is the trivia item that we're the only animals on the planet that understand that sex leads to pregnancy, and pregnancy leads to childbirth. No other animal on this planet can deal with family relationships, can understand that this thing is their kith and kin. That reasoning is simply not there outside of homo sapiens.

So the idea that there's something "smarter than people" around the corner is problematic, since we don't even get how smart homo sapiens is. (The point that people can be idiots is well taken, we can be, but it doesn't disprove that we figured out QM and Galois theory.) Watching all the ethnologists and philosophers freaking out when they find some animal that can remember where it hid three acorns, getting all hot and bothered exclaiming "animals can do mathematics!", is hilarious. The foundational question of our field is still there laughing at us: "What is a number, that a man may know it, and a man, that he may know a number?"


In fact it's happened several times already: 1) Speech made clever apes into humans. 2) Writing allowed for the development of mathematics and the elaboration of abstract concepts. 3) The integration of Eastern mathematics into Western thought via the Arabic numeral system catalyzed the scientific revolution and gave us the metaphor of the clockwork universe. 4) The development of computers has taken that whole process up a notch. 5) And, who knows? Maybe it's happening again.

David Hays and I have laid out the first four in our paper, The Evolution of Cognition (1990), https://www.academia.edu/243486/The_Evolution_of_Cognition. That led us to a more general account; here's a guide to it, Mind-Culture Coevolution: Major Transitions in the Development of Human Culture and Society, https://www.academia.edu/37815917/Mind_Culture_Coevolution_Major_Transitions_in_the_Development_of_Human_Culture_and_Society_Version_2_1.

As for the Singularity, I've argued that we're already swimming in it, Redefining the Coming Singularity – It’s not what you think, https://www.academia.edu/8847096/Redefining_the_Coming_Singularity_It_s_not_what_you_think.


Sadly Grady's points weren't very clear to me.

"Something something we need architecture"

I didn't really understand why he thought this would be such a challenge. So far, language models seem fairly easy to compose with each other.

For some reason he seems to think that we need new hardware for metacognition and for subjective experience. Maybe computers can't have subjective experience, but it's not clear to me that we need to talk about subjective experience vs. capabilities. And when it comes to metacognition, GPT models are surprisingly capable, and this seems likely to improve with scale.


Very interesting read indeed - so much so that it became a discussion point in my interview with Dr. Denise Cook based on the Xzistor Concept brain model.

See here: The Uneasy Road to AGI

https://www.youtube.com/watch?v=HKI8z3Nm5hg


My take-away: on several occasions, both Mr. Booch and Mr. Marcus are implicitly predicting the next AI winter. Hype-funding of LLMs (and neural nets in general) will end in disillusionment, and this will infect all progress for two decades. Next jump: > 2049 (past the predicted singularity). Rinse and repeat, and we're at > 150 years.


Tim Daly has a point – this post sounds deeply anthropocentric, coloring otherwise sound points. The soundest point being 'architecture is all', on which Gary and Grady seem to agree. The problem is 'architecture of what exactly?', which I do not see really specified.

> Legg and Hutter’s definition “Intelligence is an agent’s ability to achieve goals in a wide range of environments.”<

This heads in the right direction by specifying 'agent' (generic) instead of a heavily implied 'human agent' . . . but 'wide range of environments' – as a *context* for intelligence – is hopelessly vague. So again, no specific architecture.

> I also don’t see you entirely defining your terms.<

Can be said about almost ALL AGI discussions . . . swimming in confused word soup.

For an architectural alternative, one might consider:

'Entropy – a scientific base for super-intelligence'

ABSTRACT

This paper frames Super-Intelligence via three scientific roles. It starts by defining key terms: general and super intelligence, knowledge, and informatic process. It next names thermodynamic entropy, Signal Entropy, and Darwinian evolution, as scientific roles (informatic processes) that can be joined as an ‘Entropic continuum’ – making Entropy (as detailed below) the model’s core principle. The continuum maps general percepts of falling-apart, joining, and selective functioning as respective ‘levels’, each with differed Entropic Degrees of Freedom (DoF). The paper next notes the continuum entails a Bateson-like ‘pattern that connects’ the cosmos – a dualist-triune (2-3), where 2-3 functional DoF analysis offers a means for General and Super-Intelligence modeling. Further continuum details arise via progressively deeper DoF analysis. This approach does not envision an autonomous ‘singleton’ arising to dominate humanity. It instead targets a general ‘insight engine’ to aid human innovation and discovery, in a Naturally creative/contiguous simple-to-complex manner (3,700 words).

link to current draft:

https://drive.google.com/file/d/1wNtaOib67CMsBe0SudNDl6ZVWrbXmdNw/view?usp=share_link


Good conversation. Gary Marcus, how do you define consciousness versus sentience?


The key question here is what intelligence is and how it works, and that leads to the main scientific question: what is life and what is its key function? The answer is transformation: a universal mechanism that every system in the universe features, namely a life cycle.

For example, consider a biological cell as the basis of life. No matter whether artificial or natural, everything functions using the same principle of transformation. Just compare the architectures of a cell and a network using the OSI model.

Cell architecture mapped to the OSI model:

= Membrane / protein receptors =

7 - Application layer - Protein. High-level protocols such as resource sharing or remote file access, e.g. HTTP.

6 - Presentation layer - Translation / tRNA. Translation of data between a networking service and an application, including character encoding, data compression, and encryption/decryption.

5 - Session layer - Ribosome / mRNA. Managing communication sessions, i.e. the continuous exchange of information as multiple back-and-forth transmissions between two nodes.

(Layers 5-7 carry data as their protocol data unit.)

= Nucleus =

4 - Transport layer (segments, datagrams) - Spliceosome / splicing / mRNA. Reliable transmission of data segments between points on a network, including segmentation, acknowledgement, and multiplexing.

3 - Network layer (packets) - Transcription / pre-mRNA. Structuring and managing a multi-node network, including addressing, routing, and traffic control.

2 - Data link layer (frames) - DNA. Transmission of data frames between two nodes connected by a physical layer.

= Nucleus =

1 - Physical layer (bits, symbols) - Signal cascades.

= Membrane / protein receptors =


Nice analogy


Interesting that my own AGI project generates the source code for critical Java business rules from controlled-English specifications, using Booch UML activity models as the basis for specifying Java methods.

So cool. Thank you Grady.

If it happens in my remaining lifetime you had a role.
