24 Comments
Dec 15, 2022 · Liked by Gary Marcus

Gary, is there any work addressing the issue of schizophrenic behaviors and these models? In other words, judging the DALL-E misread as a delusion (hallucination)?

PS you're wonderful and you will get your due


Did you ever learn something from a person who agreed with you?

How is stuffing a computer with existing data any different?

Yesterday I sat at the birthplace of Wilbur Wright and watched 2 pairs of vultures gracefully balanced along the unseen currents of air.

When Wilbur and Orville watched birds in flight - they didn't think about making wings out of feathers and wax...

They saw the balance and control - they connected that to the bicycles they built. They thought about the hours of practice required to become proficient and competitive riding a bicycle. They understood intuitively the subtle design problems that would have to be resolved - simple things, such as the slight forward bend in the front fork that made a bicycle stable.

They knew that for a man to fly, he would need hours of practice balancing on the air - the exact thing that had eluded all the other brilliant inventors.

One last question - what will we do when our AI disagrees with us?

Thanks for reading.


"Knowing what to borrow and what not is likely to be more than half the battle."

"...we are going to need to learn something from humans, how they reason and understand the physical world, and how they represent and acquire language and complex concepts."

The Wrights were limited by their reality - something we can only get a glimpse of. Isn't it our limits that drive creation? The weight of massive amounts of existing data buries the tiny sparks of possibility.

If AI had existed - and all the data about 'aviation' had been loaded - the output would have been a dead end, because virtually all the data was wrong. But still, the birds floated above them.

author

Extremely good idea for an essay: why GPT's hallucinations probably aren't like those of schizophrenia


As usual, thank you...A specific bone to pick, a general point about AGI talk, and a positive suggestion.

You often start out with a discussion of the gross errors of ML, then segue into how much more is needed. The problem with this method of attack is that in key creative and selection-related tasks (security, reproduction, mutation), lots and lots of gross errors are not only OK, they're sometimes the only way that type of task can be approached, often by definition. So to my ear, you're tilting at windmills and protesting too much all at once by highlighting this purported weakness of many gross errors, when it's only a weakness from a certain angle and in a certain context. It's fantastic to synthesize a useful protein after 10,000 gross and less gross errors. The argument is an inappropriate "opposite" to your values of consistency, kindness, common sense (big subject), understanding (bigger subject), legally circumscribable limits, transparency, and contingency/flexibility; in fact, it should be added to that desired list as a contingent potential tool for some of those desired skill sets in creativity- and selection-oriented contexts.

My larger point: you're making the mistake you often accuse others of by limiting the dimensionality of intelligence to a narrow aspect of it, in this case accuracy. Others, like the blind men with the elephant, focus on consistency, context-sensitivity, brilliance, sensitivity, safety, creativity, knowledge, embodiment, or algorithmic coherence.

Thanks to the advantages of machinery, accuracy is such a tiny part of the gig it isn't funny (not so for its cousins, replication and reproduction - but that's a separate tale). The main value of symbols in the fight for useful AGI is simply the mostly-human work of instantiating collective human intelligence about what we want to mandate that machines do and not do, and how. The question of ML solving AGI is obviated when one includes in AGI the utterly foundational need for a wildly diverse set of complex agreements, political enactments that must arise via humans to achieve the ubiquitous power, interdependency, safety, and stability required of such tools. Many are about protocols, but the most important are quasi-moral, agreed-upon frameworks, culturally derived artifacts of quite subject-specific notions that amount to an extremely ambitious regulatory challenge. All of which simply must be instantiated within, and surround, whatever calculating engines we use, with transparent, standardized, symbolic code: regulation around basic context-sensitivity, kindness, legal versions of honesty, consistency, safety, (nested) transparency, and other desired characteristics.

(To be clear: I have no patience for arguments about singularities or personhood for machines, in a world where we have many virtual singularities occurring constantly, all of which involve humans doing fraud and manipulation. All my thinking about AGI assumes machines as tools for humans, with no future that imbues them with personhood except narrowly/precisely as natural proxies where appropriate, like when controlling a dialysis machine. Ironically, we are already experiencing many of the effects we want to avoid from AI independent of human values. In fact, assuming that, and realizing it's already a difficult criterion, is an important part of rising to the ethical and algorithmic task. This touches on a related, ignored point: all the problems we scare ourselves with in AI are already here to some degree because of the already-inculcated technical agents and AI in our lives; we shouldn't feel so stunned about what to do, because we are already ignoring many of the problems and the obvious available solution sets.)

My positive suggestion follows from the above: focus in our arguments less on the limitations of ML and more on the breadth and non-technical heart of AGI, in order to make clear how much symbolic work is needed and not getting done. AHI and by extension AGI are writ large, god damn it. It doesn't magically start and end at compilation. In a way, this is the standard mistake people make when they assume a coding project is about coding, when coding is typically about 30% of it. I'd put technology and tech people at that same 30% of the AGI gig. Your training and orientation outside of tech is what has you standing here with your finger in the dike. There is no coherent argument whatsoever against mandatory symbolic manipulation as priors, embedded feedbacks, and posts in any actual, complex AGI-risk-level machine that touches the incredible interdependency of modern life. We are simply unable to allocate any of those complex tasks to black boxes on the fly, no matter what they're fed as input - ML can do lots of the really hard stuff in the middle of the job. The most important part of empirical AGI must be transparent symbolic instantiation of rules: coding and protocol-ing up the messy, negotiated, endlessly complex, culture-embedded versions of the standards we need (which I believe to be surprisingly tied in to design considerations). This vision of AGI amounts to including integrally within it the budgetary and political and algorithmic progress needed, because otherwise we are allowing our market-centric cultural strengths in ML to ride roughshod, with dangerously siloed aspects of intelligence more and more entrenched as ad hoc, de facto standards across the legal, moral, and functional landscapes.
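A minimal sketch of the shape I have in mind - symbolic priors and posts wrapped around an ML middle - in Python; every name and rule below is invented purely for illustration, not any real system:

```python
# Illustrative only: transparent symbolic rules wrapped around an opaque ML core.
# Every name here (ml_core, POLICY_RULES, the rule contents) is hypothetical.

POLICY_RULES = [
    # (description, predicate over the proposed action) - auditable, negotiated, symbolic
    ("dose must not exceed the legally mandated limit", lambda a: a["dose_mg"] <= 200),
    ("every action must carry a human-readable justification", lambda a: bool(a.get("why"))),
]

def ml_core(case):
    """Stand-in for the black box doing 'the really hard stuff in the middle'."""
    return {"dose_mg": 180, "why": "matched prior cases with similar vitals"}

def decide(case):
    # Symbolic prior: refuse cases outside the mandated scope before the model even runs.
    if case.get("patient_age") is None:
        return {"status": "refused", "reason": "missing required field: patient_age"}

    proposal = ml_core(case)

    # Symbolic post: every proposal is checked against the agreed, inspectable rules.
    for description, ok in POLICY_RULES:
        if not ok(proposal):
            return {"status": "blocked", "rule": description}
    return {"status": "approved", "action": proposal}

print(decide({"patient_age": 54}))
```

The point isn't the toy rules; it's that the priors and posts are readable, negotiable, and auditable symbols, whatever sits in the middle.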


Scott, good points about transparency, the need to deal with the messiness of the real world, etc. If we are able to design a physical agent (with appropriately matched body and brain areas; no need to be anthropomorphic, no need to be human-like in complexity, etc.) and have it learn autonomously (via association, reinforcement, matching circumstances with prior experience, etc.), we'd know exactly how it learns, and what it learns. This could also lead to a 'culture'/context-dependent buildup of experience... Variations in learning, as in biological beings, can only come from within - at 'conception' all agents might start out somewhat alike (with mostly similar bodies+brains), but can diverge significantly in a path-dependent, incremental manner, in how they acquire experience.


Thanks for the kind words.

It may seem odd initially that feminist philosophy has done the heavy lifting theoretically to refute the utter lack of an appreciation of embodiment as integral to any useful AHI, both within and without philosophy. It leaks out of everywhere all at once when we address the issue of intelligence in the contexts of perception, otherness, bias/error management, attention management, teleology itself (where most feminism is rooted), and the robust foundation of perspective. The complaint of patriarchal dominance, which we typically dismiss as a self-interested or relatively marginal side-issue in science and cognition, is addressed there mostly through unappreciated and ironically gender-neutral aspects of embodiment. Meanwhile, brute force training and disembodied brain thought experiments are as common as ever, and our resultant poor modeling, our dualities if you will, catch up with us via world and local events that reflect it.


Such an interesting comparison/equivalence, with feminism - wow :) Going to think about AGI in terms of this, thank you :)

Animal bodies, "even" a "lowly" worm, exhibit intelligence, far removed from the world of spoken and written word (bye bye, language models!), symbolic reasoning, etc. Same with a (literally) newborn giraffe, or a spider... Either we dismiss these as irrelevant, or we admit that current approaches are narrow/lacking.

The body (mechanism) *is* the 'computer'!


https://medium.com/@jcbaillie/why-ai-needs-a-body-793a9bee3b9b

Mild steerage: feminism is a laden word and concept that entails way too much. Feminist philosophy grew out of, and is related to, mostly basic aspects of feminism, and is by now broad, often having little to do with gender. It has tentacles in many ostensibly unrelated fields, like the teleosemantics I spend a lot of time with, work on signs and kinds, embodiment, and many others.


These all seem like juicy angles for considering embodiment! Signs, places, directions, space... all are immaterial to disembodied AI - hmmm.

May 14, 2022·edited May 14, 2022

"The bottom line is this; something that AI once cherished but has now forgotten: If we are to build AGI, we are going to need learn something from humans, and how they reason and understand the physical world and represent and acquire language and complex concepts."

Via a BODY.

The original sin of AI has been to replace 'DPIC' - Direct, Physical, Interactive, Continuous - *experience* with digital computation that involves human-created rules ('symbolic AI'), human-gathered (and labeled, for supervised ML) data ('connectionist AI'), and human-created goals ('reinforcement learning AI'). While these three major branches of AI have achieved deep+narrow wins, none comes anywhere close to what a butterfly, baby, or kitten knows.

Biological intelligence occurs on account of directly dealing with the environment, via embodiment - which is why there is no inherent need for rules, data, goals - intelligence just 'happens' [for sure, via use of bodily and 'brainily' structures that are the result of millions of years of evolution].

The body is not simply input/output for brain computations. Treating it as such, imo, is why AI has failed (to lead to robust, natural-like intelligence).


I agree; Merleau-Ponty and Husserl understood the importance of embodiment and why symbolic representation is flawed.

The vehicle in which AI is embodied is what determines how it experiences the world: is it a building or a fish?

Different forms have extremely different perceptual systems, but also different needs, and therefore different goals and objectives.

A supremely intelligent building would not thrive if it was forced to exist in the form of a fish. Supreme human intelligence would not help you survive as a nematode worm or a grapefruit.

Currently AI only exists in simulated digital environments; it only exists on demand; it only exists behind fixed UIs.

AI isn't to intelligence what a captive tiger is to a wild tiger; it's more like a tiger avatar in a computer game.

AI will keep doing things that seem like magic, because they are new. But it's a long, long way from a self-replicating, self-sustaining wild agent like, for example, a frog.

AI is still confined to on demand simulation worlds, being spun up and down for party tricks like a magic act.

The path ahead to AGI must forge through:

1) Objectivity: how things are

2) Interobjectivity: how things are from other points of view

3) Subjectivity: how it thinks things are vs. how they really are

4) Intersubjectivity: how other agents think things are vs. how it thinks vs. how they really are

5) Corporeality: how its embodiment perceives and interacts with all of the above

6) Intercorporeality: how other agents' embodiments perceive and interact with all of the above

The joke is that people think of intelligence as a quotient, when it’s nothing of the sort. A bigger quotient isn’t smarter. Tigers eat apes.

On top of all this... I think AI will progress by identification of discrete tasks, training highly specialised agents for each task, and then amalgamating all of these agents into a call-up tree. A big enough tree gives the illusion of generalisation. But in reality it's just a broad and rich tapestry of narrow specialists. This is where we will end up. This might even be what humans are. It could well be that general intelligence doesn't even exist. Not even in humans.
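Very roughly, the 'call-up tree' I'm imagining might look like this toy Python sketch; the router and the specialist names are invented purely for illustration:

```python
# Toy sketch of a 'call-up tree': narrow specialists behind a router. All names invented.

def translate(text):
    return f"[translated] {text}"

def summarise(text):
    return f"[summary] {text[:40]}..."

def play_atari(frame):
    return "move-left"

SPECIALISTS = {
    "translate": translate,
    "summarise": summarise,
    "atari": play_atari,
}

def route(task, payload):
    """Dispatch to the narrow specialist trained for this task.
    A big enough table of these looks 'general' from the outside."""
    handler = SPECIALISTS.get(task)
    if handler is None:
        return "no specialist trained for this task"
    return handler(payload)

print(route("summarise", "Tigers eat apes, but a bigger quotient is not smarter."))
print(route("write a sonnet", ""))
```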

May 15, 2022·edited May 17, 2022

I like your 6 'ity's!!

Indeed, a virtual world (e.g. OpenAI Gym, etc.) is also inadequate, because it's limited in scope/complexity - the entire universe and its phenomena can't possibly be simulated there (with the sims needing to interact, run 'forever', etc., it's entirely untenable; real-world phenomena, in comparison, involve zero computation!).

I too believe in aggregation of specializations - from the cell on up, every biological structure (bacteria, plants, animals...) has evolved this way! Minsky had the right idea (Society of Mind), but that was all in the brain, and with no 'implementation' specifics.

Physical structures, which display phenomena solely by virtue of their makeup/form, are how biological intelligence is manifested (including neural nets in brains). AI replaces these with computational structures; that is what hasn't worked well, imo.

In a Rube Goldberg contraption, the device as a whole performs an intelligent action, with not a processor in sight - the entire mechanism *is* the "computer" :) There is no digital OR analog computation!!


I wonder what you think about this article: https://www.thephilosopher1923.org/post/artificial-bodies-and-the-promise-of-abstraction.


Excellent exposition. 3I makes a lot of sense, as much as 4E.

I'm sceptical about entirely virtual existence, because that is entirely computation-driven, and that has severe limits.

May 15, 2022·edited May 15, 2022

Ok, very well said, but AI is well suited to the large subset of probability judgments needed in business, marketing, consumer interfacing, etc., where the benefit of being right most of the time is huge and the cost of being occasionally wrong is not too big.

I am a healthy skeptic of the future of AI in dangerous work like performing surgery or driving a car, because of the cost of being wrong. But there are enough non-dangerous domains that we haven't even gotten close to the end yet. More and more human activities will be re-framed as machine "prediction" problems.
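To make the cost/benefit point concrete, here is a back-of-the-envelope expected-value sketch in Python; all the numbers are made up purely for illustration:

```python
# Back-of-the-envelope: when is "right most of the time" worth deploying?
# All numbers below are made up purely for illustration.

def expected_value(p_correct, gain_when_right, loss_when_wrong):
    return p_correct * gain_when_right - (1 - p_correct) * loss_when_wrong

# Marketing recommendation: modest upside, tiny downside -> positive even at 80% accuracy.
print(expected_value(0.80, gain_when_right=1.0, loss_when_wrong=0.5))    # ~0.7

# Surgery / driving: rare errors are catastrophic -> negative even at 99% accuracy.
print(expected_value(0.99, gain_when_right=1.0, loss_when_wrong=1000.0)) # ~-9.01
```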


This is interesting! Thanks, Dr. Marcus. -- from Suchitra Abel


These "Alt Intelligence" people assume that there is only the need of "critical mass" until AI becomes human-alike, aka cognitive. There is one thing they forgot, namely that components of this critical mass have to have some properties, which will arise to the desired process. These components are able to (re)combine with each other to create higher forms of organization, aka groups and networks which we call objects. Data however does not (re)combine alone.

This "Alt Intelligence" movement reminds me of an episode in human history.

In ancient Greece there was a movement that called itself Sophia, which means wisdom. The biggest question was: what is more important, what man knows, or what man can achieve with it? Its supporters chose the second possibility as the appropriate answer. Their tool was rhetoric. Just as the petals of a flower can be torn apart from each other, they tore everything known to pieces - laws as well as common sense - until there was only one statement left: "Man is the measure of all things." Each man should bring his own interests to public attention. How? "You must have the ability to turn the weaker argument into the stronger one." After the initial enthusiasm came general confusion - namely, civil war.


I'd like to point out that Gato has 1.2 billion parameters - fewer than GPT-2 - and yet it is able to deal with tasks it was not trained for. Let's wait and see how it performs when it is 100x bigger before jumping to conclusions.


I find that neural-net-based systems are excellent for "things you can't explain in words". For example, my project used a neural net to recognize car lugnuts and to recognize, from the high torque, when a nut is cross-threaded. They are great for riding a bicycle.

However, symbolic systems and rule-based systems are more effective in the "things you can explain" domain. For example, a rule-based system can handle a missing lugnut. They are great for buying a bicycle.

So I feel there is room for both.
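Roughly the division of labour I mean, as a toy Python sketch; the model call is a hypothetical stand-in, not our actual system:

```python
# Toy sketch: a learned model for the hard-to-verbalise part, explicit rules for the rest.
# cross_thread_model is a hypothetical stand-in for a trained classifier, not a real one.

EXPECTED_LUGNUTS = 5

def cross_thread_model(torque_trace):
    """Stand-in for a neural net judging 'does this torque curve look wrong?'."""
    return max(torque_trace) > 120  # placeholder for a learned decision boundary

def inspect_wheel(detected_lugnuts, torque_traces):
    faults = []
    # Rule-based part: easy to state in words, so state it as a rule.
    if detected_lugnuts < EXPECTED_LUGNUTS:
        faults.append(f"missing lugnut(s): found {detected_lugnuts} of {EXPECTED_LUGNUTS}")
    # Learned part: hard to explain in words, so let the net decide.
    for i, trace in enumerate(torque_traces):
        if cross_thread_model(trace):
            faults.append(f"possible cross-threaded nut at position {i}")
    return faults or ["wheel OK"]

print(inspect_wheel(4, [[80, 95, 110], [90, 100, 130]]))
```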


Not so much a disagreement as a highlight of an important contrasting point: transparency and predictable contingent behavior will be key in involved AI. Systems with most of the logic hidden or undiscoverable won't work within the kind of consortium-led agreements we need in high-interdependency systems.


>But it should look to human (and animal) cognition for clues.

That's exactly right, Gary. As I wrote on LinkedIn yesterday, transcription and translation are two essential processes of consciousness (as RNA does it - the base of biological life). Take a look at my unified theory of consciousness:

- DNA, Knowledge, NLU
- RNA, Consciousness, NLP
- Protein, Perception, Multi-modal AI
- Signal Transduction to DNA, Memory via Epigenetic Modifications, Human-in-the-Loop (RL)

Knowledge - Consciousness - Perception - Memory via Epigenetic Modifications = Basic psychological functions according to Carl Gustav Jung: Intuition - Thinking - Sensation - Feeling = DNA - RNA - Protein - Signal Transduction to DNA

Epistemological AGI - NLU - NLP - Multi-modal AI - Human-in-the-Loop (RL)


"It may well be better than anything else we have currently got, but the fact that it still doesn’t really work, even after all the immense investments that have been made it, should give us pause."

Don't you think you write this a bit prematurely? I mean, they're making steady progress; it's not like you had spent dozens of billions on a football team that could barely reach the semi-finals - here we have a team that's consistently winning World Cups. The analogy falls short in representing the technological frontier, but I maintain that we have no better horse right now.

And doesn't DL's popularity increase financing for other AI technologies, even if at a smaller rate? Perhaps we should be thankful the pie is growing larger, even if we only get a small slice of it.
