Dec 15, 2022

Gary, is there any work addressing the issue of schizophrenic behaviors in these models? In other words, judging the DALL-E misread as a delusion (hallucination).

PS you're wonderful and you will get your due

Did you ever learn something from a person who agreed with you?

How is stuffing a computer with existing data any different?

Yesterday I sat at the birthplace of Wilbur Wright and watched 2 pairs of vultures gracefully balanced along the unseen currents of air.

When Wilbur and Orville watched birds in flight - they didn't think about making wings out of feathers and wax...

They saw the balance and control - they connected that to the bicycles they built. They thought about the hours of practice required to become proficient and competitive riding a bicycle. They understood intuitively the subtle design problems that would have to be resolved - simple things, such as the slight forward bend in the front fork that made a bicycle stable.

They knew that for a man to fly, he would need hours of practice balancing on the air - the exact thing that had eluded all the other brilliant inventors.

One last question - what will we do when our AI disagrees with us?

Thanks for reading.

author

Extremely good idea for an essay: why GPT's hallucinations probably aren't like those of schizophrenia

As usual, thank you... A specific bone to pick, a general point about AGI talk, and a positive suggestion.

You often start out with a discussion of the gross errors of ML, to segue into how much more is needed. The problem with this line of attack is that in key creative and selection-related tasks (security, reproduction, mutation), lots and lots of gross errors are not only OK - sometimes they're the only way that type of task can be approached, often by definition. So to my ear, you're tilting at windmills and protesting too much all at once by highlighting this purported weakness of many gross errors, when it's a weakness only from a certain angle and in a certain context. It's fantastic to synthesize a useful protein after 10,000 gross and less-gross errors. The argument is an inappropriate "opposite" to your values of consistency, kindness, common sense (big subject), understanding (bigger subject), legally circumscribable limits, transparency, and contingency/flexibility - in fact, error tolerance should be added to that desired list as a contingent potential tool for some of those desired skill sets in creativity- and selection-oriented contexts.
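
(To make that concrete, here is a minimal generate-and-test sketch in Python - hypothetical numbers and a stand-in scoring function, not any real pipeline - showing why a flood of gross errors can be the whole point in selection-style tasks: only the rare hits matter.)

    import random

    ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 amino-acid letters

    def fitness(candidate):
        # Stand-in score; a real pipeline would use a lab assay or a learned model.
        return sum(c in "AILMFWV" for c in candidate) / len(candidate)

    hits = []
    for _ in range(10_000):                    # 10,000 mostly-"wrong" attempts
        candidate = "".join(random.choices(ALPHABET, k=30))
        if fitness(candidate) > 0.55:          # almost every candidate fails...
            hits.append(candidate)             # ...but the few hits are all we need

    print(len(hits), "useful candidates out of 10,000 attempts")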

My larger point: you're making the mistake you often accuse others of, by limiting the dimensionality of intelligence to a narrow aspect of it - in this case, accuracy. Others, like the blind men with the elephant, focus on consistency, context-sensitivity, brilliance, sensitivity, safety, creativity, knowledge, embodiment, or algorithmic coherence.

Thanks to the advantages of machinery, accuracy is such a tiny part of the gig it isn't funny (not so for its cousins, replication and reproduction - but that's a separate tale). The main value of symbols in the fight for useful AGI is simply the mostly-human work of instantiating collective human intelligence about what we want to mandate that machines do and not do, and how. The question of ML solving AGI is obviated when one includes in AGI the utterly foundational need for a wildly diverse set of complex agreements - political enactments that must arise via humans - to achieve the ubiquitous power, interdependency, safety, and stability required of such tools. Many are about protocols, but the most important are quasi-moral, agreed-upon frameworks: culturally derived artifacts of quite subject-specific notions that amount to an extremely ambitious regulatory challenge. All of which simply must be instantiated within, and surround, whatever calculating engines we use, with transparent, standardized, symbolic code - regulation around basic context-sensitivity, kindness, legal versions of honesty, consistency, safety, (nested) transparency, and other desired characteristics.

(To be clear: I have no patience for arguments about singularities or personhood for machines, in a world where we have many virtual singularities occurring constantly, all of which involve humans committing fraud and manipulation. All my thinking about AGI assumes machines as tools for humans, with no future that imbues them with personhood except narrowly/precisely as natural proxies where appropriate, as when controlling a dialysis machine. Ironically, we are already experiencing many of the effects we want to avoid from AI that is independent of human values; in fact, assuming that, and realizing it is already a difficult criterion, is an important part of rising to the ethical and algorithmic task. This touches on a related, ignored point: all the problems we scare ourselves with in AI are already here to some degree, because of the technical agents and AI already inculcated into our lives. We shouldn't feel so stunned about what to do, because we are already ignoring many of the problems and the obvious available solution sets.)

My positive suggestion follows from the above: focus in our arguments less on the limitations of ML and more on the breadth and non-technical heart of AGI, in order to make clear how much symbolic work is needed and not getting done. AHI, and by extension AGI, are writ large, god damn it. It doesn't magically start and end at compilation. In a way, this is the standard mistake people make when they assume a coding project is about coding, when coding is typically about 30% of it. I'd put technology and tech people at that same 30% of the AGI gig. Your training and orientation outside of tech is what has you standing here with your finger in the dike. There is no coherent argument whatsoever against mandatory symbolic manipulation as priors, embedded feedbacks, and posts in any actual, complex, AGI-risk-level machine that touches the incredible interdependency of modern life. We are simply unable to allocate any of those complex tasks to black boxes on the fly, no matter what they're fed as input - ML can do lots of the really hard stuff in the middle of the job. The most important part of empirical AGI must be transparent symbolic instantiation of rules: coding and protocol-ing up the messy, negotiated, endlessly complex, culture-embedded versions of the standards we need (which I believe to be surprisingly tied in to design considerations). This vision of AGI amounts to including within it, integrally, the budgetary and political and algorithmic progress needed - because otherwise we are allowing our market-centric cultural strengths in ML to ride roughshod, with dangerously siloed aspects of intelligence more and more entrenched as ad hoc, de facto standards across the legal, moral, and functional landscapes.

May 14, 2022

"The bottom line is this; something that AI once cherished but has now forgotten: If we are to build AGI, we are going to need learn something from humans, and how they reason and understand the physical world and represent and acquire language and complex concepts."

Via a BODY.

The original sin of AI has been to replace 'DPIC' - Direct, Physical, Interactive, Continuous - *experience* with digital computation that involves human-created rules ('symbolic AI'), human-gathered (and, for supervised ML, labeled) data ('connectionist AI'), and human-created goals ('reinforcement learning AI'). While these three major branches of AI have achieved deep+narrow wins, none comes anywhere close to what a butterfly, baby, or kitten knows.

Biological intelligence arises from directly dealing with the environment, via embodiment - which is why there is no inherent need for rules, data, or goals; intelligence just 'happens' [for sure, via the use of bodily and 'brainy' structures that are the result of millions of years of evolution].

The body is not simply input/output for brain computations. Treating it as such, imo, is why AI has failed to lead to robust, natural-like intelligence.

May 15, 2022

Ok, very well said, but AI is well suited to the large subset of probability judgments needed in business, marketing, consumer interfacing, etc., where the benefit of being right most of the time is huge and the cost of being occasionally wrong is not too big.

I am a healthy skeptic of the future of AI in dangerous work like performing surgery or driving a car, because of the cost of being wrong. But there are enough non-dangerous domains that we haven't even gotten close to the end of them yet. More and more human activities will be re-framed as machine "prediction" problems.
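
(A rough sketch of that cost/benefit asymmetry in Python, with entirely made-up numbers: the same kind of accuracy that is clearly worth deploying in a marketing setting turns deeply negative once an occasional mistake is catastrophic.)

    def expected_value(accuracy, gain_when_right, loss_when_wrong):
        # Expected payoff per decision when the model is right with
        # probability `accuracy`.
        return accuracy * gain_when_right - (1 - accuracy) * loss_when_wrong

    # Marketing-style task: mistakes are cheap.
    print(expected_value(0.90, gain_when_right=1.0, loss_when_wrong=2.0))        # 0.7: clearly positive

    # Safety-critical task: rare mistakes are catastrophic.
    print(expected_value(0.999, gain_when_right=1.0, loss_when_wrong=10_000.0))  # about -9: deeply negative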

This is interesting! Thanks, Dr. Marcus. -- from Suchitra Abel

These "Alt Intelligence" people assume that all that's needed is a "critical mass" of data before AI becomes human-like, i.e. cognitive. There is one thing they forget: the components of this critical mass have to have certain properties that give rise to the desired process. Such components are able to (re)combine with each other to create higher forms of organization - groups and networks, which we call objects. Data, however, does not (re)combine on its own.

This "Alt Intelligence" movement reminds me of an episode in human history.

In ancient Greece there was a movement that called itself Sophia, which means wisdom. Its biggest question was: what is more important, what man knows, or what man can achieve with it? Its supporters decided the second was the appropriate answer. Their tool was rhetoric. Just as the petals of a flower can be torn apart from each other, they tore everything known to pieces - laws as well as common sense - until only a single statement was left: "Man is the measure of all things." Each man should bring his own interests to public attention. How? "You must have the ability to make the weaker argument the stronger." After the initial enthusiasm came general confusion - namely, civil war.

I'd like to point out that Gato has 1.2 billion parameters - fewer than GPT-2 - and yet it is able to deal with tasks it was not trained for. Let's wait and see how it performs when it is 100x bigger before jumping to conclusions.

I find that neural net-based systems are excellent for "things you can't explain in words." For example, my project used a neural net to recognize lugnuts on cars, and to recognize - from the unusually high torque - when a nut is cross-threaded. They are great for riding a bicycle.

However, symbolic systems and rule-based systems are more effective in the "things you can explain" domain. For example, a rule-based system can handle a missing lugnut. They are great for buying a bicycle.

So I feel there is room for both.
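
(A minimal sketch of that division of labor in Python - all names and numbers here are hypothetical, with a stand-in for the detector: the neural net does the hard-to-verbalize perception, and a one-line rule catches the easy-to-state case of a missing nut.)

    EXPECTED_LUGNUTS = 5  # assumption: five nuts per wheel

    def detect_lugnuts(image):
        # Stand-in for the neural net; a real system would run inference here
        # and return one record per detected nut.
        return [{"torque_ok": True}] * 4  # pretend only 4 of 5 nuts were found

    def check_wheel(image):
        detections = detect_lugnuts(image)        # perception: neural net
        problems = []
        if len(detections) < EXPECTED_LUGNUTS:    # rule: explainable in one sentence
            problems.append("missing lugnut")
        problems += ["cross-threaded nut"
                     for d in detections if not d["torque_ok"]]
        return problems

    print(check_wheel(image=None))                # ['missing lugnut']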

>But it should look to human (and animal cognition) for clues.

That's exactly right, Gary. As I wrote on LinkedIn yesterday, transcription and translation are two essential processes of consciousness (as RNA carries them out - the basis of biological life). Take a look at my unified theory of consciousness:

DNA, Knowledge, NLU
RNA, Consciousness, NLP
Protein, Perception, Multi-modal AI
Signal transduction to DNA, Memory via epigenetic modifications, Human-in-the-Loop (RL)

Knowledge - Consciousness - Perception - Memory via epigenetic modifications = the basic psychological functions according to Carl Gustav Jung (Intuition - Thinking - Sensation - Feeling) = DNA - RNA - Protein - Signal transduction to DNA

Epistemological AGI: NLU - NLP - Multi-modal AI - Human-in-the-Loop (RL)

"It may well be better than anything else we have currently got, but the fact that it still doesn’t really work, even after all the immense investments that have been made it, should give us pause."

Don't you think you write this a bit prematurely? I mean, they're making steady progress. It's not as if we had spent dozens of billions on a football team that could barely reach the semifinals; the team we have is consistently winning World Cups. The analogy falls short in representing the technological frontier, but I maintain that we have no better horse right now.

And doesn't DL's popularity increase financing for other AI technologies, even if at a smaller rate? Perhaps we should be thankful the pie is growing larger, even if we only get a small slice of it.
