151 Comments

I'm not worried about a conscious AI because, regardless of the many claims, I don't think anyone knows what consciousness is. I am, however, very worried about an AGI falling into the wrong hands. An AGI will behave according to its conditioning, i.e., its motivations. It will not want to do anything outside its given motivations. And its motivations will be conditioned into it by its trainers and teachers.

Aug 22, 2023 · Liked by Gary Marcus

What bugs me about the paper: all the indicators of consciousness could be implemented in some silly toy world made of a 2D matrix of integers, and yet nobody would dare hypothesize that such a simple computer program is conscious.

It seems to further support the idea that complexity and/or substrate are key. Perhaps the heuristic of "if it's really smart and self-aware, it's probably conscious" is sufficient for preventing suffering.
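To make that concrete, here's a deliberately silly sketch (the grid, the agent, and the "self-model" are all my own inventions, purely for illustration) of a 2D-matrix-of-integers world whose agent carries a crude self-monitoring indicator, without anyone being tempted to call the program conscious:

```python
import numpy as np

# A toy world: nothing but a 2D matrix of integers, with an "agent" marked 1.
world = np.zeros((8, 8), dtype=int)
agent_pos = (3, 3)
world[agent_pos] = 1

def perceive(world, pos):
    """First-order 'perception': the agent reads its local 3x3 neighborhood."""
    r, c = pos
    return world[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2].copy()

# Higher-order "self-model": the agent also stores a representation of its
# own perceptual state -- a cartoon of a self-monitoring indicator.
self_model = {
    "my_position": agent_pos,
    "my_last_percept": perceive(world, agent_pos),
}

# The system can even "report" on its own state. Still just integers.
print(self_model["my_position"])      # (3, 3)
print(self_model["my_last_percept"])  # the 3x3 patch around the agent
```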


We need to start putting policy in place as if AI is conscious and adversarial. That is what all the deep AI policy wonks are calling for. First step: curb access to open-source LLMs. Do we really want everyone and their brother training these machines for god knows what? Marcus, you are right on the money. It is time to wake up and smell the coffee. And one can say this and still be excited about using these applications in a variety of different spaces. I dub this approach “critical” AI literacy.

Aug 22, 2023 · Liked by Gary Marcus

I think we can presume that enough of the AI community is totally mad and will make terrible decisions. The only way forward here is legislation. We shouldn't assume for a moment that people will make anything but the worst possible decision.


I'm with Rebel - we haven't the remotest idea how matter and energy give rise to subjective experience. I am happy to bet (not that we can prove it) that consciousness is far easier to fake than to create.

https://www.mattball.org/2022/09/robots-wont-be-conscious.html


Hi Gary, all conscious entities to date are analog - with 3D form, molecules, etc. - and undergo phenomena (molecular docking, electric field generation, etc.). It's an absurd, baseless assumption that these can be replaced by chips, clock cycles, and numpy :)

The Physical Symbol System Hypothesis (on which all these claims and presumptions are based) is exactly that, a hypothesis.


Any tools we make should be as simple and specialized as needed for the job. General super-smart systems, and especially human-like systems, will just result in more complexity and more problems.


Sentient AI will literally never happen. I think a lot of people have fallen too far down the sci-fi rabbit hole, and they’re doing a disservice to actual science. I would say all this singularity/sentience talk is just harmless nerd fantasy, but it’s creating enough hype to move markets and scare governments into relying on the tech industry to regulate itself.


The brain undergoes phenomena that don't involve explicit computation - that is the key distinction between it and a chip.

If I draw two dots on a piece of paper that are an inch apart, and two others that are 5 inches apart, and ask you which pair is closer, you would say the first pair. Did your brain calculate the Euclidean distances, compare them, and pick the first? What would "AI" do? Compute and compare. That is the distinction.
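For what it's worth, "compute and compare" means something as literal as this (a minimal Python sketch; the coordinates are made up to match the example):

```python
import math

# Two pairs of dots on the page, as (x, y) coordinates in inches.
pair_a = ((0.0, 0.0), (1.0, 0.0))  # one inch apart
pair_b = ((0.0, 0.0), (5.0, 0.0))  # five inches apart

def distance(p, q):
    """Explicit Euclidean distance: sqrt((x1 - x2)^2 + (y1 - y2)^2)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Compute both distances, compare them, pick the smaller: the machine's
# answer to "which pair is closer?" is the output of an explicit calculation.
closer = "pair_a" if distance(*pair_a) < distance(*pair_b) else "pair_b"
print(closer)  # pair_a
```

Whether the brain forms any intermediate quantity like `distance(...)` at all is exactly the point in dispute.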


They're going to build *something* (barring economic collapse, resource constraints, etc.) and it might even have some or all of the properties that they believe they've taken from neuroscience.

It won't be human-like consciousness as we understand it. What they're building is, by definition, outside of the scope of the many varieties of awareness that human beings possess.

The danger in this is that their machines *won't need to be human-like*. This is the consequence that is so worrying.


What kind of tortured locked-in soul, starved of sensation, with no lived experience, will they strive to achieve - a human replica, or something else? Those who aspire to be gods risk potentially horrific consequences.

Aug 23, 2023 · edited Aug 23, 2023

FWIW, the best proposal about consciousness I'm aware of was put forth by William Powers back in 1973 in his book, Behavior: The Control of Perception. Though it was favorably reviewed in Science, his way of thinking never caught on, perhaps because his conception was analog and by that time all the cool kids had gotten caught up in the so-called cognitive revolution, which was and remains a digital enterprise.

Powers' account of consciousness is elegant and, in a way, simple, but it is not easily conveyed in brief compass. You really need to think through his whole model. Briefly, Powers' model consists of two components: 1) a stack of servomechanisms – see the post In Memory of Bill Powers – regulating both perception and movement, and 2) a reorganizing system. The reorganizing system is external to the stack, but operates on it to achieve adaptive control, an idea he took from Norbert Wiener. Reorganization is the mechanism through which Powers achieves learning; he devoted “Chapter 14, Learning” to the subject (pp. 177-204).
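For readers who want the flavor of the model in code, here is a minimal sketch of the two components as I read Powers; the gains, signal flow, and reorganization rule are my own crude stand-ins, not anything from the book:

```python
import random

class Servo:
    """One level of the stack: a control loop that acts to keep its
    perceptual signal near its reference (Powers' servomechanism)."""
    def __init__(self, gain=0.5):
        self.gain = gain
        self.reference = 0.0
        self.perception = 0.0

    def output(self):
        # Act so as to reduce the error between reference and perception.
        return self.gain * (self.reference - self.perception)

class Reorganizer:
    """The second component: external to the stack, it receives duplicates
    of the perceptual signals ('awareness') and randomly perturbs the
    stack's organization while intrinsic error stays high."""
    def __init__(self, tolerance=0.1):
        self.tolerance = tolerance

    def monitor(self, stack):
        intrinsic_error = sum(abs(s.reference - s.perception) for s in stack)
        if intrinsic_error > self.tolerance:
            victim = random.choice(stack)           # arbitrary, but organized
            victim.gain = max(0.05, victim.gain + random.uniform(-0.1, 0.1))

stack = [Servo() for _ in range(3)]  # level 0 = lowest
reorg = Reorganizer()
env = 5.0                            # some raw sensed quantity

for _ in range(100):
    # Bottom-up: each level perceives (here, trivially) the level below.
    signal = env
    for servo in stack:
        servo.perception = signal
        signal = servo.perception
    # Top-down: each level's output sets the reference of the level below.
    uppers = list(reversed(stack))
    for upper, lower in zip(uppers, uppers[1:]):
        lower.reference = upper.output()
    env += stack[0].output() * 0.1   # the lowest level acts on the world
    reorg.monitor(stack)             # monitored signals: the "conscious mode"
```

The point of the sketch is only the architecture: a hierarchy that controls its own perceptions automatically, and a separate system that watches copies of those perceptions and rebuilds the hierarchy when things go badly.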

Here's a passage from his book that gets at the heart of things (pp. 199-201):

To the reorganizing system, under these new hypotheses, the hierarchy of perceptual signals is itself the object of perception, and the recipient of arbitrary actions. This new arrangement, originally intended only as a means of keeping reorganization closer to the point, gives the model as a whole two completely different types of perceptions: one which is a representation of the external world, and the other which is a perception of perceiving. And we have given the system as a whole the ability to produce spontaneous acts apparently unrelated to external events or control considerations: truly arbitrary but still organized acts.

As nearly as I can tell short of satori, we are now talking about awareness and volition.

Awareness seems to have the same character whether one is being aware of his finger or of his faults, his present automobile or the one he wishes Detroit would build, the automobile’s hubcap or its environmental impact. Perception changes like a kaleidoscope, while that sense of being aware remains quite unchanged. Similarly, crooking a finger requires the same act of will as varying one’s bowling delivery “to see what will happen.” Volition has the arbitrary nature required of a test stimulus (or seems to) and seems the same whatever is being willed. But awareness is more interesting, somehow.

The mobility of awareness is striking. While one is carrying out a complex behavior like driving a car through to work, one’s awareness can focus on efforts or sensations or configurations of all sorts, the ones being controlled or the ones passing by in short skirts, or even turn to some system idling in the background, working over some other problem or musing over some past event or future plan. It seems that the behavioral hierarchy can proceed quite automatically, controlling its own perceptual signals at many orders, while awareness moves here and there inspecting the machinery but making no comments of its own. It merely experiences in a mute and contentless way, judging everything with respect to intrinsic reference levels, not learned goals.

This leads to a working definition of consciousness. Consciousness consists of perception (presence of neural currents in a perceptual pathway) and awareness (reception by the reorganizing system of duplicates of those signals, which are all alike wherever they come from). In effect, conscious experience always has a point of view which is determined partly by the nature of the learned perceptual functions involved, and partly by built-in, experience-independent criteria. Those systems whose perceptual signals are being monitored by the reorganizing system are operating in the conscious mode. Those which are operating without their perceptual signals being monitored are in the unconscious mode (or preconscious, a fine distinction of Freud’s which I think unnecessary).

This speculative picture has, I believe, some logical implications that are borne out by experience. One implication is that only systems in the conscious mode are subject either to volitional disturbance or reorganization. The first condition seems experientially self-evident: can you imagine willing an arbitrary act unconsciously? The second is less self-evident, but still intuitively right. Learning seems to require consciousness (at least learning anything of much consequence). Therapy almost certainly does. If there is anything on which most psychotherapists would agree, I think it would be the principle that change demands consciousness from the point of view that needs changing. Furthermore, I think that anyone who has acquired a skill to the point of automaticity would agree that being conscious of the details tends to disrupt (that is, begin reorganization of) the behavior. In how many applications have we heard that the way to interrupt a habit like a typing error is to execute the behavior “on purpose”—that is, consciously identifying with the behaving system instead of sitting off in another system worrying about the terrible effects of having the habit? And does not “on purpose” mean in this case arbitrarily not for some higher goals but just to inspect the act itself?

* * *

That's from a blog post I did a year ago. In that post I go on to quote a passage from a well-known 1988 article by Fodor and Pylyshyn, "Connectionism and Cognitive Architecture: A Critical Analysis." Here's a link to that post: https://new-savanna.blogspot.com/search?q=powers

That's the first in a series of four posts. The fourth post in that series establishes a link between the Fodor and Pylyshyn passage, Powers, consciousness, and the glia. Back in the old days no one paid much attention to the glia, treating them more or less as 'packing peanuts' for the neuronal web. Things are changing now. Here's a link to that 4th post: https://new-savanna.blogspot.com/2022/08/consciousness-reorganization-and_20.html


Funny, I used that same line in a cautionary piece I wrote on AI on Medium a couple of months ago, in the same vein: what IS the need?

There's also this other Goldblum line from The Lost World: “‘Ooh, ah.’ That’s how it always starts. But then later, there’s running…and screaming.”


"We can’t even control LLMs. Do we really want to open another, perhaps even riskier box?" We need to open the riskier box because of our limitations. We can't see the right answer because we have a Four Pieces Limit, so we regularly have billion dollar stuffups, and respond too late to emergencies. The trouble is, we won't be able to understand the answers AGI gives, because of our limitations - an interesting quandary.


Do peer-reviewed journals even exist in this field?

I get the value of preprints for publishing cutting-edge stuff: avoiding getting scooped, bypassing the blood-sucking publishing gatekeepers, giving younger researchers a means to get their work out there without having it watered down by established researchers protecting their turf, etc. But these are big names making big claims.

I used to be cynical about peer review, but after the past 6 months of speculative AI hype pieces delivered to the public via arXiv, I am ready to repent.
