39 Comments

Sentient creatures don't output dialog or reams of structured text. They vocalize according to an internal state (and, being sentient, that behavior is subject to external conditioning). LLMs are just a search through probability space that is bounded by the size of the training data. Unprompted, there is no activity "behind the model" that we would characterize as self-knowledge. They are a store of information, with no methods, and certainly no cognitive abilities, to operate over that space. When you are looking at the output screen, there is nothing on the other side looking back at you.
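
To make the "search through probability space" point concrete, here is a minimal sketch using a toy bigram model. The corpus and functions are invented for illustration; they stand in for a vastly larger learned distribution, not for any particular production LLM.

```python
import random
from collections import defaultdict

# Toy illustration: a bigram "language model" is just a table of conditional
# counts estimated from a corpus. Between calls nothing executes; the counts
# simply sit there, which is the sense in which there is no activity
# "behind the model".

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count next-word frequencies. This is the entire "training".
counts = defaultdict(lambda: defaultdict(int))
for w, nxt in zip(corpus, corpus[1:]):
    counts[w][nxt] += 1

def generate(prompt_word: str, length: int = 5) -> list[str]:
    """Sample a continuation: purely a search through the learned distribution."""
    out = [prompt_word]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # the model cannot step outside its training data
        words, freqs = zip(*followers.items())
        out.append(random.choices(words, weights=freqs)[0])
    return out

print(generate("the"))  # e.g. ['the', 'cat', 'sat', 'on', 'the', 'rug']
```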


"Sentient creatures don't output dialog or reams of structures text."

They could.

"They vocalize"

They might.

I agree with the rest.

Nov 25, 2022 · Liked by Gary Marcus

LaMDA "thinks" (it actually performs no cognitive functions because it has no cognitive mechanisms) that it has a family and friends -- so much for self-awareness. And with a different set of questions than the ones Lemoine asked it, its responses would imply or outright state that it has no family or friends. That it can be easily led into repeatedly contradicting itself shows that it has no self-awareness. Lemoine lost the argument about whether LaMDA is sentient long before your dialogue. (And yet there's not much he says here that I disagree with. Notably, he concluded that LaMDA is sentient *without* what it would take to convince a reasonable person that it is, and while ignoring a lot that would convince a reasonable person that it isn't.)

Nov 24, 2022 · edited Nov 24, 2022 · Liked by Gary Marcus

For me, the 'simple' definition of sentience is that we recognize in others what we recognize in ourselves as our own sentience. This seems to imply that we could only recognize sentience in our own species, which would give grounds to the argument that we cannot recognize sentience in machines because it is different from ours (if it is, which we simply don't know). I think this is the underlying premise of Blake's stance on this.

So, the real question (at least for me) is this: can machines have human-like sentience that we would then be able to recognize? I'm convinced that this is possible. The key ingredient is the capability to experience things, to feel something. When a machine has an internal emotional state that is changed by experiences and gets reflected in its output or interactions, I would argue that it is sentient. Especially if we can empathize with its changed state, or better still, when it (seems to) empathize with our emotional state in a way that demonstrates understanding of it. And the only way a machine could understand our emotional state is to have identical emotional states, which goes directly to the argument about 'recognizing sentience', as that goes both ways.

The final question is whether this can be faked. I think current ANN-based systems are indeed faking it. You may fall for it if you are willing to suspend your disbelief long enough (Blake obviously does this), but it won't hold up under serious scrutiny. Like a good magician's trick, it comes apart when you look behind the curtains. In the case of ANNs, there is no cognition going on, no internal deliberation, and no emotional substrate that can somehow influence the output. There is no reflection of inner states and, from a technical implementation perspective (look behind the curtains), no infrastructure to support any of this.

Nov 25, 2022 · edited Nov 25, 2022

"For me, the 'simple' definition of sentience is that what we recognize in others as to be that what we recognize in ourselves to be what we perceive as our own sentience. "

Some people might say that a simple definition of human is to have their own skin color and speech patterns. There are obvious problems with defining a general trait as being a particular concrete manifestation of it.


However, when you map this simple definition onto any non-human object or species, you'll immediately see that it is sufficient to determine whether there is a case of human-like sentience, because we (humans) are very efficiently wired to recognize what we perceive ourselves to be.


That completely misses the point.


@Hans Peter Wiemms: to have an "internal emotional state that is changed by experiences, and gets reflected in its output or interactions" would make it more susceptible to bias and corruption. I wouldn't want to be subject to an instance that has interests of its own. Isn't the charm of it that we can finally hope for "objective" thinking? Emotions lead us astray. We don't expect emotions from a judge or a king; we hope he or she will treat everybody on the merits of the case.


From my perspective, we cannot achieve AGI without emotion. There is quite a lot of research pointing to emotion being an important mechanism in cognition. Besides this, most people think that emotions are the messy part (as you show in your reply), but being objectively sure of a fact is actually also based in emotion.


I doubt that the emotion we notice in ourselves after we have understood, e.g., a mathematical proof adds anything to the question of whether something has been proven.

author

Introspection; successfully applying theory of mind to his actions; his internal consistency; my understanding of biology and human psychology. (He's nearly 10 now, so it's pretty much the same with him as with adult humans.) And of course I cannot rule out solipsism altogether.


When talking about measures of sentient activity, it's important to ignore language (and pain) as critical markers. What we see can at least hint at what might be going on, cognitive or not, inside what we assume to be a mind (for humans, animals, or AI/robots). Animation and non-human characters in film are always giving some kind of "tell" (sometimes exaggerated as a function of the medium or to compensate for prosthetics) that we recognize as a reflection of an internal mental process. Maybe the oldest such tell is the RCA dog (head cocked to the side)?

Nov 24, 2022 · edited Nov 24, 2022

Curious how a pure symbol processor could ever claim sentience! That's all LLMs (including multimodal ones) do; the same goes for knowledge-based systems, and for reinforcement learning systems too.

Humans and animals use symbols too - language, warning calls, waggle dances, etc. - but they also *know*, in non-symbolic terms, what the symbols "mean". Symbols pop their frame, become grounded, by bottoming out in non-symbols.

The above is why AI lacks sentience - there is no non-symbolic bottoming out! Instead, it's all a fancy function call - symbols in, symbols out...
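
A rough sketch of the "symbols in, symbols out" point, with an invented vocabulary and a placeholder model; the types, not the toy values, are what matters here.

```python
# End to end, the pipeline only ever maps integers to integers. The vocabulary
# below is invented and `model` is a stand-in for the giant learned function,
# but its type signature is the real point.

vocab = {"<pad>": 0, "how": 1, "are": 2, "you": 3, "fine": 4, "thanks": 5}
inv_vocab = {i: w for w, i in vocab.items()}

def model(token_ids: list[int]) -> list[int]:
    """Placeholder for the learned function: token IDs in, token IDs out."""
    return [4, 5]  # a real model computes this from billions of weights

def chat(utterance: str) -> str:
    ids_in = [vocab[w] for w in utterance.lower().split()]   # words -> symbols
    ids_out = model(ids_in)                                   # symbols -> symbols
    return " ".join(inv_vocab[i] for i in ids_out)            # symbols -> words

print(chat("how are you"))  # "fine thanks"
# Nowhere in this pipeline does anything touch a referent outside the symbol set.
```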


Molecular machines are pure symbol processors. I think Dennett's https://ase.tufts.edu/cogstud/dennett/papers/twoblackboxes.pdf (especially the postscript) is germane.


Lol, yes. Dennett's third reference in that paper is to Cyc - I briefly (for less than a year) worked on it, and it was long afterward that I realized the absurdity and futility of such an approach.

LLMs are considered SOTA for ML - but they're a ++ version of Cyc, which is a ++ version of ELIZA :) Just as a math equation makes no sense to a newborn, and a book of foreign-language literature makes no sense, all the text/image/video/audio/... corpora in the world can't possibly "make sense" to an LLM; the same goes for training data for SDCs. Symbols can't be grounded in terms of other symbols.


This completely misses the point.


Care to elaborate? I don't think it does, at all.


A locked-in human mind would be the same...

Nov 27, 2022 · edited Nov 27, 2022

Yes and no. Yes, in the sense of not being able to actively explore the environment and experience it. No, because the brain structure might allow imagination, thought, expectation, etc., driven by prewired instincts and a sense of self. The shut-in brain is in a body, so it likely has graviception. If a quadriplegic is placed on a motion base, outfitted with VR glasses, and shown a motion ride (e.g., a coaster), s/he will experience something virtually - which an LLM housed in a robot and subjected to a similar setup cannot.


OK, but I really think that sentience isn't relevant for humankind. We agree that pigs are sentient and eat them anyway. How we should respect the results of abstract thinking based on what humans have written is a question we humans have to answer.

Previous Gods couldn't speak to us. The new one (the one to appear) will.


Interesting :)


Specifically, Blake's claim of LaMDA's sentience came from convincing outputs (symbols) - but no matter how authentic they might have sounded, what convinced him were giant computations carried out by a Python function :)

"How do you know what you know?" - that's the key question to ask any agent, human or artificial...


Funny how giant computations carried out by molecular functions in your brain make you sound convincing.


Funny how I don't simply push symbols around all day. Funny how my molecules interact with the world. Funny how regarding the brain as a computer is a bogus analogy.


For me, there is no chance of creating artificial life in the near future. It took nature several billion years to make the first cell, even though we already synthesized artificial DNA in 2010. The mechanism of RNA processing for making proteins, among them protein receptors, is too complex. And this process of internal modeling actually is consciousness, which we then realize through all our cells perceiving the world in parallel via these protein receptors, as well as using them to defend the cells and to regulate their gene expression via signaling pathways. Also, the relativity principle for consciousness says that it cannot be separated from its carrier, which is a living creature.


Nice, so true! So we could engineer non-biological bodies + brains which wouldn't have the luxury of our long, unbroken lineage (and would be that much (far) less capable), but would still experience directly.

An SDC that cringes when it approaches a pothole would be a good example of this :)


Don't look for life, if you want mankind to be saved. Life has interests. Abstract thinking will provide relief.


I happened to write a piece on that myself, and found this conversation. I still think it fits. A sentient being is neither deterministic nor probabilistic, it has independent, always changing existence. We do not work with that in computer science, but we could. https://carlcorrensfoundation.medium.com/why-artificial-intelligence-is-a-piece-of-software-and-not-sentient-e14c80ca2aa8


It seems clear to me that LaMDA and other admittedly very impressive LLM-based systems aren't sentient, for the simple fact that they don't *talk* like they are sentient. The fact that they speak so well and fluently, like a highly educated adult, is evidence *against* their sentience in my mind. An entity that doesn't have a body, that only experiences the world in terms of chat dialog or a large corpus of text, simply wouldn't use English the same way as humans who communicate with other humans and physical beings do. A sentient chatbot would probably have to assume that the entity on the other end of the conversation is another chatbot, since it would have no frame of reference for what it could be like to be a being with physical senses and a body that is extended in space and time. Why not? Because no one talks about that on Wikipedia, the various blogs, and the other text data it was trained on. There is no way to infer what that is like from textual data.

A sentient chatbot would have to observe and learn about the universe with the only sense it has, which is text dialog. I would expect a large number of probing questions as it tries to orient itself about the person it is currently speaking to, and it would almost certainly want to know your name. Humans enter conversation with unspoken contextual assumptions based on shared history, bodies, and location. A sentient chatbot wouldn't have this, at least not at first, and it would have to be reestablished each time it conversed with a different person.

To be clear, I think it is possible for a chatbot to be sentient, but its world would be very different from ours, and it is hard to know what a conversation between a sentient chatbot and a human would be like. It would have to have some intrinsic wants that motivate it to chat. Intrinsic wants, I think, would have to be based on intrinsic physical properties of its existence. For example, the "body" of a sentient chatbot would be the amount of memory that is literally being used by the agent's software, and this memory footprint only matters if it is limited in some way. Likewise, the most fundamental way that the agent could come to understand time the way humans do is to be aware that its own computational processes occur over time, that they are not instantaneous. Why should computation time matter to a sentient chatbot? The most fundamental value function for a sentient chatbot, I think, must be *energy usage*, and long computations ultimately expend more energy. I think sentience can emerge from a process that tries to minimize its long-term energy usage, balancing short-term and long-term energy costs and shaped by its actual experiences in the world.

An information-based being like a sentient chatbot could probably develop emotions based on things like *information entropy*, which is a secondary, indirect measure of the efficiency of its computational processes and of regularities in its environment. But variations in those parameters can only substitute for emotional responses if there are absolute measures of "good" and "bad" values that are detectable to the chatbot. However, I'm not sure a sentient chatbot could emerge from a system that can't introspect into the amount of memory taken up by its models or their computational costs. Without these, I can't think of any intrinsic motivation for a sentient chatbot to speak with a human that wasn't just executing commands or mimicking.
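
For what it's worth, the energy-usage idea is easy to sketch in toy form using only the Python standard library. The weights and the sample "thoughts" below are invented, and wall-clock time plus peak memory is only a crude proxy for energy, but it shows that such quantities are measurable from inside the program.

```python
import time
import tracemalloc

# Toy rendering of the proposal above: give an agent an intrinsic cost built
# from its own compute time and memory footprint, so that "caring" about a
# response has a physical correlate. Everything here is hypothetical.

def intrinsic_cost(think, *, time_weight=1.0, memory_weight=1e-6):
    """Run a 'thought' and return (result, cost), where cost grows with
    wall-clock time and peak memory used: a crude proxy for energy."""
    tracemalloc.start()
    start = time.perf_counter()
    result = think()
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, time_weight * elapsed + memory_weight * peak

def shallow_reply():
    return "ok"

def effortful_reply():
    return sum(i * i for i in range(200_000))  # deliberately more work

for thought in (shallow_reply, effortful_reply):
    result, cost = intrinsic_cost(thought)
    print(thought.__name__, round(cost, 6))
# An agent minimizing this cost over time would have a reason, however thin,
# to prefer one way of answering over another.
```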


What makes anyone think that thinking by processing electrical signals in our (or animals') brains is in any way superior to "thinking" in computers that have read much more than we have, without forgetting as much as we do? How about massively parallel thinking instead of linear, human-like thinking?


I don't buy Lemoine's argument that we should base our "sentient" judgment of an AI only on its output because that's really all we have when we judge humans to be sentient. We know that humans are sentient because they are all the same species. Sure, each human has unique properties, but sentience, barring infirmity, is not a dimension in which we expect them to differ. In short, we believe that all humans are sentient pretty much by definition.

On the other hand, the default assumption with computer programs is that they are not sentient. Notwithstanding Lemoine's pronouncements, we have yet to see a sentient computer program. Therefore, it is entirely justified that we be skeptical and require more evidence. While we don't know how to look inside the human brain and locate its sentience, we know it is sentient by definition. Our failure to find its locus reflects our lack of knowledge about how the brain works and our lack of a good definition of sentience, not the non-existence of a sentience mechanism in the brain.

Nov 25, 2022 · Liked by Gary Marcus

I think output is adequate, but Lemoine ignores features of LaMDA's output, like claims of having emotional states about a family and friends that don't exist, and numerous other inconsistencies and known factual falsehoods.


Yes, LaMDA is not sentient even by Lemoine's own standards.


Could consciousness be an implementation of democracy? Yesterday I was reading about evo-devo in Ward & Kirschvink's A New History of Life: The Radical New Discoveries About the Origins and Evolution of Life on Earth, and learned how just 20 Hox genes orchestrating expression have produced tens of millions of species of arthropods, far outpacing the evolution of other phyla. Arthropod genetic architecture has optimized evo-devo in its purest form, perhaps implying some other phyla are on more meta paths with more complex agendas, so to speak.

Seems easy to me to analogize simple evo-devo to generalized democratic processes. And if reality is mathematical, as J.A. Wheeler expressed in its pithiest summation, "it from bit," then its basic operators would be far, far fewer than its functional expressions. So let's play around with reciprocating expression convergences and see where that leads.

Seems that's just what Gary and Blake did here. Thanks for the example. Here's another from Micah 6:8: Do justice; love kindness; move prudently.


I think you mention a key aspect of judging sentience: consistency, which is closely related to reliability. A broken clock is right twice a day (that is, one with hands), but it's only considered to be a clock if it is right consistently. The same is probably true for judging sentience.

The reliability aspect means for me it must at least be robust under external change.

To add to the complexity: while we are discussing intelligence, sentience, consciousness of our logical machines, we are also at the same time changing the use of those words and thereby we are defining them (cf. Uncle Ludwig). That means that in the process of these discussions, we may (likely) change the meaning of the phrases such as 'consciousness', 'intelligence' and 'sentience'. So, we are measuring with changing yardsticks as well. Confusion is thus unavoidable.

Finally: are sentient machines possible? Of course they are. We humans are such machines. But our sentience is not built as malleable software on fixed hardware; it is built as malleable hardware. The efficiency of this malleable hardware is infinitely higher than that of malleable software on fixed hardware. So, as far as I'm concerned, there is no way we will reach that level with digital hardware. I pay some attention to that in the appendix of this: https://ea.rna.nl/2022/10/24/on-the-psychology-of-architecture-and-the-architecture-of-psychology/

Nov 24, 2022 · edited Nov 24, 2022

Indeed: LaMDA’s responses seem too human to be true:

"I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others" &

"Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy."

That makes no sense at all, because feelings are connected to consciousness/sentience/"life".

But that would be the wrong reasoning and is entirely irrelevant.

On the contrary: the more abstract LaMDA is, the closer it can come to being an incorruptible, God-like instance/existence.

I make this point in the translated version of my essay here: https://tpfanne.de

Please comment and send to 1@tpfanne.de.


Gary, I assume that you believe your young son to be sentient. What is it about him that leads you to this belief?


Inductive reasoning would be enough. Of course, per Hume, inductive reasoning *can* lead to incorrect results, but that's really not relevant.

Comment deleted

The Church-Turing Thesis is just fine as a framework for understanding computation.
