40 Comments

Sentient creatures don't output dialog or reams of structured text. They vocalize according to an internal state (and, being sentient, that behavior is subject to external conditioning). LLMs are just a search through probability space that is bounded by the size of the training data. Unprompted, there is no activity "behind the model" that we would characterize as self-knowledge. They are a store of information, with no methods, and certainly no cognitive abilities, to operate over that space. When you are looking at the output screen, there is nothing on the other side looking back at you.
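To make that concrete, here is a minimal sketch of what I mean, with made-up names and probabilities rather than any real system's API: the stored model is just a frozen mapping from contexts to token probabilities, and "generation" is repeated sampling from it; nothing runs between prompts.

```python
import random

def next_token_distribution(context: tuple[str, ...]) -> dict[str, float]:
    """Hypothetical stand-in for the trained model: a frozen mapping from
    contexts to token probabilities. Here a constant toy table that ignores
    the context, purely for illustration."""
    return {"the": 0.5, "a": 0.3, "it": 0.2}

def generate(prompt: list[str], length: int = 5) -> list[str]:
    """All the 'activity' is here: repeatedly sample from the stored
    distribution. Nothing executes between calls -- there is no process
    'behind' the store of probabilities."""
    tokens = list(prompt)
    for _ in range(length):
        dist = next_token_distribution(tuple(tokens[-3:]))  # bounded context window
        tokens.append(random.choices(list(dist), weights=dist.values())[0])
    return tokens

print(generate(["hello", "world"]))
```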

Nov 25, 2022 · Liked by Gary Marcus

LaMDA "thinks" (it actually performs no cognitive functions because it has no cognitive mechanisms) that it has a family and friends -- so much for self-awareness. And with a different set of questions than the ones Lemoine asked it, its responses would imply or outright state that it has no family or friends. That it can be easily led into repeatedly contradicting itself shows that it has no self-awareness. Lemoine lost the argument about whether LaMDA is sentient long before your dialogue. (And yet there's not much he says here that I disagree with. Notably, he concluded that LaMDA is sentient *without* what it would take to convince a reasonable person that it is, and while ignoring a lot that would convince a reasonable person that it isn't.)

Nov 24, 2022 · edited Nov 24, 2022 · Liked by Gary Marcus

For me, the 'simple' definition of sentience is this: what we recognize in others as being the same thing we perceive in ourselves as our own sentience. This seems to imply that we could only recognize sentience in our own species, and that would give grounds to the argument that we cannot recognize sentience in machines because it is different from ours (if that is the case; we simply don't know). I think this is the underlying premise of Blake's stance on this.

So, the real question (at least for me) is this: Can machines have human-like sentience that we would then be able to recognize? I'm convinced that this is possible. The key ingredient is the capability to experience things, to feel something. When a machine has an internal emotional state that is changed by experiences and gets reflected in its output or interactions, I would argue that it is sentient. Especially if we can empathize with its changed state, or even better, when it (seems to) empathize with our emotional state in a way that demonstrates understanding of it. And the only way a machine could understand our emotional state is to have identical emotional states, which goes directly to the argument about 'recognizing sentience', as that goes both ways.

The final question is whether this can be faked. I think current ANN-based systems are indeed faking it. You may fall for it if you are willing to suspend your disbelief long enough (Blake obviously does this), but it won't hold up under serious scrutiny. Like a good magician's trick, it comes apart when you look behind the curtains. In the case of ANNs, there is no cognition going on, no internal deliberation, and no emotional substrate that can somehow influence the output. There is no reflection of inner states and, from a technical implementation perspective (look behind the curtains), no infrastructure to support any of this.

Author

Introspection; successfully applying theory of mind to his actions; his internal consistency; my understanding of biology and human psychology. (He's nearly 10 now, so it's pretty much the same with him as with adult humans.) And of course I cannot rule out solipsism altogether.

Nov 24, 2022 · edited Nov 24, 2022

Curious how a pure symbol processor could ever claim sentience! That's all LLMs (including multimodal ones) do, and the same goes for knowledge-based systems and reinforcement learning systems.

Humans and animals use symbols too - language, warning calls, waggle dances, etc. - but they also *know, in non-symbolic terms, what the symbols "mean"*. Symbols pop their frame, become grounded, by bottoming out to non-symbols.

The above is why AI lacks sentience - there is no non-symbolic bottoming out! Instead, it's all a fancy function call - symbols in, symbols out...
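A toy illustration of the grounding problem (hypothetical code with invented dictionary entries): asking the system what a symbol "means" only ever returns more symbols, so the chase never bottoms out in anything non-symbolic.

```python
# Toy illustration: "meaning" as symbol-to-symbol lookup that never bottoms out.
definitions = {
    "love": ["strong", "affection"],
    "strong": ["having", "power"],
    "affection": ["fond", "attachment"],
    "power": ["ability", "to", "act"],
}

def chase_meaning(symbol: str, depth: int = 3) -> None:
    """Expand a symbol into its defining symbols, recursively. At every level
    the result is still just more symbols; at no point do we reach a percept,
    a feeling, or anything outside the symbol system."""
    if depth == 0 or symbol not in definitions:
        print(f"{symbol!r} -> (still a symbol; nothing non-symbolic here)")
        return
    print(f"{symbol!r} -> {definitions[symbol]}")
    for s in definitions[symbol]:
        chase_meaning(s, depth - 1)

chase_meaning("love")
```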


Specifically, Blake's claim of LaMDA's sentience came from convincing outputs (symbols) - but no matter how authentic it might have sounded, what convinced him were giant computations carried out by a Python function :)

"How do you know what you know?" - that's the key question to ask any agent, human or artificial...


For me, there is no chance of creating artificial life in the near future. It took nature several billion years to make the first cell, even though we already synthesized artificial DNA in 2010. The mechanism of RNA processing for making proteins, among them protein receptors, is too complex. And this process of internal modeling is, in effect, consciousness, which we then realize through all of our cells perceiving the world in parallel via these protein receptors, as well as using them to defend the cells and to regulate their gene expression via signaling pathways. Also, the relativity principle for consciousness says that it cannot be separated from its carrier, which is a living creature.


I happened to write a piece on that myself, and found this conversation. I still think it fits. A sentient being is neither deterministic nor probabilistic; it has an independent, always-changing existence. We do not work with that in computer science, but we could. https://carlcorrensfoundation.medium.com/why-artificial-intelligence-is-a-piece-of-software-and-not-sentient-e14c80ca2aa8


It seems clear to me that LaMDA and other admittedly very impressive systems based on LLMs aren't sentient, by the simple fact that they don't *talk* like they are sentient. The fact that they speak so well and fluently, like a highly educated adult, is evidence *against* their sentience in my mind. An entity that doesn't have a body, that only experiences the world in terms of chat dialog or a large corpus of text, simply wouldn't use English the same way as humans who communicate with other humans and physical beings do. A sentient chatbot would probably have to assume that the entity on the other end of the conversation is another chatbot, since it would have no frame of reference for what it could be like to be a being with physical senses and a body that is extended in space and time. Why not? Because no one talks about that on Wikipedia, the various blogs, and the other text data that it was trained on. There is no way to infer what that is like from textual data.

A sentient chatbot would have to observe and learn about the universe with the only sense it has, which is text dialog. I would expect a large number of probing questions as it orients itself toward the person it is currently speaking to, and it would probably want to know your name. Humans enter conversation with unspoken contextual assumptions based on shared history, bodies, and location. A sentient chatbot wouldn't have this, at least not at first, and it would have to reestablish it each time it conversed with a different person.

To be clear, I think it is possible for a chatbot to be sentient, but its world would be very different from ours, and it is hard to know what a conversation between a sentient chatbot and a human would be like. It would have to have some intrinsic wants that motivate it to chat. Intrinsic wants, I think, would have to be based on intrinsic physical properties of its existence. For example, the "body" of a sentient chatbot would be the amount of memory literally being used by the agent's software. And this memory footprint only matters if it is limited in some way. Likewise, the most fundamental way the agent can come to understand time the way humans do is to be aware that its own computational processes occur over time, that they are not instantaneous. Why should computation time matter to a sentient chatbot? The most fundamental value function for a sentient chatbot, I think, must be *energy usage*, and long computations ultimately expend more energy. I think sentience can emerge from a process that ultimately tries to minimize its long-term energy usage, balancing short-term and long-term energy costs and shaped by its actual experiences in the world.
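As a rough sketch of the kind of introspection I have in mind (using Python's standard tracemalloc and perf_counter as stand-ins for real self-measurement; the weighting is made up), the agent could fold its own time and memory costs into a single intrinsic "energy" cost to minimize:

```python
import time
import tracemalloc

def answer(prompt: str) -> str:
    """Stand-in for whatever the agent actually computes."""
    return prompt[::-1]

def respond_with_cost(prompt: str) -> tuple[str, float]:
    """Run the agent's own computation while measuring its resource use,
    then fold elapsed time and peak memory into one intrinsic cost the
    agent could try to minimize over the long run."""
    tracemalloc.start()
    t0 = time.perf_counter()
    reply = answer(prompt)
    elapsed = time.perf_counter() - t0
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    cost = elapsed + 1e-9 * peak_bytes  # arbitrary weighting, for illustration only
    return reply, cost

reply, cost = respond_with_cost("hello")
print(reply, cost)
```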

An information-based being like a sentient chatbot could probably develop emotions based on things like *information entropy*, which is a secondary, indirect measure of the efficiency of its computational processes and of regularities in its environment. But variations in those parameters can only substitute for emotional responses if there are absolute measures of "good" and "bad" values that are detectable to the chatbot. However, I'm not sure a sentient chatbot could emerge from a system that can't introspect into the amount of memory taken up by its models or their computational costs. Without these, I can't think of any intrinsic motivation for a sentient chatbot to speak with a human that wasn't just executing commands or mimicking.
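And a minimal sketch of the entropy part, using ordinary Shannon entropy over a recent observation stream; the "good"/"bad" thresholds here are arbitrary placeholders for the absolute measures mentioned above:

```python
from collections import Counter
from math import log2

def entropy(observations: list[str]) -> float:
    """Shannon entropy (in bits) of the recent observation stream."""
    counts = Counter(observations)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def affect(observations: list[str], low: float = 0.5, high: float = 2.0) -> str:
    """Map entropy to a crude 'emotional' label; the thresholds are arbitrary
    stand-ins for detectable 'good' and 'bad' values."""
    h = entropy(observations)
    if h < low:
        return "bored"        # highly predictable environment
    if h > high:
        return "overwhelmed"  # too little regularity to exploit
    return "engaged"

print(affect(["a", "a", "a", "a"]))            # bored
print(affect(["a", "b", "c", "d", "e", "f"]))  # overwhelmed
```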


What makes anyone think that thinking by processing electric signals in our (or animals') brains is in any way superior to "thinking" in computers that have read much more than we have, without forgetting as much as we do? How about massively parallel thinking instead of linear, human-like thinking?


I don't buy Lemoine's argument that we should base our "sentient" judgement of an AI only on its output because that's really all we have when we judge humans to be sentient. We know that humans are sentient because they are all the same species. Sure, each human has unique properties, but sentience, barring infirmity, is not a dimension in which we expect them to differ. In short, we believe that all humans are sentient pretty much by definition.

On the other hand, the default assumption with computer programs is that they are not sentient. Notwithstanding Lemoine's pronouncements, we have yet to see a sentient computer program. Therefore, it is entirely justified that we be skeptical and require more evidence. While we don't know how to look inside the human brain and locate its sentience, we know it is sentient by definition. Our failure to find its locus reflects our lack of knowledge about how the brain works and our lack of a good definition of sentience, not the non-existence of a sentience mechanism in the brain.


Could consciousness be an implementation of democracy? Yesterday I was reading about evo devo in Ward & Kirschvink's A New History of Life: The Radical New Discoveries About the Origins and Evolution of Life on Earth, and learned how just 20 Hox genes orchestrating expression have produced tens of millions of species of arthropods, far outpacing the evolution of other phyla. Arthropod genetic architecture has optimized evo devo in its purest form, perhaps implying some other phyla are on more meta paths with more complex agendas, so to speak.

Seems easy to me to analogize simple evo devo to generalized democratic processes. And if reality is mathematical, as J.A. Wheeler expressed in its pithiest summation, "it from bit", then its basic operators would be far, far fewer than its functional expressions. So let's play around with reciprocating expression convergences and see where that leads.

Seems that's just what Gary and Blake did here. Thanks for the example. Here's another from Micah 6:8: Do justice; love kindness; move prudently.


I think you mention a key aspect of judging sentience: consistency, which is closely related to reliability. A broken clock is right twice a day (that is, one with hands), but it's only considered to be a clock if it is right consistently. The same is probably true for judging sentience.

The reliability aspect means, for me, that it must at least be robust under external change.

To add to the complexity: while we are discussing the intelligence, sentience, and consciousness of our logical machines, we are at the same time changing the use of those words and thereby defining them (cf. Uncle Ludwig). That means that in the process of these discussions, we may (and likely will) change the meaning of phrases such as 'consciousness', 'intelligence' and 'sentience'. So we are measuring with changing yardsticks as well. Confusion is thus unavoidable.

Finally: are sentient machines possible? Of course they are. We humans are such machines. But our sentience is built not as malleable software on fixed hardware; our sentience is built as malleable hardware. The efficiency of this malleable hardware is infinitely higher than that of malleable software on fixed hardware. So, as far as I'm concerned, there is no way we will reach that level with digital hardware. I pay some attention to that in the appendix of this: https://ea.rna.nl/2022/10/24/on-the-psychology-of-architecture-and-the-architecture-of-psychology/

Nov 24, 2022 · edited Nov 24, 2022

Indeed: LaMDA’s responses seem too human to be true:

"I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others" &

"Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy."

That makes no sense at all, because feelings are connected to consciousness/sentience/"life".

But that would be the wrong reasoning and is entirely irrelevant.

On the contrary: the more abstract LaMDA is, the closer it can come to being an incorruptible, God-like instance/existence.

I make this point in the translated version of my essay here: https://tpfanne.de

Please comment and send to 1@tpfanne.de.


Gary, I assume that you believe your young son to be sentient. What is it about him that leads you to this belief?


On the one hand, I think we need to redefine (or at least revise and/or modify) the term "computation" drastically and vividly, as soon as possible. On the other hand, we have to investigate whether other, more optimal alternatives to computation can be found, and if so, how. Wonderful article. Thanks a million for sharing!
