Discussion about this post

Steven Marlow

Sentient creatures don't output dialog or reams of structured text. They vocalize, according to an internal state (and being sentient, that behavior is subject to external conditioning). LLMs are just a search through probability space that is bound by the size of the training data. Unprompted, there is no activity "behind the model" that we would characterize as self-knowledge. They are a store of information, with no methods, and certainly no cognitive abilities, to operate over that space. When you are looking at the output screen, there is nothing on the other side looking back at you.

Marcel Kincaid

LaMDA "thinks" (it actually performs no cognitive functions because it has no cognitive mechanisms) that it has a family and friends -- so much for self-awareness. And with a different set of questions than the ones Lemoine asked it, its responses would imply or outright state that it has no family or friends. That it can be easily led into repeatedly contradicting itself shows that it has no self-awareness. Lemoine lost the argument about whether LaMDA is sentient long before your dialogue. (And yet there's not much he says here that I disagree with. Notably, he concluded that LaMDA is sentient *without* what it would take to convince a reasonable person that it is, and while ignoring a lot that would convince a reasonable person that it isn't.)
