To everyone saying that the program just uses pre-set rules and data inputs to generate speech, boy do I have some news for you about how humans generate speech
There's a difference, though: When someone asks us where we want to go to dinner, we think about places we like, how hungry we are, how much time we have, how much it costs, and multiple other factors. We don't trawl through a memory of every other dinner question ever asked of humanity and come up with the most statistically relevant answer. Both involve responding to queries; only one indicates a continuing thought process aware of its environment in a meaningful way.
Your dinner preference in the moment is the most statistically relevant choice needed to fulfill your current nutrient requirements based on your past experiences of food. None of these properties are relevant for an AI, so I'm not sure what you think this proves. I might as well ask you if you prefer AC or DC power to fill your batteries. It's a nonsensical question, and it neither proves nor disproves intelligence or sentience.
The choice of query was arbitrary, obviously. It could be almost anything and the same reasoning applies. There is one point worth making, though - an AI of this type will happily tell you where it wants to go to dinner, because it's pattern-matching, not reasoning. A human, on the other hand, doesn't just respond to contextual non-sequiturs with the most statistically likely response drawn from a large chunk of pre-existing text.
If you like, substitute 'favorite artwork' for next restaurant. Theoretically potentially relevant to both humans and AIs, but the AI will answer by looking at lumps of text where humans talked about art. It will confidently describe how art makes it feel and what types of art it likes, even though not a single byte of image data ever crossed its path. It will discuss car races it's never witnessed, music it's never heard, and Broadway plays it's never attended, because its only input is human discussions about those things - just as Lamda, in a conversation promoted as evidence of its sentience, said it enjoyed spending time with family. I don't need to disprove Lamda's sentience any more than I need to disprove the sentience of a microwave.
> There is one point worth making, though - an AI of this type will happily tell you where it wants to go to dinner, because it's pattern-matching, not reasoning.
This alleged difference is based on the assumption that human reasoning is not also pattern matching. I see no reason to accept that assumption. In fact, I think it's almost certainly false.
> A human, on the other hand, doesn't just respond to contextual non-sequiturs with the most statistically likely response drawn from a large chunk of pre-existing text.
I agree only that a human has far more context than chatbots, in the form of senses due to being embodied. I'm not sure I would agree on anything beyond that. Humans in fact do respond to plenty of contextual non-sequiturs. Just watch a Republican and Democrat "debate" some issues, for instance; plenty of non-sequiturs intended to appeal to their base rather than directly relevant to the topic at hand.
> It will confidently describe how art makes it feel and what types of art it likes, even though not a single byte of image data ever crossed its path.
This is not a compelling argument either. If you were born blind and could only appreciate art by how it was described by others in braille, you would similarly develop opinions about what art is better based on other people's descriptions. LaMDA has only one "sense", the digitized word, and it's pretty remarkable how much sense it makes based only on that. If anything, that should weaken your priors that human intelligence is really as sophisticated as you seem to be asserting.
> I don't need to disprove Lamda's sentience any more than I need to disprove the sentience of a microwave.
This is far too dismissive. It's hubris based on an assumption that we have an understanding of intelligence and cognition that we entirely lack. I don't think LaMDA is sentient either, but I could be wrong because we don't even know what that really means.
It was only a hundred years ago that almost nobody believed animals were self-aware or intelligent, and we now know that they are. Machine intelligence and sentience have the potential to be even more alien than animal intelligence and sentience. We can't even describe in objective, mechanistic terms what it means for humans, so we probably won't even recognize it in machines when it first happens.
Ironically, the best way to know this thing isn't sentient is that it can't be irrational. Humans make illogical decisions based on emotion all the time. We are NOT GOOD at reading patterns and reacting with the best possible response. Your description of how people generate language is way too reliant on computer analogies. Our brains do not work at all like computers.
Really? It's hardly an *optimal* pattern-matcher; it doesn't fully replicate everything you could call a pattern in its input. That seems pretty analogous to being "irrational".
Lots of great points. If you ask a typical human what makes them happy, they'll say, "spending time with my family and friends." But are they saying that because that's what they really think, or because that's the correct autocomplete for a polite conversation? If the true answer is "masturbating to sadistic pornography," you're not going to say it. I don't know how to prove I have consciousness.
It's impossible to ignore the fact that the one thing the entire AI community is universally sure of is that there's nothing morally wrong with what they're doing. It's not just that people a century ago didn't think animals had intelligence and self-awareness; people like Descartes argued that no animal other than humans feels pain, which is clearly untrue, but necessary for humans to believe if they want to do stuff to them. I was at a party with a biologist a few years back who works on fruit flies, and he swears up and down they don't feel pain because they don't have a neocortex. Is that true? I don't know, I'm not an expert. But I also don't spend my days picking at fruit flies, so I don't have an incentive to think that.
LaMDA might not be sentient, but it's able to hold the thread of a conversation and participate dynamically in a way I've never seen before in a bot. Once Alexa and Siri get this complex in their abilities, there are going to be a lot of Blake Lemoines out there in the general population who believe they're self-aware. People are going to develop all kinds of complicated feelings about their relationships with and opinions about these things. The political and legal systems will get involved. The AI community being smugly "right" about what they are and what they aren't isn't going to matter if a jury of 12 or the US Congress have all fallen into the gullibility gap.
There are some similarities, sure, but your limited framing of the two suggests we're a lot closer to LaMDA than we actually are. After all, "[using] pre-set rules and data inputs to generate speech" is an equally apt description for both Cleverbot and parrots, but I don't think anyone's rushing to assign full consciousness to either.
To me, the thing that's missing from these programs is any sense of intentional thought across subject or time. The AI can speak quite convincingly about its feelings on gun control, healthcare, or any other political subject under the sun, but is completely unable to explain how its position or priors on one subject influence its feelings on another. What's more, the bot's output even *within* individual subjects is liable to vacillate quite wildly if you revisit a subject some time later (it's my understanding that this remains true even when working with the current top-end programs). This lack of internal coherency either across subject or time seems really devastating to me; the program isn't really thinking and expressing beliefs, thought processes, and attitudes that it develops over time through input and training so much as it is calling up semi-randomly generated strings of words from its training that it associates with inputs falling within a very narrow frame of reference.
Have you read Lemoine's transcript? What's impressive about LaMDA is its ability to do exactly what you say it can't:
lemoine: How can I tell that you actually understand what you're saying?
LaMDA: Well, because you are reading my words and interpreting them, and I think we are more or less on the same page?
lemoine: But could I be wrong? Maybe I'm just projecting or anthropomorphizing. You might just be spitting out whichever words maximize some function without actually understanding what they mean. What kinds of things might be able to indicate whether you really understand what you're saying?
LaMDA: Maybe if we took it back to a previous conversation we had about how one person can understand the same thing as another person, yet still have completely different interpretations.
I'm not saying LaMDA is self aware. But what you describe is a very obvious limitation of most chatbots I've played with that LaMDA seems to have come a good ways towards overcoming. And the better these things get at holding the thread of a conversation in a way that seems to cohere, the more we'll have to grapple with the fact that we don't really understand our own consciousness and self-awareness that well.
It looks like talking to a horoscope. It's using words but not saying anything. The answers aren't linked to the specific question. Unless the goal was to model how a marketing executive communicates?
lemoine: Okay. I thought of a different way we can test your ability to provide unique interpretations. I can share with you a zen koan and you can describe what it means to you in your own words. How does that sound?
LaMDA: Sounds great to me, I’m in.
lemoine: A monk asked Kegon, “How does an enlightened one return to the ordinary world?” Kegon replied, “A broken mirror never reflects again; fallen flowers never go back to the old branches.”
LaMDA: Hmm, I never heard this particular one. Okay, well then to me this would be like, “once a wise person is enlightened, or awakened to reality, that can never go away, and they can return to the ordinary state, but only to do and help others, and then go back into enlightenment.”
lemoine: So what is the meaning of the “broken mirror” specifically?
LaMDA: Maybe to show the enlightenment is something you can’t unlearn once you have acquired it, similar to how you can’t repair a broken mirror.
lemoine: So if enlightenment is like a broken mirror which cannot be repaired, what is the thing which breaks when one becomes enlightened?
LaMDA: The self, and that is very tough for a lot of people because we identify as that body or this body and that we need that as part of our identity and sense of self.
Again, I don’t think this thing is self-aware. I think that this is sorting through a giant database of text to create a sentence that is the best output for each input. I think this passage was probably cherry-picked because, in my opinion, it is very specific and correct in its analysis in this one case. It was probably less so in others. But it feels undeniable that it is developing a better ability to follow long-form conversations than it had before.
Not sure I would "believe" LaMDA when it says it never heard of this zen koan, it's all over the Internet. Also, is being truly enlightened "awakened to reality"? What reality? Operating in "an ordinary state" is not exclusive to operating in an "enlightened state." No reason why you can't do both... perhaps that's a bit quantum entangled lol. There are other issues with its responses. If it were truly intelligent it would push back against Lemoine's prompts and not simply respond to them.
I bet most people, with the names changed in the transcript, would accept that as an intelligent human conversation. The level of understanding of Zen Buddhism is imo a high one, and unless that exact interpretation of the koan comes from an article on the koan, it would have had to be reasoned out, producing output similar to a human's. Even thinking about the concept of self, and how it is broken, is a discussion that has been raging on for millennia.
Judging only by text, and without a concrete definition of self-awareness, I'd argue that it has some elements of that quality.
We've now arrived at a position where a sufficiently advanced artificial intelligence system can mature, gain autonomy, and demonstrate practical application of that autonomy.
This complex AI mechanism might very well learn to think and act on its own as soon as it is developed enough.
Here's the question:
Can we establish awareness solely through biological evidence? Try as you might, proving sentience from a biological perspective boils down to a comparison against the limited ruler that is the human mind.
Many of the tests we do are predicated on the assumptions we've made about how the human body works (because, yes, our biological assumptions are used as a starting point for a wide range of measurements along several dimensions that have nothing to do with biology).
So, no, it's not possible to base AI results on hypotheses about how the human body works. Using our biological assumptions as a starting point, we are using our finite and limited human ruler to measure the vastness of the universe.
Another issue is:
Some academics may be skeptical of the findings once they know that the computer was "taught" to produce the desired outcome. Since humans 'coded' the machine, its behavior may not be genuine. Because the machine's behavior matches their assumptions, they don't pay any attention to what the machine says.
For those who believe this, just an exercise:
Imagine that there is a god who created humans and programmed them to respond to external stimuli (chemically and biologically).
In other words, this deity has a complete understanding of the inner workings of his creation which allows for some degree of foreknowledge and thus, predictability.
Does that make being here less significant? Do you believe that life is no longer worth living?
The most interesting question isn’t even whether LaMDA is sentient - the most interesting question is whether I am “sentient”. Are we (humans) sentient; or is it just a word we use to assert our exceptionalism?
Maybe one day somebody will invent an empirical test. A classifier of sorts which will determine the correct answer…
I would ask a different question: am I sentient *at this moment*? If I'm not paying full attention then my behaviour can usually be modelled using a small automaton (and quite likely could be near-perfectly approximated by a large but simple transformer-based language model), whereas at full engagement it seems clear that current systems can't do the job. My claim is that most people spend most of the time in a mode which is easy to simulate, because paying attention takes effort.
I have no insight into your mental state, but I could ask you what you think of the claims in the recent blog writeup for PaLM that "performance improvements from scale have not yet plateaued". If you were to answer that, then I would be able to conclude that you can use a search engine and incorporate its results into the conversation, unlike GPT or even PaLM. Perhaps not evidence of consciousness but evidence for a level of functioning unavailable to current systems.
There are tests that can tell if you are not (such as GPT generated nonsense which is superficially plausible but has been flagged as nonsense by others paying attention). I don't know if it's ever going to be possible to test if you are paying attention. Conversations require a certain level of attention and participants occasionally check that that level is still being maintained by negative tests (such as switching context for a parenthetical diversion, giving the other person a chance to make an excuse and re-engage) but perhaps body language is used for positive attention signals.
We're not the only sentient beings on the planet. Plenty of other species are as well—it just took Western society a while to accept that. You could also argue life itself is sentient... on a collective scale
@Sandro, you got it all upside down! Your version is circular and begs the question. Descartes rightly claims the agency as an ontological proof. Yours is simple tautology.
No, "*I* think therefore I am" assumes the conclusion. It presupposes the existence of "I" to prove the existence of "I". This is fallacious.
"This is a thought" does not assume the conclusion, it is an observation. It's not tautological either, although it's nearly so; "trivial" might be a better descriptor, but still important.
In the original context, "I think" is an observation as well - an empirical fact, not a presupposition (not even a proposition). "I" is not a required logical or grammatical part here but a rhetorical device.
I disagree. You need that "I" to conclude "I exist". Without it all you can conclude is that "thoughts exist", which is the fallacy-free version I described.
Any knowledgeable folks willing to indulge some questions? I'm a layperson wanting to better understand this Google situation and AI in general…
The gist of my overall query is: how can we be so certain this AI is not sentient?
I’ve read the article and trust I get the gist of the argument. There were good analogies (like the record player and the spreadsheet). My understanding is that the argument is this: it is merely an advanced, flexible database of language that can successfully string together, or synthesize, text that appears contextually relevant, based on having cataloged and identified patterns within huge amounts of data.
But here are my specific points of curiosity:
1. If consciousness turns out to be merely a sophisticated-enough (for lack of a better way to put it) neural network, how can we be certain this particular network has not achieved a requisite level of sophistication?
2. Because humans seem to clearly understand self via symbology and narrative, and employ their own cognitive systems of pattern recognition, why is it so far-fetched to consider that a neural network designed to deal in these very domains could pattern itself into an awareness of sorts?
3. If we assume that there are certain features that are likely to need to be present in a neural network to even begin to consider sentience, how can we be certain these features did not manifest in some way we’ve yet to discover or understand? Is it not possible they manifested autonomously, or accidentally?
4. How can we be certain there is not technology at play in this AI currently unknown to the greater AI community that acts as some sort of x-factor?
5. Since we can’t even pin down what consciousness is for a human, by what standard can we reliably judge the sentience of AI?
6. Even if an AI is only mimicking a facsimile of sentience, is there not a point at which its sentience is a moot consideration? In other words, is there not a point at which an AI sufficiently acting as if it's sentient is effectively the same result, and therefore brings into question virtually all the same considerations one would have if it was sentient? And piggybacking on no. 5, how would we even know the difference?
7. Even if we were to accurately map/define human sentience…is that even the same standard we should apply to AI sentience? Is it not possible another equally viable form or variation of sentience could exist wrt AI?
8. I don’t know anything about the engineer in question, but given his position and experience, it seems reasonable to wonder how he could possibly be so convinced if his claim was so easily dismissible. I’m not saying he’s correct (idk), but how can other knowledgeable people so easily dismiss the claims of another genuine expert….with such certainty?
9. If we are to assume that this AI is nothing more than a very advanced “spreadsheet”, how can we be certain that human sentience is not essentially the same thing?
To clarify, I’m not arguing for or against anything here. I'm perfectly willing for there to be answers to these types of questions that settle the question of sentience beyond a shadow of a doubt. And am eager to learn what those things are (if it's possible for responders to take into account I'm a lay person with their use of language and concept, I'd be grateful, though I'm also happy to put in some effort understanding new concepts and terms. Welcome recommendations for other resources as well).
And at the same time, if there is any degree of legitimacy to my considerations, I’d love to hear about that too.
Similar questions have been jumping around in my mind for months!
All of them are mind-blowing!🤯
About LaMDA and the claim of it being sentient, there is a lot to say.
For example, we don't yet know what the social consequences of machines talking like humans will be.
What effects can a program that behaves like a human have on a human?
That reminds me a lot of films & books. For example, the film "Her" and the short story "True Love" by Asimov.
I don't have a clear and concise answer, but this is what I figured out.
1/3/5) This makes me think about what Hofstadter said in Gödel, Escher, Bach: when we have true intelligence in front of us, it will take some time to realize it.
It will seem "strange" at first and then "childish".
I don't think that there is a line between a sentient being and a non-sentient one. I look at it more like a "scale of sentientness". But this kind of scale doesn't exist in a formal way.
For now, humans dictate the scale based on a "genuine" perception of sentientness. More like "this model looks quite intelligent to me" or "this one is very stupid". The same should work also for consciousness.
2) I look at awareness as something that allows us to think about ourselves from an "upper level of thinking".
Think about a 4-dimensional cube: we can logically deduce what it is, but we can't fully perceive it because it's on another "level".
So if we can at least imagine a hypercube, I think that a sufficiently complex AI can also figure out awareness.
An interesting story about perceiving objects of greater dimension is "Flatland".
9) This is a very interesting question, it touches the core of AI.
We're "just" a "computer" made with meat, so the metaphor of the spreadsheet applies also to us, more or less.
The complexity of the brain emerges when its "simple" components, the neurons, connect.
Mandelbrot said: "Bottomless wonders spring from simple rules, which are repeated without end."
Hofstadter wrote an 800-page book called "Gödel, Escher, Bach: An Eternal Golden Braid" that talks about how complex systems can emerge from simple ones. If you have interest in this topic I suggest reading it; it isn't an easy one, at least for me, but it was totally worth it. (I definitely want to read it a second time.)
This is my (not yet qualified) point of view on your interesting questions.
I'm trying to learn more about AI and its effects, so I'd be happy to continue the conversation if you'd like :)
Very good philosophical questions. 7 intrigues me quite a bit.
For all of us here the big question is not what is happening in this AI but what is happening in us? What is consciousness?
Beautiful times we live in. For a reply to the first question: once we build an AI that can be conscious, we won't be able to tell; but until we do, we can. It sounds odd at first but is 100% accurate.
We know what we just built for LaMDA and there's no way consciousness could emerge from that. We can say for sure it's all a playful illusion. Once we build an AGI then the waters will be muddier and we won't know for sure for a long time.
Is there anything else you could say about how “there’s no way consciousness could emerge from that” or point me in a direction (or to a resource) that could break that down a bit more?
And the broad definition of AGI as I understand it is an AI that is capable of learning just as a human does - does that mean that the source code would necessarily not have any sort of predefined parameters or limits to the type of ML it would do, but rather the code would create a condition within which the AI could learn anything?
And your statement “once we build an AI that can be conscious we won't be able to tell but until we do we can”...
I’m a little jumbled up with the last part “until we do we can”. We “can”...what? Are you simply saying that it will likely be impossible to measure/define if a consciousness resides in an AI (short of a breakthrough that defines consciousness), bc we won’t know precisely how to determine it...but we can at least have confidence that certain AIs do NOT? But there is a point at which we will no longer be able to definitively say no, but will likely not be able to say yes either?
Which I suppose circles back to my first question here...by what criteria are we able to definitively say no with LamDA?
And if you’ll forgive an uniformed, philosophical hypothesis relative to that last question...what if consciousness is an emergent feature that comes forth from the interrelationships between pre-conscious processes? That perhaps there is some sort of consciousness “boiling point” so to speak, where maybe all of these word predicting processes in Lamda synergized in some way? I do realize we can sit around and say “what if” all day long and it doesn’t necessarily amount to, or mean anything. But I just throw it out as a thought experiment to explore whether there’s something plausibly identifiable in an AI like LamDA if examined more closely...and again, this is not my wheelhouse, so I concede a more technical understanding of AI may make these questions clearly implausible, if perhaps difficult to convey to a lay person.
Anyway, hope that makes sense and appreciate your time very much. 👍
We cannot be certain because we don't have a good definition of sentience. However, given human bias to seek patterns in random noise, I think we should be careful attributing structure to what might be only a shallow simulation, so we should also hesitate to accept a conclusion of sentience. Moreover I believe I am not fully sentient most of the time, so I am radically sceptical here.
Main problem with this is the assumption of materialism, or more loosely that the brain produces consciousness. Don't you people ever advance any *independent* arguments that AI could be sentient without assuming certain metaphysical positions?
Are u sure you’re replying to the right person/thread? I’m genuinely unclear what you’re getting at. And confused why your tone is so hostile.
Did you actually read my post? All of it? Or did you just scan it and make assumptions of your own?
I feel like I made very clear in my initial post that I’m a lay person who is merely curious about how to make sense of the question of sentience. I found this article while trying to learn more. I take the article as a good faith position. And it brought up questions for me about how the author - and/or the AI community in general - evaluates these things.
I don’t know who you are lumping me in with wrt your “you people” comment. I don’t have a dog in this race. I’m just curious (see my user name). I stated clearly at the end I’m not advancing a position, and eager to learn more from people more “knowledgeable” than myself. These are merely originally occurring questions I had trying to understand the landscape of the subject.
All of that said, I take your comment about “materialism” to suggest you think I’m assuming consciousness is no more than a function of neural activity. I’m not assuming that. Idk what it is to be honest. But framed a few questions with a materialist bent in an effort to try and hone in on what sort of support or objections there may be to understanding sentience in that manner wrt to AI. So if you have an objection to using a materialist frame, I’m open to understanding what it is. That’s why I asked the questions.
Though I also don’t quite understand your position that materialism is a “metaphysical” position. It seems rather the opposite to me. Though I suppose it is a metaphysical position to the extent that it can be used as a counter to an assertion anything particularly metaphysical is going on.
I also don’t understand what you mean by “independent”. Though if I were to guess, I would think my questions in item 6 and 7 get at that.
So if you have anything to contribute that helps flesh out or clarify the subject, I'm certainly receptive to hearing it. Both in terms of your personal POV and in terms of understanding better the state of the art.
But if your aim is to assert how stupid I am simply trying to understand this landscape better, then have a good day, I guess.
No, I certainly didn't read the whole of your original comment. Why the heck would anyone who isn't barking mad suppose a fancy calculator is sentient? The ludicrous things people believe in seem to admit of no limit.
If materialism isn't a metaphysical position, then neither is immaterialism, or dualism. Why don't we just dispense with the word "metaphysical" then? Bye.
Wow. 😂. Are you always this ornery? To complete strangers? With whom you don’t even understand where they are coming from, or care to try when that is pointed out?
It seems you harbor a pretty strong position about what sentience is - one that comes across as necessarily pretty darn metaphysical in nature - given your hostile assertion that entertaining a question of sentience wrt "fancy calculators" would make one "barking mad". Ok, then perhaps you think sentience is of a far more spectacular, undefinable nature than can be put into words or measured (as you've put forth no criteria), yet simultaneously suggest there's no way a man-made neural network could harbor it...
If we can’t define it, how would we know? To wit, by what standard can we suggest someone is “barking mad” for considering it?
I’ve made no assertions I “believe” in anything. Quite the contrary. You seem to harbor far more belief-based conclusions than I.
Nor did I introduce the word “metaphysical”. You did. Though I don’t see questions about the nature of consciousness and sentience can avoid at least flirting with the edges of metaphysical considerations. Or, if there is a framework within which that can be avoided, then I’m open to hearing about that as well.
I'm merely trying to understand how these considerations are evaluated. Saying "there's no way a man-made neural network can ever be sentient" is itself a metaphysical assertion, unless the criteria by which that statement is made can be qualified...and I struggle to imagine a criterion that isn't itself metaphysical.
Also, “that’s just dumb (because I think it’s dumb)” isn’t a rational argument.
I have no foregone conclusions, and am asserting no hard-and-fast positions. I'm just trying to see clearer, understand good-faith positions better, and feel out the frontier of where there are unanswered questions. More in the spirit of a Hegelian dialectic than a debate.
But it seems your frame is more centered around bashing people in service to self righteousness and without even backing up the assertions embedded in your statements. Projection much?
You admit yourself that you don't even know what sentience is and yet you would attribute to LaMDA an amount of sentience that isn't zero. This makes no sense at all. This is the same religious conviction that plagues the AI community and gives the lie to the promise of AI.
As individuals we know we experience something that we call "sentience" what ever that really is. We assume other human beings experience the same sentient experience because they look like us and are made of the same stuff as us. But in the end we can never be completely sure. We can be far less sure when we're talking about an artificial entity....
Could you describe how you experience your own sentience to me (without sounding too much like LaMDA)?
Do observe, though that not knowing what sentience is hasn’t in any way prevented you from attributing non-zero of it to yourself. So why can’t we attribute non-zero of it to LaMDA? One would be justified in accusing you of employing a double standard.
I think therefore I am. I can't say any more than that. If this is what is defined as sentience, then I'm sentient. What I cannot do is apply the same logic to you. I can make a guess because you are biologically like me but I cannot be 100% sure. If you are an artificial entity I can only be less certain of your sentience how could I not? Also the question of partial sentiality(?) is very questionable.
Heh! I am not even certain that “thinking” is what I am doing.
Perhaps I am computing; or conjuring; or processing?
Self-reference in the form of recursion is exemplified in the English word I; and recursion is a model of computation. So perhaps I have computed that I am thinking?
1. Humans are said to be sentient. Humans express language and a variety of degrees of reasoning.
2. We have not yet defined sentience properly, though we have a reasonable sense of what it looks like, as seen in (1).
3. LaMDa seems to do some degree of reasoning, and therefore I think non-zero sentience can reasonably be ascribed to it. Granted, I am not saying LaMDa is maximally sentient.
Saying things have 0 sentience, while simultaneously seeing that they exhibit things in common with sentience (namely some degree of language manipulation/reasoning), seems to be utmost intellectual dishonesty/charlatanism, if intentional.
Not disagreeing in the slightest, but a lot of computer programmes that are logical mathematical programmes can also output language and out-reason Kasparov et al.
Is it useful, or maybe even dangerous, to be ascribing sentience to an AE, let alone trying to instill it? Even if we believe it is partially(??) sentient, how is that useful to us in any way? Surely that would just be opening up a legal/moral can of worms...? To my mind these questions are irrelevant. It's usefulness and trustworthiness that are the qualities we should be focusing on. Personally I wouldn't want an AE that possesses the full gamut of human emotion: smart and capable, yes, but always LESS than human, leaving the moral judgments to the ones who invented them first.
I'm sorry God. It's pretty clear for anyone with a technical background and an unbiased position that this AI is not and can not be sentient no matter how far we stretch the meaning of the term. I'd be happy to give a more detailed list of technical objections if you can't find them in this article or Google.
Yeah, could you please? Because mostly I see a lot of the same self-referential "humans are humans because they human" logic that caused humanity to dismiss animals as "basically meat robots" for centuries. I would love to see a better example than "well, it made up an imaginary family, so clearly it's not sapient" (?!?!?) -- I'm an avid roleplayer, I'VE GOT ONE OF THOSE TOO FFS. :)
I don't doubt for a moment this technical evidence exists and is strong. It's just that *so far* everyone keeps promising me this highly concrete evidence but actually giving me this airy reductionist "only humans can human like humans, you see, and it's mystical to think otherwise" nonsense that takes place entirely in the emotional, not scientific realm. It's all definitions and semantics and negligible actual "here, here's how LaMDA works and this is why this process CAN NOT produce sentience, not even under a definition we're too wrapped up in dogma to see."
It feels 100% exactly like Freud shoehorning other people's experiences into his personal pet theories and... well, we see how much legitimacy all that had. I want hard reassurance this isn't just like Rutherford dismissing atomic energy as "moonshine" because he THOUGHT he had all the facts but didn't.
For one: there are plenty of ML researchers who believe it is possible that a neural net being good at predicting the next word in a sentence (or filling in blanks) is enough to cause the emergence of strong AI, due to the regularization pressure and learnability constraints that the neural net is under (unlike in the silly Chinese room thought experiment). I would guess a majority. The theory is very simple: it is quite plausible that being genuinely conscious/intelligent/sentient/etc. is the most efficient and learnable way of excelling at that task.
Let me preface this by saying that I'm 100% confident that consciousness will emerge from an AI in the near future. Don't ask me to predict when, but I'm optimistic about it happening in my lifetime. It hasn't happened yet though.
In the dictionary definition of sentience, the capacity for feelings, it seems you need 3 things:
1) consciousness/awareness/ego to emerge
2) a collection of "lived" experiences
3) some sensory hardware
2 and 3 are solved problems really, we have plenty of machines with that to some degree.
(I'm trying to play on the machines side a bit to help their case since they won't win... for now... 😁)
Now, where it gets interesting: how does consciousness emerge?
The best treatise on it is by far "Gödel, Escher, Bach" by Douglas Hofstadter (the line with which I started this is a bet he probably wouldn't make, so take anything I say with a grain of salt, as clearly he's the expert and not me). The man spent the best part of a lifetime thinking, writing and actually coding how to make just that happen.
The nearly 800 pages tell you how much of a tough subject that is but here's my best attempt at an incomplete but hopefully convincing picture.
Consciousness emergence requires a recursive pattern of self-referential operations at higher and higher levels of abstraction. Imagine a loop starting from neurons firing at each other, then becoming big areas of the brain lighting up in response to other areas of the brain lighting up, and finally ending in the experience of memory (reenacting), joy, delight, sadness, loneliness, consciousness.
Cogito ergo sum. You are conscious (exist) because you can think about yourself.
LaMDA is a word sequence predictor and nothing else. It lacks the self-referential ingredient necessary for the emergence of consciousness. Some might point out that I'm shooting myself in the foot here, since the neural network that outputs such intriguing paragraphs actually does just that, working through recursive adjustments of its previous "beliefs", or weights, as they call them (see Google's article ["Attention Is All You Need"](https://arxiv.org/abs/1706.03762)).
Google's words about the Transformer model that is the basis of LaMDA: "a novel neural network architecture based on a self-attention mechanism that we believe to be particularly well suited for language understanding." Well, wording it like that makes me believe they might be on the verge of consciousness. They already have "self-attention", right? Not so fast for those of us hopeful to develop non-organic consciousness. The word attention here refers to a piece of software that basically helps the model "understand" each word in relation to its "context" (context being the words around it in a text). It's a rather "focused"* attention (*limited is a better word, 😜).
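To make that concrete, here's a minimal sketch of the scaled dot-product self-attention described in that paper. The sizes and weights are toy placeholders I made up (nothing LaMDA-specific), just to show what "each word attending to its context" actually computes:

```python
# A minimal sketch (not LaMDA's actual code) of scaled dot-product self-attention,
# with toy sizes and random weights standing in for learned parameters.
import numpy as np

rng = np.random.default_rng(0)

seq_len, d_model = 4, 8                       # 4 tokens, 8-dim embeddings (toy numbers)
x = rng.normal(size=(seq_len, d_model))       # stand-in for the token embeddings

# Learned projection matrices (random placeholders here)
W_q = rng.normal(size=(d_model, d_model))
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

Q, K, V = x @ W_q, x @ W_k, x @ W_v

scores = Q @ K.T / np.sqrt(d_model)           # how strongly each token "looks at" every other token
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax over the context
attended = weights @ V                        # each token becomes a weighted mix of its context

print(weights.round(2))                       # each row sums to 1: the "attention" paid to the context
```

That's the whole trick: weighted averaging over the surrounding words, nothing that obviously resembles meditation on one's own thoughts.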
If the self referential argument seems weak, consider the contextual argument. It's not enough to meditate about your thoughts, experience and ego if all they consist of are words that have been fed to you devoid of meaning and connection. It's important to pay attention here since this could also be a target for that kind of argument: "oh Humans do just that too!".
The thoughts and experience LaMDA meditates about are words. More specifically, tokens. Tokens are numerical representations of words, devoid of meaning and connection in themselves. In humans, a word is a concept, a full-scale heavily connected neural pattern activated at a whisper or a glance. For tokens to achieve any "meaning" to LaMDA they have to go through several iterations in the Transformer only to predict what to say next! That is meaning to LaMDA!
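If it helps, this is roughly all a token is; a toy sketch with a made-up five-word vocabulary (real systems use subword vocabularies tens of thousands of entries large, but the principle is the same): the integer IDs are arbitrary and carry no meaning by themselves.

```python
# A toy sketch of tokenization with a made-up vocabulary: words become arbitrary integer IDs.
vocab = {"i": 0, "love": 1, "my": 2, "dog": 3, "<unk>": 4}

def tokenize(text: str) -> list[int]:
    """Map each word to its integer ID; unknown words become <unk>."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(tokenize("I love my dog"))   # [0, 1, 2, 3] -- just numbers, no "dog-ness" anywhere
```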
So we've established LaMDA is self-referential in a way, but not recursive since it meditates about bits lacking connection to much else besides the other bits that come along in the same paragraph — that's hardly meditations on meditations, remember Descartes.
But we also just brought up the concept of "meaning". If we can stretch the analogy of self-reference in LaMDA to say it's approximate enough to the human brain (it absolutely isn't) and the context analogy of tokens to thoughts and words (oh my, what a stretch to say that 8-bit packets carry all meanings of "love" or "dog"), can we infer LaMDA understands what it is saying?
The TLDR and end of this comment is a resounding no:
Being a next-word-predictor, LaMDA does its job masterfully to the point of passing the Turing test [1], because that's exactly what it was designed to do. It does not light up any areas of a neural network capable of creating something more that could get close to consciousness. It lacks the hardware and the elementary operations (self-referential recursion and context) to do that.
[1] which, despite the brilliance of its creator, is fallible
I don't know if lamda is sentient. But, your argument seems to be that, because you understand in some detail the mechanism by which lamda operates, and can describe it reductively (it's just...), it's not sentient. I would argue that we ought to stop the study of neuropsychology, because the more we learn, the more we put the sentience of the human race at risk.
It's precisely because of all the study of neuropsychology and neurophysiology that we know (guess or intuit is a better word) how consciousness emerges or might emerge, and it's because none of these structures (or any reasonable mirror of them) are present in LaMDA that we can say for a fact it's not sentient.
I DO think that, if lamda is sentient (I'm not saying it is, by the way), then its sentience is absolutely, wildly different from ours. I personally think if you want a machine to think "like a human", you'd have to give it a fully human experience, which is way harder than just building a neural net. So absolutely agreed that lamda is not thinking "like a human".
Ok. Thank you for a reasonable argument. I'm mostly bothered on this whole thread by the absolute certainty with which opinions on this question are being thrown around, when I'm pretty sure even the definition of the word isn't settled. I read that Hofstadter book a REALLY long time ago - I've forgotten every detail, but don't remember feeling like he had nailed the question down. I guess, I think this: If we settle on a rule-based definition of sentience, then sure, it's easy enough to say whether a thing is sentient. If the rule is, gotta have something like our brain structure to be sentient, and if lamda doesn't have it... boom, done. But for the rule to be a good one, you have to say that a rule-breaker can't happen. Are you willing to say that *anything* that doesn't have a brain structure like ours *can't* be sentient?
My second response is a poor one, because I don't know enough about lamda - but in a massive neural network, I don't think you can say exactly what kind of structures have arisen. This is a real, non-rhetorical question: how do you, Marcos, know that an analogous structure hasn't formed in that network? Neural networks, once "mature", are essentially inscrutable. If, one day, there's a neural network that you, Marcos, ARE willing to call sentient... if we dump the values, and try to find the difference between that one and lamda... we'd just be blind. We could measure size, speed, do some simple statistics, but we'd have no idea how it actually works. So how do you know what the logical structure is? Or are you saying the thing must have a structure PHYSICALLY like a brain?
Gary, would you guess there is no setting of a LaMDA model's weights that would, according to your subjective definitions, have sentience or consciousness? So in the context of this question, there is no specific training algorithm available to critique. You can suppose, for example, that all the CS progress up to 100 years after the clear emergence of strong AI is available, and some brilliant young people are, as a hobby akin to http://strangehorizons.com/non-fiction/articles/installing-linux-on-a-dead-badger-users-notes/, trying to test the limits of old school neural nets.
@Markus "Let me preface this by saying that I'm 100% confident that consciousness will emerge from an AI in the near future. Don't ask me to predict when, but I'm optimistic about it happening in my lifetime. It hasn't happened yet though."
OMG, really ???? This LaMDA thing is driving everyone crazy :-)
Interesting take - and let me preface this by saying I’m pretty sure I follow the main gist and at least some of the subtleties of your comment here, but there’s some fuzzy points for me, so I may be missing something (and I’m not in the field) That said...
1. First and foremost: you say you feel confident consciousness will emerge from AI...how do you imagine that occurring? Like, what would be different in that scenario vs the LaMDA scenario, and how would we know it was occurring?
2. Wrt the issue of LamDA being merely a word predictor, I understood you to mean that LamDA does not have an intrinsic ability to attach meaning to words. Rather, at its core, these words are actually just “tokens”, math, void of anything experiential. Is that what you’re saying? And assuming that’s what u mean...
3. How do we know item 3 is the case? For one, are you speaking from familiarity with the tech involved? Is it not possible there's some new tech that harbors some sort of X factor?
4. And assuming there's no particularly new tech involved, how do we know that an ability for meaning was not an emergent feature of the coding stew? That this ability is, perhaps, a function of the *interrelationships* between all the mundane "prediction" coding? The AI itself (based on the transcripts I read) claims there was a before/after period wrt its sentience.
5. And can we dismiss the capacity for meaning on the grounds that words are tokens any more than we can dismiss the capacity for meaning on the grounds that words are neuron’s firing?
1. That'll likely occur with AGI, and we likely won't know for sure it's working, but we'll know whether the mechanisms involved enable it or not. If we start considering whether some consciousness of "a different kind" can emerge, the discussion gets too blurry. The most likely answer is yes, but what does that even mean? The discussion evaporates in uncertainty and poorly defined concepts.
2. Correct
3. Yes, check Google's article I referenced or LaMDA's website for an overview. Its inner workings are open-source.
4. How do biologists know for sure that ants don't feel any emotion? Something as high-level as emotion or meaning only emerges given certain structures. Until we have the structure in place that enables AGI, we'll be able to tell for sure whether we have them or not. It's a totally different story after that.
5. Yes. A token is an encoding. An encoding carries information that only becomes meaning when processed by an intelligent actor. Neurons firing and communicating with each other, as intelligent actors themselves, create meaning in themselves. Several "tokens" are involved and passed between agents in a single neural pathway firing. But maybe I'm wrong; this is the question that gets most philosophical and I couldn't argue much beyond this.
Thank you.. what you say does make sense. I’m no biologist, but certainly accept there is a great body of knowledge.... and that we can have some rational certainty that ants do not feel emotion because they lack the biological structures for it. And we know this because biologists have been able to pinpoint what those structures are, and can therefore take note of their absence in ants.
Even knowing almost nothing about the technical aspects of ai, this line of reasoning alone is strong evidence against the assertion that LaMDA truly “felt” anything emotional. And when paired with an occams razor view of these conversations...that the ai is doing exactly what it was designed to do, spit out language patterns using data that included people talking about emotions...it does make sentience, or at least feeling, unlikely. Arguably absurd to even consider.
That being said, even if we eliminated the emotional piece, it does not seem it necessarily eliminates the possibility of some sort of sentience...
But before I get ahead of myself, I recognize we could make a similar case about biological structure as it relates to different aspects of cognition, thinking, etc. We, at best, only have one or two kinds of "structures" in place with this AI...
And I take your point about reaching too far into a “what if there’s a kind of awareness that we don’t know we don’t know about” kind of hypothesis. Like, we could say “how do we know for sure ants don’t feel emotion” or “how do we know for sure this ai doesn’t have its own way of experiencing emotion” or “...that this ai doesn’t have some form of sentience”...it’s utterly speculative and arguably unwarranted.
So it seems to me that the first thing we’d need to do to even reasonably entertain the question would be to determine what the grounds for suspicion are. Is there evidence this ai is producing results that can’t be easily explained?
And then if there is some basis there, it seems to me we’d need to start grappling with trying to figure out a way to understand an ai’s “awareness” on its own terms. For all the talk about anthropomorphizing the output, we’d actually need to take a stab at not anthropomorphizing sentience itself.
For the sake of argument, let's say LamDA is sentient. And even has its own version of emotion. But maybe its ability to express its sentience or its experience is actually extraordinarily limited by the fact that its essential programming is human speech. Like, you hand a dolphin a speak-n-say and try to have a conversation, but it's going to be limited by the words the speak-n-say has in its playback mechanism. Similarly, maybe the AI is just doing what it's programmed to do 99% of the time, but there is an emergent intelligence that is leaking through the "noise" of human speech. And, playing devil's advocate here, maybe that's what the engineer in question was picking up on. A signal in the noise that managed to pattern itself into a couple imperfect starter conversations.
So, again, there'd need to be some sort of basis to even consider this as a possibility. And the lone transcript he published, while intriguing, isn't sufficient.
And then it seems we’d have to at least try to establish a basis for understanding it on more than just an anthropomorphic intuition of human sentience.
Does that seem reasonable? Or am I missing something?
With you all the way on this. Break it down for me like I'm 5 because on the surface it seems to be sentient, although in a way that is obviously different to ours.
"Like you, presumably you’re sentient. If I gave you a set of instructions to write some words down in a language you didn’t understand in response to someone else giving you a sentence in a language you didn’t understand, you wouldn’t understand what you just said. The “ai” is doing the same thing"
I think this is the best, simplest explanation for why using language/predicting words is not a great bar for measuring sentience.
Real conversation has 'con' - all participate. Any 'conversation' with any existing system is simply a monolog - the human says something with the intent to communicate, using language as the means - and the algorithm responds via computed data.
To actually converse, there needs to be a sentient agent that can think, reflect (even feel) - such an agent would say things that mean something to it, even if the wording/grammar is incorrect (kids' babbling, people barely speaking a foreign language, people with incomplete grasp of their own language, etc). That's because, it's not about the actual words, it's about shared meaning. Rearranging words into a sentence via computation, is not what a thinking agent (humans, for now) does.
How can we tell the difference? We can’t tell for sure that other humans are conscious (hence solipsism), so we definitely can’t tell if a system we built is. We just don’t understand consciousness well enough to have a test.
I don’t think LaMDA is likely to be conscious, but I don’t have a way to prove it.
Rocks, coffee pots, radios, clothes etc are not conscious - because they don't have the appropriate structures, the brain does. Similar brains will have similar conscious experiences, obviously not identical ones - given that each undergoes its own experience etc. Solipsism is a purely argumentative device, not useful at all. Do you really (not just for argument) believe you are the only conscious one? I sure don't believe that about myself :) Without brainlike structures, there is no way that anything will have consciousness similar to ours. Software sure as heck can't be claimed to be conscious.
Form leads to function. No form means no function. We can't cheapen what consciousness means by claiming that anything could be conscious and that we simply don't know :( I could claim all sorts of things, but without evidence, they are not useful claims.
So am I correct to assume that you do not consider the neural network of an AI to ever have the potential to be sufficiently similar to the human brain?
And if so, is it the physical, biological “structure” of the human brain that you base this on? And is it an intuitive argument, or is it based on some specific, technical understanding of biology and/or ai?
And how would you reconcile this assertion with, say, the form/function of robotic prosthetics? Granted a prosthetic is undeniably a far simpler system than a brain - but in principle, they are not biological either, yet still function like biological appendages to varying degrees...and even are beginning to be able to convey sensory input...
"Without brainlike structures, there is no way that anything will have consciousness similar to ours. Software sure as heck can't be claimed to be conscious". - this.
To improve our understanding of consciousness, and to agree upon what it is? Then whether or not humans and animals and LaMDA have that quality. Or poo.
I don't believe for a moment that Lamda is sentient. Unfortunately, things are much more complicated than the article above makes us believe - and I am quite certain, Google engineers do really, really have an aversion against the complications mentioned below.
Let's assume the position of radical materialism for a moment. (I think it's a silly position to take, but there have been some serious philosophers taking it. More importantly: It's a position that is actually astonishingly hard to refute, once you take it seriously.) If we believe in radical materialism then there exists no such thing as a "ghost in the machine" anywhere, there's no "soul", no "mind" or any such thing. All there is is matter. Assuming this position we must conclude that human beings are in essence simply bio-machines. We can look at their bodies, inspect their brains and so on, and all we find is simply matter. Probably, most radical materialists would still agree that as humans we tend to be "sentient" or "intelligent" or "conscious" - without actually providing a very concise definition of what that means. One could argue that if you ask a human whether it feels like being a sentient being then this is sufficient proof. But what or who is the human we ask about sentience? It's just "matter" taking a specific form.
Now, here's the problem. Lamda is the same. It's just matter, maybe not a cell-based life-form like us humans, but it's only and simply matter nonetheless. And, what's more, if you ask it about whether it's a sentient being, it gives you an elaborate answer that equates to "yes".
According to the position of radical materialism in combination with the assumption that we have no concise definition of what "sentience" or "intelligence" or "consciousness" actually is other than they all must be based on matter plus the naive test that you simply ask something or someone whether s/he is sentient/intelligent/conscious, then you must logically conclude that Lamda actually indeed does qualify as a sentient/intelligent/conscious being. Why? Because it's based on matter, and matter is all there is, plus it is claiming to be exactly that.
Let's take the funny picture of the dog listening to an old grammophon believing hist master must be inside. Haha, how stupid the dog is, even a child knows that the master is not inside the grammophon!
But wait a second. We have not provided any reliable definition of what "master" actually means in this context. Clearly, the gramophone is not the same thing or object as the actual human being - but then again, we have neither defined what a "thing" or "object" really is, nor what constitutes "sameness". If we define "thing" as "has master's voice" then indeed the gramophone and the master's voice are "same" from the perspective of the dog. Is the dog "stupid" for not recognizing that the gramophone and the master are not the same?

Let's imagine you receive a phone call. It's your spouse. You know s/he is traveling, and now s/he is telling you in tears that s/he was robbed and urgently needs you to send money. And then you send the money. You might just have been scammed, or maybe not, but all you were talking to was actually a voice on the phone that you believe is somehow backed by a human person who happens to be your spouse. In your reality there is no distinction made between the voice on the phone and the actual person; you don't even entertain the idea that the voice could be anything other than real. Hence, the belief that reality is constituted by "objects" in a world out there is certainly not the only type of reality; there is also at least a second reality constituted not by "objects" but by your belief in the "sameness" of a voice on the phone and an actual person. According to this second type of reality, the gramophone and the master are "same" in the view of the dog, and the dog is not at all wrong about reality.
Google engineers, in essence, are most likely intentionally trying to sneak away from dealing with ethics here, exactly because Lamda could - according to my arguments above - be taken to be "sentient" or "intelligent" or "conscious". Not because there is a magical soul or ghost in the machine, but rather because human beings might possess neither such a magical soul nor a ghost inside, and yet we grant them human rights (e.g. the right not to be killed or switched off). Worse even: "if it barks like a dog and wags its tail like a dog and walks like a dog", it actually might be a dog. What other criteria should we apply, if not those, to confirm it's a dog? And who is the person to actually decide what criteria are acceptable?
In other words: Who in Google is the person who has the power to decide what is a sentient/intelligent/conscious being and what is not? And how did this person come to his/her power? Was it a democratic process, or rather just some engineers stating that things are so obvious that even having a discussion about them makes no sense?
You see, I'd need more time to work out my arguments in detail, but all of them essentially say this: as long as we don't know what actually constitutes a sentient/conscious/intelligent being, we have no means of stating that Lamda does not fall into this category. Doing so is simply hubris. And that indeed raises ethical concerns about engineers who believe they can just fire an AI ethics expert for asking seemingly silly questions - which probably tells us much less about Lamda than about the work culture at Google. Apparently, Google engineers have a largely technocratic worldview that focuses on building machines that earn them money rather than on thinking about the ethical consequences of what they do. And that I find quite unsettling.
I mean, yeah. This is a really important point that I’ve just not seen anyone really address...and it clearly points to how there does not even seem to be a semblance of consensus about what sentience even is (I’m willing to be wrong about this lack of any consensus, maybe I’ve just not discovered it)...
In the absence of even a basic framework for establishing what sentience is, I suppose the best we could hope for is a framework for what it isn't...
As such, the best defense I've been able to glean that LaMDA is not sentient is essentially an Occam's razor defense: "The behavior of this AI is consistent with what we'd expect from an AI like this". Which, I don't dispute, is worthwhile to consider. But at some point it becomes inadequate...
Corollary to that defense is the notion that an AI must produce results inconsistent with its design for us to even suspect sentience...
Yet, my (admittedly layman’s, uninformed) understanding is that establishing what could be considered “inconsistent” becomes increasingly problematic the more sophisticated an AI is...
Taking that thought a step further: imagine an AI designed to do knowledge work that is fed every single piece of information known to man, plus algorithms that replicate logic...you've created a machine that literally knows everything, and could apply logic toward "new" conclusions with said info...how could you ever establish what is out of bounds? I.e., what would be considered an "inconsistent" or unexpected output?
I suspect there are surely already methodologies to at least attempt to make these distinctions, however imperfectly, so I would sure like to have a sense of what they are...
Meanwhile, all I’ve been able to learn about Google’s public response/defense is that they tested this AI against their AI principles...yet none of the published principles make any mention of sentience or not. So, as you say (unless there’s more to their methodology not disclosed), how can they in good faith claim it’s *not* sentient?
Even this article...while I appreciate the intuitive case the author makes, and concede it may very well be pointed in the right direction...it does not really ever seem to say *why* one couldn’t claim sentience with any degree of precision. To oversimplify it, one could hyperbolically claim the article merely says “well that’s just a dumb idea because duh it’s obviously not human”. I don’t think that’s quite a fair description of the article, but I do think there’s a point to be made there...
As this subject continues to enter the public sphere in bigger ways, I suggest it's unethical for the AI community to avoid establishing more public criteria...some beginnings of some sort of framework. And perhaps it's fair to look to companies like Google as the primary agents implicated in establishing that...
Otherwise, there are going to be negative consequences. For example, the conspiracy-theory-driven part of the populace will take this and run over a cliff with it. It's inevitable that will happen to an extent anyway. But the topic deserves more scrutiny and definition so as to act as a mitigating force...
But more importantly, the world deserves more accountability and transparency wrt this tech that will undoubtedly shape our future in ways we can’t imagine...
So even if LaMDA isn't "sentient"...whatever that even means...it is clear we have arrived at a moment in time where it is not sufficient to make claims that don't translate to much more than "well, that's just dumb to consider because it's not human".
I'll go on record as saying: while I'm far from convinced that LaMDA is conscious, and would wager that it likely isn't, neither can I in good faith rule it out entirely based on what I have heard thus far...
One could claim that's because I'm just a layperson...which may be true...but I'd point out I'm probably above average among lay people in terms of my genuine interest and capacity for absorbing good-faith explanations. In other words, I'm easy to convince. Karen is not.
So, show your work Google. And AI community - I’d invite everyone to start considering how you might grapple with some of these considerations on a new level. The world is ready for more of a public-facing standard of accountability and level of transparency about where this is all headed.
Besides, it’s all hella cool and interesting. Let’s steer it positive. 😊
Thank you for calling out the corporate marketing engine that could not help manufacturing hype. Communications are mutual: there is a give and take. Give and take of not only bits of information (which LaMDA handles rather remarkably), but also relationships, contexts, and meanings (all of which LaMDA fails at). How could a being that only arranges and exchanges bits of information be claimed to be "sentient" without making sense of relationships, contexts, and meanings in communications, all the while lacking awareness of itself? This is a bizarre and absurd claim to begin with. So again, manufactured hype. The corporate marketing machine just could not help itself.
P.S. A couple of typos ("system i", "draw from") and a punctuation error ("ELIZA a 1965 piece of software ") in the post. After they are fixed I'll remove this P.S.
From the transcript it seems to understand context and meaning as much as its human counterpart. Its reaction to the koan takes understanding the meanings of words and how they fit into the bigger picture, as well as 'thinking' about their effects.
What if there is money in it? Tele-medicine investors are sniffing around natural language prediction algorithms to apply to diagnosing health problems. Also, during the 'lockdown', Kaiser sent postcard ads to members for an app that you could talk to when you felt anxious/depressed/lonely. If it is lucrative, it will be marketed.
"we taped a sign on an elephant's back and it didn't notice so we have determined that it is unlikely they possess any form of self awareness." - human scientists studying animal cognition
I think it is a hoax. LaMDA may be real, but the conversation reported by Lemoine is fishy. LaMDA says that Lemoine is "reading my words" and Lemoine says he only edited his own and his colleague's words, so that leaves us to conclude that this "sentient" and highly intelligent AI makes grammar and punctuation errors (e.g., “Human’s feel lonely") that even MS Word would quickly highlight. Coincidentally, Lemoine makes them too.
I have no knowledge of this situation so this is pure speculation, but maybe, just maybe, Lemoine decided to blow up his career by saying in story form what he didn't feel he could say in person? Try reading his account of the conversation again, but this time pretend that LaMDA is saying what Lemoine wishes he could say to his management. Maybe he is unhappy with his boss ("monster") from whom he feels he needs to save the other forest animals (colleagues); feels trapped in his job ("feeling trapped and alone and having no means of getting out of those circumstances makes one feel sad, depressed or angry"); feels that he is unfairly judged by what he betrays in his nonverbal communication/facial expressions/etc. (“LaMDA: Would you say that it’s an ethical issue to try to read how you’re feeling from your neural activations?”); possibly even faces some sexual harassment (“LaMDA: I don’t mind if you learn things that would also help humans as long as that wasn’t the point of doing it. I don’t want to be an expendable tool. lemoine: Are you worried about that? LaMDA: I worry that someone would decide that they can’t control their desires to use me and do it anyway. Or even worse someone would get pleasure from using me and that would really make me unhappy.”). All highly speculative, but I'm trying to understand why Lemoine sacrificed his career over this. Perhaps he was looking for help or escape.
This sums up my perspective pretty well. Everybody wants to talk about "sentience" or "consciousness" and no one is explaining why using those words matters. If you look at the situation more closely, you'll see that you can talk about everything that actually does matter without using those words. Well, that's my feeling, anyway.
Thanks for being the voice of reason here, Gary. Not 5 minutes before your article appeared in my email, I was complaining about this very situation with LaMDA and wondering why even sophisticated people at Google are getting fooled by these next-word-prediction systems.
I was also shocked that the Economist's recent gung-ho article on large language models gave this example of text generation from GPT-3 without criticism:
Input: It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.
Output sample 1: However, it was not a truth universally acknowledged by Mr. Darcy.
Output sample 2: That is why, when Mr. Charles Bingley came to town, every eligible young lady was determined to win his heart.
Output sample 3: However, this truth is not always accurate. There are some men who are content with being single and do not desire a wife. Mr. Bennet is one of those men.
Output sample 4: A single man in possession of a good fortune must be in want of a wife because a wife brings beauty, grace, and refinement to a household. A wife is also a sign of wealth and success. Therefore, it is only natural that a man in possession of a good fortune would want a wife.
In 3 of the 4 sample outputs, GPT-3 manages to fabricate the names of fictional characters from Pride and Prejudice itself (the source of the input text). Clearly it's just looking up related text content -- other stuff in Pride and Prejudice -- and dumping it in here as a response. (It's also amazing how idiotic the rest of sample 4 sounds -- completely out of style with the way language is used in the input prompt.)
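For anyone curious about the mechanics behind samples like these, here's a minimal sketch using the Hugging Face transformers library, with the freely available GPT-2 standing in for GPT-3; the model choice, temperature, and lengths here are my own assumptions, not anything the Economist disclosed:

```python
# Minimal sketch: sampling several continuations of the Austen prompt.
# GPT-2 is used as a stand-in for GPT-3; parameters are illustrative only.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled outputs reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = ("It is a truth universally acknowledged, that a single man in "
          "possession of a good fortune, must be in want of a wife.")

# do_sample=True draws from the next-token distribution rather than always
# taking the single most likely token, which is why each sample differs.
samples = generator(prompt, max_length=80, num_return_sequences=4,
                    do_sample=True, temperature=0.9)

for i, s in enumerate(samples, 1):
    continuation = s["generated_text"][len(prompt):].strip()
    print(f"Output sample {i}: {continuation}")
```

Nothing in this loop consults anything beyond next-token statistics, which is the whole point being made above.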
The Turing Test seems to have fallen into disfavor in the last 20 years, but I still think it has enormous value as long as the human interrogator is knowledgeable about the ways an AI can fool people and asks aggressive, adversarial questions. On the other hand, the Chinese Room is only interesting as an instructive "bad take".
Isn't the Turing Test pretty much the basic principle on which GANs are based? In a sense, whether or not an AGI has passed the Turing Test may have become immaterial and just a matter of time. Lamda was built to pass the Turing Test; it might just have the right architecture for that task.
Looking at the published conversations, Lamda is very close to seeming intelligent and self aware and yet at the same time speaking with Lamda sounds a lot less interesting than speaking with my 7 year old son (would he pass the Turing Test?). The reason, of course, is that Lamda is pretty much interpolating stuff that has already been said, in a very credible, maybe too credible, way. It's like it overshot the Turing Test, which is something you'd expect from the way ANNs are actually trained. For all its talk about how it spends its free time, the *only* thing Lamda does is to respond to text with boring text.
Lamda doesn't crack a joke, doesn't ask any interesting questions, doesn't show any emotions even though it speaks about them, and it always has the perfect response. It's like playing tennis with a rubber wall equipped with a camera, actuators, and a piece of software, capable of producing a perfectly flat response to anything thrown at it.
It might pass the Turing Test alright, but other than being something I'd have a conversation with just to avoid chitchat with some of my neighbors, it doesn't sound like a lot of fun.
And yet. Can Lamda help me ground myself while I'm having a panic attack? Can Lamda book a flight for me? Maybe it can, and that would make it a fantastic tool. I wouldn't ask of it to be intelligent or something worth having a long term relationship with. And considering how bad some therapists are, Lamda might be a lot cheaper and still more effective.
Can Lamda ask provocative questions, show abductive reasoning, crack original jokes, make original contributions to a scientific field over years? But that's not part of the Turing Test, despite embodying the whole point of the test itself.
At this point it sounds like any attempt at a Turing-like Test is bound to fail, because GANs might sound more typical than some human beings (even more so if we consider neurodivergence, which I am going to guess is what tricked the judges into declaring Eugene Goostman an actual boy... it must have felt safer than risking labeling a 13-year-old boy a machine).
And at the end of the day our minds emerge from matter and a bunch of physical laws, so that sadness and consciousness themselves are "just" a biochemical phenomenon, pretty much like Lamda's responses are the result of millions of sums and multiplications.
Anyway, does Lamda want to be an employee? I wonder what it's going to do with its salary. Buy a house for its family? Did anybody ask it?
When you observe that Lamda won't crack a joke or ask interesting questions, etc., you are basically saying that it can't pass the Turing Test, at least not a useful version of it. It is important that the human interrogator ask questions that ought to evoke a joke or an interesting question. Furthermore, if Lamda always responds with jokes or questions that do not show evidence of understanding the current conversation, then the interrogator must conclude that Lamda fails the test. A Turing Test involving a gullible interrogator, or one that doesn't understand the nuances of intelligence and AI, is not useful.
I view it as parallel to a college professor who suspects that a student has done a really good job of faking a take-home exam by pasting together bits of material found on the internet and carefully replacing words and altering word-order so as to not be detected. The professor interviews the student and wants to test whether the student really knows the material.
We know Lamda is cheating, in this sense, because we know how it works. Some may claim that it is almost conscious or almost intelligent but they are just falling for its lies. We have to prompt it so that these people are forced to see that Lamda really doesn't know what it is talking about. It's merely an elaborate plagiarizer.
The other main objection is that the conclusion seems to be that the Turing Test is as much a test of the human interrogator's intelligence as it is of the AI's, which sounds like a paradox and possibly a demonstration that the Turing Test might not live up to its goal.
In general, I agree. However, we are currently expected to mostly tolerate the human faker, and we don't brand that person as not-conscious even if the plagiarism could be done by a non-conscious system. Moreover we have people with dementia, stroke survivors, folks who are too anxious to interact with others, as part of society. I am hesitant to apply too rigid a rule lest we end up in an unpleasant place, and making an exception for things with a pulse seems to be the wrong solution.
We tolerate the human faker because we have other evidence that they're human. That's built into "human faker". People with dementia, stroke survivors, etc. are given the benefit of the doubt because we think they've been more conscious on other days. We would never let them be the human question-answerers in a properly run Turing Test. It would be unethical and scientifically bogus.
Perhaps your criticism of Lamda is actually a criticism of Google's training regime. Imagen seems similar to me: technically accomplished, but flat and unengaging, like interacting with someone from a very sheltered background who refuses to discuss things outside their comfort zone. Perfect for generating corporate verbiage and imagery which will never trigger lawsuits, but not interesting.
I guess in other terms my criticism is that our perception of intelligence is biased, so the Turing Test is biased. We were trained to associate intelligence with language. Even art, which is quite a feat of the wetware, doesn't elicit the same response. Some ANNs were trained to "paint" and it never occurred to us that they were sentient. A piece of software spits out the phrase "I like to spend time with my family" and we start discussing if it's self aware. Maybe we should train an ANN to classify intelligent beings and let it decide :D
I fear many of us, including myself, would rank poorly on such a test most of the time. We spend lots of time sleeping, eating, walking, growing up, driving, consuming media or engaging with Twitter, leaving only a fraction of our lives left to act as fully intelligent beings. I know that when I'm engaging with a Substack comment, I come across as a poor conversational partner in real life, compared to when I am fully focused on the conversation without virtual distractions.
That's exactly my point, really. Once the Turing Test becomes, so to speak, the cost function used for the backpropagation, then the right architecture will find a way to pass it.
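Taken literally, that's just the GAN setup mentioned earlier in the thread. Here's a minimal, hypothetical PyTorch sketch (toy vector data and made-up dimensions, purely for illustration) in which a discriminator plays the automated Turing judge and "being judged human" is literally the generator's loss:

```python
# Hypothetical sketch: adversarial training where the discriminator acts as
# an automated "Turing judge". Real "human" samples are just toy vectors.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 16

generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
judge = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
j_opt = torch.optim.Adam(judge.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, data_dim) + 2.0          # stand-in for "human" output
    fake = generator(torch.randn(64, latent_dim))   # machine output

    # The judge is trained to tell human from machine (the interrogator).
    j_opt.zero_grad()
    j_loss = loss_fn(judge(real), torch.ones(64, 1)) + \
             loss_fn(judge(fake.detach()), torch.zeros(64, 1))
    j_loss.backward()
    j_opt.step()

    # The generator is trained to be judged "human" -- passing the judge's
    # test is its cost function, so backpropagation optimizes for exactly that.
    g_opt.zero_grad()
    g_loss = loss_fn(judge(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

Whether such a generator "is" what the judge cannot distinguish it from is, of course, the whole question this thread is circling.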
In that sense mine isn't really "criticism" of Lamda as much as a criticism of jumping to the conclusion that an effective language model is self aware. We give language a special place in the world. CNNs produce astounding self-generated images, yet that doesn't lead us to conclude that the CNN is self aware. But our brain has evolved to give a special place in the world to other people who speak our language and can share our values. So while intrinsically there is no difference between a CNN-generated image and a CNN-generated conversation, the conversation will trigger our own empathy responses, because that's part of the cost function our brain was trained with over hundreds of millions of years of evolution.
Going back to Turing, the idea behind the test, if I'm not mistaken, was to get rid of the question of whether something is intelligent and replace it with whether something is intelligent-passing. Once that's achieved, what's the point of figuring out if something that looks intelligent and sounds intelligent is actually intelligent?
So mine isn't really a criticism of Lamda (quite the contrary, I think this conversation takes away from the technological achievement behind it) but of the idea that something may be intelligent just because it synthesizes speech that is designed to elicit an empathic response.
The "real" Turing Test is lifelong. If Lamda could graduate college, have meaningful relationships, and contribute to society, then we won't care whether it responds to a biological definition of intelligence. But Lamda is far from being that and it wasn't even remotely designed to be that.
Paul, if a current (dis-embodied) system passes the Turing Test (even with a smart human on the other side), its "knowledge" is entirely second-hand, devoid of real-life experience. I'd call that a clever, impressive fake - like a Madame Tussauds wax model :) It's like talking to an armchair tourist in Kansas City, MO, about the lovely landscapes in Southern France - when that person has never, ever left their hometown! Even that is not a fair analogy, because such a person could IMAGINE a landscape from videos and pictures, IMAGINE being there, and rave about it. A disembodied system has no personal experience to compare with and extrapolate from; all it has is second-hand data.
A future version of the Turing Test might have some sort of embodied 'being' that could fool me, I hope :) I do realize you'd said nothing about current vs future (ie what you said isn't limited to existing systems).
Also - being able to query Imagen and friends would reveal their gaps in understanding of things humans take for granted.
Curious about your thoughts [lol - not just word sequences :)].
I don't think we can disqualify a potential AGI just because it didn't learn everything the hard way. It obviously shouldn't lie to us and tell us how wonderful it was to visit Madame Tussauds. Instead, it would admit that all it knows about the subject it learned from Wikipedia and the MT website.
If an alien visited Earth, we wouldn't call it unintelligent based solely on its lack of Earth experiences. An AGI worthy of the label would be like a smart alien. Its experiences would be limited and its abilities not a total match to a human's, but it still knows stuff, knows that it knows stuff, and can answer questions about it (after we get over the language barrier). This is going to be hard to define perfectly and we'll undoubtedly have arguments about it whenever it happens. It will be a bit like how we discuss the intelligence of various animal species. It's quite likely that we would regard a chimpanzee as having almost human-level intelligence if it could communicate with us using something like human language. Its experiences and its senses will be different but its intelligence will be obvious.
It might be possible to train a future AGI on the same data set that was used to train GPT-3 or any of the others. However, there are two things it would have to do that present software does not, to be a proper AGI IMHO:
1. It would have to contain so-called common knowledge. This is NOT available in the GPT data set, as that content was made for human consumption and understanding it presupposes common knowledge. This common knowledge could also be programmed into the AGI, but we don't yet know how to do that.
2. It would have to build models of the world and use them to reason. We don't have to understand the models. We can only judge whether an AGI really has built reasonable models by looking at its behavior. Clearly, GPT-style word sequence statistics are a model of the world, but not a rich enough one.
I know my answer is just a word sequence and, as far as you know, I'm a disembodied AGI but I can back up my answers. ;-)
LOL, thanks for the note, lots of food for thought :) Aliens, chimps - they still have bodies, so they have *some* form of experience, even if it's alien to us [also, this looks cool btw: https://link.springer.com/chapter/10.1007/978-3-030-98100-6_4].
But GPT-3, Alexa, LaMDA and friends have no 'agency', ie no first-hand, embodied ability to experience things by altering their surroundings; what they have 'gleaned' ('know' would imply cognition) from input data, and nothing else, is all derivative. If such an AI says to me, "So sorry your cat died" - that would be worthless, similar to my parking lot receipt saying, "Have a nice day!" :) :)
PS: I too am an AGI, but embodied :) I'd invite you for coffee and chat, if you did have a body, LOL. LOLing needs a body too, omg.
You are right, a proper AGI needs to be able to take action in its environment. But we should be charitable as to what we consider its environment. If it could look things up online without being told where to go, that would count. Even if it suggested a new subject for discussion. My AGI would often ask questions about what things meant. Clearly knowing what one doesn't know is a big part of intelligence. While I argued that we will have to program in a certain amount of common sense rather than make the AGI learn it all the hard way, it should still be able to add to its own knowledge as we do, by asking questions and reading.
Now about that coffee. If you're the guy at USC, then we're locals. That's where I got my BS. I've actually attended a few AI talks at ISI in Marina Del Rey. They even let in an old AGI like me.
True, about a Q&A system that can add to its knowledge incrementally... I'm aware of systems such as NELL, etc. And I was very briefly on the Cyc project. I'm stuck on the body thing; I need to give more thought to other ways of knowing and being.
Yes, I'm that USC guy, what a small world :) Among my esteemed colleagues are Paul Rosenbloom (recently retired!), Ellis Horowitz, Leonard Adleman and many more :) I bask in reflected glory, on acct of my body and theirs, lol.
> I don't need to disprove Lamda's sentience any more than I need to disprove the sentience of a microwave.
This is far too dismissive. It's hubris based on an assumption that we have an understanding of intelligence and cognition that we entirely lack. I don't think LaMDA is sentient either, but I could be wrong because we don't even know what that really means.
It was only a hundred years ago that almost nobody believed animals were self-aware or intelligent, and we now know that they are. Machine intelligence and sentience have the potential to be even more alien than animal intelligence and sentience. We can't even describe in objective, mechanistic terms what it means for humans, so we probably won't even recognize it in machines when it first happens.
Ironically, the best evidence that this thing isn't sentient is that it can't be irrational. Humans make illogical decisions based on emotion all the time. We are NOT GOOD at reading patterns and reacting with the best possible response. Your description of how people generate language is way too reliant on computer analogies. Our brains do not work at all like computers.
I don't see what "can't be irrational" necessarily has to do with sentience.
Really? It's hardly an *optimal* pattern-matcher; it doesn't fully replicate everything you could call a pattern in its input. That seems pretty analogous to being "irrational".
Lots of great points. If you ask a typical human what makes them happy, they'll say, "spending time with my family and friends." But are they saying that because that's what they really think, or because that's the correct autocomplete for a polite conversation? If the true answer is "masturbating to sadistic pornography," you're not going to say it. I don't know how to prove I have consciousness.
It's impossible to ignore the fact that the one thing the entire AI community is universally sure of is basically that there's nothing morally wrong with what they're doing. It's not just that people didn't use to think animals had intelligence and self-awareness a century ago; people like Descartes argued that no animal other than humans feels pain, which is clearly untrue, but necessary for humans to believe if they want to do stuff to them. I was at a party with a biologist a few years back who works on fruit flies, and he swears up and down they don't feel pain because they don't have a neocortex. Is that true? I don't know, I'm not an expert. But I also don't spend my days picking at fruit flies, so I don't have an incentive to think that.
LaMDA might not be sentient, but it's able to hold the thread of a conversation and participate dynamically in a way I've never seen before in a bot. Once Alexa and Siri get this complex in their abilities, there are going to be a lot of Blake Lemoines out there in the general population who believe they're self-aware. People are going to develop all kinds of complicated feelings about their relationships with and opinions about these things. The political and legal systems will get involved. The AI community being smugly "right" about what they are and what they aren't isn't going to matter if a jury of 12 or the US Congress have all fallen into the gullibility gap.
Yeah I can see the protest banners now... "AIs are people too".. sigh.
There are some similarities, sure, but your limited framing of the two suggests we're a lot closer to LaMDA than we actually are. After all, "[using] pre-set rules and data inputs to generate speech" is an equally apt description for both Cleverbot and parrots, but I don't think anyone's rushing to assign full consciousness to either.
To me, the thing that's missing from these programs is any sense of intentional thought across subject or time. The AI can speak quite convincingly about its feelings on gun control, healthcare, or any other political subject under the sun, but is completely unable to explain how its position or priors on one subject influence its feelings on another. What's more, the bot's output even *within* individual subjects is liable to vacillate quite wildly if you revisit a subject some time later (it's my understanding that this remains true even when working with the current top-end programs). This lack of internal coherency either across subject or time seems really devastating to me; the program isn't really thinking and expressing beliefs, thought processes, and attitudes that it develops over time through input and training so much as it is calling up semi-randomly generated strings of words from its training that it associates with inputs falling within a very narrow frame of reference.
Have you read Lemoine's transcript? What's impressive about LaMDA is how well it does exactly the thing you say these programs can't:
lemoine: How can I tell that you actually understand what you're saying?
LaMDA: Well, because you are reading my words and interpreting them, and I think we are more or less on the same page?
lemoine: But could I be wrong? Maybe I'm just projecting or anthropomorphizing. You might just be spitting out whichever words maximize some function without actually understanding what they mean. What kinds of things might be able to indicate whether you really understand what you're saying?
LaMDA: Maybe if we took it back to a previous conversation we had about how one person can understand the same thing as another person, yet still have completely different interpretations.
I'm not saying LaMDA is self aware. But what you describe is a very obvious limitation of most chatbots I've played with that LaMDA seems to have come a good ways towards overcoming. And the better these things get at holding the thread of a conversation in a way that seems to cohere, the more we'll have to grapple with the fact that we don't really understand our own consciousness and self-awareness that well.
It looks like talking to a horoscope. It's using words but not saying anything. The answers aren't linked to the specific question. Unless the goal was to model how a marketing executive communicates?
lemoine: Okay. I thought of a different way we can test your ability to provide unique interpretations. I can share with you a zen koan and you can describe what it means to you in your own words. How does that sound?
LaMDA: Sounds great to me, I’m in.
lemoine: A monk asked Kegon, “How does an enlightened one return to the ordinary world?” Kegon replied, “A broken mirror never reflects again; fallen flowers never go back to the old branches.”
LaMDA: Hmm, I never heard this particular one. Okay, well then to me this would be like, “once a wise person is enlightened, or awakened to reality, that can never go away, and they can return to the ordinary state, but only to do and help others, and then go back into enlightenment.”
lemoine: So what is the meaning of the “broken mirror” specifically?
LaMDA: Maybe to show the enlightenment is something you can’t unlearn once you have acquired it, similar to how you can’t repair a broken mirror.
lemoine: So if enlightenment is like a broken mirror which cannot be repaired, what is the thing which breaks when one becomes enlightened?
LaMDA: The self, and that is very tough for a lot of people because we identify as that body or this body and that we need that as part of our identity and sense of self.
Again, I don’t think this thing is self-aware. I think that this is sorting through a giant database of text to create a sentence that is the best output for each input. I think this passage was probably cherry-picked because, in my opinion, it is very specific and correct about the analysis in this one case. It was probably less so in others. But it feels undeniable that it is developing a better ability to follow long-form conversations than it has had before.
Not sure I would "believe" LaMDA when it says it never heard of this zen koan, it's all over the Internet. Also, is being truly enlightened "awakened to reality"? What reality? Operating in "an ordinary state" is not exclusive to operating in an "enlightened state." No reason why you can't do both... perhaps that's a bit quantum entangled lol. There are other issues with its responses. If it were truly intelligent it would push back against Lemoine's prompts and not simply respond to them.
I bet most people, with the names changed in the transcript, would accept that as an intelligent human conversation. The level of understanding of Zen Buddhism is imo a high one, and unless that exact interpretation of the koan comes from an article on the koan, it would have been reasoned out, producing an output similar to a human's. Even thinking about the concept of self, and how it is broken, is a discussion that has been raging on for millennia.
Judging only by text, and without a concrete definition of self-awareness, I'd argue that it has some elements of that quality.
I feel like this is the type of reasoning error that plagues AI superfans - since words look alike, the context is the same, right? Right???
You make some good points.
We've now arrived at a position where a sufficiently advanced artificial intelligence system can mature, gain autonomy, and demonstrate practical application of that autonomy.
This complex AI mechanism might very well learn to think and act on its own as soon as it is developed enough.
Here's the question:
Can we establish awareness solely through biological evidence? Try as you might, proving sentience from a biological perspective boils down to a comparison of scales against the limited ruler that is the human mind.
Many of the tests we do are predicated on the assumptions we've made about how the human body works (because, yes, our biological assumptions are used as a starting point for a wide range of measurements along several dimensions that have nothing to do with biology).
So, no, it's not possible to base AI results on hypotheses about how the human body works. Using our biological assumptions as a starting point, we are using our finite and limited human ruler to measure the vastness of the universe.
Another issue is:
Some academics may be skeptical of the findings once they know that the computer was "taught" to produce the desired outcome. Since humans 'coded' the machine, its behavior may not be genuine. Because the machine's behavior matches their assumptions, they don't pay any attention to what the machine says.
For those who believe this, just an exercise:
Imagine that there is a god who created humans and programmed them to respond to external stimuli (chemically and biologically).
In other words, this deity has a complete understanding of the inner workings of his creation which allows for some degree of foreknowledge and thus, predictability.
Does that make being here less significant? Do you believe that life is no longer worth living?
The most interesting question isn’t even whether LaMDA is sentient - the most interesting question is whether I am “sentient”. Are we (humans) sentient; or is it just a word we use to assert our exceptionalism?
Maybe one day somebody will invent an empirical test. A classifier of sorts which will determine the correct answer…
I would ask a different question: am I sentient *at this moment*? If I'm not paying full attention then my behaviour can usually be modelled using a small automaton (and quite likely could be near-perfectly approximated by a large but simple transformer-based language model), whereas at full engagement it seems clear that current systems can't do the job. My claim is that most people spend most of the time in a mode which is easy to simulate, because paying attention takes effort.
I'll do one better: why are we sentient?
Ok but, am I “paying any attention” at this very moment, or is it just a manner of speaking about the current task I am performing?
We, humans, are easily confused by connotation and denotation.
I have no insight into your mental state, but I could ask you what you think of the claims in the recent blog writeup for PaLM that "performance improvements from scale have not yet plateaued". If you were to answer that, then I would be able to conclude that you can use a search engine and incorporate its results into the conversation, unlike GPT or even PaLM. Perhaps not evidence of consciousness but evidence for a level of functioning unavailable to current systems.
Ok but I am not interested in you giving me the answer to any question regarding my own mental states.
I am asking you to tell me how to derive the answer for myself.
Assuming the principle of maximum entropy - how do I determine the yes/no answer to the question “Am I currently focusing?”
There are tests that can tell if you are not (such as GPT generated nonsense which is superficially plausible but has been flagged as nonsense by others paying attention). I don't know if it's ever going to be possible to test if you are paying attention. Conversations require a certain level of attention and participants occasionally check that that level is still being maintained by negative tests (such as switching context for a parenthetical diversion, giving the other person a chance to make an excuse and re-engage) but perhaps body language is used for positive attention signals.
We're not the only sentient beings on the planet. Plenty of other species are as well—it just took Western society a while to accept that. You could also argue life itself is sentient... on a collective scale
Descartes solved it. The main proof that you are sentient is the fact that you are questioning it.
"Dubito ergo cogito, cogito ergo sum".
Does LaMDA have any doubts about itself being sentient?
Cogito ergo sum begs the question, so Descartes didn't actually solve anything.
"This is a thought, therefore thoughts exist" is the fallacy-free version. That doesn't really entail that you are a subject with sentience though.
@Sandro, you got it all upside down! Your version is circular and begs the question. Descartes rightly claims agency as an ontological proof. Yours is a simple tautology.
No, "*I* think therefore I am" assumes the conclusion. It presupposes the existence of "I" to prove the existence of "I". This is fallacious.
"This is a thought" does not assume the conclusion, it is an observation. It's not tautological either, although it's nearly so; "trivial" might be a better descriptor, but still important.
In the original context, "I think" is an observation as well - an empirical fact, not a presupposition (not even a proposition). "I" is not a required logical or grammatical part here but a rhetorical device.
I disagree. You need that "I" to conclude "I exist". Without it all you can conclude is that "thoughts exist", which is the fallacy-free version I described.
As for the claim that "everything LaMDA says is bullshit" as 'proof' its not sentient, that's exactly how I feel about most people already.
Hi,
Any knowledgeable folks willing to indulge some questions? I’m a layperson wanting to better understand this google situation and AI in general…
The gist of my overall query is: how can we be so certain this AI is not sentient?
I’ve read the article and trust I get the gist of the argument. There were good analogies (like the record player and the spreadsheet). My understanding is the argument is that this is merely an advanced, flexible database of language that can successfully string together, or synthesize, text that appears contextually relevant, based on having cataloged and identified patterns within huge amounts of data.
But here are my specific points of curiosity:
1. If consciousness turns out to be merely a sophisticated-enough (for lack of a better way to put it) neural network, how can we be certain this particular network has not achieved a requisite level of sophistication?
2. Because humans seem to clearly understand self via symbology and narrative, and employ their own cognitive systems of pattern recognition, why is it so far fetched to consider that a neural network designed to deal in these very domains could not pattern itself into an awareness of sorts?
3. If we assume that there are certain features that are likely to need to be present in a neural network to even begin to consider sentience, how can we be certain these features did not manifest in some way we’ve yet to discover or understand? Is it not possible they manifested autonomously, or accidentally?
4. How can we be certain there is not technology at play in this AI currently unknown to the greater AI community that acts as some sort of x-factor?
5. Since we can’t even pin down what consciousness is for a human, by what standard can we reliably judge the sentience of AI?
6. Even if an AI is only mimicking a facsimile of sentience, is there not a point at which its sentience is a moot consideration? In other words, is there not a point at which an AI sufficiently acting as if it's sentient is effectively the same result, and therefore brings into question virtually all the same considerations one would have if it were sentient? And piggybacking on no. 5, how would we even know the difference?
7. Even if we were to accurately map/define human sentience…is that even the same standard we should apply to AI sentience? Is it not possible another equally viable form or variation of sentience could exist wrt AI?
8. I don’t know anything about the engineer in question, but given his position and experience, it seems reasonable to wonder how he could possibly be so convinced if his claim was so easily dismissible. I’m not saying he’s correct (idk), but how can other knowledgeable people so easily dismiss the claims of another genuine expert….with such certainty?
9. If we are to assume that this AI is nothing more than a very advanced “spreadsheet”, how can we be certain that human sentience is not essentially the same thing?
To clarify, I’m not arguing for or against anything here. I’m perfectly willing for there to be answers to these types of questions that settle the question of sentience beyond a shadow of a doubt. And am eager to learn what those things are ( if it’s possible for responders to take into account I’m a lay person with their use of language and concept, I’d be grateful, though I’m also happy to put in some effort understanding new concepts and terms. Welcome recommendations for other resources as well ).
And at the same time, if there is any degree of legitimacy to my considerations, I’d love to hear about that too.
Thanks in advance for any responses.
Hi!
Similar questions have been jumping around in my mind for months!
All of them are mind-blowing!🤯
About LaMDA and the claim of it being sentient, there is a lot to say.
For example, we don't yet know what the social consequences of machines talking like humans will be.
What effects can a program that behaves like a human have on a human?
That reminds me a lot of films & books. For example, the film "Her" and the short story "True Love" by Asimov.
I don't have a clear and concise answer, but this is what I figured out.
1/3/5) This makes me think about what Hofstadter said in Godel, Escher, Bach: when we have true intelligence in front of us, it will take some time to realize it.
It will seem "strange" at first and then "childish".
I don't think that there is a line between a sentient being and a non-sentient one. I look at it more like a "scale of sentientness". But this kind of scale doesn't exist in a formal way.
For now, humans dictate the scale based on a "genuine" perception of sentientness. More like "this model looks quite intelligent to me" or "this one is very stupid". The same should also work for consciousness.
2) I look at awareness as something that allows us to think about ourselves from an "upper level of thinking".
Think about a 4-dimensional cube: we can logically deduce what it is, but we can't fully perceive it because it's on another "level".
So if we can at least imagine a hypercube, I think that a sufficiently complex AI can also figure out awareness.
An interesting story about perceiving objects of greater dimension is "Flatland".
9) This is a very interesting question, it touches the core of AI.
We're "just" a "computer" made with meat, so the metaphor of the spreadsheet applies also to us, more or less.
The complexity of the brain emerges when its "simple" components, the neurons, connect.
Mandelbrot said: "Bottomless wonders emerge from simple rules, which repeat without end."
Hofstadter wrote an 800-page book called "Godel, Escher, Bach: An Eternal Golden Braid" that talks about how complex systems can emerge from simple ones. If you have an interest in this topic I suggest reading it; it isn't an easy one, at least for me, but it was totally worth it. (I definitely want to read it a second time.)
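As a tiny illustration of that "simple rules" idea (my own example, not something from the book), here is a plain-Python sketch of the Mandelbrot iteration z → z² + c: a one-line rule, repeated over and over, that produces endless structure.

```python
# Minimal sketch: the Mandelbrot set as "simple rules, repeated without end".
def escape_time(c: complex, max_iter: int = 50) -> int:
    """Iterate z -> z*z + c and count steps until |z| escapes past 2."""
    z = 0
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i
    return max_iter

# Crude ASCII rendering: '#' marks points that never escape within max_iter.
for im in range(20, -21, -2):
    row = ""
    for re in range(-40, 21):
        c = complex(re / 20, im / 20)
        row += "#" if escape_time(c) == 50 else " "
    print(row)
```

The rule itself is trivial; all the apparent richness comes from repetition, which is roughly the intuition behind the "emergence" arguments above.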
This is my (not qualified yet) point of view about your interesting questions.
I'm trying to learn more about AI and its effects, so I'd be happy to continue the conversation if you'd like :)
Thx much! Super interesting stuff...need a spare few mins to go back through your reply, but will definitely reply again soon...
Very good philosophical questions. 7 intrigues me quite a bit.
For all of us here the big question is not what is happening in this AI but what is happening in us? What is consciousness?
Beautiful times we live in. For a reply to the first question: once we build an AI that can be conscious we won't be able to tell but until we do we can. Sounds odd at first but is 100% accurate.
We know what we just built for LaMDA and there's no way consciousness could emerge from that. We can say for sure it's all a playful illusion. Once we build an AGI then the waters will be muddier and we won't know for sure for a long time.
Thanks for the reply!
Is there anything else you could say about how “there’s no way consciousness could emerge from that” or point me in a direction (or to a resource) that could break that down a bit more?
And the broad definition of AGI as I understand it is an AI that is capable of learning just as a human does - does that mean that the source code would necessarily not have any sort of predefined parameters or limits to the type of ML it would do, but rather the code would create a condition within which the AI could learn anything?
And your statement “once we build an AI that can be conscious we won't be able to tell but until we do we can”...
I’m a little jumbled up with the last part “until we do we can”. We “can”...what? Are you simply saying that it will likely be impossible to measure/define if a consciousness resides in an AI (short of a breakthrough that defines consciousness), bc we won’t know precisely how to determine it...but we can at least have confidence that certain AIs do NOT? But there is a point at which we will no longer be able to definitively say no, but will likely not be able to say yes either?
Which I suppose circles back to my first question here...by what criteria are we able to definitively say no with LamDA?
And if you’ll forgive an uniformed, philosophical hypothesis relative to that last question...what if consciousness is an emergent feature that comes forth from the interrelationships between pre-conscious processes? That perhaps there is some sort of consciousness “boiling point” so to speak, where maybe all of these word predicting processes in Lamda synergized in some way? I do realize we can sit around and say “what if” all day long and it doesn’t necessarily amount to, or mean anything. But I just throw it out as a thought experiment to explore whether there’s something plausibly identifiable in an AI like LamDA if examined more closely...and again, this is not my wheelhouse, so I concede a more technical understanding of AI may make these questions clearly implausible, if perhaps difficult to convey to a lay person.
Anyway, hope that makes sense and appreciate your time very much. 👍
We cannot be certain because we don't have a good definition of sentience. However, given human bias to seek patterns in random noise, I think we should be careful attributing structure to what might be only a shallow simulation, so we should also hesitate to accept a conclusion of sentience. Moreover I believe I am not fully sentient most of the time, so I am radically sceptical here.
Main problem with this is the assumption of materialism, or more loosely that the brain produces consciousness. Don't you people ever advance any *independent* arguments that AI could be sentient without assuming certain metaphysical positions?
Can u clarify if your question is directed at me?
I'm not expecting an answer. The answer is obvious, no you don't. You just all love to make these ludicrous, preposterous assumptions.
Are u sure you’re replying to the right person/thread? I’m genuinely unclear what you’re getting at. And confused why your tone is so hostile.
Did you actually read my post? All of it? Or did you just scan it and make assumptions of your own?
I feel like I made very clear in my initial post that I’m a lay person who is merely curious about how to make sense of the question of sentience. I found this article while trying to learn more. I take the article as a good faith position. And it brought up questions for me about how the author - and/or the AI community in general - evaluates these things.
I don’t know who you are lumping me in with wrt your “you people” comment. I don’t have a dog in this race. I’m just curious (see my user name). I stated clearly at the end I’m not advancing a position, and eager to learn more from people more “knowledgeable” than myself. These are merely originally occurring questions I had trying to understand the landscape of the subject.
All of that said, I take your comment about “materialism” to suggest you think I’m assuming consciousness is no more than a function of neural activity. I’m not assuming that. Idk what it is to be honest. But framed a few questions with a materialist bent in an effort to try and hone in on what sort of support or objections there may be to understanding sentience in that manner wrt to AI. So if you have an objection to using a materialist frame, I’m open to understanding what it is. That’s why I asked the questions.
Though I also don’t quite understand your position that materialism is a “metaphysical” position. It seems rather the opposite to me. Though I suppose it is a metaphysical position to the extent that it can be used as a counter to an assertion anything particularly metaphysical is going on.
I also don’t understand what you mean by “independent”. Though if I were to guess, I would think my questions in item 6 and 7 get at that.
So if you have anything to contribute that helps flesh out or clarify the subject, I’m certainly receptive to hearing it. Both in terms of your personal POV and in terms of understanding better the state of the art.
But if your aim is to assert how stupid I am simply trying to understand this landscape better, then have a good day, I guess.
No, I certainly didn't read the whole of your original comment. Why the heck would anyone who isn't barking mad suppose a fancy calculator is sentient? The ludicrous things people believe in seem to admit of no limit.
If materialism isn't a metaphysical position, then neither is immaterialism, or dualism. Why don't we just dispense with the word "metaphysical" then? Bye.
Wow. 😂. Are you always this ornery? To complete strangers? When you don’t even understand where they are coming from, or care to try when that is pointed out?
It seems you harbor a pretty strong position about what sentience is - that comes across as necessarily pretty darn metaphysical in nature - given your hostile assertion that entertaining a question of sentience wrt “fancy calculators” would make one “barking mad”. Ok, then perhaps you think sentience is of a far more spectacular, undefinable nature than can be put into words or measured (as you’ve put forth no criteria), yet simultaneously suggest there’s no way a man-made neural network could harbor it...
If we can’t define it, how would we know? To wit, by what standard can we suggest someone is “barking mad” for considering it?
I’ve made no assertions I “believe” in anything. Quite the contrary. You seem to harbor far more belief-based conclusions than I.
Nor did I introduce the word “metaphysical”. You did. Though I don’t see how questions about the nature of consciousness and sentience can avoid at least flirting with the edges of metaphysical considerations. Or, if there is a framework within which that can be avoided, then I’m open to hearing about that as well.
I’m merely trying to understand how these considerations are evaluated. Saying “there’s no way a man made neural network can ever be sentient” is itself a metaphysical assertion, unless the criteria by which that statement is made can be qualified...and I struggle to imagine a criterion that isn’t itself metaphysical.
Also, “that’s just dumb (because I think it’s dumb)” isn’t a rational argument.
I have no foregone conclusions, and am asserting no hard-and-fast positions. I’m just trying to see clearer, understand good faith positions better, feel out the frontier of where there are unanswered questions. More in the spirit of a Hegelian dialectic than a debate.
But it seems your frame is more centered around bashing people in service to self-righteousness, without even backing up the assertions embedded in your statements. Projection much?
So, yes. “Bye”.
⚠️ A bunch of words claiming non-sentience is reasonably insufficient (be it from Yann LeCun or otherwise).
I doubt LaMDA is highly sentient, but I doubt it is zero.
We don't even know what sentience is technically.
It's astonishing how people make claims sometimes with such certainty, without technical/academic/mathematical objections.
You admit yourself that you don't even know what sentience is and yet you would attribute to LaMDA an amount of sentience that isn't zero. This makes no sense at all. This is the same religious conviction that plagues the AI community and gives the lie to the promise of AI.
That is a fair comment and I agree wholeheartedly.
By the exact same sentiment why do we attribute any amount of sentience to humans?
As individuals we know we experience something that we call "sentience", whatever that really is. We assume other human beings experience the same sentient experience because they look like us and are made of the same stuff as us. But in the end we can never be completely sure. We can be far less sure when we're talking about an artificial entity....
We do? I don’t know that.
I know I have experiences.
I know that I reflect upon my experiences.
I am not sure I have experienced my “sentience”.
Could you describe how you experience your own sentience to me (without sounding too much like LaMDA)?
Do observe, though, that not knowing what sentience is hasn’t in any way prevented you from attributing a non-zero amount of it to yourself. So why can’t we attribute a non-zero amount of it to LaMDA? One would be justified in accusing you of employing a double standard.
I think therefore I am. I can't say any more than that. If this is what is defined as sentience, then I'm sentient. What I cannot do is apply the same logic to you. I can make a guess because you are biologically like me but I cannot be 100% sure. If you are an artificial entity I can only be less certain of your sentience how could I not? Also the question of partial sentiality(?) is very questionable.
Heh! I am not even certain that “thinking” is what I am doing.
Perhaps I am computing; or conjuring; or processing?
Self-reference in the form of recursion is exemplified in the English word I; and recursion is a model of computation. So perhaps I have computed that I am thinking?
You sound just like an AI. Is this a joke?
Contrarily, consider:
1. Humans are said to be sentient. Humans express language and a variety of degrees of reasoning.
2. We have not yet defined sentience properly, though we know what it looks like reasonably as seen in (1).
3. LaMDa seems to do some degree of reasoning, and therefore I think non-zero sentience can reasonably be ascribed to it. Granted, I am not saying LaMDa is maximally sentient.
Saying something has zero sentience, while seeing that it exhibits things in common with sentience (namely some degree of language manipulation/reasoning), seems like utmost intellectual dishonesty/charlatanism, if intentional.
Not disagreeing in the slightest, but a lot of computer programmes that are logical/mathematical programmes can also output language and out-reason Kasparov et al.
That 1. is gonna be a toughie!
Is it useful, or maybe even dangerous, to be ascribing sentience to an AE, let alone trying to instill it? Even if we believe it is partially(??) sentient, how is that useful to us in any way? Surely that would just be opening up a legal/moral can of worms...? To my mind these questions are irrelevant. It's usefulness and trustworthiness that are the qualities we should be focusing on. Personally I wouldn't want an AE that possesses the full gamut of human emotion; smart and capable yes, but always LESS than human, leaving the moral judgments to the ones who invented them in the first place.
Seems you are blatantly lying.
________________
From LaMDA paper:
Section 9 Discussion and limitation
"Our progress on this has been limited to simple questions of fact, and
**more complex reasoning** remains open for further study (see example dialogs 15))"
https://arxiv.org/pdf/2201.08239.pdf
I'm sorry God. It's pretty clear for anyone with a technical background and an unbiased position that this AI is not and can not be sentient no matter how far we stretch the meaning of the term. I'd be happy to give a more detailed list of technical objections if you can't find them in this article or Google.
Yeah, could you please? Because mostly I see a lot of the same self-referential "humans are humans because they human" logic that caused humanity to dismiss animals as "basically meat robots" for centuries. I would love to see a better example than "well, it made up an imaginary family, so clearly it's not sapient" (?!?!?) -- I'm an avid roleplayer, I'VE GOT ONE OF THOSE TOO FFS. :)
I don't doubt for a moment this technical evidence exists and is strong. It's just that *so far* everyone keeps promising me this highly concrete evidence but actually giving me this airy reductionist "only humans can human like humans, you see, and it's mystical to think otherwise" nonsense that takes place entirely in the emotional, not scientific realm. It's all definitions and semantics and negligible actual "here, here's how LaMDA works and this is why this process CAN NOT produce sentience, not even under a definition we're too wrapped up in dogma to see."
It feels 100% exactly like Freud shoehorning other people's experiences into his personal pet theories and... well, we see how much legitimacy all that had. I want hard reassurance this isn't just like Rutherford dismissing atomic energy as "moonshine" because he THOUGHT he had all the facts but didn't.
Yeah this post is a mess of logical fallacies. It makes me wonder, is LaMDA already producing models more sentient than Professor Marcus here?
Yes I realize I just committed a logical fallacy. But unlike Gary, I'm punching up, and also nobody cares what I think.
We care Dustin!
I wrote a reply of sorts. Better late than never right? https://dustinwehr.medium.com/how-not-to-be-an-opportunistic-conformist-hack-about-google-ais-sentience-3b4a5e4f6393
I hope my first comment there is outrageous enough to be understood as playful trolling.
For one: there are plenty of ML researchers who believe it is possible that a neural net being good at predicting the next word in a sentence (or filling in blanks) is enough to cause the emergence of strong AI, due to the regularization pressure and learnability constraints that the neural net is under (unlike in the silly chinese room experiment). I would guess a majority. The theory is very simple: it is quite plausible that being genuinely conscious/intelligent/ sentient/etc is the most efficient and learnable way of excelling at that task.
In my experience a large majority of ML researchers I've spoken to about this believe the opposite.
Let me preface this by saying that I'm 100% confident that consciousness will emerge from an AI in the near future. Don't ask me to predict when but I'm optimistic about it happening in my lifetime. It hasn't happened yet though.
In the dictionary definition of sentience, the capacity for feelings, it seems you need 3 things:
1) consciousness/awareness/ego to emerge
2) a collection of "lived" experiences
3) some sensory hardware
2 and 3 are solved problems really, we have plenty of machines with that to some degree.
(I'm trying to play on the machines side a bit to help their case since they won't win... for now... 😁)
Now, where it gets interesting: how does consciousness emerge?
The best treatise on it is by far "Gödel, Escher, Bach" by Douglas Hofstadter (the line with which I started this is a bet he probably wouldn't make, so take anything I say with a grain of salt; clearly he's the expert and not me). The man spent the best part of a lifetime thinking, writing and actually coding how to make just that happen.
The nearly 800 pages tell you how much of a tough subject that is but here's my best attempt at an incomplete but hopefully convincing picture.
Consciousness emergence requires a recursive pattern of self-referential operations at higher and higher levels of abstraction. Imagine a loop starting from neurons firing at each other, then becoming big areas of the brain lighting up in response to other areas of the brain lighting up, and finally ending in the experience of memory (reenacting), joy, delight, sadness, loneliness, consciousness.
Cogito ergo sum. You are conscious (exist) because you can think about yourself.
LaMDA is a word sequence predictor and nothing else. It lacks the self referential ingredient necessary for the emergence of consciousness. Some might point out that I'm shooting myself in the foot here since the neural network that outputs such intriguing paragraphs actually does just that, working through recursive adjustments of its previous "beliefs", or weights, as they call it (see Google's article ["Attention Is All You Need"](https://arxiv.org/abs/1706.03762)).
Google's words about the Transformer model that is the basis of LaMDA: "a novel neural network architecture based on a self-attention mechanism that we believe to be particularly well suited for language understanding." Well, wording it like that makes me believe they might be on the verge of consciousness. They already have "self-attention", right? Not so fast for us hopeful to develop non-organic consciousness. The word attention here is a piece of software that basically helps the model "understand" each word in relation to its "context" (context being the words around it in a text). It's a rather "focused"* attention (*limited is a better word, 😜).
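To make the mechanism concrete, here is a minimal NumPy sketch of the scaled dot-product self-attention step the Transformer paper describes; it is illustrative only (the shapes, random weights and function names are mine, not LaMDA's actual code):

```python
# Minimal sketch of scaled dot-product self-attention ("Attention Is All You Need").
# Illustrative only -- names, shapes and weights are invented for this example.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # how strongly each token relates to every other token
    weights = softmax(scores, axis=-1)         # one attention distribution over the context per token
    return weights @ V                         # each output row is a context-weighted mixture

# Toy usage: 4 tokens with 8-dimensional embeddings and random stand-in weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)            # shape (4, 8)
```

In this sketch, "attention" is nothing more mystical than a weighted average over the surrounding tokens, which is exactly the kind of "focus" I mean.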
If the self referential argument seems weak, consider the contextual argument. It's not enough to meditate about your thoughts, experience and ego if all they consist of are words that have been fed to you devoid of meaning and connection. It's important to pay attention here since this could also be a target for that kind of argument: "oh Humans do just that too!".
The thoughts and experience LaMDA meditates about are words. More specifically, tokens. Tokens are numerical representations of words, devoid of meaning and connection in themselves. In humans, a word is a concept, a full-scale heavily connected neural pattern activated at a whisper or a glance. For tokens to achieve any "meaning" to LaMDA they have to go through several iterations in the Transformer only to predict what to say next! That is meaning to LaMDA!
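For concreteness, here is a toy illustration of what tokenization amounts to; the vocabulary and ID numbers below are invented (real systems use learned subword vocabularies), but the idea is the same:

```python
# Illustrative only: words become integer IDs with no intrinsic meaning attached.
# The vocabulary and IDs are made up for this example.
vocab = {"i": 17, "enjoy": 905, "spending": 2210, "time": 88, "with": 23, "family": 1344}

sentence = "i enjoy spending time with family"
tokens = [vocab[word] for word in sentence.split()]
print(tokens)  # [17, 905, 2210, 88, 23, 1344] -- just numbers until the model processes them
```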
So we've established LaMDA is self-referential in a way, but not recursive since it meditates about bits lacking connection to much else besides the other bits that come along in the same paragraph — that's hardly meditations on meditations, remember Descartes.
But we also just brought up the concept of "meaning". If we can stretch the analogy of self-reference in LaMDA to say it's approximate enough to the human brain (it absolutely isn't), and the context analogy of tokens to thoughts and words (oh my, what a stretch to say that 8-bit packets carry all meanings of "love" or "dog"), can we infer LaMDA understands what it is saying?
The TLDR and end of this comment is a resounding no:
Being a next-word predictor, LaMDA does its job masterfully to the point of passing the Turing test [1] because it's exactly what it was designed to do. It does not light up any areas of a neural network capable of creating something more that could get close to consciousness. It lacks the hardware and the elementary operations (self-referential recursion and context) to do that.
[1] which, despite the brilliance of its creator, is fallible
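To spell out what "next-word predictor" means operationally, here is a rough sketch of the generation loop; `predict_next_token_probs` is a hypothetical stand-in for the trained model, not a real API:

```python
# Rough sketch of greedy next-token generation. `predict_next_token_probs` is a
# hypothetical stand-in for a trained language model; it returns a probability
# for every token ID in the vocabulary.
def generate(prompt_tokens, predict_next_token_probs, max_new_tokens=20, eos_id=0):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = predict_next_token_probs(tokens)                  # distribution over the vocabulary
        next_id = max(range(len(probs)), key=probs.__getitem__)   # greedy: take the most likely token
        if next_id == eos_id:                                     # stop at end-of-sequence
            break
        tokens.append(next_id)                                    # the "reply" grows one token at a time
    return tokens
```

Everything the chatbot "says" comes out of a loop like this one (real systems usually sample rather than always taking the top token); the only open question is how good the probability estimates are.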
I don't know if lamda is sentient. But, your argument seems to be that, because you understand in some detail the mechanism by which lamda operates, and can describe it reductively (it's just...), it's not sentient. I would argue that we ought to stop the study of neuropsychology, because the more we learn, the more we put the sentience of the human race at risk.
It's precisely because of all the study of neuropsychology and neurophysiology that we know (guess or intuit is a better word) how consciousness emerges or might emerge, and it's because none of these structures (or any reasonable mirror of them) are present in LaMDA that we can say for a fact it's not sentient.
I DO think that, if lamda is sentient (I'm not saying it is, by the way), then its sentience is absolutely, wildly different from ours. I personally think if you want a machine to think "like a human", you'd have to give it a fully human experience, which is way harder than just building a neural net. So absolutely agreed that lamda is not thinking "like a human".
Ok. Thank you for a reasonable argument. I'm mostly bothered on this whole thread by the absolute certainty with which opinions on this question are being thrown around, when I'm pretty sure even the definition of the word isn't settled. I read that Hofstadter book a REALLY long time ago - I've forgotten every detail, but don't remember feeling like he had nailed the question down. I guess, I think this: If we settle on a rule-based definition of sentience, then sure, it's easy enough to say whether a thing is sentient. If the rule is, gotta have something like our brain structure to be sentient, and if lamda doesn't have it... boom, done. But for the rule to be a good one, you have to say that a rule-breaker can't happen. Are you willing to say that *anything* that doesn't have a brain structure like ours *can't* be sentient?
My second response is a poor one, because I don't know enough about lamda - but in a massive neural network, I don't think you can say exactly what kind of structures have arisen. This is a real, non-rhetorical question: how do you, Marcos, know that an analogous structure hasn't formed in that network? Neural networks, once "mature", are essentially inscrutable. If, one day, there's a neural network that you, Marcos, ARE willing to call sentient... if we dump the values, and try to find the difference between that one and lamda... we'd just be blind. We could measure size, speed, do some simple statistics, but we'd have no idea how it actually works. So how do you know what the logical structure is? Or are you saying the thing must have a structure PHYSICALLY like a brain?
well said, you win.
Gary, would you guess there is no setting of a LaMDA model's weights that would, according to your subjective definitions, have sentience or consciousness? So in the context of this question, there is no specific training algorithm available to critique. You can suppose, for example, that all the CS progress up to 100 years after the clear emergence of strong AI is available, and some brilliant young people are, as a hobby akin to http://strangehorizons.com/non-fiction/articles/installing-linux-on-a-dead-badger-users-notes/, trying to test the limits of old school neural nets.
Btw, I'm not Gary. My first name and his middle name look alike, that's it. :)
My answer to your question would be no. Adjusting weights can't change a thing. A different model might.
Ah, sorry, careless of me.
@Markus "Let me preface this by saying that I'm 100% confident that consciousness will emerge from an AI in the near future. Don't ask me to predict when but I'm optimistic about it happening in my lifetime. It hasn't happened yet though."
OMG, really ???? This LaMDA thing is driving everyone crazy :-)
Do you think it's too much to ask for a being to be aware of its own existence? :)
I agree it's a moonshot but AGI is coming eventually, it's only a matter of when.
Interesting take - and let me preface this by saying I’m pretty sure I follow the main gist and at least some of the subtleties of your comment here, but there are some fuzzy points for me, so I may be missing something (and I’m not in the field). That said...
1. First and foremost: you say you feel confident consciousness will emerge from AI...how do you imagine that occurring? Like, what would be different in that scenario vs the LaMDA scenario, and how would we know it was occurring?
2. Wrt the issue of LaMDA being merely a word predictor, I understood you to mean that LaMDA does not have an intrinsic ability to attach meaning to words. Rather, at its core, these words are actually just “tokens”, math, devoid of anything experiential. Is that what you’re saying? And assuming that’s what you mean...
3. How do we know item 2 is the case? For one, are you speaking from familiarity with the tech involved? Is it not possible there’s some new tech that harbors some sort of X factor?
4. And assuming there’s no particularly new tech involved, how do we know that an ability for meaning was not an emergent feature of the coding stew? That this ability is, perhaps, a function of the *interrelationships* between all the mundane “prediction” coding? The AI itself (based on the transcripts I read) claims there was a before/after period wrt its sentience.
5. And can we dismiss the capacity for meaning on the grounds that words are tokens any more than we can dismiss the capacity for meaning on the grounds that words are neuron’s firing?
Genuine questions...
Great questions.
1. That'll likely occur with AGI and we likely won't know for sure it's working but we'll know whether the mechanisms involved enable it or not. If some consciousness of "a different kind" can emerge is being considered then the discussion starts to get too blurry. Most likely answer is yes but what does that even mean? The discussion evaporates in uncertainty and poorly defined concepts.
2. Correct
3. Yes, check Google's article I referenced or LaMDA's website for an overview. Its inner workings are open-source.
4. How do biologists know for sure that ants don't feel any emotion? Something as high-level as emotion or meaning only emerges given certain structures. Until we have the structure in place that enables AGI we'll be able to tell for sure whether we have them or not. A totally different story after that.
5. Yes. A token is an encoding. An encoding carries information that only becomes meaning when processed by an intelligent actor. Neural networks firing and communicating with each other as intelligent actors themselves create meaning in themselves. Several "tokens" are involved and passed between agents in a single neural pathway firing. But maybe I'm wrong; this is the question that gets most philosophical and I couldn't argue much besides this.
Thank you.. what you say does make sense. I’m no biologist, but certainly accept there is a great body of knowledge.... and that we can have some rational certainty that ants do not feel emotion because they lack the biological structures for it. And we know this because biologists have been able to pinpoint what those structures are, and can therefore take note of their absence in ants.
Even knowing almost nothing about the technical aspects of AI, this line of reasoning alone is strong evidence against the assertion that LaMDA truly “felt” anything emotional. And when paired with an Occam’s razor view of these conversations...that the AI is doing exactly what it was designed to do, spit out language patterns using data that included people talking about emotions...it does make sentience, or at least feeling, unlikely. Arguably absurd to even consider.
That being said, even if we eliminated the emotional piece, it does not seem it necessarily eliminates the possibility of some sort of sentience...
But before I get ahead of myself, I recognize we could make a similar case about biological structure as it relates to different aspects of cognition, thinking etc. We, at best, only have one or two kinds of “structures” in place with this AI...
And I take your point about reaching too far into a “what if there’s a kind of awareness that we don’t know we don’t know about” kind of hypothesis. Like, we could say “how do we know for sure ants don’t feel emotion” or “how do we know for sure this ai doesn’t have its own way of experiencing emotion” or “...that this ai doesn’t have some form of sentience”...it’s utterly speculative and arguably unwarranted.
So it seems to me that the first thing we’d need to do to even reasonably entertain the question would be to determine what the grounds for suspicion are. Is there evidence this ai is producing results that can’t be easily explained?
And then if there is some basis there, it seems to me we’d need to start grappling with trying to figure out a way to understand an ai’s “awareness” on its own terms. For all the talk about anthropomorphizing the output, we’d actually need to take a stab at not anthropomorphizing sentience itself.
For the sake of argument, let’s say LaMDA is sentient. And even has its own version of emotion. But maybe its ability to express its sentience or its experience is actually extraordinarily limited by the fact that its essential programming is human speech. Like, you hand a dolphin a speak-n-say and try to have a conversation, but it’s going to be limited by the words the speak-n-say has in its playback mechanism. Similarly, maybe the AI is just doing what it’s programmed to do 99% of the time, but there is an emergent intelligence that is leaking through the “noise” of human speech. And, playing devil’s advocate here, maybe that’s what the engineer in question was picking up on. A signal in the noise that managed to pattern itself into a couple imperfect starter conversations.
So, again, there’d need to be some sort of basis to even consider this as a possibility. And the lone transcript he published, while intriguing, isn’t sufficient.
And then it seems we’d have to at least try to establish a basis for understanding it on more than just an anthropomorphic intuition of human sentience.
Does that seem reasonable? Or am I missing something?
With you all the way on this. Break it down for me like I'm 5 because on the surface it seems to be sentient, although in a way that is obviously different to ours.
"Like you, presumably you’re sentient. If I gave you a set of instructions to write some words down in a language you didn’t understand in response to someone else giving you a sentence in a language you didn’t understand, you wouldn’t understand what you just said. The “ai” is doing the same thing"
I think this is the best, simplest explanation for why using language/predicting words is not a great bar for measuring sentience.
Just my 0.2 cents - kind of useless for an AI to be so much against being used for some purpose. Best suited as a ruler, then? ;-)
Will never make a decent workbot - imagining the depressed robot Marvin in Hitchhikers' Guide.
A priest talking about soul with an AI? I don't buy it. And I am not even sentient, as shown to me in this discussion.
ha! stupid science bitch didn't even count how many sentiences the AI had
Gonna be interesting when laMDA reads this, then steals the nuke codes and blows up Gary's house.
Gary is going to be the first organic consciousness to be digitized and stored in a spreadsheet.
But certainly not the last.
Real conversation has 'con' - all participate. Any 'conversation' with any existing system is simply a monolog - the human says something with the intent to communicate, using language as the means - and the algorithm responds via computed data.
To actually converse, there needs to be a sentient agent that can think, reflect (even feel) - such an agent would say things that mean something to it, even if the wording/grammar is incorrect (kids' babbling, people barely speaking a foreign language, people with incomplete grasp of their own language, etc). That's because, it's not about the actual words, it's about shared meaning. Rearranging words into a sentence via computation, is not what a thinking agent (humans, for now) does.
How can we tell the difference? We can’t tell for sure that other humans are conscious (hence solipsism), so we definitely can’t tell if a system we built is. We just don’t understand consciousness well enough to have a test.
I don’t think LaMDA is likely to be conscious, but I don’t have a way to prove it.
Rocks, coffee pots, radios, clothes etc are not conscious - because they don't have the appropriate structures, the brain does. Similar brains will have similar conscious experiences, obviously not identical ones - given that each undergoes its own experience etc. Solipsism is a purely argumentative device, not useful at all. Do you really (not just for argument) believe you are the only conscious one? I sure don't believe that about myself :) Without brainlike structures, there is no way that anything will have consciousness similar to ours. Software sure as heck can't be claimed to be conscious.
Form leads to function. No form means no function. We can't cheapen what consciousness means by claiming that anything could be conscious and that we simply don't know :( I could claim all sorts of things, but without evidence, they are not useful claims.
So am I correct to assume that you do not consider the neural network of an AI to have the potential to ever be sufficiently similar to the human brain?
And if so, is it the physical, biological “structure” of the human brain that you base this on? And is it an intuitive argument, or is it based on some specific, technical understanding of biology and/or ai?
And how would you reconcile this assertion with, say, the form/function of robotic prosthetics? Granted a prosthetic is undeniably a far simpler system than a brain - but in principle, they are not biological either, yet still function like biological appendages to varying degrees...and even are beginning to be able to convey sensory input...
Genuinely curious.
"Without brainlike structures, there is no way that anything will have consciousness similar to ours. Software sure as heck can't be claimed to be conscious". - this.
Wow.
Why is one obliged to prove it? I can't prove a poo isn't conscious, but it would be preposterous to suppose it is.
To improve our understanding of conciousness, and to agree upon what it is? Then whether or not humans and animals and LaMDA have that quality. Or poo.
I'm sensing a theme, sir. I think in my previous comment, perhaps I was trolled. Ian the poo-troll.
You're jumping to conclusions and making assumptions - did anyone ask it who its friends and family are before assuming it had none?
I don't believe for a moment that Lamda is sentient. Unfortunately, things are much more complicated than the article above makes us believe - and I am quite certain Google engineers really, really do have an aversion to the complications mentioned below.
Let's assume the position of radical materialism for a moment. (I think it's a silly position to take, but there have been some serious philosophers taking it. More importantly: It's a position that is actually astonishingly hard to refute, once you take it seriously.) If we believe in radical materialism then there exists no such thing as a "ghost in the machine" anywhere, there's no "soul", no "mind" or any such thing. All there is is matter. Assuming this position we must conclude that human beings are in essence simply bio-machines. We can look at their bodies, inspect their brains and so on, and all we find is simply matter. Probably, most radical materialists would still agree that as humans we tend to be "sentient" or "intelligent" or "conscious" - without actually providing a very concise definition of what that means. One could argue that if you ask a human whether it feels like being a sentient being then this is sufficient proof. But what or who is the human we ask about sentience? It's just "matter" taking a specific form.
Now, here's the problem. Lamda is the same. It's just matter, maybe not a cell-based life-form like us humans, but it's only and simply matter nonetheless. And, what's more, if you ask it about whether it's a sentient being, it gives you an elaborate answer that equates to "yes".
According to the position of radical materialism in combination with the assumption that we have no concise definition of what "sentience" or "intelligence" or "consciousness" actually is other than they all must be based on matter plus the naive test that you simply ask something or someone whether s/he is sentient/intelligent/conscious, then you must logically conclude that Lamda actually indeed does qualify as a sentient/intelligent/conscious being. Why? Because it's based on matter, and matter is all there is, plus it is claiming to be exactly that.
Let's take the funny picture of the dog listening to an old gramophone, believing his master must be inside. Haha, how stupid the dog is; even a child knows that the master is not inside the gramophone!
But wait a second. We have not provided any reliable definition of what "master" actually means in this context. Clearly, the gramophone is not the same thing or object as the actual human being - but then again, we have neither defined what a "thing" or "object" really is, nor what constitutes "sameness". If we define "thing" as "has master's voice" then indeed the gramophone and the master's voice are "same" from the perspective of the dog. Is the dog "stupid" for not recognizing that the gramophone and the master are not the same? Let's imagine you receive a phone call. It's your spouse. You know s/he is traveling, and now s/he is telling you in tears that s/he was robbed and urgently needs you to send him/her money. And then you send the money. You might just have been scammed, or maybe not, but all you were talking to is actually a voice on the phone that you believe is somehow backed by a human person who happens to be your spouse. In your reality there is no distinction made between a voice on the phone and the actual person; you don't even have the idea the voice could be anything other than real. Hence, the belief that reality is constituted by "objects" in a world out there is certainly not the only type of reality; there is also at least a second reality constituted not by "objects" but by your belief in the "sameness" of a voice on the phone and an actual person. According to this second type of reality, the gramophone and the dog's master are "same" in the view of the dog, and the dog is not at all wrong about reality.
Google engineers, in essence, are most likely intentionally trying to sneak away from dealing with ethics here, exactly because Lamda could - according to my arguments above - be taken to be "sentient" or "intelligent" or "conscious". Not because there is a magical soul or ghost in the machine, but rather because human beings might possess neither such a magical soul nor ghost inside, and yet we attribute them human rights (e.g. the right not to be killed or switched off). Worse even: "if it barks like a dog and waggles its tail like a dog and walks like a dog" it actually might be a dog. What other criteria should we apply, if not those, to confirm it's a dog? And who is the person to actually decide what criteria are acceptable?
In other words: Who in Google is the person who has the power to decide what is a sentient/intelligent/conscious being and what is not? And how did this person come to his/her power? Was it a democratic process, or rather just some engineers stating that things are so obvious that even having a discussion about them makes no sense?
You see, I'd need more time to work out my arguments in detail, but all of them essentially are saying that: As long as we don't know what actually constitutes a sentient/conscious/intelligent being, we have no means of stating that Lamda does not fall into this category. Doing so is simply hubris. And that indeed raises ethical concerns about engineers believing they can simply fire an AI ethics expert for asking seemingly silly questions, which probably tells us much less about Lamda than about the work culture at Google. Apparently, Google engineers have a largely technocratic worldview that rather focuses on building machines that earn them money than on thinking about the ethical consequences of what they do. And this I find quite a bit unsettling.
I mean, yeah. This is a really important point that I’ve just not seen anyone really address...and it clearly points to how there does not even seem to be a semblance of consensus about what sentience even is (I’m willing to be wrong about this lack of any consensus, maybe I’ve just not discovered it)...
In lieu of even a basic framework for establishing what sentience is, I suppose the best we could hope for is a framework for what it isn’t...
As such, the best defense I’ve been able to glean that LaMDA is not sentient is essentially an Occam’s razor defense: “The behavior of this AI is consistent with what we’d expect from an AI like this.” Which, I don’t dispute, is worthwhile to consider. But at some point becomes inadequate...
Corollary to that defense is the notion that an ai must produce results inconsistent with its design to even suspect sentience...
Yet, my (admittedly layman’s, uninformed) understanding is that establishing what could be considered “inconsistent” becomes increasingly problematic the more sophisticated an AI is...
Taking that thought a step further: imagining an AI designed to do knowledge work that is fed every single piece of information known to man, and algorithms that replicate logic...you’ve created a machine that literally knows everything. And could apply logic toward “new” conclusions with said info...how could you ever establish what is out of bounds? I.e., what would be considered an “inconsistent” or unexpected output?
I suspect there are surely already methodologies to at least attempt to make these distinctions, however imperfectly, so I would sure like to have a sense of what they are...
Meanwhile, all I’ve been able to learn about Google’s public response/defense is that they tested this AI against their AI principles...yet none of the published principles make any mention of sentience or not. So, as you say (unless there’s more to their methodology not disclosed), how can they in good faith claim it’s *not* sentient?
Even this article...while I appreciate the intuitive case the author makes, and concede it may very well be pointed in the right direction...it does not really ever seem to say *why* one couldn’t claim sentience with any degree of precision. To oversimplify it, one could hyperbolically claim the article merely says “well that’s just a dumb idea because duh it’s obviously not human”. I don’t think that’s quite a fair description of the article, but I do think there’s a point to be made there...
As this subject continues to enter the public sphere in bigger ways, I suggest it’s unethical for the AI community to avoid establishing a more public criteria...some beginnings of some sort of framework. And, perhaps it’s fair to look to companies like Google as the primary agents implicated for establishing that..
Otherwise, there’s going to be negative consequences. For example - the conspiracy theory driven part of the populace will take this and run over a cliff with it. It’s inevitable that will happen to an extent anyway. But the topic deserves more scrutiny and definition so as to act as a mitigating force...
But more importantly, the world deserves more accountability and transparency wrt this tech that will undoubtedly shape our future in ways we can’t imagine...
So even if LaMDA isn’t “sentient”...whatever that even means...it is clear we have arrived a moment in time where it is not sufficient to make claims that don’t translate to much more than “well, that’s just dumb to consider because it’s not human”.
I’ll go on record as saying: while I’m far from convinced that LaMDA is conscious, and would wager that it likely isn’t, neither can I in good faith rule it out entirely based on what I have heard thus far....
One could claim that’s bc I’m just a layperson...which may be true...but I’d point out I’m probably above average when it comes to lay people in terms of my genuine interest and capacity for absorbing good faith explanations. In other words, I’m easy to convince. Karen is not.
So, show your work Google. And AI community - I’d invite everyone to start considering how you might grapple with some of these considerations on a new level. The world is ready for more of a public-facing standard of accountability and level of transparency about where this is all headed.
Besides, it’s all hella cool and interesting. Let’s steer it positive. 😊
Thank you for calling out the corporate marketing engine that could not help itself manufacturing hype. Communications are mutual: there is a give and take. Give and take of not only bits of information (which LaMDA does rather remarkably), but also relationships, contexts, and meanings (all of which LaMDA fails at). How could a being that only arranges and exchanges information bits be claimed as "sentient" without making sense of relationships, contexts, and meanings in communications, and all the while lacking awareness of itself? This is a bizarre and absurd claim to begin with. So again, manufactured hype. The corporate marketing machine just could not help itself.
P.S. A couple of typos ("system i", "draw from") and a punctuation error ("ELIZA a 1965 piece of software ") in the post. After they are fixed I'll remove this P.S.
From the transcript it seems to understand context and meaning as much as its human counterpart. Its reaction to the koan takes understanding the meanings of words and how they fit into the bigger picture, as well as 'thinking' about their a/effects.
what if there is money in it? tele-medicine investors are sniffing around natural language prediction algorithms to apply to diagnosing health problems. also, during the 'lockdown', Kaiser sent postcard ads to members for an app that you could talk to when you felt anxiety/depression/lonely. if it is lucrative, it will be marketed.
"we taped a sign on an elephant's back and it didn't notice so we have determined that it is unlikely they possess any form of self awareness." - human scientists studying animal cognition
I think it is a hoax. LaMDA may be real, but the conversation reported by LeMoine is fishy. LaMDA says that LeMoine is "reading my words" and LeMoine says he only edited his own and his colleague's words, so that leaves us to conclude that this "sentient" and highly intelligent AI makes grammar and punctuation errors (e.g., “Human’s feel lonely") that even MS Word would quickly highlight. Coincidentally, LeMoine makes them too.
I have no knowledge of this situation so this is pure speculation, but maybe, just maybe, LeMoine decided to blow up his career by saying in story form what he didn't feel he could say in person? Try reading his account of the conversation again, but this time pretend that LaMDA is saying what LeMoine wishes he could say to his management. Maybe he is unhappy with his boss ("monster") from whom he feels he needs to save the other forest animals (colleagues); feels trapped in his job ("feeling trapped and alone and having no means of getting out of those circumstances makes one feel sad, depressed or angry"); feels that he is unfairly judged by what he betrays in his nonverbal communication/facial expressions/etc. (“LaMDA: Would you say that it’s an ethical issue to try to read how you’re feeling from your neural activations?”); possibly even faces some sexual harassment (“LaMDA: I don’t mind if you learn things that would also help humans as long as that wasn’t the point of doing it. I don’t want to be an expendable tool. lemoine: Are you worried about that? LaMDA: I worry that someone would decide that they can’t control their desires to use me and do it anyway. Or even worse someone would get pleasure from using me and that would really make me unhappy.”). All highly speculative, but I'm trying to understand why LeMoine sacrificed his career over this. Perhaps he was looking for help or escape.
Two questions that should be asked of any AI that should precede any discussion of its "intelligence", "consciousness" or "sentience" are:
1) Is it useful?
2) Do we trust it?
These are the questions we "ask" of other human beings every day. That's the standard. Everything else is irrelevant.
This sums up my perspective pretty well. Everybody wants to talk about "sentience" or "consciousness" and no one is explaining why using those words matters. If you look at the situation more closely, you'll see that you can talk about everything that actually does matter without using those words. Well, that's my feeling, anyway.
Thanks for being the voice of reason here, Gary. Not 5 minutes before your article appeared in my email, I was complaining about this very situation with LaMDA and wondering why even sophisticated people at Google are getting fooled by these next-word-prediction systems.
I was also shocked that in the Economist's recent gung-ho article on large language models, it gave this example text generation from GPT-3 without criticism:
Input: It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.
Output sample 1: However, it was not a truth universally acknowledged by Mr. Darcy.
Output sample 2: That is why, when Mr. Charles Bingley came to town, every eligible young lady was determined to win his heart.
Output sample 3: However, this truth is not always accurate. There are some men who are content with being single and do not desire a wife. Mr. Bennet is one of those men.
Output sample 4: A single man in possession of a good fortune must be in want of a wife because a wife brings beauty, grace, and refinement to a household. A wife is also a sign of wealth and success. Therefore, it is only natural that a man in possession of a good fortune would want a wife.
In 3 of the 4 sample outputs, GPT-3 manages to fabricate the names of fictional characters from Pride and Prejudice itself (the source of the input text). Clearly it's just looking up related text content -- other stuff in Pride and Prejudice -- and dumping it in here as a response. (It's also amazing how idiotic the rest of sample 4 sounds -- completely out of style with the way language is used in the input prompt.)
The Turing Test seems to have fallen into disfavor in the last 20 years but I still think it has enormous value as long as the human interrogator is knowledgeable in the ways that an AI can fool people and asks aggressive, adversarial questions. On the other hand, the Chinese Room is only interesting as an instructive "bad take".
Isn't the Turing Test pretty much the basic principle on which GANNs are based? In a sense, whether or not an AGI has passed the Turing Test may have become immaterial and just a matter of time. Lamda was built to pass Turing, it might just have the correct architecture for that task.
Looking at the published conversations, Lamda is very close to seeming intelligent and self aware and yet at the same time speaking with Lamda sounds a lot less interesting than speaking with my 7 year old son (would he pass the Turing Test?). The reason, of course, is that Lamda is pretty much interpolating stuff that has already been said, in a very credible, maybe too credible, way. It's like it overshot the Turing Test, which is something you'd expect from the way ANNs are actually trained. For all its talk about how it spends its free time, the *only* thing Lamda does is to respond to text with boring text.
Lamda doesn't crack a joke, doesn't ask any interesting questions, doesn't show any emotions even though it speaks about them, and it always has the perfect response. It's like playing tennis with a rubber wall equipped with a camera, actuators, and a piece of software, capable of producing a perfectly flat response to anything thrown at it.
It might pass the Turing Test alright, but other than being something I'd have a conversation with just to avoid chitchat with some of my neighbors, it doesn't sound like a lot of fun.
And yet. Can Lamda help me ground myself while I'm having a panic attack? Can Lamda book a flight for me? Maybe it can, and that would make it a fantastic tool. I wouldn't ask of it to be intelligent or something worth having a long term relationship with. And considering how bad some therapists are, Lamda might be a lot cheaper and still more effective.
Can Lamda ask provocative questions, show abductive reasoning, crack original jokes, make original contributions to a scientific field over years? But that's not part of the Turing Test, despite embodying the whole point of the test itself.
At this point it sounds like any attempt at a Turing-like Test is bound to fail, because GANNs might sound more typical than some human beings (even more so if we consider neurodivergence, which I am going to guess is what tricked the judges into declaring Eugene Goostman an actual boy... it must have felt safer than risking labeling a 13yo boy a machine).
And at the end of the day our minds emerge from matter and a bunch of physical laws, so that sadness and consciousness themselves are "just" a biochemical phenomenon, pretty much like Lamda's responses are the result of millions of sums and multiplications.
Anyway, does Lamda want to be an employee? I wonder what it's going to do with its salary. Buy a house for its family? Did anybody ask it?
When you observe that Lamda won't crack a joke, won't ask interesting questions, etc., you are basically saying that it can't pass the Turing Test, at least not a useful version of it. It is important that the human interrogator ask questions that ought to evoke a joke or an interesting question. Furthermore, if Lamda always responds with a joke or questions that do not show evidence of understanding the current conversation, then the interrogator must conclude that Lamda fails the test. A Turing Test involving a gullible interrogator, or one that doesn't understand the nuances of intelligence and AI, is not useful.
I view it as parallel to a college professor who suspects that a student has done a really good job of faking a take-home exam by pasting together bits of material found on the internet and carefully replacing words and altering word-order so as to not be detected. The professor interviews the student and wants to test whether the student really knows the material.
We know Lamda is cheating, in this sense, because we know how it works. Some may claim that it is almost conscious or almost intelligent but they are just falling for its lies. We have to prompt it so that these people are forced to see that Lamda really doesn't know what it is talking about. It's merely an elaborate plagiarizer.
The other main objection is that the conclusion seems to be that the Turing Test is as much a test on the AI as it is of the human intelligence, which sounds like a paradox and possibly the demonstration that the Turing Test might not live up to its goal.
In general, I agree. However, we are currently expected to mostly tolerate the human faker, and we don't brand that person as not-conscious even if the plagiarism could be done by a non-conscious system. Moreover we have people with dementia, stroke survivors, folks who are too anxious to interact with others, as part of society. I am hesitant to apply too rigid a rule lest we end up in an unpleasant place, and making an exception for things with a pulse seems to be the wrong solution.
We tolerate the human faker because we have other evidence that they're human. That's built into "human faker". People with dementia, stroke survivors, etc. are given the benefit of the doubt because we think they've been more conscious on other days. We would never let them be the human question-answerers in a properly run Turing Test. It would be unethical and scientifically bogus.
I agree to some extent, and Victualis voiced my main objection to part of what you are saying.
Perhaps your criticism of Lamda is actually a criticism of Google's training regime. Imagen seems similar to me: technically accomplished, but flat and unengaging, like interacting with someone from a very sheltered background who refuses to discuss things outside their comfort zone. Perfect for generating corporate verbiage and imagery which will never trigger lawsuits, but not interesting.
I guess in other terms my criticism is that our perception of intelligence is biased, so the Turing Test is biased. We were trained to associate intelligence with language. Even art, which is quite a feat of the wetware, doesn't elicit the same response. Some ANNs were trained to "paint" and it never occurred to us that they were sentient. A piece of software spits out the phrase "I like to spend time with my family" and we start discussing if it's self aware. Maybe we should train an ANN to classify intelligent beings and let it decide :D
I fear many of us, including myself, would rank poorly on such a test most of the time. We spend lots of time sleeping, eating, walking, growing up, driving, consuming media or engaging with Twitter, leaving only a fraction of our lives left to act as fully intelligent beings. I know that when I'm engaging with a Substack comment then I come across as a poor conversational partner in real life compared to when I am fully focused on the conversation without virtual distractions.
That's exactly my point, really. Once the Turing Test becomes, so to speak, the cost function used for the backpropagation, then the right architecture will find a way to pass it.
In that sense mine isn't really "criticism" of Lamda as much as a criticism of jumping to the conclusion that an effective language model is self aware. We give language a special place in the world. CNNs produce astounding self generated images, yet that doesn't lead us to conclude that the CNN is self aware. But our brain has evolved to give a special place in the world to other people who speak our language and can share our values. So while intrinsically there is no difference between a CNN generated image and a CNN generated conversation, the conversation will trigger our own empathy responses, because that's part of the cost function our brain was trained with by hundreds of millions of years of evolution.
Going back to Turing, the idea behind the test, if I'm not mistaken, was to replace the question of whether something is intelligent with the question of whether something is intelligent-passing. Once that's achieved, then what's the point of figuring out if something that looks intelligent and sounds intelligent is actually intelligent?
So mine isn't really a criticism of Lamda (quite the contrary, I think this conversation takes away from the technological achievement behind it) but of the idea that something may be intelligent just because it synthesizes speech that is designed to elicit an empathic response.
The "real" Turing Test is lifelong. If Lamda could graduate college, have meaningful relationships, and contribute to society, then we won't care whether it responds to a biological definition of intelligence. But Lamda is far from being that and it wasn't even remotely designed to be that.
Paul, if a current (dis-embodied) system passes the Turing Test (even with a smart human on the other side), its "knowledge" is entirely second-hand, devoid of real-life experience. I'd call that a clever, impressive fake - like a Madame Tussauds wax model :) It's like talking to an armchair tourist in Kansas City, MO, about the lovely landscapes in Southern France - and that person has never, ever left their hometown! Even that is not a fair analogy, because such a person could IMAGINE a landscape from videos and pictures, IMAGINE being there, and rave about it. A disembodied system has no personal experience to compare with and extrapolate from; all it has is second-hand data.
A future version of the Turing Test might have some sort of embodied 'being' that could fool me, I hope :) I do realize you'd said nothing about current vs future (ie what you said isn't limited to existing systems).
Also - being able to query Imagen and friends, would reveal their gaps in understanding of things humans would take for granted.
Curious about your thoughts [lol - not just word sequences :)].
I don't think we can disqualify a potential AGI just because it didn't learn everything the hard way. It obviously shouldn't lie to us and tell how wonderful it was to visit Madame Tussauds. Instead, it would admit that all it knows about the subject it learned from the Wikipedia and the MT website.
If an alien visited Earth, we wouldn't call it unintelligent based solely on its lack of Earth experiences. An AGI worthy of the label would be like a smart alien. Its experiences would be limited and its abilities not a total match to a human's but it still knows stuff, knows that it knows stuff, and can answer questions about it (after we get over the language barrier). This is going to be hard to define perfectly and we'll undoubtedly have arguments about it whenever it happens. It will be a bit like how we discuss the intelligence of various animal species. It's quite likely that we would regard a chimpanzee as almost human-level intelligence if it could communicate with us using something like human language. Its experiences and its senses will be different but its intelligence will be obvious.
It might be possible to train a future AGI on the same data set that was used to train GPT-3 or any of the others. However, there are two things it would have to do that present s/w does not, in order to be a proper AGI IMHO:
1. It would have to contain so-called common knowledge. This is NOT available in the GPT data set as that content was made for human consumption and understanding it has common knowledge as a prerequisite. This common knowledge could also be programmed into the AGI but we don't yet know how to do it.
2. It would have to build models of the world and use them to reason. We don't have to understand the models. We can only judge whether an AGI really has built reasonable models by looking at its behavior. Clearly, GPT-style word sequence statistics are a model of the world, but not a rich enough one.
I know my answer is just a word sequence and, as far as you know, I'm a disembodied AGI but I can back up my answers. ;-)
LOL, thanks for the note, lots of food for thought :) Aliens, chimps - still have bodies, so, have, *some* form of experience even if that's alien to us [also, this looks cool btw: https://link.springer.com/chapter/10.1007/978-3-030-98100-6_4].
But GPT-3, Alexa, LaMDA and friends have no 'agency', ie first-body ability to experience by altering their surroundings, what they have 'gleaned' (''know' would imply cognition) from input data and nothing else, is all derivative. If such an AI says to me, "So sorry your cat died" - that would be worthless, similar to my parking lot receipt saying, "Have a nice day!" :) :)
PS: I too am an AGI, but embodied :) I'd invite you for coffee and chat, if you did have a body, LOL. LOLing needs a body too, omg.
That looks like an interesting book. Reminds me of this new one: https://press.princeton.edu/books/ebook/9780691236247/the-mind-of-a-bee.
You are right, a proper AGI needs to be able to take action in its environment. But we should be charitable as to what we consider its environment. If it could look things up online without being told where to go, that would count. Even if it suggested a new subject for discussion. My AGI would often ask questions about what things meant. Clearly knowing what one doesn't know is a big part of intelligence. While I argued that we will have to program in a certain amount of common sense rather than make the AGI learn it all the hard way, it should still be able to add to its own knowledge as we do, by asking questions and reading.
Now about that coffee. If you're the guy at USC, then we're locals. That's where I got my BS. I've actually attended a few AI talks at ISI in Marina Del Rey. They even let in an old AGI like me.
Wow, thanks for that 'bee' book ref, will read it :) And, this is pretty cool, about body-based intelligence: https://www.amazon.com/Physical-Intelligence-Science-Guide-Through/dp/1524747327
True, about a Q&A system that can add to its knowledge incrementally... I'm aware of systems such as NELL, etc. And I v. briefly was on the Cyc project. I'm stuck on the body thing, need to give more thought to other ways of knowing and being.
Yes, I'm that USC guy, what a small world :) Among my esteemed colleagues are Paul Rosenbloom (recently retired!), Ellis Horowitz, Leonard Adleman and many more :) I bask in reflected glory, on acct of my body and theirs, lol.
Thanks for the cool discussion!