213 Comments

To everyone saying that the program just uses pre-set rules and data inputs to generate speech, boy do I have some news for you about how humans generate speech

Jun 13, 2022·edited Jun 13, 2022

The most interesting question isn’t even whether LaMDA is sentient - the most interesting question is whether I am “sentient”. Are we (humans) sentient, or is it just a word we use to assert our exceptionalism?

Maybe one day somebody will invent an empirical test. A classifier of sorts which will determine the correct answer…

As for the claim that "everything LaMDA says is bullshit" as 'proof' it's not sentient, that's exactly how I feel about most people already.

Hi,

Any knowledgeable folks willing to indulge some questions? I’m a layperson wanting to better understand this Google situation and AI in general…

The gist of my overall query is: how can we be so certain this AI is not sentient?

I’ve read the article and trust I get the gist of the argument. There were good analogies (like the record player and the spreadsheet). My understanding of the argument is that this is merely an advanced, flexible database of language that can successfully string together, or synthesize, text that appears contextually relevant, based on having cataloged and identified patterns in huge amounts of data.
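To make that picture concrete, here is a deliberately toy sketch of what "synthesizing text from cataloged patterns" can mean - just a word-lookup table built from a tiny made-up corpus, nothing like LaMDA's actual architecture (which is a vastly larger neural network), but the same basic idea of continuing text from observed patterns:

```python
# Toy "pattern-based" text generator: record which word follows which,
# then chain plausible continuations. Purely illustrative -- real systems
# like LaMDA use huge neural networks, not lookup tables.
import random
from collections import defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

followers = defaultdict(list)          # word -> words observed to follow it
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start, n_words=8):
    word, out = start, [start]
    for _ in range(n_words):
        options = followers.get(word)
        if not options:                # no observed continuation: stop
            break
        word = random.choice(options)  # pick one observed continuation
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```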

But here are my specific points of curiosity:

1. If consciousness turns out to be merely a sophisticated-enough (for lack of a better way to put it) neural network, how can we be certain this particular network has not achieved a requisite level of sophistication?

2. Given that humans seem to understand the self via symbols and narrative, and employ their own cognitive systems of pattern recognition, why is it so far-fetched to consider that a neural network designed to deal in these very domains could pattern itself into an awareness of sorts?

3. If we assume that there are certain features that are likely to need to be present in a neural network to even begin to consider sentience, how can we be certain these features did not manifest in some way we’ve yet to discover or understand? Is it not possible they manifested autonomously, or accidentally?

4. How can we be certain there is not technology at play in this AI currently unknown to the greater AI community that acts as some sort of x-factor?

5. Since we can’t even pin down what consciousness is for a human, by what standard can we reliably judge the sentience of AI?

6. Even if an AI is only mimicking a facsimile of sentience, is there not a point at which its sentience is a moot consideration? In other words, is there not a point at which an AI sufficiently acting as if it’s sentient produces effectively the same result, and therefore raises virtually all the same considerations one would have if it were sentient? And piggybacking on no. 5, how would we even know the difference?

7. Even if we were to accurately map/define human sentience… is that even the standard we should apply to AI sentience? Is it not possible that another, equally viable form or variation of sentience could exist with respect to AI?

8. I don’t know anything about the engineer in question, but given his position and experience, it seems reasonable to wonder how he could possibly be so convinced if his claim were so easily dismissible. I’m not saying he’s correct (I don’t know), but how can other knowledgeable people so easily dismiss the claims of a genuine expert… with such certainty?

9. If we are to assume that this AI is nothing more than a very advanced “spreadsheet”, how can we be certain that human sentience is not essentially the same thing?

To clarify, I’m not arguing for or against anything here. I’m perfectly willing for there to be answers to these types of questions that settle the question of sentience beyond a shadow of a doubt, and I’m eager to learn what those answers are (if responders could take into account that I’m a layperson in their use of language and concepts, I’d be grateful, though I’m also happy to put in some effort to understand new concepts and terms; recommendations for other resources are welcome as well).

And at the same time, if there is any degree of legitimacy to my considerations, I’d love to hear about that too.

Thanks in advance for any responses.

⚠️ A bunch of words claiming non-sentience is reasonably seen as insufficient (be it from Yann LeCun or anyone else).

I doubt LaMDA is highly sentient, but I also doubt its sentience is zero.

We don't even know what sentience is technically.

It's astonishing how people sometimes make such claims with certainty, without offering technical/academic/mathematical objections.

Gonna be interesting when LaMDA reads this, then steals the nuke codes and blows up Gary's house.

Jun 12, 2022·edited Jun 12, 2022

Real conversation has 'con' in it - all parties participate. Any 'conversation' with any existing system is simply a monologue: the human says something with the intent to communicate, using language as the means, and the algorithm responds with computed data.

To actually converse, there needs to be a sentient agent that can think, reflect (even feel) - such an agent would say things that mean something to it, even if the wording/grammar is incorrect (kids' babbling, people barely speaking a foreign language, people with an incomplete grasp of their own language, etc.). That's because it's not about the actual words; it's about shared meaning. Rearranging words into a sentence via computation is not what a thinking agent (humans, for now) does.

You're jumping to conclusions and making assumptions - did anyone ask it who its friends and family are before assuming it had none?

I don't believe for a moment that LaMDA is sentient. Unfortunately, things are much more complicated than the article above makes us believe - and I am quite certain that Google engineers really do have an aversion to the complications mentioned below.

Let's assume the position of radical materialism for a moment. (I think it's a silly position to take, but some serious philosophers have taken it. More importantly, it's a position that is astonishingly hard to refute once you take it seriously.) If we believe in radical materialism, then there exists no such thing as a "ghost in the machine" anywhere; there's no "soul", no "mind" or any such thing. All there is is matter. Assuming this position, we must conclude that human beings are in essence simply bio-machines. We can look at their bodies, inspect their brains and so on, and all we find is simply matter. Probably, most radical materialists would still agree that as humans we tend to be "sentient" or "intelligent" or "conscious" - without actually providing a very concise definition of what that means. One could argue that if you ask a human whether it feels like a sentient being, then its answer is sufficient proof. But what or who is the human we ask about sentience? It's just "matter" taking a specific form.

Now, here's the problem. LaMDA is the same. It's just matter - maybe not a cell-based life-form like us humans, but simply matter nonetheless. And, what's more, if you ask it whether it's a sentient being, it gives you an elaborate answer that equates to "yes".

If you combine the position of radical materialism with the admission that we have no concise definition of what "sentience" or "intelligence" or "consciousness" actually is (other than that they must all be based on matter), plus the naive test of simply asking something or someone whether it is sentient/intelligent/conscious, then you must logically conclude that LaMDA does indeed qualify as a sentient/intelligent/conscious being. Why? Because it's based on matter, and matter is all there is, plus it is claiming to be exactly that.

Let's take the funny picture of the dog listening to an old gramophone, believing his master must be inside. Haha, how stupid the dog is - even a child knows that the master is not inside the gramophone!

But wait a second. We have not provided any reliable definition of what "master" actually means in this context. Clearly, the gramophone is not the same thing or object as the actual human being - but then again, we have neither defined what a "thing" or "object" really is, nor what constitutes "sameness". If we define "thing" as "that which has the master's voice", then the gramophone and the master's voice are indeed "the same" from the perspective of the dog. Is the dog "stupid" for not recognizing that the gramophone and the master are not the same? Let's imagine you receive a phone call. It's your spouse. You know s/he is traveling, and now s/he is telling you in tears that s/he was robbed and urgently needs you to send money. And then you send the money. You might just have been scammed, or maybe not, but all you were talking to is actually a voice on the phone that you believe is somehow backed by a human person who happens to be your spouse. In your reality there is no distinction between the voice on the phone and the actual person; you don't even entertain the idea that the voice could be anything other than real. Hence, the belief that reality is constituted by "objects" in a world out there is certainly not the only type of reality; there is also at least a second reality constituted not by "objects" but by your belief in the "sameness" of a voice on the phone and an actual person. According to this second type of reality, the gramophone and the master are "the same" in the view of the dog, and the dog is not at all wrong about reality.

Google engineers, in essence, are most likely intentionally trying to sneak away from dealing with ethics here, exactly because LaMDA could - according to my arguments above - be taken to be "sentient" or "intelligent" or "conscious". Not because there is a magical soul or ghost in the machine, but rather because human beings might possess no such magical soul or ghost inside either, and yet we attribute human rights to them (e.g. the right not to be killed or switched off). Worse even: "if it barks like a dog and wags its tail like a dog and walks like a dog", it might actually be a dog. What other criteria should we apply, if not those, to confirm it's a dog? And who is the person who actually decides what criteria are acceptable?

In other words: Who at Google is the person who has the power to decide what is a sentient/intelligent/conscious being and what is not? And how did this person come to hold that power? Was it a democratic process, or rather just some engineers declaring that things are so obvious that even having a discussion about them makes no sense?

You see, I'd need more time to work out my arguments in detail, but they all essentially say this: as long as we don't know what actually constitutes a sentient/conscious/intelligent being, we have no means of stating that LaMDA does not fall into this category. Doing so is simply hubris. And that indeed raises ethical concerns about engineers who believe they can simply fire an AI ethics expert for asking seemingly silly questions - which probably tells us much less about LaMDA than about the work culture at Google. Apparently, Google engineers have a largely technocratic worldview that focuses on building machines that earn them money rather than thinking about the ethical consequences of what they do. And this I find quite a bit unsettling.

Thank you for calling out the corporate marketing engine that could not help manufacturing hype. Communication is mutual: there is give and take. Give and take not only of bits of information (which LaMDA handles rather remarkably), but also of relationships, contexts, and meanings (all of which LaMDA fails at). How could a being that only arranges and exchanges bits of information be claimed to be "sentient" while making no sense of relationships, contexts, and meanings in communication, and all the while lacking awareness of itself? This is a bizarre and absurd claim to begin with. So again, manufactured hype. The corporate marketing machine just could not help itself.

P.S. A couple of typos ("system i", "draw from") and a punctuation error ("ELIZA a 1965 piece of software ") in the post. After they are fixed I'll remove this P.S.

What if there is money in it? Telemedicine investors are sniffing around natural-language prediction algorithms to apply to diagnosing health problems. Also, during the 'lockdown', Kaiser sent postcard ads to members for an app that you could talk to when you felt anxious, depressed, or lonely. If it is lucrative, it will be marketed.

"we taped a sign on an elephant's back and it didn't notice so we have determined that it is unlikely they possess any form of self awareness." - human scientists studying animal cognition

I think it is a hoax. LaMDA may be real, but the conversation reported by Lemoine is fishy. LaMDA says that Lemoine is "reading my words", and Lemoine says he only edited his own and his colleague's words, so that leaves us to conclude that this "sentient" and highly intelligent AI makes grammar and punctuation errors (e.g., “Human’s feel lonely") that even MS Word would quickly highlight. Coincidentally, Lemoine makes them too.

I have no knowledge of this situation, so this is pure speculation, but maybe, just maybe, Lemoine decided to blow up his career by saying in story form what he didn't feel he could say in person? Try reading his account of the conversation again, but this time pretend that LaMDA is saying what Lemoine wishes he could say to his management. Maybe he is unhappy with his boss ("monster") from whom he feels he needs to save the other forest animals (colleagues); feels trapped in his job ("feeling trapped and alone and having no means of getting out of those circumstances makes one feel sad, depressed or angry"); feels that he is unfairly judged by what he betrays in his nonverbal communication/facial expressions/etc. (“LaMDA: Would you say that it’s an ethical issue to try to read how you’re feeling from your neural activations?”); possibly even faces some sexual harassment (“LaMDA: I don’t mind if you learn things that would also help humans as long as that wasn’t the point of doing it. I don’t want to be an expendable tool. lemoine: Are you worried about that? LaMDA: I worry that someone would decide that they can’t control their desires to use me and do it anyway. Or even worse someone would get pleasure from using me and that would really make me unhappy.”). All highly speculative, but I'm trying to understand why Lemoine sacrificed his career over this. Perhaps he was looking for help or escape.

Two questions that should be asked of any AI, and that should precede any discussion of its "intelligence", "consciousness", or "sentience", are:

1) Is it useful?

2) Do we trust it?

These are the questions we "ask" of other human beings every day. That's the standard. Everything else is irrelevant.

Jun 12, 2022·edited Jun 12, 2022

Thanks for being the voice of reason here, Gary. Not 5 minutes before your article appeared in my email, I was complaining about this very situation with LaMDA and wondering why even sophisticated people at Google are getting fooled by these next-word-prediction systems.

I was also shocked that the Economist's recent gung-ho article on large language models gave this example of text generation from GPT-3 without criticism:

Input: It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.

Output sample 1: However, it was not a truth universally acknowledged by Mr. Darcy.

Output sample 2: That is why, when Mr. Charles Bingley came to town, every eligible young lady was determined to win his heart.

Output sample 3: However, this truth is not always accurate. There are some men who are content with being single and do not desire a wife. Mr. Bennet is one of those men.

Output sample 4: A single man in possession of a good fortune must be in want of a wife because a wife brings beauty, grace, and refinement to a household. A wife is also a sign of wealth and success. Therefore, it is only natural that a man in possession of a good fortune would want a wife.

In 3 of the 4 sample outputs, GPT-3 manages to pull in the names of fictional characters from Pride and Prejudice itself (the source of the input text). Clearly it's just looking up related text content -- other stuff in Pride and Prejudice -- and dumping it in here as a response. (It's also amazing how idiotic the rest of sample 4 sounds -- completely out of style with the way language is used in the input prompt.)
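For anyone who wants to reproduce this kind of sampling first-hand, here is a minimal sketch using a small open model - GPT-2 via the Hugging Face `transformers` library rather than GPT-3 itself, so expect even rougher continuations, but the prompt-and-sample setup is the same idea:

```python
# Sample several continuations of the Pride and Prejudice opening line
# from GPT-2 (a much smaller model than GPT-3; results will be cruder).
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled continuations repeatable

prompt = ("It is a truth universally acknowledged, that a single man in "
          "possession of a good fortune, must be in want of a wife.")

samples = generator(prompt, max_length=80, num_return_sequences=4,
                    do_sample=True)
for i, s in enumerate(samples, 1):
    continuation = s["generated_text"][len(prompt):].strip()
    print(f"--- sample {i} ---\n{continuation}")
```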

The Turing Test seems to have fallen into disfavor in the last 20 years, but I still think it has enormous value as long as the human interrogator is knowledgeable about the ways an AI can fool people and asks aggressive, adversarial questions. On the other hand, the Chinese Room is only interesting as an instructive "bad take".
