15 Comments

To communicate, interlocutors need to share some context. To lie, the liar needs a private context unknown to the other party. If the lie concerns the shared context and can easily be called out, it is not a lie; it is stupidity.

Nov 22, 2023·edited Nov 22, 2023

Yes, GPT is a liar and it is dumb. Bard is worse. Even so, it is very good at getting work done: for some intricate things I'd otherwise have to do manually, I now just ask the chatbot, and it gives good, detailed solutions, even with code.

This is the path forward. Useful but imperfect tools, which get smarter as people use them more and companies reinvest profits. Bottom-up to AGI.

When someone says GPT is lying, they are granting it a distinctly human trait. A trait it clearly does not have.

I'd suggest that both Santos and GPT had faulty training data. Maybe Santos believes that American business is based on lies, so he feels no guilt when he lies. I won't deal with companies that lie to me, but that is another subject.

GPT seems capable of producing good code, perhaps because there is less trash code on the Internet.

For other subjects, the amount of careless data, incompetent rambling and outright lies is much greater. GPT has no ability to discern the validity of that data, and that is the source of its less-than-accurate results.

Nov 22, 2023·edited Nov 22, 2023

Lying is intentional. Similarly, immorality is intentional: it is impossible to be immoral by accident. Machines are not immoral; even animals are not immoral. They do what they do inexorably. Dogs bite children and lions kill babies, yet neither is morally culpable. Dogs are dogs; lions are lions; machines are machines. That is the foundation of morality: conscious intent.

To lie, which in most instances is immoral, is to know you are intentionally deceiving another for an unjustifiable reason. Accidental falsehoods, however, are without intent; one is not morally culpable for lying if the falsehood is a matter of stupidity or ignorance or even a lack of due diligence. Santos is a liar; he intended to lie. ChatGPT is amoral; it intends nothing. Chat has no intentions at all, because it is entirely without consciousness.

But we who are foolish enough not to do our due diligence are culpable, not for lying but, as we will see as lawsuits arise, for criminal negligence. We all know Chat is not conscious. It cannot lie. We know damn well it does not care about anything, including the truth. Yet when we put lives in the hands of uncaring machines and people die, we who were negligent will pay the price. Santos will go to jail for being Santos; neither ChatGPT nor the creators of ChatGPT will go to jail, but the fool who deludes himself and uses ChatGPT recklessly may well serve some jail time for criminal negligence. We do not punish the sword but the one holding it. ChatGPT is a chainsaw that is hard to control, but damn does it cut.

I recently heard of the concept of 'data voids'. Does this apply to LLMs too? Any answer is better than no answer, it seems. A bit like Trump, who hates to admit ignorance and makes stuff up, I suspect.

The issue of lying seems to provide an interesting test case for the governance of AI. I published a case study (https://www.linkedin.com/pulse/chatgpt-when-do-hallucinations-turn-deceit-mike-baxter/) a few weeks back describing my request to ChatGPT4 for quotations on a specific topic (business strategy). Having been given several such quotations, I realised one was not actually a quotation. I challenged ChatGPT4 on this and got this response: "I apologize for the confusion earlier, but upon further research, it seems that the line you mentioned doesn't appear to be a direct quotation from the article. I must have paraphrased their main idea rather than quoted them directly". Then, later in our interaction, it offered a "relevant direct quote from the article", which also turned out to be a hallucination.

Reflecting on this issue, it dawned on me that, whilst an LLM has a syntactic comprehension of a quotation (words enclosed in quotes and attributed to a source), it cannot have a semantic understanding of one (a verbatim extract from a cited source, reproduced unchanged or modestly changed and recorded as such using conventional notations). This suggests that quotations are a known case of 'very high probability hallucinations' with a clear signal: the use of the word stem 'quot*' in the prompt. It also has a clear mitigation: give up-front the warning that ChatGPT will eventually provide after it has been challenged, "For exact quotations, always refer to the original article". Clearly this won't stop the problem of hallucinations, but it would show that steps to prevent users from being misled are being introduced into AI governance practices.
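
To make that concrete, here is a minimal sketch of the kind of prompt-side check I have in mind; the pattern, function name and warning text are my own illustrative choices, not a feature of ChatGPT or any real API:

    import re

    # Illustrative heuristic only: flag prompts containing the word stem 'quot*'.
    QUOTE_STEM = re.compile(r"\bquot\w*", re.IGNORECASE)
    WARNING = ("Caution: this model cannot guarantee verbatim quotations. "
               "For exact quotations, always refer to the original article.")

    def screen_prompt(prompt: str) -> str:
        """Prepend an up-front warning when the prompt asks for quotations."""
        if QUOTE_STEM.search(prompt):
            return WARNING + "\n\n" + prompt
        return prompt

    # 'quotations' triggers the warning; unrelated prompts pass through unchanged.
    print(screen_prompt("Give me three quotations on business strategy."))

Crude as it is, surfacing the warning before the model answers, rather than only after a challenge, is exactly the kind of small, visible step that AI governance practices could require.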

When you converse with a human, you take it as a given that the other party has a somewhat static persona. But when you converse with an LLM, it uses the things you say to update, in mid-conversation, the persona it is presenting to you.

This unsettling aspect of LLMs always struck me as being similar to how a con-person might interact with a mark, altering their persona or their backstory 'on the fly' to maximise the likelihood of the con succeeding.

Do you think "backpropagation" is itself the root problem here? Perhaps the loop re-propagates some sort of invisible flaw: as it washes out errors, some errors may simply be impervious for some reason. Backpropagation was the revolutionary moment in AI, so it would make sense that some flaw there could be leading to the hallucinations. I do not have the mathematical understanding to know how it works, but perhaps someone here could simply say "no, that is nonsense" or "maybe". (Please be patient with a dolt.)
