56 Comments
Feb 17 · edited Feb 17 · Liked by Gary Marcus

“the chatbot is a separate legal entity that is responsible for its own actions” 😂

That's rich. Thank you for my laugh of the morning.

(Has the chatbot hired its own lawyer? 😆).


This is priceless. Just wait until the Supreme Court rules that chatbots can donate to politicians and buy elections, maybe even run for Congress!

Wait a minute, I'm pretty sure some of those folks now in Congress ARE chatbots!


I would still give the bot points for inventing a policy on behalf of the customer. Even bots know user experience is paramount.


Everyone say: “unintended consequences.”

Feb 17 · Liked by Gary Marcus

The question is… when you create and release a product that you know lies, steals, and doesn't know when it's telling the truth, who will be accountable? “Not I,” said the LLM and GPT companies' TOS agreements, which disclaim all liability and *require users to indemnify them against any claims* that may arise.

Feb 17 · Liked by Gary Marcus

The interesting question is whether Air Canada could file a lawsuit against the company that sold or trained its chatbot. It would be very valuable if companies providing AI-driven services could be held legally liable for the unreliability of their products. That would put strong pressure on these companies to assess the actual quality of those products.

Feb 19 · Liked by Gary Marcus

Very well put

Feb 17 · Liked by Gary Marcus

I am extremely puzzled as to how that company thought its argument made sense. If the chatbot is presented as just another mechanical part of their website, they are as responsible for the information it provides as if they had written static text on the website. If it is presented as an AI customer service agent, then they are as responsible for its responses as they would be for the responses of a human customer service agent. I guess they knew they would probably lose and just threw something out as a Hail Mary?


As someone who builds AI assistants for companies for a living, I point to the recent DPD example and this story as evidence that extreme caution is advised when putting this technology in front of your customers. There are in fact ways of building LLM-powered chatbots that are safe and reliable (a rough sketch of one follows below), but this is not one of them.

Air Canada should have admitted its mistake and immediately refunded the man (it was a case of bereavement, for god's sake); then this would never have become a story grabbing international headlines. Instead, Air Canada chose to dig in. Shame on them.
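To make the first point concrete, here is a minimal sketch of one such safety pattern: ground every answer in retrieved policy text and fail over to a human agent when nothing supports an answer. This is illustrative only, not how Air Canada's bot worked; POLICY_DOCS, call_llm, and all the wording are hypothetical.

```python
# Minimal sketch: a support bot that only answers from retrieved policy
# text and escalates to a human otherwise. All names are made up.

POLICY_DOCS = {
    "bereavement": "Bereavement fares must be requested before travel...",
    "baggage": "Checked baggage allowances vary by fare class...",
}

def retrieve(question: str) -> list[str]:
    """Naive keyword retrieval; a real system would use embeddings."""
    q = question.lower()
    return [text for topic, text in POLICY_DOCS.items() if topic in q]

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real model API. Returning UNSUPPORTED by
    # default means the bot fails safe until a model is wired in.
    return "UNSUPPORTED"

ESCALATE = "I can't answer that reliably; connecting you to a human agent."

def answer(question: str) -> str:
    passages = retrieve(question)
    if not passages:                    # nothing to ground an answer in
        return ESCALATE
    prompt = (
        "Answer ONLY from the policy text below. If it does not answer "
        "the question, reply exactly UNSUPPORTED.\n\n"
        + "\n".join(passages)
        + "\n\nQuestion: " + question
    )
    reply = call_llm(prompt).strip()
    if reply == "UNSUPPORTED":          # model couldn't ground the answer
        return ESCALATE
    return reply + "\n(See the full policy page for authoritative terms.)"

print(answer("What is your bereavement fare policy?"))
```

The design choice that matters is the fail-safe default: whenever the bot cannot ground a reply in actual policy text, it hands off to a human rather than improvising.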


I saw the story this morning. It points to two things: 1. A tone-deaf PR move from AC with a bereaved passenger; AI or no AI, that was dumb. 2. Given that it was a November 2022 deployment, I'm not sure we can blame GPT for this, but I suspect the bot was trained on an out-of-date policy and was never updated. Any policy change should trigger retraining.

It could also have been trained not to answer pricing questions at all, but to point to the actual policy or engage a human agent instead. All in all, though, a shoddy chatbot deployment. I'd love to see a proper root-cause analysis from Air Canada on this. My thoughts here: https://thomasotter.substack.com/p/a-chatbot-blunder-and-responsible
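A minimal sketch of that kind of gating, with a simple keyword match standing in for a real intent classifier and a made-up policy URL; nothing here reflects Air Canada's actual implementation.

```python
import re

# Hypothetical pre-filter: the bot never answers pricing or refund
# questions itself; it links the live policy or hands off to an agent.

PRICING = re.compile(r"\b(price|fare|refund|discount|cost)\b", re.IGNORECASE)
POLICY_URL = "https://example.com/bereavement-policy"  # placeholder

def route(question: str) -> str:
    if PRICING.search(question):
        # Don't let the model improvise policy; cite the source of truth.
        return (f"For fares and refunds, please see the current policy at "
                f"{POLICY_URL}, or reply 'agent' to reach a person.")
    return chatbot_answer(question)

def chatbot_answer(question: str) -> str:
    return "(LLM answer for non-pricing questions would go here)"

print(route("Can I get a bereavement fare refund after my flight?"))
```

Routing pricing and refund questions to the source of truth (or a person) means the model never gets a chance to invent a policy in the first place.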


The policy actually exists; the chatbot simply found a shortcut through the menu tree for the customer, and in so doing enabled him to request the refund after the flight instead of before, as Air Canada originally intended.

Air Canada's argument, clearly stated in the article, is that it "cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot." This is a denial of responsibility on a broad scale, having nothing in particular to do with AI.

Broadly overstating the case against AI does nothing for the case against AI.

Feb 17 · edited Feb 17

No, this use of chatbots will not "dry up, fast." They will be used more and more for customer service.

Every person makes mistakes. Every software system eventually crashes. And sometimes doors fall out of airplanes. Things get fixed.

Chatbots can save a lot of labor, especially in constrained domains, like customer service.


I noticed the same article, and it clearly points to a weakness in LLM-based chatbots that don't use more reliable methods to make decisions. But it isn't clear from the article whether this particular chatbot was based on an LLM or on some older chatbot technology that was simply badly programmed, or that read from an outdated or inaccurate version of a corporate FAQ, for instance.


"the chatbot is a separate legal entity that is responsible for its own actions” ... this is important ... there is a lot of writing about how LLMs have concsiousness and AIs can be persons ... the reason these discussions are so important is for their potential legal implications, not for their philosophical or science content


Yeah, but look at all the money and aggravation they saved by not having to deal with those pesky humans.


You’ve made a nice following out of dumping on AI, but did you not at least want to mention that this happened in 2022, before LLMs?
