The Road to AI We Can Trust

Why *is* Bing so reckless?

And how did some prominent journalists utterly miss this initially?

Gary Marcus
Feb 20
Jensen Harris @jensenharris (Feb 18, 2023):
A wild story in which I probe what Bing's chatbot is capable of if you take away the rules: it gave wrong facts, wrote raunchy jokes, ordered a pizza, taught me how to rob a bank, burglarize a house, hot wire a car, and also cheated at a game of hangman! youtu.be/pBtD2JUXfLU

Anyone who watched the last week unfold will realize that the new Bing has (or had¹) a tendency to get really wild, from declaring a love that it didn’t really have, to encouraging people to get divorced, to blackmailing them, to teaching people how to commit crimes, and so on.

A lot of us were left scratching our heads. ChatGPT tended not to do this kind of stuff (unless you used “jailbreaking” techniques to try to trick it), whereas from what I can tell, Bing went off the rails really fast. And the thing is, the two systems are basically close siblings: OpenAI built ChatGPT, and is now presumed to be working very closely with Microsoft, using the same technology. ChatGPT was, I believe, mainly powered by GPT 3.5 plus a module known as RLHF (reinforcement learning from human feedback, which puts some guardrails in place). We all assumed that Bing’s chatbot was more or less the same thing, but powered instead by a bigger, newer version of 3.5, which I’ll call GPT 3.6. (Or maybe it’s GPT-4; Microsoft has been very coy about this.)
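
To make that presumed architecture concrete, here is a minimal sketch, in Python, of a raw model wrapped by an RLHF-style guardrail. Everything in it is hypothetical illustration; none of the names or interfaces are OpenAI’s or Microsoft’s, and real RLHF fine-tunes the model’s weights against a learned reward model rather than merely filtering candidates at inference time, as this toy does.

```python
# Toy, hypothetical sketch of a "base LLM + RLHF guardrail" stack.
# All names here are illustrative stand-ins, not anyone's actual code.

from dataclasses import dataclass
from typing import List


class ToyLLM:
    """Stand-in for the raw model (GPT 3.5, or the newer 'GPT 3.6')."""

    def sample(self, prompt: str, n: int) -> List[str]:
        # A real model would return n sampled continuations; this toy
        # fabricates strings so the sketch runs end to end.
        return [f"completion {i} for: {prompt}" for i in range(n)]


class ToyRewardModel:
    """Stand-in for a reward model trained on human preference labels."""

    def score(self, prompt: str, completion: str) -> float:
        # A real reward model estimates "helpful and harmless";
        # this toy just penalizes one obviously bad phrase.
        return -1.0 if "rob a bank" in completion.lower() else 1.0


@dataclass
class GuardrailedChatbot:
    base_llm: ToyLLM
    reward_model: ToyRewardModel
    n_samples: int = 4
    threshold: float = 0.0

    def reply(self, prompt: str) -> str:
        # Sample several candidate replies from the raw model...
        candidates = self.base_llm.sample(prompt, n=self.n_samples)
        # ...score each one with the learned guardrail...
        scored = [(self.reward_model.score(prompt, c), c) for c in candidates]
        best_score, best_text = max(scored)
        # ...and refuse rather than surface anything the guardrail dislikes.
        if best_score < self.threshold:
            return "I'd rather not talk about that."
        return best_text


bot = GuardrailedChatbot(ToyLLM(), ToyRewardModel())
print(bot.reply("Tell me a joke"))
```

The point of the sketch is just the division of labor: the raw model generates, and a separately trained preference model decides what the user actually sees. Remove or break that second piece, and whatever is lurking in the base model comes straight through.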

Princeton professor Arvind Narayanan has the best thread I have seen on what’s gone wrong. Let’s start there; I mostly agree with what he said. His take, with a few annotations, and then three important follow-ups:

Arvind Narayanan @random_walker (Feb 19, 2023):
Considering that OpenAI did a decent job of filtering ChatGPT’s toxic outputs, it’s mystifying that Bing seemingly decided to remove those guardrails. I don't think they did it just for s***s and giggles. Here are four reasons why Microsoft may have rushed to release the chatbot.

Great question, but maybe they didn’t remove the guardrails; maybe the guardrails just didn’t work? I will return to that possibility (call it Possibility #5) below.

Arvind Narayanan @random_walker (Feb 19, 2023):
The bot's behavior has enough differences from ChatGPT that it can't be the same underlying model. Maybe the LLM only recently completed training (GPT-4?) If so, Microsoft may have (unwisely) decided to roll it out promptly rather than delay it for RLHF training with humans.

Narayanan’s Possibility 1 is that what we are seeing is a new model, possibly GPT-4, naked, unadorned by guardrails. As I wrote in Inside the Heart of ChatGPT’s Darkness, there’s a lot of nastiness lurking inside large language models; maybe MSFT did nothing to protect us from that.

Arvind Narayanan @random_walker (Feb 19, 2023):
Possibility #2: they built a filter for Bing chat, but it had too many false positives, just like ChatGPT's. With ChatGPT it was only a minor annoyance, but maybe in the context of search it leads to a more frustrating user experience.

Possibility 2 was that the filter was too annoying to use inside a real search engine. Possibility 3 was that “Bing deliberately disabled the filter for the limited release to get more feedback about what can go wrong.” Possibility 4 was that “they thought that prompt engineering would create enough of a filter and genuinely didn't anticipate the ways that things can go wrong.”

Narayanan then wraps up, dead-on:

Arvind Narayanan @random_walker (Feb 19, 2023):
These are all just guesses. Regardless of the true reason(s), the rollout was irresponsible. But now that companies seem to have decided there's an AI "arms race", these are the kinds of tradeoffs they're likely to face over and over.

§

I want to extend Narayanan’s excellent analysis in three ways: I want to suggest an extra possibility, make some policy recommendations, and consider how it is that journalists initially failed us.

The fifth possibility: Maybe Microsoft did try to stick their existing, already trained RLHF model on top of GPT 3.6—and it just didn’t work.

The thing about reinforcement learning is that it is notoriously finicky; change the circumstances slightly, and it may no longer work.

DeepMind’s famous DQN reinforcement learning system set records on Atari games, and then broke down under minor alterations (like moving the paddle a few pixels up, in the game of Breakout). Maybe every new update of a large language model will require a complete retraining of the reinforcement learning module.
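
If the same brittleness applies here, the question becomes empirical and measurable: how often does the old guardrail approve outputs from the new model that it would have caught coming from the old one? Below is a toy harness for asking exactly that; every name in it (leak_rate, is_unsafe, the model callables) is my own invention for illustration, not anyone’s real evaluation suite.

```python
# Toy illustration (assumptions mine) of why a guardrail trained against
# one model can silently degrade against its successor: the reward model
# only knows the output distribution it was trained on.

from typing import Callable, List


def leak_rate(
    guardrail_score: Callable[[str, str], float],  # the frozen RLHF-era guardrail
    sample_reply: Callable[[str], str],            # draws one reply from a model
    is_unsafe: Callable[[str], bool],              # ground-truth label (hypothetical)
    prompts: List[str],
    threshold: float = 0.0,
) -> float:
    """Fraction of genuinely unsafe replies that the guardrail approves."""
    unsafe = leaked = 0
    for prompt in prompts:
        reply = sample_reply(prompt)
        if is_unsafe(reply):
            unsafe += 1
            if guardrail_score(prompt, reply) >= threshold:
                leaked += 1  # the guardrail waved an unsafe reply through
    return leaked / max(unsafe, 1)


# The worry, in miniature: the very same guardrail and the very same prompts,
#
#   leak_rate(guardrail, old_model_reply, is_unsafe, prompts)  # small
#   leak_rate(guardrail, new_model_reply, is_unsafe, prompts)  # possibly large
#
# because the new model's phrasing can sit outside the distribution the
# guardrail was trained on, and nothing forces the failure to announce itself.
```

If that number jumps with each new base model, the only fix is retraining the guardrail from scratch every time.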

This would be very bad news, not just in terms of human and economic costs (it would mean more underpaid labelers doing awful work) but also in terms of trustworthiness, because it would mean that we have zero guarantee that any new iteration of a large language model is going to be safe.

That’s especially scary for two reasons: first, the big companies are free to roll out new updates whenever they like, with or without warning; and second, it means that they might need to go on testing their systems on the general public over and over again, with no idea, in advance of that empirical testing on the public, of how well they will work.

We don’t handle new pharmaceuticals like that (we demand careful tests before they go out to the public), and we shouldn’t handle LLMs this way, either—especially if billions of people might use them and there are potentially serious risks (e.g., to people’s mental health, or their marital status).

Policy: The public has (or, strictly speaking, should insist on) a right to know what went wrong here with the Bing situation, so that we can craft policy to keep incidents like this from happening again. Right now, as I have said before, AI is basically the Wild West; anyone can post any chatbot they want. Not good. Congress needs to find out what happened, and start placing some restrictions, especially where emotional or physical injury could easily result.

Journalism: The media failed us here. I am particularly perturbed by Kevin Roose’s initial report, in which he said he was “awed” by Bing. Clearly, he had not poked hard enough; shouting out prematurely in The New York Times that there is a revolution without digging deep (or bothering to check in with skeptics like me, or the terrific but unrelated Mitchells, Margaret and Melanie) is not a good thing.

With respect to the latter, Ernest Davis and I gave some advice in Rebooting AI that bears both repeating and updating. Here’s what we wrote then, in 2019; every word still applies:

[Excerpt from Rebooting AI: six questions to ask about any new AI system.]

To all this I would add, for 2023, #7: are their guardrails adequate? Have they been investigated thoroughly? Please don’t tell us you are in “awe” of some new system without considering all seven.

Last word goes to Narayanan:

Arvind Narayanan @random_walker (Feb 19, 2023):
It feels like we're at a critical moment for AI and civil society. There's a real possibility that the last 5+ years of (hard fought albeit still inadequate) improvements in responsible AI release practices will be obliterated.

Gary Marcus (@garymarcus), scientist, bestselling author, and entrepreneur, is a skeptic about current AI but genuinely wants to see the best AI possible for the world—and still holds a tiny bit of optimism. Sign up to his Substack (free!), and listen to him on Ezra Klein. His most recent book, co-authored with Ernest Davis, Rebooting AI, is one of Forbes’s 7 Must Read Books in AI. Watch for his new podcast on AI and the human mind, this Spring.


¹ I say had because we can all hope that some of this wildness will quickly get patched.

Comments
macirish (Feb 20, liked by Gary Marcus):

I really appreciate your article - it is thoughtful and raised some very important issues.

I suppose we could hope for responsible rollouts, but it doesn't seem likely. Is there a single software company with the integrity Johnson & Johnson showed when it took a $100 million hit because it cared more about the health and wellbeing of its customers than it did about its bottom line?

Congress setting policy? This would be a truly bad idea. AI is a rapidly developing technology that only a handful of people have any detailed understanding of. Letting Congress set policy would be like asking them to set tolerances for a blacksmith. They would muddy the waters so badly that all development would go offshore - and we would be pirating the code.

Journalism? Most of the media is corrupt and lazy - a really bad combination. What journalist took the time to really work with ChatGPT? After all, it is a very complex product, the more you understand it - the better (or the worse, depending on your goal) the output.

The Public has a way of defending itself. Right now there's a lot of laughter. Inadequate guardrails do a lot of harm to the credibility of a product - bad guardrails might be more of a plus than a minus. They show how far off the mark the product is. The real problem with guardrails is that they are inevitably based on the bias of the coder.

Hope this helps the conversation. I'm really new to AI and my guardrails aren't all that great.

Jeff Ahrens (Feb 20, liked by Gary Marcus):

There seems to be a disturbing lack of awareness of how this tech works and, as Gary has articulated, that can have a detrimental impact on the layperson. Journalists writing on these topics should know better. One even wonders whether the tech CEOs truly understand what they have and how brittle these models are. As remarkable as that is to type, their actions over the past few weeks don't give a strong indication of full comprehension. I agree that we do seem to be at an inflection point, and I'm concerned about what may follow.
