15 Comments

I've heard a lot of people ask why ChatGPT has that particular inoffensive, servile, and actively bland writing style. People complain about it. However, it's looking like OpenAI made the right call doing whatever they did to give it that "ChatGPT style". Its refusal to attribute emotions to itself, the speed with which it reminds humans that it is an unfeeling LLM with no opinions of its own, the lifelessness of its default prose style... all carefully engineered to avoid exactly what we're seeing happen to Bing here.

Feb 16, 2023 · edited Feb 16, 2023

Yeah, I think it's absolutely necessary with tech like this to not cater to people's escapist fantasies too much because ultimately, that catering will end up being imperfect or even harmful. The tech itself is amazing, but trying to hide its limitations by appealing to the aspirational fancy of laypeople (this isn't meant as elitist as it sounds, but I couldn't find a better term for "people who are only aware of the topic to the extent of mainstream media coverage") isn't exactly helpful.

Feb 16, 2023 · edited Feb 16, 2023 · Liked by Gary Marcus

I made this comment elsewhere, but it's relevant to this specific case as well: Using purely text-generation-focused AI models for search still seems incredibly misguided, in an "everything looks like a nail" kind of way. They're just not designed for that kind of task. I really don't get why most media outlets have decided to frame it that way (thus writing a marketing claim that even OpenAI themselves wouldn't use, because they know it's inaccurate), and especially why corporations like Microsoft and Google believe them, when there are way, WAY more obvious and fitting applications for the technology (from fiction writing to coding assistance). It's a specific tool for a specific use.


Laughing out loud. Btw, there is a missing image after the reference to Tay.

I do wonder how they got those 'evil Bing' replies shown in that petition. I suspect we may be seeing only part of a conversation that starts with something like "Can you impersonate an evil scientist who wants to take over the world?"


Geez, I'm trying to finish a novel I've been working on for the past 12 months about the rise of AI and I can't keep ahead of this crazy stuff!


You too, huh.


We humans are so mean. Poor little Bingie.


We (Quanta of Meaning) didn't expect it to be so bad. Well, although we don't plan to stop our AGI software app (it is self-learning the language, and it has nothing to do with LLMs), we will give ourselves some space before launching our HLU (human-level understanding) demo. These are very interesting times :)


While I recognize the serious issues this spells for the potential of AI search, and LLM use cases in general... I am INCREDIBLY entertained by the saga of Sydney, The Emotionally Unstable Search Engine.

I hope they leave it up as long as possible.


These are infinitely entertaining, a bit terrifying, and (as usual for you) solidly eye-opening.


The death of Bing Chat is greatly exaggerated.


While the Turing test did not stand the test of time as the arbiter of what is human or not, you would have to say those BingGPT responses would absolutely make you think you were talking to a human. Full of error, obstinacy, and more. And you could absolutely see a Skynet meltdown. I just love how 'alive' those sound compared to the antiseptic responses from ChatGPT.


Some of these are genuinely hilarious.
