21 Comments

I'm shocked, totally SHOCKED!! that Microsoft released a product containing bugs and security issues. Who knew this could ever happen???


“What’s even more disturbing is that Bing makes it look like the false narrative that it generates is referenced.”

Did you check the references? Were they real or conjured? Did they actually support what the bot wrote? I’ve seen many things written by humans that had plenty of references, but the references bore no relation to the topic. On occasion a reference might even contradict what it supposedly supported.

Recently I got into a discussion with someone who was surprised that I didn’t support the idea of extending Medicaid to everyone. He considered that my opposition to it was counterproductive to society. He told me that this had been modeled mathematically and shown to increase productivity. I asked him where he had read it; he promised to send me the article.

Which he did. It was written in the expected word-salad mode, but the one potentially redeeming feature was a flow diagram that was supposed to show how medical care fit into the greater scheme of the article's thesis. It took me about 30 minutes to puzzle my way through it, but I finally did.

And ya know what? The number of times medical care of any kind made it into the calculations was ... wait for it... zero. Nowhere in the calculations was there anything even related to medical care. I pointed this out to the guy who sent it to me. Unsurprisingly, he didn’t reply to the email.

Bottom line: the devil’s in the details. So check the details.


As Yann LeCun, Chief AI Scientist at Meta, pointed out, training with human feedback to put up "guardrails" may help some:

https://twitter.com/ylecun/status/1630615094944997376

"But the distribution of questions has a very, very long tail. So HF alone will mitigate but not fix the problems."

As a data scientist notes:

https://medium.com/@colin.fraser/chatgpt-automatic-expensive-bs-at-scale-a113692b13d5

"This is an infinite game of whack-a-mole. There are more ways to be sexist than OpenAI or anyone else can possibly come up with fine-tuning demonstrations to counteract. I would put forth a conjecture: any sufficiently large language model can be cajoled into saying anything that you want it to, simply by providing the right input text.....

There are a few other interesting consequences of tuning. One is what is referred to as the “alignment tax”, which is the observation that tuning a model causes it to perform more poorly on some benchmark tasks than the un-tuned model. "

Humans wish to use these tools for creative tasks. If you cripple them so they are unable to imagine things that some people find offensive, it seems likely you'll cripple them in other ways too.

Fairly recently there was a controversy at Stanford over a photo of a student choosing to read Mein Kampf, since many thought it inappropriate to ever dream of doing such a thing. Others with more critical thinking skills and imagination grasped that it can be useful to "know your enemy" and to understand how people with problematic ideas think in order to try to persuade them to change their views. The ACLU used to spread the idea that the remedy for bad speech is more good speech that counters it. To create that good speech, you need to see the bad speech and understand it.

One way to do so, if you don't happen to have a controversial speaker willing to engage with you, is to have an LLM use what is implicitly embodied in its training corpus to generate the sort of speech such people might come up with, and then consider how to deal with it. Unless, of course, it's muzzled by people who don't seem to have thought or read much about the history of free speech and attempts to limit it, or considered the potential unintended consequences of doing so.

It seems rather problematic to try to prevent an AI from ever being able to generate what some consider "bad speech." It's especially problematic when people won't always agree, as with the recent controversy over the COVID lab-leak issue, which many in early 2020 considered something no one should be allowed to talk about or even consider.

Humans can generate misinformation too, and AIs can then also help filter information.

Perhaps training a separate "censor/sensitivity reader" AI to filter the outputs made public by the main LLM would be the answer. Ideally people could choose whether to enable the censor, or whether they should be treated like adults able to evaluate information on their own. Unfortunately, some authoritarians would like to use the regulatory process to impose their worldview on AI, and indirectly on the rest of the populace. George Orwell wrote about that in a book that was meant to be a warning, not a how-to guide.
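
For what it's worth, here is a minimal sketch of the kind of opt-in filter stage described above, with the main LLM and the censor/sensitivity-reader model both replaced by stubs so the example is self-contained. Every name here (generate_draft, sensitivity_score, respond, the keyword list, the threshold) is hypothetical and only illustrates the structure, not any real system.

```python
# Hypothetical sketch: an optional "censor/sensitivity reader" stage sitting
# between a main LLM and the user. Both models are replaced by stand-ins so
# the sketch runs on its own; the structure, not the stubs, is the point.

def generate_draft(prompt: str) -> str:
    """Stand-in for the main LLM. A real system would call the model here."""
    return f"Draft answer to: {prompt}"

def sensitivity_score(text: str) -> float:
    """Stand-in for a separately trained filter model.
    Returns a score in [0, 1]; higher means more likely to be flagged."""
    flagged_terms = {"slur", "dox"}  # placeholder vocabulary, not a real policy
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, hits / 2)

def respond(prompt: str, enable_censor: bool = True, threshold: float = 0.5) -> str:
    """Generate a reply; optionally pass it through the filter model.
    Users who opt out (enable_censor=False) see the raw draft."""
    draft = generate_draft(prompt)
    if enable_censor and sensitivity_score(draft) >= threshold:
        return "[withheld by the optional filter; disable it to see the raw output]"
    return draft

if __name__ == "__main__":
    print(respond("Summarize the argument", enable_censor=True))
    print(respond("Summarize the argument", enable_censor=False))
```

The design choice being argued for is simply that the filter lives in a separate, user-controlled stage rather than being baked irreversibly into the generator itself.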


I'm in favor of just removing the guard rails layered above the actual LLM so that everyone can see what is really going on down there. At least for a while.


Am I the only one who notices the similarity between 'adding guardrails' and the default IT pattern (also visible in the previous symbolic-AI wave) where brittleness is fought by adding extra rules, which in the end makes the tool itself ever more unwieldy? The same thing is visible in other disciplines, where frameworks (like SAFe and the like in management) tend to grow in size, trying to handle ever more exceptions and boundary cases until they collapse under their own weight.

All digital AI approaches, LLMs included, are 'data-driven rule-based systems in disguise'. And because they are rule-based, they are brittle. And they do not scale when they have to handle something in the real world (like these conversations).

I am convinced this is the case, so I bend all the facts to fit that conviction, like any human does ;-)


Could you please elaborate on the scale? Can you make any assessment about the impact this could have? Could you compare all that with what we have undergone so far without access to generative AIs?

Only then will I feel that, yeah, we need to stop these AIs from taking over the internet and society. Until then, I dare say I really don't get this FUD.


The spread of disinformation is a major issue; even NATO is warning of cognitive warfare in which disinformation campaigns play a key role. We are an AI startup from Norway, and one of our proprietary ML models actually does fact-checking, especially of ChatGPT-generated content. We are currently testing with more users, so feel free to try! https://www.youtube.com/watch?v=I17q-pPhyf0 Editor.factiverse.no - works best in Chrome on desktop


Intelligence doesn't need to be computational in the algorithmic sense. But it is always a response to some form of consideration. Ice melting would be one example of an unconsidered response, so it won't count as intelligent behavior.

PS: this is in 'response' to A Thornton's question :)


Not guardrails, but a Band-Aid on a gaping, festering wound.

They opened the can of worms; now they need to deal with it. It's going to be interesting to see how.
