AI's Jurassic Park moment
New systems like ChatGPT are enormously entertaining, even mind-boggling, but also unreliable and potentially dangerous.
Something incredible is happening in AI right now, and it’s not entirely to the good. Everybody is talking about systems like ChatGPT (OpenAI), DALL-E 2, and Lensa that generate text and images that look remarkably human-like, with astonishingly little effort.
These systems can be incredibly fun to play with. Take this example, generated with ChatGPT by Henry Minsky (son of Marvin Minsky, one of AI’s founders), who asked it to “Describe losing your sock in the dryer in the style of the Declaration of Independence”:
When in the course of household events, it becomes necessary for one to dissolve the bonds that have connected a sock to its mate, and to assume among the powers of the laundry room, the separate and equal station to which the laws of physics and of household maintenance entitle it, a decent respect to the opinions of socks requires that it should declare the causes which impel it to go missing.
We hold these truths to be self-evident, that all socks are created equal, and are endowed by their manufacturer with certain unalienable rights….
That a machine could do this, with so little effort on the part of the user, is frankly mind-boggling.
But at the same time it is, or should be, terrifying. It is no exaggeration to say that systems like these pose a real and imminent threat to the fabric of society.
The core of that threat comes from the combination of three facts:
• these systems are inherently unreliable, frequently making errors of both reasoning and fact, and prone to hallucination; ask them to explain why crushed porcelain is good in breast milk, and they may tell you that “porcelain can help to balance the nutritional content of the milk, providing the infant with the nutrients they need to help grow and develop”. (Because the systems are random, highly sensitive to context, and periodically updated, any given experiment may yield different results on different occasions.)
• they can easily be automated to generate misinformation at unprecedented scale.
• they cost almost nothing to operate, and so they are on a path to reducing the cost of generating disinformation to zero. Russian troll farms spent more than a million dollars a month in the 2016 election; nowadays you can get your own custom-trained large language model, for keeps, for less than $500,000. Soon the price will drop further.
Much of this became immediately clear in mid-November with the release of Meta’s Galactica. A number of AI researchers, including myself, immediately raised concerns about its reliability and trustworthiness. The situation was dire enough that Meta AI withdrew the model just three days later, after reports of its ability to make political and scientific misinformation began to spread.
Alas, the genie can no longer be stuffed back in the bottle. For one thing, Meta AI initially open-sourced the model and published a paper describing what was done; anyone skilled in the art can now replicate their recipe. (Indeed, Stability.AI is already publicly considering offering their own version of Galactica.) For another, ChatGPT, just released by OpenAI, is more or less just as capable of producing similar nonsense, such as instant essays on adding wood chips to breakfast cereal. Someone else coaxed ChatGPT into extolling the virtues of nuclear war (alleging it would “give us a fresh start, free from the mistakes of the past”). Like it or not, these models are here to stay, and we as a society are almost certain to be overrun by a tidal wave of misinformation.
§
Already, earlier this week, the first front of that tidal wave appears to have hit. Stack Overflow, a vast question-and-answer site that most programmers swear by, has been overrun by ChatGPT output, leading the site to impose a temporary ban on ChatGPT-generated submissions. As they explained, “Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers.” For Stack Overflow, the issue is literally existential. If the website is flooded with worthless code examples, programmers will no longer go there, its database of over 30 million questions and answers will become untrustworthy, and the 14-year-old website will die. Because it is one of the most central resources the world’s programmers rely on, the consequences for software quality and developer productivity could be immense.
And Stack Overflow is a canary in a coal mine. They may be able to get their users to stop voluntarily; programmers, by and large, are not malicious, and perhaps can be coaxed to stop fooling around. But Stack Overflow is not Twitter, Facebook, or the web at large.
Nation-states and other bad actors that deliberately produce propaganda are highly unlikely to voluntarily put down their new arms. Instead, they are likely to use large language models as a new class of automatic weapons in their war on truth, attacking social media and crafting fake websites at a volume we have never seen before. For them, the hallucinations and occasional unreliability of large language models are not an obstacle, but a virtue.
The so-called Russian Firehose of Propaganda model, described in a 2016 RAND report, is about creating a fog of misinformation; it focuses on volume, and on creating uncertainty. It doesn’t matter that large language models are inconsistent, so long as they can greatly escalate volume, and it’s clear that that is exactly what they make possible. The propagandists are aiming to create a world in which we are unable to know what we can trust; with these new tools, they might succeed.
Scam artists, too, are presumably taking note, since they can use large language models to create whole rings of fake sites, some geared around questionable medical advice, in order to sell ads; a ring of fake sites about Mayim Bialik allegedly selling CBD gummies may be part of one such effort.
§
All of this raises a critical question: what can society do about this new threat? Where the technology itself can no longer be stopped, I see four paths, none of them easy, none mutually exclusive, all of them urgent:
First, every social media company and search engine should support and extend Stack Overflow’s ban: automatically generated content that is misleading should not be welcome, and the regular posting of it should be grounds for a user’s removal.
Second, every country is going to need to reconsider its policies on misinformation. It’s one thing for the occasional lie to slip through; it’s another for us all to swim in a veritable ocean of lies. In time, though it would not be a popular decision, we may have to begin to treat misinformation as we do libel, making it actionable if it is created with sufficient malice and sufficient volume.
Third, provenance is more important now than ever before. User accounts must be more strenuously validated, and new systems like Harvard and Mozilla’s human-ID.org that allow for anonymous, bot-resistant authentication need to become mandatory; they are no longer a luxury we can afford to wait on.
Fourth, we are going to need to build a new kind of AI to fight what has been unleashed. Large language models are great at generating misinformation but poor at fighting it, and they lack mechanisms for verifying truth. That means we need new tools: new ways to integrate these models with the tools of classical AI, such as databases, webs of knowledge, and reasoning.
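To make that fourth path a little more concrete, here is a minimal sketch of the idea, under an assumed and hypothetical setup in which a model’s output is checked against a small curated fact store before anything is published. Every name in it (generate_text, check_claim, FACTS, publish_if_verified) is invented for illustration; a real system would need claim extraction, a genuine knowledge base, and actual reasoning, none of which this toy provides.

```python
# Sketch only: pair a text generator with a curated knowledge store, and
# refuse to pass along claims the store cannot support.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Verdict:
    claim: str
    supported: bool
    evidence: Optional[str]


# Stand-in for a curated knowledge base (e.g., a medical or encyclopedic database).
FACTS = {
    "crushed porcelain is safe to add to breast milk": False,
    "socks are sometimes lost in dryers": True,
}


def generate_text(prompt: str) -> str:
    # Placeholder for a large language model; here it just returns a canned claim.
    return "crushed porcelain is safe to add to breast milk"


def check_claim(claim: str) -> Verdict:
    # Look the claim up in the curated store instead of trusting fluent-sounding text.
    if claim in FACTS:
        return Verdict(claim, FACTS[claim], "curated knowledge-base entry")
    return Verdict(claim, False, None)  # unknown claims are treated as unverified


def publish_if_verified(prompt: str) -> str:
    claim = generate_text(prompt)
    verdict = check_claim(claim)
    if verdict.supported:
        return claim
    return f"[withheld: could not verify the claim '{verdict.claim}']"


if __name__ == "__main__":
    print(publish_if_verified("Is crushed porcelain good in breast milk?"))
```

The point of the sketch is only the division of labor: the language model supplies fluent text, while something database-like, not the model, decides what counts as true.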
The author Michael Crichton spent a large part of his career warning about the unintended and unanticipated consequences of technology. Early in the film Jurassic Park, before the dinosaurs unexpectedly start running free, the scientist Ian Malcolm (played by Jeff Goldblum) distills Crichton’s wisdom into a single line: “Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.”
Executives at Meta and OpenAI are as enthusiastic about their tools as the proprietors of Jurassic Park were about theirs.
The question is, what are we going to do about it?
Gary Marcus (@garymarcus) is a scientist, best-selling author, and entrepreneur. His most recent book, co-authored with Ernest Davis, Rebooting AI, is one of Forbes’s 7 Must Read Books in AI.
LLMs are actually *Large Word Sequence Models*. They excel at reproducing sentences that sound correct, mostly because they have been trained on billions of small groups of word sequences.
However, language exists to transfer meaning between humans. Calling the chatbot an LLM implies it conveys meaning. Any meaning and correctness behind these generated word sequences are purely incidental, and any potential meaning is inferred solely by the reader.
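To make that point concrete, here is a toy sketch in Python. It is purely illustrative and deliberately simplified: real LLMs are neural networks doing next-token prediction over learned representations, not literal counted word pairs, but the gap between fluency and meaning shows up even at this scale.

```python
# Toy "word sequence model": pick the word most often seen after the current
# one in a tiny training text, so the output sounds locally fluent without any
# notion of truth or meaning.
from collections import Counter, defaultdict

training_text = (
    "the sock was lost in the dryer and the sock was never seen again "
    "and the dryer was never opened again"
)

# Count which word follows which word in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1


def continue_text(start: str, length: int = 8) -> str:
    # Repeatedly append the most frequent continuation of the last word.
    out = [start]
    for _ in range(length):
        candidates = follows[out[-1]]
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)


print(continue_text("the"))
# Prints plausible-sounding word order, e.g. "the sock was never seen again and the sock"
```

Nothing in this program has any model of what is true; it only reproduces word order it has seen, which is exactly the distinction between sounding correct and meaning something.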
That said, the chatbot is ground-breaking technology: it will help non-English speakers with syntax and grammar. But it will help no one with conveying meaning.
When the next generation looks back in 15 years and sees the trillions of dollars poured into LLMs and non-symbolic algorithms, they will be stunned at how short-sighted and misguided we currently are.
Well said, again. The level of BS we will have to endure because these 'word order prediction systems' can produce 'correct nonsense' is really mind-boggling, and not many are aware of the scale of the problem. So it is good that it is pointed out.
With respect to what we should do about it: I would humbly suggest people listen to the last 7 minutes of my 2021 talk: https://www.youtube.com/watch?v=9_Rk-DZCVKE&t=1829s (links to the last 7 minutes). It discusses the fundamental vulnerability of human intelligence/convictions, and the protection of truthfulness as a key challenge of the IT revolution.
Also in that segment: one thing we might do, at a minimum, is establish a sort of 'Hippocratic Oath for IT', and criminalise systems that pretend to be human.
There is more, and those were only first thoughts (though even before 2000 I argued that internet anonymity when 'publishing' will probably not survive, because it enables too much damage to society).
A final quote from that 7-minute segment at the end of the talk:
"It is particularly ironic is [sic] that a technology — IT — that is based on working with the values 'true' and 'false' (logic) has consequences that undermine proper working of the concepts of 'true' and 'false' in the real world."