The long shadow of GPT
We don’t really know what’s coming.
There are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.
- Donald Rumsfeld
Large language models can be harnessed for good; let there be no doubt about that. But they are almost certain to cause harm as well, and we need to prepare for that, too. And what really scares me is that I don’t think we yet know the half of what that entails.
In previous posts, I have already pointed to at least five concerns:
State actors and extremist groups are likely to use large language models to deliberately mass produce authoritative-sounding misinformation with fake references and data at unprecedented scale, attempting to sway elections and public opinion.
Chat-style search’s tendency to hallucinate is likely to accidentally produce medical misinformation.
Chatbot companions that offer emotional support have already been changed in arbitrary ways that left some users in serious emotional pain.
LLM-generated prose has already disrupted web forums and peer review processes, by flooding outlets with fake submissions.
The latter is particularly worrisome because of the pace at which the problem is growing:
If misinformation (much harder to measure) grows at the same pace, we have a serious problem.
But the list I gave above is clearly just a start.
New concerns are emerging almost literally by the day. Here are three examples that were forwarded to me just in the last few days, ranging from one that is relatively mild to one that is clearly more extreme.
The first is already starting to feel a bit familiar: gaslighting. But instead of a chatbot gaslighting its user, trying to persuade them that something untrue is true, a chatbot seems to have misled a well-intentioned user (perhaps a student?) who was in turn trying to persuade a professor (who hadn’t consented to be part of an LLM experiment) to comment on a paper that the professor hadn’t actually written.
That one’s just a minor waste of time; things could be worse.
The next example is a straight scam, made possible in new form by Bing:

The third is also disturbing:


And many of these attacks might of course be combined with advances in voice-cloning tech, which itself is already being applied to scamming, as discussed by the Washington Post on Sunday.


§
That’s a lot for one week.
Lately I have been asked to participate in a bunch of debates about whether LLMs will, on balance, be net positive or net negative. As I said at the beginning of my remarks in the most recent one (not yet aired), the only intellectually honest answer is to say we don’t yet know. We don’t know how high the highs are going to be, and we don’t yet know how low the lows are going to be.
But one thing is clear: anybody who thinks we have come to grips with the unknown unknowns here is mistaken.
Update: The day after I posted the above, Tristan Harris posted this:




Gary Marcus (@garymarcus), scientist, bestselling author, and entrepreneur, is a skeptic about current AI but genuinely wants to see the best AI possible for the world—and still holds a tiny bit of optimism. Sign up to his Substack (free!), and listen to him on Ezra Klein. His most recent book, co-authored with Ernest Davis, Rebooting AI, is one of Forbes’s 7 Must Read Books in AI. Watch for his new podcast, Humans versus Machines, this Spring.