There are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.
- Donald Rumsfeld
Large language models can be harnessed for good; let there be no doubt about that. But they are almost certain to cause harm as well, and we need to prepare for that, too. And what really scares me is that I don’t think we yet know the half of what that entails.
In previous posts, I have already pointed to at least five concerns:
State actors and extremist groups are likely to use large language models to deliberately mass produce authoritative-sounding misinformation with fake references and data at unprecedented scale, attempting to sway elections and public opinion.
Chat-style search’s tendency to hallucinate is likely to accidentally produce medical misinformation.
Chatbot companions that offer emotional support have already been changed in arbitrary ways that left some users in serious emotional pain.
LLM-generated prose has already disrupted web forums and peer review processes, by flooding outlets with fake submissions.
The latter is particularly worrisome because of the pace at which the problem is growing:
If misinformation (much harder to measure) grows at the same pace, we have a serious problem.
But the list I gave above is clearly just a start.
New concerns are emerging almost literally by the day. Here are three examples that were forwarded to me just in the last few days, ranging from one that is relatively mild to others that are clearly more extreme.
The first is already starting to feel a bit familiar: gaslighting. But instead of a chatbot gaslighting its user, trying to persuade them that something untrue is true, a chatbot seems to have misled a well-intentioned user (perhaps a student?), who in turn tried to persuade a professor (who hadn’t consented to be part of an LLM experiment) to comment on a paper that the professor hadn’t actually written.
That one’s just a minor waste of time; things could be worse.
The next example is a straight scam, made possible in new form by Bing:
The third is also disturbing:
And many of these attacks might of course be combined with advances in voice-cloning tech, which itself is already being applied to scamming, as discussed by the Washington Post on Sunday.
§
That’s a lot for one week.
Lately I have been asked to participate in a bunch of debates about whether LLMs will, on balance, be net positive or net negative. As I said at the beginning of my remarks in the most recent one (not yet aired), the only intellectually honest answer is that we don’t yet know. We don’t know how high the highs are going to be, and we don’t yet know how low the lows are going to be.
But one thing is clear: anybody who thinks we have come to grips with the unknown unknowns here is mistaken.
Update: The day after I posted the above, Tristan Harris posted this:
Gary Marcus (@garymarcus), scientist, bestselling author, and entrepreneur, is a skeptic about current AI but genuinely wants to see the best AI possible for the world—and still holds a tiny bit of optimism. Sign up to his Substack (free!), and listen to him on Ezra Klein. His most recent book, co-authored with Ernest Davis, Rebooting AI, is one of Forbes’s 7 Must Read Books in AI. Watch for his new podcast, Humans versus Machines, this Spring.
Please slow down. In the beginning of this article there are many mistakes in word spellings and words missing. It is best to proof your work before posting. Just trying to help.
But we do know what's coming. AI will evolve into yet another existential-threat-scale technology. And by the time we understand that, it will be too late to turn back.
ANSWER: The marriage between violent men and an accelerating knowledge explosion is unsustainable.
This one sentence is really all we need to learn to know what's coming.
Nobody can predict the exact how, when, where, and why of coming technology-fueled disasters. But if we zoom out from particular details to the larger picture, it's not that hard to see how giving violent men ever greater powers at an ever accelerating pace is going to turn out in the end.
Technically we are racing forward with impressive speed. But philosophically, in our relationship with all these emerging technologies, we are still stuck in the 19th century. We're clinging to a "more is better" relationship with knowledge that was entirely rational in the long era of knowledge scarcity, and cluelessly ignoring that we no longer live in that era.
Today we live not in the long old era of knowledge scarcity, but in a revolutionary and very different era characterized by knowledge exploding in every direction at an ever accelerating rate. We're refusing to adapt to the new environment we have created. And like any other species in any other time and place, the price tag for failing to adapt to changing conditions is death.
The AI "experts" everyone is worshipping today have good intentions, just as those working on the Manhattan Project had good intentions. But as the history of nukes should have taught us 60 years ago, good intentions are not enough. Just as was true in 1945, the well intentioned AI "experts" are opening a pandora's box that they won't know how to close once the price tag for AI becomes clear.
The marriage between violent men and an accelerating knowledge explosion is unsustainable.
Know that, and you'll know what's coming.
https://www.tannytalk.com/p/our-relationship-with-knowledge