The Open Letter Controversy
The letter wasn’t perfect, but a lot of the criticism of it was misguided. What should we actually do?
Brief update: I signed the FLI letter, imperfect as it may be, and promoted it as well, which led (a) to my most liked and viewed tweet ever (over 5 million views) and (b) to an immense amount of pushback, from practically every front imaginable, on issues ranging from what to do about China to what to think about Elon Musk and his motivations.
Much (not all) of the pushback seemed off base to me; I wrote a long tweet about it this morning:
Sooo many of the attacks on the proposed 6 month ban on training super large #LLMs miss the point.
With so much at stake, here is a Twitter #longread sorting out what really matters:
👉 A lot of the attacks on the letter focused on who sponsored it, not who signed it. Most of the people who signed it (eg me, Yoshua Bengio, etc) have nothing to do with FLI. The letter should be judged by what it says, not who wrote it. The real news here is not that Elon Musk signed it but that so many people who are not natural allies (eg Bengio and myself, famous for our heated 2019 debate) came together out of shared concern.
👉 It is perfectly fine to propose an alternative, but most critiques of the letter have not offered one.
👉 It is *not* fine to do nothing. Virtually everyone, even at OpenAI, has acknowledged that there are serious risks, but thus far few tangible steps have been taken to mitigate them—either by government or industry.
👉 Not everyone who signed the letter is principally concerned with long-term risk; many of us who signed are worried at least as much about short-term risk.
👉 The letter didn't call for a ban on AI. It didn't call for a permanent ban. It didn't call for a ban on GPT-4. It didn't call for a ban on the vast majority of AI research, only a brief pause on one very specific kind of project, involving a technology that has *known* risks and no known solutions. It actually called for *more* research. Did anybody even read the letter? 🙄
👉 I personally haven't changed; I still think that LLMs are unreliable, and still think that they are a very poor basis for factuality. I don't think they are close to AGI. But that doesn't mean that they don't have the potential to rip apart our social fabric—particularly given the current mix of unbelievably widespread and rapid deployment, corporate irresponsibility, the lack of regulation, and inherent unreliability.
§
To my mind, doing nothing is truly the most foolish option. I am with Stability AI founder Emad Mostaque in his read on how most people actually working on this stuff see things:
and would add this: none of the top labs is remotely transparent, governance basically doesn’t exist yet, and there are essentially no safeguards in place.
This doesn’t mean we are doomed, but we do need to think hard, and quickly, about what proper measures look like, just as we have done for medicine, aviation, cars, and so on. The idea (which I have actually heard expressed) that AI should be exempt from regulation is absurd.
§
One criticism I do agree with is this:
But I don’t think that legitimate concerns about the hypey-ness of the letter in any way undermine the real need to rein in systems that are currently being rolled out at unprecedented scale.
As I said yesterday to The New York Times, “We have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns.”
§
In a couple of weeks, I will be speaking at TED, discussing A(G)I risk, both short term and long term, and what to do about it.
Please put your own suggestions in the comments below.
Gary Marcus (@garymarcus), scientist, bestselling author, and entrepreneur, is deeply, deeply concerned about current AI but really hoping that we might do better.
Watch for his new podcast, Humans versus Machines, debuting later this spring.
Gary, I felt your earlier post brilliantly articulated the nuance of your position. Sadly (through no fault of your own) it has gotten lost through your association with FLI and others who ally themselves to AGI narratives. The commonality you appear to have with your fellow signatories is an appreciation for how powerful these systems are, and the damage they are poised to wreak across society. Your own position seems to be that the danger stems largely from the brittleness of these systems - they are terrifying not because they are robustly intelligent, or remotely conscious, but precisely because they are the opposite. It is because they lack any grounding in the world, and are so sensitive to their inputs, that we have to be wary of them (along with the obvious threats they pose to our information ecosystem etc). Please continue to shift the focus away from the presumed dawning of superintelligence and remind people that AI is dangerous because it is both powerful and mindless (and, dare I say, at times utterly stupid). This is no time to cede our human intelligence!
Remember the Morris Worm?
Quoting Wikipedia:
"November 2: The Morris worm, created by Robert Tappan Morris, infects DEC VAX and Sun machines running BSD UNIX that are connected to the Internet, and becomes the first worm to spread extensively "in the wild", and one of the first well-known programs exploiting buffer overrun vulnerabilities."
As I recall, a lot of systems were damaged, and a lot of angry sysadmins had to fix their systems.
People criticized Mr. Morris not just because he caused a lot of damage, but because there was nothing remarkable about the code that he wrote. He hadn't created something special; it was second-rate code.
My comment about the letter and the proposed pause is this:
1. The Morris worm was nothing remarkable, but it caused widespread damage.
2. Consider the demonstrated ability of GPT-4 to "get out of the box".
3. You can't trust LLMs, and the people who built them don't even know how they work. It's said that they were surprised by GPT-3's abilities; I don't ever remember being surprised by a program I wrote.
It would seem like a good idea to move ahead with caution.
One final thought: the people coding LLMs should carefully consider the potential liability of what they are creating. The sysadmins who had to repair the damage caused by the Morris worm had no recourse to recover their costs, but you can bet that if a similar incident happens, an unforgiving public will see to it that someone pays for it.