The Open Letter Controversy
The letter wasn’t perfect, but a lot of the criticism was misguided. What should we actually do?
Brief update: I signed the FLI letter, imperfect as it may be, and promoted it as well, which led (a) to my most liked and viewed tweet ever (over 5 million views) and (b) to an immense amount of pushback, from practically every front imaginable, on issues ranging from what to do about China to what to think about Elon Musk and his motivations.
Much (not all) of the pushback seemed off base to me; I wrote a long tweet about it this morning:
Sooo many of the attacks on the proposed 6 month ban on training super large #LLMs miss the point.
With so much at stake, here is a Twitter #longread sorting out what really matters:
👉 A lot of the attacks on the letter focused on who sponsored it, not who signed it. Most of the people who signed it (e.g., me, Yoshua Bengio, etc.) have nothing to do with FLI. The letter should be judged by what it says, not who wrote it. The real news here is not that Elon Musk signed it but that so many people who are not natural allies (e.g., Bengio and I, famous for our heated 2019 debate) came together out of shared concern.
👉 It is perfectly fine to propose an alternative, but most of the critiques of the letter have not done so.
👉 It is *not* fine to do nothing. Virtually everyone, even at OpenAI, has acknowledged that there are serious risks, but thus far few tangible steps have been taken to mitigate them—either by government or industry.
👉 Not everyone who signed the letter is principally concerned with long-term risk; many of us who signed are worried at least as much about short-term risk.
👉 The letter didn't call for a ban on AI. It didn't call for a permanent ban. It didn't call for a ban on GPT-4. It didn't call for a ban on the vast majority of AI research, only for a brief pause on one very specific project with a technology that has *known* risks and no known solutions. It actually called for *more* research. Did anybody even read the letter? 🙄
👉 I personally haven't changed; I still think that LLMs are unreliable, and still think that they are a very poor basis for factuality. I don't think they are close to AGI. But that doesn't mean that they don't have the potential to rip apart our social fabric—particularly given the current mix of unbelievably widespread and rapid deployment, corporate irresponsibility, the lack of regulation, and inherent unreliability.
To my mind, doing nothing is truly the most foolish option. I am with Stability AI founder Emad Mostaque in his read on how most people actually working on this stuff see things:
and would add this: none of the top labs is remotely transparent, governance basically doesn’t exist yet, and there are essentially no safeguards actually in place.
This doesn’t mean we are doomed, but we do need to think hard, and quickly, about what proper measures look like, just as we have done for medicine, aviation, cars, and so on. The idea (which I have actually heard expressed) that AI should be exempt from regulation is absurd.
One criticism I do agree with is this:
But I don’t think that legitimate concerns about the hypey-ness of the letter in any way undermine the real need to rein in systems that are currently being rolled out at unprecedented scale.
As I said yesterday to The New York Times, “We have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns.”
In a couple of weeks, I will be speaking at TED, discussing A(G)I risk, both short-term and long-term, and what to do about it.
Please put your own suggestions in the comments below.
Gary Marcus (@garymarcus), scientist, bestselling author, and entrepreneur, is deeply, deeply concerned about current AI but really hoping that we might do better.
Watch for his new podcast, Humans versus Machines, debuting later this spring.