Mar 30 · edited Mar 30 · Liked by Gary Marcus

Gary, I felt your earlier post brilliantly articulated the nuance of your position. Sadly (through no fault of your own) it has gotten lost through your association with FLI and others who ally themselves to AGI narratives. The commonality you appear to have with your fellow signatories is an appreciation for how powerful these systems are, and the damage they are poised to wreak across society. Your own position seems to be that the danger stems largely from the brittleness of these systems - they are terrifying not because they are robustly intelligent, or remotely conscious, but precisely because they are the opposite. It is because they lack any grounding in the world, and are so sensitive to their inputs, that we have to be wary of them (along with the obvious threats they pose to our information ecosystem etc). Please continue to shift the focus away from the presumed dawning of superintelligence and remind people that AI is dangerous because it is both powerful and mindless (and, dare I say, at times utterly stupid). This is no time to cede our human intelligence!

Mar 31 · Liked by Gary Marcus

Remember the Morris Worm?

Quoting Wikipedia:

"November 2: The Morris worm, created by Robert Tappan Morris, infects DEC VAX and Sun machines running BSD UNIX that are connected to the Internet, and becomes the first worm to spread extensively "in the wild", and one of the first well-known programs exploiting buffer overrun vulnerabilities."

As I recall, a lot of systems were damaged, and there were a lot of angry sysadmins who had to fix their systems.

They criticized Mr. Morris not just because he caused a lot of damage, but because there was nothing remarkable about the code that he wrote. He hadn't created something special - it was second-rate code.

My comment about the letter and proposed hold is this.

1. The Morris Worm was nothing remarkable - but caused widespread damage.

2. Consider the demonstrated ability of GPT-4 to "get out of the box".

3. You can't trust LLMs, and the people who built them don't even know how they work. It's said that they were surprised by GPT-3's abilities - I don't ever remember being surprised by a program I wrote.

It would seem like a good idea to move ahead with caution.

One final thought - the people coding LLMs should carefully consider the potential liability of what they are creating. The sysadmins who had to repair the damage caused by the Morris worm had no recourse to recover their costs - but you can bet that if a similar incident happens, an unforgiving public will see that someone pays for it.


Yes, something needs to be done about the risks of LLMs and signing that letter, however imperfect it may be, is a good way to bring attention to the severity of the problem. Another problem is that LLMs, by their ability to suck attention and resources, are a detriment to real progress toward solving AGI, at least by mainstream researchers.

On the other hand, the LLM craze might make it impossible to recognize the arrival of true AGI on the scene. A number of independent researchers, by virtue of their contempt for mainstream ideas, may strike the mother lode, so to speak. Keep in mind that AGI does not have to be at human level to be extremely powerful. My fear is that anyone who is smart enough to crack AGI while no one in the mainstream is paying attention may also be smart enough to use it surreptitiously for their own private goals, which may not coincide with those of the mainstream. Knowledge is power and power corrupts.

We live in interesting times.


Thanks for speaking up on this subject, Gary, and for maintaining your realism about AI's limitations while noting its dangers.

I don't have any helpful suggestions for you, but I am very interested in hearing your take on the impact AI may have on education. For the past decade or so, many teachers and educators have asked, "why teach it if you can google it?" Cognitive science provides an answer to that question -- because the knowledge you build in your head is essential to acquiring new knowledge.

Now, teachers and students are stampeding toward ChatGPT, and it's only a matter of time before they ask, "why do it if AI can do it?", where "it" may mean writing, or math problems, or any number of things that constitute formal education.

Are we at risk of entering the End of Knowledge?


In the spirit of moving the conversation forward constructively and quickly:

- it would help for you and others to begin identifying useful analogs for this situation, as you see it, to help communicate to the public the risks, urgency, consequences (known and unknown) involved.

- and then suggest a range of options for evaluation, assessment, risk scoring, risk rating, and disclosure of various AI efforts.

One thing that'll come up is distrust in any institutional oversight efforts, especially if they result in domestic regulation. I can see this rapidly falling into a politicized debate over US competitiveness etc.

Just a handful of analogs off top of the head:

- FTX/Crypto

- Great Financial Crisis/MBS/Derivatives/Shadow Banking/Leverage

- Nuclear non-proliferation/confidence-building measures/verification regimes/non-binding international agreements

- Academic panel/oversight/research council

- Consortium of corporations/non-binding/pledge

- ESG/public pressure/corporate social responsibility

- Standards/ICANN

- Federal Regulation/Sarbanes Oxley/etc

Mar 30 · Liked by Gary Marcus

It’s sad that half-informed people with fully armed keyboards get such a huge say in what is considered public consensus on a topic, and that instead of looking at what normal, everyday people think about this, the mad town square of Twitter is being used as a proxy.


I don't think we disagree actually, but I'll post some of my criticisms of the letter here anyway. I'm a lowly Master's student, so I may lack research perspective in ways relevant to the argument.

A frustration I share with a lot of folks is the letter's conflation of long-term and short-term risk, particularly because the letter's proposal seems exclusively relevant to the latter and near-useless to the former (the broad show of support might be, but likely not the moratorium).

Secondly--and I think this is a slightly more original view, as far as I can tell--ideally a moratorium would be paired with a plan of action, but the letter reads as something of a "vibe check". That is, it's kind of vague, but it imagines the 6-month period as one in which researchers gain ground on relevant problems: e.g., the identification of AI spam, the installation of social-network guardrails, etc. This seems critical to the letter's project, but how it would be organized is left largely implied.

I would like to make clear that the intuition I outlined above--pausing research to target specific problems--is reasonable. If the pandemic taught us anything, it's that placing pressure on scientific institutions in moments of crisis is not a hopeless endeavor.


My interest is ethics, in relation to morals, and LLMs. And I do not mean whether or not AI is used ethically, but what ethical modelling would be for an LLM. The problem is distinguishing between ethics as an abstract enterprise, which I think LLMs can do well, albeit entirely thoughtlessly, and moral reasoning, which remains entirely beyond the ability of mere optimization and pattern recognition. How would a deontologist or a utilitarian justify killing a baby, or even one's neighbor, or one's self? An LLM could easily come up with a slew of logically plausible explanations. The problem with the distinction between AI ethical reasoning and AI moral reasoning is whether the decision matters. Do you care about the outcome? It is entirely different to add up the number of children in a statistical family and to count your own children. The number might be the same, perhaps, but the differences are absolute.

I guess it comes down to the Category Mistake that has plagued Philosophy of Mind from the start. Is thought an epiphenomenon of the brain, or is it just a way to talk about brain activity? Is the moral thing to do simply following the best logical manipulation of ethical principles, or is it something entirely different? I would bet a dollar no LLM could make that distinction, not now, not ever. AI is absolutely stupid and absolutely stubborn, and no matter how much data you feed it, you only feed its stubbornness, not its intelligence.

Mar 30 · edited Mar 30

The letter threatens lots of people's vested interests. Of course they're going to push back!

(BTW I also signed it. It doesn't matter if it's imperfect. We're at the point at which action is required.)


My concerns about it are: I do not trust Musk (he's also by far the biggest funder of the Future of Life Institute and has proven himself untrustworthy time and again) or Altman, so this stinks of being a PR stunt so that they can say they tried but the industry didn't play along. It only concerns itself with AI more powerful than GPT-4, which is already powerful enough to do significant damage. It will be basically impossible to get everyone to play along even if a few do, but it might be enough to forestall more effective legislation being proposed. It also promotes AI hype, which is neither helpful nor a sign of good faith.


This is ironic, because at heart I am both an anarchist and a libertarian. However, I do believe in the genuine existential risk that AI poses, and, even after speaking to my elders who lived through the Cold War, I do firmly believe that this threat is of an entirely different character & magnitude than that posed by nuclear, biological, etc. (traditional WMDs & EoWs). THAT SAID:

I *firmly* support any and all means to s..l..o..w.....d..o..w..n.. the relentless march of the AI beast. And no, I don't expect all the labs and nation-states to suddenly say: "Oh, 1,000 people signed a letter? Great! Time for a vacation!". But I do believe that a letter like this, with our collective social & reputational power, could motivate governments (ugh!) into regulatory & legal action, and that clusterf*ck of red tape (and inane senate hearings) would effectively slow global progress on this front.

Might Musk & China & others use the opportunity to "catch up" with OpenAI & Google in a Machiavellian way? Sure! So? AI represents a genuine material threat to our species, culture and civilization, and ANY thing that might slow its trajectory at this point is warranted (there is a hilarious meme on Twitter of the COVID "slow the spread... flatten the curve" re-applied to AI dev).

The letter is imperfect. Sure.

Gary, you want to know what we should actually do?

Congressional subcommittee on AI, leading to rapid deployment of laws and legal frameworks, including risk assessment, safety certifications, and clear liability for harms. Yes, it will be a cluster. Yes, it is anti-free-market-capitalist. No, it is not the "spirit" of Silicon Valley cowboy-ism and pirate-ism... and so?

Slow down, and breathe. Live to see another day. We'll be OK.


Hi Gary, I would laud the blending of a condemnation of a potentially dangerous and unsustainable trend, with a more hopeful and scientific manifesto of how to address the challenge - wisely getting the service innovation benefits while avoiding the harms (e.g., "tech for good"). See for example and inspiration "David Attenborough: A Life on Our Planet" - and keep in mind rewinding, rewilding, and resilience. Also for inspiration consider the Marie Curie quotes: 'We must believe that we are gifted for something and that this thing must be attained.' 'Nothing in life is to be feared; it is only to be understood.' 'I am one of those who think like Nobel, that humanity will draw more good than evil from new discoveries.' Best regards, -Jim


Much has been said on this letter controversy, so I have little to add. The one thing I've seen that bothers me is that the general press, and some who should know better, are confusing the short-term AI safety issues (fake news, malicious use, risks to health) with long-term ones (the AI apocalypse, turning us all into paper clips, etc). I haven't read the letter closely, but my gut feeling is that if it makes this distinction, it does so weakly. Obviously, the short-term and long-term risks are related, but the former is very real and the latter is in sci-fi territory.

Mar 31 · edited Mar 31

The short term risk that is most concerning is that AI chatbots pollute the only well we have.

Has the horse already left the barn, and what controls are currently in place? Carl Bergstrom raised this (https://fediscience.org/@ct_bergstrom/110071929312312906), asking "what happens when AI chatbots pollute our information environment and then start feeding on this pollution. As is so often the case, we didn't have to wait long to get some hint of the kind of mess we could be looking at. https://www.theverge.com/2023/3/22/23651564/google-microsoft-bard-bing-chatbots-misinformation."

Are we already inhaling our own hallucinating AI fumes, and what is to stop this from becoming an irreversible "tragedy of the information commons" due to poisons we cannot filter out?


I wrote a post looking at some of the not-unreasonable criticisms of the letter but arguing that it's still worth signing. Gary Marcus makes a cameo appearance.



After the role played by spreadsheets in the subprime debacle was uncovered, financial institutions started paying closer attention to them, at least in theory. ChatGPT is the spreadsheet here... it's a tool, and a far more dangerous tool given the network effects that come from leveraging social media. The "AI" part of the discussion is a red herring; it's no different than, say, pesticides or food additives at that point. Of course, we're not very good at pre-emptive anything...

That being said, I don't know how you regulate it. Two examples spring to mind. First, software engineering itself as a discipline has struggled with calls for certification and ethics. The problem is, anybody who goes to a coding bootcamp can call themselves a programmer and businesses do not generally have an incentive to enforce standards the same way hospitals must for, say, doctors and nurses. For ethics, as a software engineer, I cannot say to my employer, "this is unethical, if I withdraw my services, your website will no longer be designed by Certified Software Engineers". My employer will say, paraphrasing, the door is that way.

Second, in the United States, there are moral objections to cloning and other kinds of stem cell research. Some countries have no such qualms. As a result, two things have happened: in other countries, they have kept researching, and in the US, we have found ways around the constraints. So if just one country, company, whatever, has lower standards... it all falls apart. And unlike what might be required for a wet lab, poking around an LLM isn't very expensive, relatively speaking. And who would want to lose the advantage if you thought that your competitor wasn't following the terms of the standard? Since all notion of nuance, and of spirit rather than letter, has left social discourse and public policy... :/

On a completely different but not unrelated topic, one of the things I find with NLP researchers is that they are very likely to read between the lines... supplying semantics where none exist. It seems to me that even if you did something as simple as ROT13 on your training data, the experiment would become blinded. You'd need to do the same on your evaluation data. You could have the model generate a prediction, look at the score, say "that's a good score", and then re-apply ROT13 and see what was actually done.

Of course, is the fact that the entire thing would still work if we applied ROT13 to all the training data evidence that this is all just probabilistic smoke and mirrors? ¯\_(ツ)_/¯ I haven't finished the thought experiment yet ;)
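The blinding idea sketched above can be illustrated in a few lines of Python using the standard library's built-in ROT13 codec (a minimal sketch for illustration only; the sample sentence and function name are made up, not from any actual experiment). The key property is that ROT13 is its own inverse, so blinded data and scores can always be mapped back for inspection:

```python
import codecs

def blind(text: str) -> str:
    """ROT13 the text so its surface form carries no human-readable semantics,
    while preserving all the statistical structure a model would learn from."""
    return codecs.encode(text, "rot13")

sample = "The cat sat on the mat."
blinded = blind(sample)

# ROT13 shifts only ASCII letters; punctuation and spacing are untouched,
# so token boundaries and distributional patterns survive the blinding.
print(blinded)  # Gur png fng ba gur zng.

# Applying the codec twice recovers the original, which is what lets you
# "re-apply ROT13 and see what was actually done" after scoring.
assert blind(blinded) == sample
```

Since the transformation preserves co-occurrence statistics exactly, a model trained and evaluated entirely on blinded text should score the same, which is precisely what makes the thought experiment interesting.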
