Discussion about this post

Junaid Mubeen

Gary, I felt your earlier post brilliantly articulated the nuance of your position. Sadly (through no fault of your own) it has gotten lost through your association with FLI and others who ally themselves with AGI narratives. The commonality you appear to have with your fellow signatories is an appreciation for how powerful these systems are, and for the damage they are poised to wreak across society. Your own position seems to be that the danger stems largely from the brittleness of these systems: they are terrifying not because they are robustly intelligent, or remotely conscious, but precisely because they are the opposite. It is because they lack any grounding in the world, and are so sensitive to their inputs, that we have to be wary of them (along with the obvious threats they pose to our information ecosystem, etc.). Please continue to shift the focus away from the presumed dawning of superintelligence and remind people that AI is dangerous because it is both powerful and mindless (and, dare I say, at times utterly stupid). This is no time to cede our human intelligence!

macirish

Remember the Morris Worm?

Quoting Wikipedia:

"November 2: The Morris worm, created by Robert Tappan Morris, infects DEC VAX and Sun machines running BSD UNIX that are connected to the Internet, and becomes the first worm to spread extensively "in the wild", and one of the first well-known programs exploiting buffer overrun vulnerabilities."

As I recall, a lot of systems were damaged, and a lot of angry sysadmins had to fix them.

People criticized Mr. Morris not just because he caused a lot of damage, but because there was nothing remarkable about the code he wrote. He hadn't created something special; it was second-rate code.
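[For anyone who hasn't met the bug class the Wikipedia entry mentions, here is a minimal C sketch of a buffer overrun. It is illustrative only, not the worm's code; the worm's fingerd attack overflowed a 512-byte buffer that was filled with gets(). Since gets() was removed from the C standard, strcpy stands in for it here.]

```c
/* Illustrative buffer overrun: an unbounded copy into a fixed-size
 * stack buffer, the bug class the Morris worm exploited in fingerd.
 * Not the worm's actual code.
 */
#include <stdio.h>
#include <string.h>

static void read_request(const char *input)
{
    char buf[512];      /* fixed-size stack buffer */
    strcpy(buf, input); /* BUG: no length check; input longer than 512
                           bytes overwrites adjacent stack memory,
                           including the saved return address */
    printf("got: %s\n", buf);
}

static void read_request_safely(const char *input)
{
    char buf[512];
    /* Bounded copy: over-long input is truncated instead of
       overflowing the buffer. */
    snprintf(buf, sizeof buf, "%s", input);
    printf("got: %s\n", buf);
}

int main(void)
{
    /* Both are harmless on short input; only the first is exploitable
       when an attacker controls the input length. */
    read_request("finger user@host");
    read_request_safely("finger user@host");
    return 0;
}
```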

My comment about the letter and the proposed pause is this:

1. The Morris worm was nothing remarkable, yet it caused widespread damage.

2. Consider the demonstrated ability of GPT-4 to "get out of the box".

3. You can't trust LLMs, and the people who built them don't even know how they work. It's said that their creators were surprised by GPT-3's abilities; I don't ever remember being surprised by a program I wrote.

It would seem like a good idea to move ahead with caution.

One final thought: the people coding LLMs should carefully consider the potential liability of what they are creating. The sysadmins who had to repair the damage caused by the Morris worm had no recourse to recover their costs, but you can bet that if a similar incident happens, an unforgiving public will see that someone pays for it.

