Discussion about this post

Art Keller:

Gary, this is huge. For background, I used to work for the CIA's Counter-proliferation Division, which existed to stop the creation and spread of Weapons of Mass Destruction like nukes, chemical weapons, and biological weapons. The development of chemical and nuclear weapons requires chemicals, elements, and machinery that are distinctive, and thus easier to discover and shut down via treaty, sanctions, or covert avenues.

Biological weapons always were and always will be the toughest nut to crack in terms of stopping their development, because so much of biological weapons development is identical to legitimate biological research. This means LLMs will make that already-hard task even harder. It also points to something every AI company will be loath to admit: if your technology can improve a lethal technology like bioweapons, then what you are developing is inherently dual-use, i.e., it can be used for civilian AND military ends. The most serious dual-use technologies always face export restrictions for exactly that reason. I suspect one reason OpenAI's evaluation was "Oh, this isn't statistically significant" is that if it WERE statistically significant, they'd have put LLMs in an entirely different regulatory category, and despite what they claim, IMO they do NOT want any meaningful regulation. Their valuation would PLUMMET if Uncle Sam said, "Oh, hey, this is export restricted."

(of course, trying to enforce that would be a nightmare)

The fact that this study used GPT-4 with no safety guardrails in place (a model version the public can't access) is not a reason to disregard the threat here. Meta's open-source LLAMA is only 6 months to 1 year behind OpenAI, but because they've made their weights public, they've made the safety guardrails trivially easy to shut down. We cannot pretend safety guardrails on ChatGPT will save us when LLAMA WILL catch up and LLAMA's guardrails can be disabled in an hour. That's one reason open-source models are potentially very dangerous. Meta will never admit that, any more than OpenAI will admit LLMs can be dual-use. Their whole business model depends on them never being classified that way. I posted something related to this a couple of weeks back. https://technoskeptic.substack.com/p/ai-safety-meme-of-the-week-d9e

Lamb_OS:

1. The result is still derivative; it's just that the LLM groups had easier access to the *contents* of the sources on the Internet.

2. Nonetheless, this is the informational equivalent of easy access to firearms. Do we really want a more efficient predictive-analytics model that allows *more* antisocial behavior? I hope not…

3. Never screw with an experimental psychologist. The study of behavior has so much more noise and so much less signal than the older sciences that we've developed some of the most formidable experimental designs and statistical analyses in all of scientific inquiry. Gary is right: the chances of this study being published in a peer-reviewed journal, especially a tier 4 or 5 one, are zero.
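[A rough, hypothetical illustration of the statistical point above. The numbers are invented, not taken from the OpenAI evaluation: with arms of only about 25 participants, even a sizable real uplift can fail to clear the conventional p < 0.05 bar, so "not statistically significant" in a small study is weak evidence of no effect.]

```python
from statistics import NormalDist

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)          # pooled proportion under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a - p_b, p_value

# Hypothetical numbers: 25 participants per arm; the LLM arm succeeds on
# 15/25 tasks (60%) vs. 10/25 (40%) for the internet-only arm.
uplift, p = two_proportion_z(15, 25, 10, 25)
print(f"uplift = {uplift:.0%}, p = {p:.2f}")  # 20-point uplift, yet p ≈ 0.16
```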
