Discussion about this post

Aaron Turner

The safety-critical world (nuclear, railways, aerospace, etc.) would be a good starting point. Anyone who develops a safety-critical system is required to produce an evidence-based safety case for that system in order for it to be certified against technical standards. Only a system that has been so certified may be deployed. (See, for example, the UK Safety Critical Systems Club: https://scsc.uk). Any AI system sufficiently powerful to cause harm (either to individuals or to society, e.g. democracy) is effectively a safety-critical system, and should be required to be certified against strict technical standards prior to deployment.

Given that we don't really understand how complex neural-net-based systems even work, I very much doubt that any NN-based system (such as an LLM) would meet the requirements for safety-critical certification. Which immediately means that anyone proposing such regulation is going to be accused of "stifling innovation" (i.e. wealth generation / tax dollars) at the expense of "us" (the US, UK) vs "them" (China, Russia, etc.). It's a classic Molochian Trap, where every actor behaves according to their own short-term self-interest, thereby leading to an endgame that is massively sub-optimal for everyone. The real AI problem is not the technology per se, but the global coordination problem.

Ted Wade

I am (very) glad to hear of your success in reaching possible regulators. It is stunning how many people are talking about AI without any knowledge of the real and often subtle issues. Your leadoff for the Bleak Future identifies one cause: the abyss between those concerned with Safety and those concerned with Ethics hinders and limits public understanding. I think the numerous possibilities for harm need to be made concrete in as many ways as possible. Your illustration of overt, visible calamity developing in the bleak-future scenario is a good example of what will help people grasp the risks. There can also be cryptic risks, and those need story-telling as well. I took a stab at illustrating how instrumental AI goals of persuasiveness could lead quite stealthily to human loss of control: https://tedwade.substack.com/p/artificial-persuasion. I wish it had more exposure.

