Discussion about this post

Russell Hogg

This seems exactly right. The risk right now is not malevolent AI but malevolent humans using AI.

Spherical Phil - Phil Lawson

Very well said!

And while there is growing recognition of the potential harm, damage, or even destruction from misused LLMs and other forms of AI, there is already immediate and growing harm that will lead to much pain, suffering, and many deaths, though, as you have said, causality will be difficult to prove.

The US and much of the rest of the world are experiencing a mental health crisis, with profound confusion about how to address it. This has created an explosion of billions in funding for mental health and well-being apps (sixty-seven percent developed without any guidance from a healthcare professional), some of which already use LLMs and GPT, and we know the results in advance. It is a bit like social media: it has many benefits, but years later society at large is only beginning to understand the significant harm, especially to children and the vulnerable, but also to society as a whole. This current form of AI is doing real harm today, harm that will grow, even if the AI is not malevolent and never leads to the end of civilization.

