Discussion about this post

Roman Peczalski:

“Society is placing more and more trust in a technology that simply has not yet earned that trust.”

Society does not place trust; it simply gives way, abandoning itself to this technology because of the fundamental weaknesses underlying it: the greed of companies, the indolence of authorities, and users' pursuit of comfort and less effort. I am afraid that, globally, our society will quickly be very happy with AI and will not want to hear about the critical threats. Companies will make big money, governments will gain the ultimate tool of control over their populations, and ordinary people will feel supported and smarter. People will be pleased by AI systems, grow used to them, and come to depend on them. The game already seems to be over.

Rebel Science:

"As a society, we still don’t really have any plan whatsoever about what we might do to mitigate long-term risks. Current AI is pretty dumb in many respects, but what would we do if superintelligent AI really were at some point imminent, and posed some sort of genuine threat, eg around a new form of bioweapon attack? We have far too little machinery in place to surveil or address such threats."

This is the real existential threat, in my opinion. It is impossible to detect that a superintelligent AI is imminent; scientific breakthroughs do not announce their arrival in advance. It is also possible that some maverick genius working alone, or some private group, has clandestinely solved AGI unbeknownst to the AI research community at large and the regulatory agencies. A highly distributed AGI in the cloud would be impossible to recognize. I lose sleep over this.

