AI red lines
“We urge governments to reach an international agreement on red lines for AI — ensuring they are operational, with robust enforcement mechanisms — by the end of 2026.”
Today, one of my personal heroes, the Nobel Laureate Maria Ressa, delivered the “Global call for AI red lines to prevent unacceptable AI risks” to the UN General Assembly, referring to a letter signed by 200 world leaders, including multiple Nobel Laureates, AI experts, and former heads of state. (Stuart Russell gives a good discussion of criteria for devising red lines, with possible examples here.)
Although I can’t fully endorse the word “soon,” and have a different take on what precisely current models are doing, I signed too, because we have let too much slide and done too little to face the risks.
I will give the last words to Turing Award winner Yoshua Bengio, one of the letter’s signatories:
The reality is, there will be a never ending supply of people eager to exploit AI to satisfy their agenda, enrich their bank accounts, and harm others. Drawing lines in the sand, while important, will often just be symbolic …
I'm much more concerned about the deceitful and harmful behavior of humans using AI than about the deception of AI itself, which is a learned pattern from its training by humans. The myth of mass unemployment has no rational basis and serves as a distraction from the real dangers. The most devastating threat comes from the human use of AI for political propaganda and anti-climate change disinformation.