18 Comments

Kudos for keeping your gloves up and not giving up the fight; the world needs you. It must be frustrating to have seemingly intelligent people in front of you closing their eyes to common sense by pretending that jumping into an unknown of this magnitude without caution could ever be a good idea.

Definitely can be frustrating. Love the boxing metaphor :)

I've been speaking to lots of folks in the AI governance space lately, primarily academics, and I'm struck by how many of them underscore how vital the need is for more research to figure out the nature and scope of many known AI risks. What's also clear is that there's a real schism between those who think the AGI question is helpful, even if overstated, because it focuses policy attention and resources on AI risk generally, and those who think it's ultimately crowding out the conversation on more immediate, human-driven harms. While I'd like to say "why not both?" there may need to be a reckoning among promoters of trustworthy AI if we are going to make progress.

"Lake hopes to tackle this problem by studying how people develop a knack for systematic generalization from a young age, and incorporating those findings to build a more robust neural net." https://www.nature.com/articles/d41586-023-03272-3

Lake has not responded to the queries I posed to him on Twitter about that work.

Will a CERN for AI change your comment, Gary, that ‘few other ideas are even on the table’?

depends on how it’s run, clearly

Gary, you dropped a teaser about Asimov’s three laws being insufficient. Have you elaborated that point in an article or essay? If so, can you provide a link? I’d love to read it. Thanks, Joel

A “global, neutral, non-profit International Agency for AI”, led by the USA with veto power, with no Chinese, Russians, Iranians, or Klingons allowed? Get real.

We'd probably try to avoid past mistakes: an international AI agency would have to give every member similar power and voice, even if it's at the cost of potency. You can't have one country override consensus and turn Arab children into paperclips because it deems their lives to be worthless.

People who say stuff like this:

> We need research breakthroughs to solve some of today’s technical challenges in creating AI with safe and ethical objectives.

are a massive danger to Humanity, because they're *assuming* that such solutions are software issues that will come through Research (so, nothing to see here, carry on, all under control).

But if X creates Y, and Y is a million times cleverer than X, X is doomed: things that, by X's thinking, should constrain Y will be toytown thoughts (or unnoticed entirely) in Y's thinking.

I.e., p(doom) is obviously close to 1 when you invent intelligences way, way, way above your own, *no matter how much you've convinced yourselves in advance that you have controls in place*.

This should be *blindingly obvious*.

I recently wrote a novel on so many of these topics, with the tagline: "In the battle over Advanced AI, will we lose our humanity or learn what truly makes us human?" Ironically, the group that rises to oppose AI is named the Prometheus Guard and, without spoiling too much, their reaction ends up driving much of the chaos in the book.

Fundamentally, it's an exploration of the technology, sociology, and psychology of AI and how we react to it. It doesn't provide clear answers about what to do, and that is why I titled it Paradox.

Video preview here:

https://www.youtube.com/watch?v=k4_Ej2B2ZV4&ab_channel=PolymathicDisciplines

Buy it here:

https://www.amazon.com/Paradox-Book-One-Singularity-Chronicles-ebook/dp/B0C7NBZX89/

Let me know your thoughts by commenting.

You have been tirelessly arguing about fundamental limitations of LLMs, like hallucinations and unreliable reasoning. But if these problems aren't solved, then use of LLMs won't proliferate much, so we don't have to worry about AI safety; they won't be much more impactful than they are now. Also, speaking of the current risk of AI: what are the worst harms caused in the last year by the AIs that are to be regulated (like ChatGPT, Bard, Claude, or Pi)? The deepfakes in the Slovak elections weren't generated using them, and regulation wouldn't have affected them anyway. It is hard to find an important product as benign and harmless as these four chatbots!

The global AI control organization is doomed to be highly effective at wasting money while being completely unable to control anything.

I will send your love to those who have run ICAO and IAEA for decades.

The main danger of AI for society is the possibility of its being used by authorities to control the population. None of the organizations mentioned has the right to interfere in the internal politics of participating countries. The proposed organization will likely follow the same principle; otherwise it is simply impossible.

Global governance of AI is a wishful thinking fantasy.

Whatever system is put in place will have little to no effect upon those who ignore rules, laws, and regulations, that is, those who present the biggest risk. EXAMPLE: I've yet to see any AI expert explain how AI regulation schemes will be enforced upon the Russians, Chinese, Iranians, North Koreans, drug cartels, terrorist groups, and reckless teenagers, all of whom are connected to the entire human population via the global Internet.

As to corporate giants, it seems reasonable to guess that they will use their vast wealth to buy off the lawmakers. For example, the tobacco industry has been literally killing about 400,000 Americans each and every year for decades, and it gets away with it because the industry owns significant portions of the U.S. Congress. The same is true, on a smaller but still deadly scale, for the U.S. gun industry. With both tobacco and guns, some ineffective regulation is put in place so the lawmakers can cover their asses, and then the corporate killing goes on, and on, and on.

But let's say I'm wrong about the above, and imagine a global AI regulation scheme which somehow makes AI safe.

That doesn't matter.

What all AI experts seem unable to grasp is that the challenge presented by AI is just a symptom of, a subset of, the larger challenge presented by the knowledge explosion. So long as the knowledge explosion keeps generating new powers of vast scale at an accelerating rate, it doesn't really matter what happens with any one particular technology.

When it comes to powers of vast scale, it's not good enough to make this or that technology safe. ALL powers of vast scale need to be made safe. All of them, because any one power of vast scale contains the potential to bring down the entire system, thus ending the opportunity for learning and course correction.

The fatal flaw in all AI safety analysis is that experts remain stuck in the outdated concept of addressing the challenges presented by particular emerging technologies one by one by one. As explained here many times already, this is a LOSER'S game, because powers of vast scale are emerging faster than we can learn how to make them safe.

While we're scratching our heads about nuclear weapons, genetic engineering and AI emerge. And while we're scratching our heads about genetic engineering and AI, even more powers of vast scale will emerge. AI is not the end of the 21st century, but only the beginning. The pile of vast powers we don't know how to manage keeps growing, because we refuse to address the process creating them.

A fundamental problem underlying this entire process is that we want revolutionary new powers, as many as possible, but we are unable or unwilling to embrace the revolutionary new thinking that is the price tag for these revolutionary new powers. Here's a quick example to illustrate...

If we keep giving violent men ever more, ever larger powers, at what seems an ever-accelerating pace, we are signing the death certificate for the modern world. If we are to continue to passively ride the knowledge explosion wherever it takes us, the violent men simply have to go, as we can no longer afford them.

Here's a quick guide to revolutionary thinking. If everyone finds your ideas to be ridiculous, you're on the right track.

We regulate the nuclear power industry with an international agency; it's only logical to regulate the AI industry similarly, since a rogue AI could cause much more damage than a nuclear accident.
