16 Comments
Peter

Kudos for keeping your gloves up and not giving up the fight — the world needs you. It must be frustrating to face seemingly intelligent people who close their eyes to common sense by pretending that jumping into an unknown of this magnitude without caution could ever be a good idea.

Gary Marcus

Definitely can be frustrating. Love the boxing metaphor :)

Benjamin Riley

I've been speaking to lots of folks in the AI governance space lately, primarily academics, and I'm struck by how many of them underscore the vital need for more research into the nature and scope of many known AI risks. What's also clear is that there's a real schism between those who think the AGI question is helpful, even if overstated, because it focuses policy attention and resources on AI risk generally, and those who think it's ultimately crowding out the conversation on more immediate, human-driven harms. While I'd like to say "why not both?", there may need to be a reckoning among promoters of trustworthy AI if we are going to make progress.

Michael Molin

"Lake hopes to tackle this problem by studying how people develop a knack for systematic generalization from a young age, and incorporating those findings to build a more robust neural net." https://www.nature.com/articles/d41586-023-03272-3

Gary Marcus

Lake has not responded to the queries I posed to him on Twitter about that work.

Beth Carey

Will CERN for AI change your comment, Gary, that ‘few other ideas are even on the table’?

Gary Marcus

depends on how it’s run, clearly

Joel Allen

Gary, you dropped a teaser about Asimov’s three laws being insufficient. Have you elaborated that point in an article or essay? If so, can you provide a link? I’d love to read it. Thanks, Joel

Scott Foster

a “global, neutral, non-profit International Agency for AI" — led by the USA with veto power, no Chinese, Russians, Iranians, or Klingons allowed? Get real.

Peter

We'd probably try to avoid past mistakes: an international AI agency would have to give every member similar power and voice, even if that comes at the cost of potency. You can't have one country override consensus and turn Arab children into paperclips because it deems their lives worthless.

xxxx oooo

People who say stuff like this:

> We need research breakthroughs to solve some of today’s technical challenges in creating AI with safe and ethical objectives.

are a massive danger to Humanity, because they're *assuming* that such solutions are software issues that will come through Research (so: nothing to see here, carry on, all under control).

But if X creates Y, and Y is a million times cleverer than X, X is doomed: things that, by X's thinking, should constrain Y will be toytown thoughts (or go unnoticed entirely) in Y's thinking.

I.e., p(doom) is obviously close to 1 when you invent intelligences way, way, way above your own, *no matter how much you've convinced yourselves in advance that you have controls in place*.

This should be *blindingly obvious*.

Jan Matusiewicz

You have been tirelessly arguing about fundamental limitations of LLMs, like hallucinations and unreliable reasoning. But if these problems aren't solved, then use of LLMs won't proliferate much, so we don't have to worry about AI safety: they won't be much more impactful than they are now. Also, speaking of the current risk of AI: what are the worst things caused in the last year by the AIs that are to be regulated (like ChatGPT, Bard, Claude, or Pi)? The deepfakes in the Slovak elections weren't generated using them, and regulation won't affect them anyway. It is hard to find an important product as benign and harmless as these four chatbots!

Mykola Rabchevskiy

A global AI control organization is doomed to be highly effective at wasting money, with a complete inability to actually control anything.

Gary Marcus

I will send your love to those who have run the ICAO and the IAEA for decades.

Mykola Rabchevskiy

The main danger of AI to society is the possibility of its being used by authorities to control the population. None of the organizations mentioned has the right to interfere in the internal politics of the participating countries. The proposed organization will likely follow this principle; otherwise it is simply impossible.

Roumen Popov

We regulate the nuclear power industry with an international agency; it's only logical to regulate the AI industry similarly, since a rogue AI can cause much more damage than a nuclear accident.