15 Comments

It's hard to have much confidence in governance schemes when, as best I can tell, such discussions never seem to mention what could be the most important threat presented by AI: further acceleration of the knowledge explosion.

Let's imagine for a moment that AI is made perfectly safe. Safe AI would still accelerate the knowledge explosion, just as computers and the Internet have. The real threat may come less from AI itself than from the ever more numerous, ever larger powers that emerge from an AI-accelerated knowledge explosion.

The knowledge explosion has already produced at least three powers of vast scale which we have little to no idea how to make safe:

1) Nuclear weapons

2) Artificial intelligence

3) Genetic engineering

Instead of learning from this, we're using tools like AI to pour even more fuel on the knowledge explosion, which will almost certainly produce even more powers of significant scale that we will also struggle to make safe. As the emergence of AI illustrates, this process is feeding back on itself, leading to ever further acceleration.

Experts are playing a losing game in trying to address emerging threats one by one as they come off the knowledge explosion assembly line, because that accelerating process will produce new threats faster than we can figure out how to defeat existing ones. Nuclear weapons were invented in 1945, before almost all of us were born, and we still have no clue how to get rid of them.

What we need are holistic thinkers: experts who will focus on the knowledge explosion assembly line that is producing all these emerging threats.

Taking control of the knowledge explosion so that it produces new powers at a rate we can successfully manage is not optional. It's a do-or-die mission. Experts can declare this goal impossible all they want, but it will remain a do-or-die mission.

The knowledge explosion has created a revolutionary new environment. Nature's primary rule is that creatures who can't adapt to changing conditions must die.


Thanks for the likes, folks, appreciate it.

Discussion is most welcome. If my comment above is generally correct, that topic seems like rather a big deal. If the comment is generally wrong, someone should explain that to me so I don't keep typing it over and over. :-) Many thanks!


This is an exciting initiative! Building the technical basis for auditing and regulation is the highest priority. We are starting to see some papers on this, but we have a long way to go.


I must say that my personal reaction was surprise at the boldness of the proposal, and also that its chances of success are negligible. Not trusting my personal perspective, I used GPT-4 to investigate; it concluded “that while Marcus's proposal is innovative and necessary, there are significant concerns about its implementation. Transparency, inclusivity, and adaptability are crucial, as well as a keen awareness of cultural, ethical, and security implications. Despite these challenges, the consensus is that the idea has potential and could, with thoughtful execution, help expedite the development of comprehensive, international AI regulations.”

https://chat.openai.com/share/5f379a49-800e-428a-8a11-5e6de5697e52


Things will almost certainly develop in unexpected ways; this is nearly an iron law of human planning. I sincerely wish you and your associates the best, Gary.


Good to have a no-BS person leading a coherent alternative to the "sky is falling" Chicken Little movement. Thank you.


“People worry that computers will get too smart and take over the world, but the real problem is that they're too stupid and they've already taken over the world.”

― Pedro Domingos


This is fantastic! I hope that some of the training and consulting will be available to advocacy organizations as well, so they can learn more about the policies they can and should be pushing for.


re: "so they can learn more about the policies they can and should be pushing for."

Without seeing these policies, what reason is there to be so confident, as a few commenters here seem to be, that they will be what "should" be pushed? An expert in AI (or some aspects of it, at least) doesn't necessarily know the best policy to push. This post at least finally acknowledges the concern of regulatory capture that he seemed naive about in prior posts. Based on those posts, he seemed to have little grasp of his lack of knowledge, which suggests that even if he knows more now, there is a good chance he is still overconfident. Perhaps he is teaming up with folks who know better how to design policy, but I have grave doubts about this based on his prior posts. Unfortunately, I suspect that many folks will, like the commenters here, just reflexively jump on board and assume that someone who knows about AI will necessarily come up with the "right" policies that "should" be pushed.


This is an incredibly exciting development and I’m so pleased to see it launched. An initial challenge will be balancing the need to move swiftly to serve the growing appetite for AI regulation, while also thinking carefully about how to build a sustainable international nonprofit organization. Congratulations, Gary, and to everyone involved in this project!


I have been designing a cognitive architecture for AGI since 1985. The overriding objective of responsible AGI design is that the machine should behave in a manner that is maximally-aligned with human beings in perpetuity. Thus *alignment* should be the priority, not *intelligence* per se. That said, considerable intelligence is a necessary precondition for maximal alignment. As a corollary, any system falling below the intelligence threshold required for maximal alignment will necessarily be less than maximally aligned. (All contemporary AI systems fall into this category.) As a consequence of being less than maximally aligned, any such system will necessarily inflict otherwise avoidable societal harm (increasing to massive societal harm if deployed at scale), in parallel with generating value (and therefore wealth) - the two are not mutually exclusive. The good news is that any AI system failing to meet the intelligence threshold required for maximal alignment will be less intelligent than humans, and therefore (relatively) easily overpowered, and thus too dumb to represent an existential threat. But that of course offers only a temporary respite - given sufficient R&D effort, and therefore time, deployed AI systems will inexorably possess sufficient intelligence to pose a genuine existential threat should they be anything less than maximally aligned with human beings. And thus we're back to alignment being the key to AGI design.

This line of reasoning leads me to conclude that AI regulation must encompass the following:

(1) avoid x-risk (tightly regulate the development of potentially super-intelligent AGI)

(2) maximise AGI alignment (maximise societal benefit, minimise societal harm)

(3) develop super-intelligent AGI collaboratively, with benefits shared equally by all mankind

(4) resolve the Molochian coordination problem between both AI labs and nation states

(5) redistribute AGI-generated wealth (between both individuals and nation states)

(6) ease the pain (to individuals) of the likely 50-100 year transition from the current (normal human employment) era to the post-super-intelligent AGI (zero human employment) era.

Just my two cents. (That said, I've been thinking about these things for a loooong time!) :-)


Aaron Turner writes, "The overriding objective of responsible AGI design is that the machine should behave in a manner that is maximally-aligned with human beings in perpetuity."

It's taken as a truism that our goal should be to align AI with human values. When we say that, do we mean the fantasy human values we wish we had, or our actual real-world human values?

FANTASY VALUES: There is a LOT of discussion of a potential existential risk from AI. We feel like responsible citizens as we consider the possibility that AI MIGHT someday destroy the human race, etc. These are our fantasy human values.

REAL VALUES: Meanwhile, right now, today, thousands of massive hydrogen bombs stand in their silos, patiently awaiting the order to launch and destroy the modern world in less than an hour. We rarely find this interesting enough to merit discussion. These are our real human values.

Real-world evidence from the last 75 years strongly suggests that if AI were to present an existential risk, we would quickly dive deep into denial, sweep the threat under the rug, and largely pretend that it doesn't exist. Then, having become bored with that threat, we would enthusiastically race to create new ones.

This is who we are. These are the real-world human values that we hope to teach AI. We're ignoring one very real existential threat because we're busy creating what may be another one.


Fantastic news. Some very good, thoughtful commentary here as well. I suspect we are running out of time - which is what big tech companies are hoping for - to head off a neoliberal fait accompli.


Perhaps we NEED conscious systems to effectively address these needs (with human oversight, of course). I see this as a necessity rather than a [luxury, fantasy], given the [complexity, beyond-human-or-organisation size] of current Large Language Models (LLMs) and entirely new concepts to come. [Ethics, morality, law] are perhaps what is most sought, but consciousness (e.g. the ability to [predict, see [error, side-effect]s, learn, evolve, adapt to changing human demands], who-am-I-the-machine, etc.) may be a necessary precursor? Above all, my guess is that all such systems will operate in a [COMPETITIVE, COOPERATIVE] environment, and like [law, markets, social relationships, business, sport, etc.] would probably have to [be designed, interact, "survive"] accordingly? Such systems wouldn't just interact in the AI (I prefer CI, for Computational Intelligence, as an acronym - old style); they could also be used as [measures, unknown-unknown identifiers, hypothesis spinners, human and machine global team-builders], and serve to focus on the needles in the hurricane of haystacks that would be difficult to follow even for large organisations of humans. A diversity of such systems acting in the real world, all somewhat different, and hopefully not all controlled by the same powers: maybe that's a huge potential benefit of concepts like Marcus et al.'s "customizable governance-in-a-box, catalyzed by philanthropy".


I believe that these systems will eventually operate without human oversight. GPT agrees; it concluded “that the notion of AI systems operating without human oversight is conceivable in the distant future. It is vital to incorporate a multi-disciplinary approach in shaping the future of AI, ensuring all aspects - ethical, societal, security, cultural, and economic - are thoroughly deliberated. The consensus is that this transition will need careful, thoughtful, and globally coordinated management.”
