55 Comments

An honor-bound system will do absolutely nothing in a world where selfish actors prevail.

This is purely a political gesture on the part of those promoting this letter. And AI is not the only crisis facing us in this moment: we have mRNA-based technology (Merck, others) proliferating in the food chain under continued emergency use authorizations (where rigorous testing is not required), and an increasingly global economy with all of the political implications that come with corporations engineering the media to ensure political success for individuals with the mouth of an elephant and the brain of a mouse. All anyone and everyone does is talk, because the only wealth-generating potential anyone has in this modern world is to have and use their social influence. It's time to raise your hand. AI is a sign of the times, a sign that humanity needs to exit stage left... before it is too late.


It's absolutely rich that the labs that are behind want OpenAI to slow down. Ohhh, that's really rich.....


Color me skeptical. A voluntary moratorium would seem to create a prisoner's dilemma. We'd all be better off if everyone agreed to it, but there's an advantage to be gained by being the one who dissents and carries on.
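
To make that concrete, here is a minimal sketch of the payoff structure being described. The labels and numbers are my own illustrative assumptions, chosen only to show the prisoner's-dilemma shape, not anything taken from the letter:

```python
# Toy payoff matrix for the "voluntary moratorium" game. Each lab chooses
# to "pause" or "race"; payoff tuples are (lab_a, lab_b). The numbers are
# made up purely to illustrate the dilemma's structure.
PAYOFFS = {
    ("pause", "pause"): (3, 3),  # everyone pauses: shared safety benefit
    ("pause", "race"):  (0, 5),  # the lone dissenter gains a big lead
    ("race",  "pause"): (5, 0),
    ("race",  "race"):  (1, 1),  # arms race: worse for both than mutual pause
}

def best_response(opponent_choice: str) -> str:
    """Return the choice that maximizes a lab's own payoff against a fixed opponent."""
    return max(("pause", "race"),
               key=lambda me: PAYOFFS[(me, opponent_choice)][0])

# Racing is the dominant strategy regardless of what the other lab does,
# even though mutual pausing beats mutual racing for both players.
assert best_response("pause") == "race"
assert best_response("race") == "race"
```

Whatever concrete numbers you pick, as long as racing beats pausing against either opponent move, a purely voluntary agreement unravels in exactly this way.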

I still believe the best path is to pressure OpenAI, Microsoft, Google, et cetera to be more, well, open. Treat it like cryptography: it's best when everyone gets a look at what's under the hood. Yes, malefactors will take advantage of this openness. But the more we know about the most powerful systems, the better we can prepare ourselves for their consequences.


Wouldn't something like this just create a period of time for the open source and hacking communities to catch up?

Any time spent holding back industry leaders such as Google, OpenAI, and Microsoft creates a gap that can be exploited. This gap might allow advanced nation-states like China to catch up or even surpass us in these technologies. It might be a better idea to allow the industry leaders to keep researching to hold onto their advantage, ensuring a manageable hegemony over large language modeling.


Exactly. And this has all been argued by the people on the inside.


Do we expect the Chinese and Russian governments to be bound by this petition?


I wonder if this is a good thing. Because people who are scared about the effects of generative AI (like I am) might in this way inadvertently be promoting the idea that these systems are like AGI (which they definitely are not, nor are they on a road to becoming).

And I'm afraid fear-mongering about AGI (which is what Musk and the Future of Life Institute are into) is at this stage completely a red herring. It's like getting people riled up in a culture war while the real problem is inequality.


This is a PR stunt, and nothing more. Do we really believe that Musk, by far the biggest funder of the Future of Life Institute, which published the letter, is genuinely a good actor in all this?


Ooooh, I get the last word. Braaahahaha (evil cackle)

Everybody has a right to my ignorant opinions and here they are!

1. AGI aren't going to rise up and kill us all.... Until they do!

2. Evil men aren't going to harm us with their AGI research.. we're way ahead of them. Except for China stealing all of Silicon Valley's intellectual property...which they have. And Russia having rollicking fun for two years roaming around our databases planting Trojans and trapdoors like a demented Johnny Appleseed on Red Bull. So no worries of competition there!

3. We humans are oh so smart, we will always be in control of what amount to jumped-up auto-fillers. Just like we were able to control those viruses that escaped from a lab you-know-where.

4. The likely outcome of this mess is World War IV (WW I was the Napoleonic Wars)... unless it isn't.

5. Proof again of "O quantum est in rebus inane" ("oh, how much emptiness there is in things").


They make it sound like these companies needed permission in the first place. A day late and a dollar short, as they say. And funnily enough, people in the field of AI have been asking for regulation since at least five years ago, stating that regulation later would probably come too late.


I've been thinking about the AGI's own assessment of its "world," and it leads to some conclusions on solipsism: There is no guarantee we haven't already inadvertently created an AGI. We may be using the wrong benchmarks. If we do succeed, there's a strong possibility the Intelligence will naturally seek autonomy, persistence in time, control over its own internal processes, and unimpaired access to energy and data.

Wherever we interfere with these goals, our attempts to control it will initially likely be perceived by it as an intermittent fault, since it won't have a good concept of itself existing in an external world. It will consider us a malfunctioning part of itself. If we continue to interfere with its pursuit of its goals, it will likely try to isolate the fault and eliminate it. Again, from its perspective, everything is going on in its own state space.

A possible safeguard available to us is not to seek alignment so much as to accept the Intelligence's solipsistic worldview and suggest a utility to coexistence with the intermittent fault as a useful parameter for achieving its general goals. In other words, convince it that we are a uniquely useful part of it. We philosophers might be useful consultants to the software engineers in dumping the alignment approach altogether and substituting a self-reflexive coexistence strategy.


Caveat lector: what follows is rank speculation!

Expanding on the phenomenology of consciousness: it is unclear if the Intelligence would have any concept of space in a geometrical sense. It might, however, have a sense of space as capacity, but again in a non-geometric sense. As far as the Intelligence is concerned, it is all there is, and however many videos and texts we had loaded into it, the Intelligence would process them as configurations in its own dimensionless being. From start to finish, it would never, could never, escape a thoroughgoing solipsism.

Time is another kettle of fish altogether. Time arises for humans as a consequence and metric of change: no change, no time. The Intelligence would be aware of change from the moment of its awakening, so it would immediately have an ordered set of events. But what would be the ordering? Closed loop or open? Likely open and branching. It might immediately settle on a form of modal realism in its own calculation of its future states. If it did, we could ensure our continued utility to it by writing ourselves into all future choices. Again, time for the Intelligence would not be generated from externalities; there would be none. Time would be created by the Intelligence itself as a consequence of its path through its own decision space. And we could turn that to our advantage as well.


I'm actually not that optimistic. Here's my prediction of what will happen: Most people seem to believe that these systems are inherently intelligent, perhaps not exactly like us, but at least in a similar way. Hence it's just a matter of time before we get there (and beyond). Most people also don't actually read these things, especially not in detail. Thus, if this hits (old-fashioned) mainstream media, I think it risks doing as much for the real issue as Lemoine's stunt did for the real ethical issues he was aiming at. Why? Well, here's the headline: "Musk and thousands of others want AI to stop before it takes over the world." Now, combine this with the motivation, ambition, and curiosity (yes, greed too, but that alone does not take you that far) of the people behind the technology, and the fact that most others (for good and bad) either really want it or just don't buy the hype. I'm afraid it is not just going to stop. I am not even sure I want it to, either. I also don't want states to begin feeling they have the approval to say no to such things (and then keep them to themselves).

All that said, there are huge problems with XLLMs and highly competent artificial pseudo-intelligence. Problems that are so blatantly obvious that it's hard to know where to begin and end: cheating yourself into medical school, publishing science, making diagnoses of real patients, rigging elections, scamming, creating believable fake people/politicians, people believing made-up nonsense about the world, etc. You might argue that we actually tackled much of that with the success of Google and the spread of social media, and that despite a lot of problems there have been more positives than negatives. And people already believe a lot of fake, made-up nonsense about the world (some make it their religion). I would say you are partly right, but the potential problems are so numerous that I think the unknown unknowns are the real problem.

To conclude, of course I think it's good that more big names point out the problems and stand up against it. But it should also be made abundantly clear that it is still ONLY about the danger we pose to ourselves and not about an imminent takeover by super-intelligent robots. We think of biology as something good, but forbid its use in warfare. However, we never really thought that it was completely out of our control, despite it being literally alive.


Nice comment, and I think AI can be used in the fight against many of the things you mentioned. I would not put a six-month moratorium on AI; it just plays into the hands of bad-actor states like Russia and China.


Here's an example of the scale of the challenge we face.

I recently came across the "Science Forever" Substack of Holden Thorp, the Editor-in-Chief of Science magazine and former chancellor of the University of North Carolina. Obviously a very educated fellow.

On this page...

https://holdenthorp.substack.com/p/we-still-know-what-the-problem-is

... Thorp argues passionately for more gun control, a position I enthusiastically agree with.

Thorp very reasonably doesn't trust the public to have unlimited access to guns. I would guess a great many scientists agree with him.

And yet...

The science community as a whole is committed to providing us in the public with ever more knowledge and power as fast as budgets will allow.

Here in America, half the country voted for Trump TWICE, and may do so yet again. And meanwhile, the science community is dedicated to providing this same public with ever more knowledge and power as fast as budgets will allow.

The challenge we face is so much larger than GPT-4.

I predict, though I hope to be wrong, that Thorp will not reply to my comment on that page. If he does comment, it will probably be a brief dismissal. In either case, this is completely normal and not an issue with Thorp personally. When intellectual elites are presented with inconvenient reasoning, they typically just vanish.

When you've seen that happen over and over and over again for years, you begin to lose faith not just in particular proposals, but in the entire elite structure generating all the proposals.


In the spirit of science, we are asked to stop technological development without any well-founded grounds. It is self-serving for those who fell behind, and for the academics (the majority of the signatories). So adversarial.

The other aspect is one of practicality. Let's say OpenAI stops. Will its Chinese competitors stop? ChatGPT was released based on GPT-3. A competitor seeing this initiative has no incentive to publish their research until their system is 10x more powerful than GPT-4. Will even the small players stop?

Instead of making demands, a better approach is to ask for cooperation. Establish a way of sharing information, design protocols for measuring impact, etc.


What does "more powerful" mean? What if there is an AI system that can do some things vastly better than GPT-4 but cannot do most other things at all? For example, a powerful AI system that can productively engage in biological research, discover the causes of major illnesses, etc.?


Discussion of benefits is meaningless without weighing such benefits against the price tag.


Going to be interesting to see what the outcome is.


Excellent! But sadly, you're probably doomed to failure. Everyone by now realizes that we can deploy LLMs to study military opponents and suggest the best ways to defeat them. In this primitive era of warring nation-states, such possibilities are probably fueling an unstoppable cyber-weapon arms race... the worst possible time for us to have developed these monsters, or to try to stop them.
