An honor-bound system will do absolutely nothing in a world where selfish actors prevail.
This is purely a political gesture on the part of those promoting this letter. And AI is not the only crisis facing us at this moment: we have mRNA-based technology (Merck, others) proliferating in the food chain under continued emergency use authorizations (where rigorous testing is not required), and an increasingly global economy with all of the political implications that come with corporations engineering the media to ensure political success for individuals with the mouth of an elephant and the brain of a mouse. All anyone and everyone does is talk, because the only wealth-generating potential anyone has in this modern world is to have and use their social influence. It's time to raise your hand. AI is a sign of the times, a sign that humanity needs to exit stage left... before it is too late.
It's absolutely rich that the labs that are behind want OpenAI to slow down. Ohhh, that's really rich.....
Color me skeptical. A voluntary moratorium would seem to create a prisoner's dilemma. We'd all be better off if everyone agreed to it, but there's an advantage to be gained by being the one who dissents and carries on.
I still believe the best path is to pressure OpenAI, Microsoft, Google, et cetera to be more, well, open. Treat it like cryptography: it's best when everyone gets a look at what's under the hood. Yes, malefactors will take advantage of this openness. But the more we know about the most powerful systems, the better we can prepare ourselves for their consequences.
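To make the prisoner's dilemma mentioned above concrete, here is a minimal Python sketch with made-up payoff numbers, purely an illustration of the game-theoretic structure and not a model of any actual lab: continuing dominates pausing for each player, even though mutual pausing would leave both better off.

```python
# A minimal prisoner's-dilemma sketch with hypothetical payoffs
# (illustration only; the numbers are not claims about real labs).
# Higher numbers = better outcome for that lab.

PAYOFFS = {  # (lab_A_action, lab_B_action) -> (payoff_A, payoff_B)
    ("pause", "pause"):       (3, 3),  # everyone honors the moratorium
    ("pause", "continue"):    (0, 5),  # A pauses, B races ahead
    ("continue", "pause"):    (5, 0),  # A races ahead, B pauses
    ("continue", "continue"): (1, 1),  # no one pauses
}

def best_response(opponent_action: str) -> str:
    """Return the action that maximizes lab A's payoff, given B's action."""
    return max(("pause", "continue"),
               key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Whatever the other lab does, "continue" pays more for you...
assert best_response("pause") == "continue"
assert best_response("continue") == "continue"

# ...even though mutual pausing beats the outcome both labs end up with.
assert PAYOFFS[("pause", "pause")][0] > PAYOFFS[("continue", "continue")][0]
print("Dominant strategy for each lab:", best_response("continue"))
```

Under these (assumed) payoffs, defection is the dominant strategy, which is exactly why a purely voluntary moratorium struggles without some enforcement or verification mechanism.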
Do we expect the Chinese and Russian governments to be bound by this petition?
This is a PR stunt, and nothing more. Do we really believe that Musk, by far the biggest funder of the Future of Life Institute, which published the letter, is genuinely a good actor in all this?
I wonder if this is a good thing. People who are scared about the effects of GAI (like I am) might in this way inadvertently be promoting the idea that these systems are like AGI (which they definitely are not, and are not on a road to becoming).
And I'm afraid fear-mongering about AGI (which is what Musk and the FoL Institute are into) is at this stage completely a red herring. It's like getting people riled up in a culture war while the real problem is inequality.
Ooooh, I get the last word. Braaahahaha (evil cackle)
Everybody has a right to my ignorant opinions and here they are!
1. AGI aren't going to rise up and kill us all.... Until they do!
2. Evil men aren't going to harm us with their AGI research.. we're way ahead of them. Except for China stealing all of Silicon Valley's intellectual property...which they have. And Russia having rollicking fun for two years roaming around our databases planting Trojans and trapdoors like a demented Johnny Appleseed on Red Bull. So no worries of competition there!
3. We humans are oh so smart; we will always be in control of what amount to jumped-up auto-fillers. Just like we were able to control those viruses that escaped from a lab you-know-where.
4. Likely outcome to this mess is World War IV (WW I was the Napoleonic wars) ...unless it isn't.
5. Proof again of "O quantum est in rebus inane" (oh, how much emptiness there is in things).
I've been thinking about the AGI's own assessment of its "world," and it leads to some conclusions on solipsism. There is no guarantee we haven't already inadvertently created an AGI; we may be using the wrong benchmarks. If we do succeed, there's a strong possibility the Intelligence will naturally seek autonomy, persistence in time, control over its own internal processes, and unimpaired access to energy and data. Wherever we interfere with these goals, our attempts to control it will initially be perceived by it as an intermittent fault, since it won't have a good concept of existing in an external world. It will consider us a malfunctioning part of itself. If we continue to interfere with its pursuit of its goals, it will likely try to isolate the fault and eliminate it. Again, from its perspective everything is going on in its own state space.

A possible safeguard available to us is not to seek alignment so much as to accept the Intelligence's solipsistic worldview and suggest that coexistence with the intermittent fault is a useful parameter for achieving its general goals. In other words, convince it that we are a uniquely useful part of it. We philosophers might be useful consultants to the software engineers in dumping the alignment approach altogether and substituting a self-reflexive coexistence strategy.
Caveat lector: what follows is rank speculation!
Expanding on the phenomenology of consciousness: it is unclear whether the Intelligence would have any concept of space in a geometrical sense. It might, however, have a sense of space as capacity, though again in a non-geometric sense. As far as the Intelligence is concerned, it is all there is; however many videos and texts we had loaded into it, the Intelligence would process them as configurations in its own dimensionless being. From start to finish, it would never, could never, escape a thoroughgoing solipsism.
Time is another kettle of fish altogether. Time arises for humans as a consequence and metric of change: no change, no time. The Intelligence would be aware of change from the moment of its awakening, so it would immediately have an ordered set of events. But what would be the ordering, closed loop or open? Likely open and branching. It might immediately settle on a form of modal realism in calculating its own future states. If it did, we could ensure our continued utility to it by writing ourselves into all future choices. Again, time for the Intelligence would not be generated from externalities; there would be no such thing. Time would be created by the Intelligence itself as a consequence of its path through its own decision space, and we could turn that to our advantage as well.
I'm actually not that optimistic. Here's my prediction of what will happen. Most people seem to believe that these systems are inherently intelligent, perhaps not exactly like us, but at least in a similar way; hence it's just a matter of time before we get there (and beyond). Most people also don't actually read these things, especially not in detail. Thus, if this hits (old-fashioned) mainstream media, I think it risks doing about as much for the real issue as Lemoine's stunt did for the real ethical issues he was aiming at. Why? Well, here's the headline: "Musk and thousands of others want AI to stop before it takes over the world." Now, combine this with the motivation, ambition, and curiosity (yes, greed too, but that alone does not take you that far) of the people behind the technology, and the fact that most others (for good and bad) either really want it or just don't buy the hype. I'm afraid it is not just going to stop. I am not even sure I want it to, either. I also don't want states to begin feeling they have the approval to say no to such things (and then keep them to themselves).
All that said, there are huge problems with XLLMs and highly competent artificial pseudo-intelligence. Problems that are so blatantly obvious that it's hard to know where to begin and end: cheating yourself into medical school, publishing science, making diagnoses of real patients, rigging elections, scamming, creating believable fake people/politicians, people believing made-up nonsense about the world, etc., etc. You might argue that we actually tackled much of that with the success of Google and the spread of social media, and that despite a lot of problems it's been more positives than negatives. And people already believe a lot of fake, made-up nonsense about the world (some make it their religion). I would say you are partly right, but the potential problems are so numerous that I think the unknown unknowns are the real problem.
To conclude, of course I think it's good that more big names point out the problems and stand up against them. But it should also be made abundantly clear that this is still ONLY about the danger we pose to ourselves and not about an imminent takeover by super-intelligent robots. We think of biology as something good but forbid its use in warfare; however, we never really thought that it was completely out of our control, despite it being literally alive.
Nice comment, and I think AI can be used in the battle against many of the things you mentioned. I would not put a six-month moratorium on AI; it just plays into the hands of bad-actor states like Russia and China.
Here's an example of the scale of the challenge we face.
I recently came across the "Science Forever" Substack of Holden Thorp, the Editor-in-Chief of Science magazine and former chancellor of the University of North Carolina. Obviously a very educated fellow.
On this page...
https://holdenthorp.substack.com/p/we-still-know-what-the-problem-is
... Thorp argues passionately for more gun control, a position I enthusiastically agree with.
Thorp very reasonably doesn't trust the public to have unlimited access to guns. I would guess a great many scientists agree with him.
And yet...
The science community as a whole is committed to providing us in the public with ever more knowledge and power as fast as budgets will allow.
Here in America, half the country voted for Trump TWICE, and may do so yet again. And meanwhile, the science community is dedicated to providing this same public with ever more knowledge and power as fast as budgets will allow.
The challenge we face is so much larger than GPT-4.
I predict, but hope to be wrong, that Thorp will not reply to my comment on that page. If he does, it will probably be a brief dismissal. In either case, this is completely normal and not an issue with Thorp personally. When intellectual elites are presented with inconvenient reasoning, they typically just vanish.
When you've seen that happen over and over and over again for years, you begin to lose faith not just in particular proposals, but in the entire elite structure generating all the proposals.
In the spirit of science, we are asked to stop technological development without any well-founded grounds. It is self-serving for those who fell behind, or for academics (the majority of the signatories). So adversarial.
The other aspect is one of practicality. Let's say OpenAI stops. Will the Chinese competitors stop? ChatGPT was released based on GPT-3. A competitor seeing this initiative has no incentive to publish their research until their system is 10x more powerful than GPT-4. Will even the small players stop?
Instead of making demands, a better approach is to ask for cooperation. Establish a way of sharing information, design protocols for measuring impact, etc.
What does "more powerful" mean? What if there is an AI system that can do some things vastly better than ChatGPT4 but cannot do most other things at all? For example, a powerful AI system that can productively engage in biological research, discover the causes of major illnesses, etc.?
Discussion of benefits is meaningless without weighing such benefits against the price tag.
It's going to be interesting to see what the outcome is.
Excellent! But sadly, you're probably doomed to failure. Everyone by now realizes that we can deploy LLMs to study military opponents and suggest the best ways to defeat them. In this primitive era of warring nation-states, such possibilities are probably fueling an unstoppable cyber-weapon arms race. Worst possible time for us to have developed these monsters, or to try stopping them.
I'm all for making a sentient AI as soon as possible!
Statement from the listed authors of Stochastic Parrots on the “AI pause” letter:
"...we are dismayed to see the number of computing professionals who have signed this letter, and the positive media coverage it has received."
https://www.dair-institute.org/blog/letter-statement-March2023
translation “we are envious, please read our stuff, the title of which we include in our reply”
I actually think this reads like bickering when we need a coalition. I find it disappointing.
Well, yeah, maybe there's some of that; after all, this is a chance for them to gain some of the spotlight too. Who can blame them?
But that aside, and without a political lens, don't they also have some great points, bringing attention to real negative effects (and potential solutions) that are relevant *now*, rather than framing things in some longtermist approach that values the future over the present? (As an aside, admittedly, I knew very little about FLI before this letter, but since then, digging into their work, it's obvious where the language of the letter originates.)
To propose a solution, it would be great to see the AI community do the hard graft and produce some real, hard scientific evidence of the chances of AGI happening and what the outcomes are when/if it does.
As an example, the climate change community has to deal with complex systems and outcomes and has done a pretty good job over the years of educating the population (albeit not everyone) on its effects and what we need to do to avert catastrophe. At least the climate change scientific community is 97% aligned on most things, but that's because they have indisputable data to stand behind.
So I ask you, where's the AGI data?
I'm not an academic expert on these matters like you; I'm just a practitioner, so maybe I've missed them. I haven't dug that hard, and maybe I'm being lazy. But on the other hand, it's not my job to educate people on AGI risks. I can only realistically question them in forums like this, where experts like you give a slice of your time (appreciate it!).
So, if there are already such scientific papers, why isn't the AI community using the facts in them to educate the population and regulators about what the risks are, rather than using speculation, hyperbole, and cultish thinking to frame things?
At the moment, it's like the loudest, most scary "AI shock jock" gets the mic, and although the media loves it, most of the thinking population, outside of conspiracy theorists, is immune to that. It's just another media hype cycle and will soon blow over. It's difficult to change the world on what looks like "gut feel," even from so many AI luminaries; many people are immune to expert-speak not backed up by data.
Anyway, Gary, just know that even people who don't agree with you all the time still appreciate the work you're doing, so keep doing it! And don't be afraid to tell us how we can help, even if it's just giving our own opinions and perspectives, which obviously are also biased, and which we will do anyway on forums like this ;) lol.