Gary, I felt your earlier post brilliantly articulated the nuance of your position. Sadly (through no fault of your own) it has gotten lost through your association with FLI and others who ally themselves to AGI narratives. The commonality you appear to have with your fellow signatories is an appreciation for how powerful these systems are, and the damage they are poised to wreak across society. Your own position seems to be that the danger stems largely from the brittleness of these systems - they are terrifying not because they are robustly intelligent, or remotely conscious, but precisely because they are the opposite. It is because they lack any grounding in the world, and are acutely sensitive to their inputs, that we have to be wary of them (along with the obvious threats they pose to our information ecosystem, etc.). Please continue to shift the focus away from the presumed dawning of superintelligence and remind people that AI is dangerous because it is both powerful and mindless (and, dare I say, at times utterly stupid). This is no time to cede our human intelligence!
I hope that signing one letter where there was common ground doesn’t cede my own independence as a thinker :)
I don't think there are any worries there. LOL... are there?
> "AI is dangerous because it both powerful and mindless"...
...and most importantly, at massive *scale.*
One of the key points of the letter is the qualification: "models _larger_ than GPT-4"... which is reported (though not officially confirmed) to have in excess of 1 trillion (1,000 billion) parameters! These models are voracious consumers of electricity and supercompute resources. Until fusion and quantum computing come online, those are global resources with hard ceilings. ChatGPT reportedly cost about $10 million to train; GPT-4 is estimated at roughly $100 million for its training run (electricity bill plus supercompute lease). We are rapidly entering the era of billion-dollar models... that drain alone could shift global power equations.
And once those models are online (and some already are), it doesn't matter how smart or stupid people judge them to be... they are undoubtedly clever, undoubtedly fast, and undoubtedly capable of pumping out billions of pages of harmful content (and of encouragement to humans to engage in harmful actions) at the push of a few buttons.
Wafer-scale engines have lowered this compute requirement. Expect models twice the size of GPT-4 to cost under $5 million to train within a year.
Wafer-scale compute also scales near-linearly for LLMs, which is a beautiful engineering accomplishment. Yes, I am counting on all kinds of efficiency improvements (hardware, code, and design efficiencies). And... with the current "design philosophy" (Sutskever et al.) that "scaling solves all," we are currently seeing an exponential curve whose ascent far outstrips Moore's Law (AI scaling, by my read, is about 10x, or an order of magnitude, per year). I'm fully expecting, on the efficiency front, a GPT-4 equivalent to run locally on edge devices (smartphones, watches?) within 2 years. But that doesn't stop the relentless march of GPT-5, 6, 7, etc. (plus the billions being thrown into AI startups with non-LLM approaches, which will consume equal if not more compute). So, efficiencies will continue to be engineered and found... that slows the approach to the global compute/energy ceiling, but doesn't remove the ceiling.
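To make "far outstrips Moore's Law" concrete, here is a rough back-of-envelope sketch in Python. It assumes the conventional ~2x-every-two-years reading of Moore's Law and the ~10x-per-year AI-scaling figure cited above; both are rough assumptions used for illustration, not measured data.

```python
# Back-of-envelope comparison.
# Assumptions: Moore's Law ~2x every 2 years; AI training-compute scaling
# ~10x per year (the figure quoted in the comment above).

def growth(factor_per_period: float, periods: float) -> float:
    """Total multiplicative growth after `periods` periods."""
    return factor_per_period ** periods

years = 4
moore = growth(2, years / 2)    # ~2x every two years -> 4x over 4 years
ai_scaling = growth(10, years)  # ~10x per year       -> 10,000x over 4 years

print(f"Moore's Law over {years} years: ~{moore:,.0f}x")
print(f"AI scaling over {years} years:  ~{ai_scaling:,.0f}x")
```

Over four years that is roughly 4x versus 10,000x, which is the gap the comment is pointing at.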
Remember the Morris Worm (1988)?
Quoting Wikipedia:
"November 2: The Morris worm, created by Robert Tappan Morris, infects DEC VAX and Sun machines running BSD UNIX that are connected to the Internet, and becomes the first worm to spread extensively "in the wild", and one of the first well-known programs exploiting buffer overrun vulnerabilities."
As I recall, a lot of systems were damaged, and a lot of angry sysadmins had to fix them.
They criticized Mr. Morris not just because he caused a lot of damage, but because there was nothing remarkable about the code that he wrote. He hadn't created something special - it was second-rate code.
My comment about the letter and the proposed pause is this:
1. The Morris Worm was nothing remarkable - but caused widespread damage.
2. Consider the red-team reports probing GPT-4's ability to "get out of the box".
3. You can't trust LLMs, and the people who built them don't even know how they work. It's said that they were surprised by GPT-3's abilities - I don't ever remember being surprised by a program I wrote.
It would seem like a good idea to move ahead with caution.
One final thought - the people coding LLMs should carefully consider the potential liability of what they are creating. The sysadmins who had to repair the damage caused by the Morris worm had no recourse to recover their costs - but you can bet that if a similar incident happens, an unforgiving public will see that someone pays for it.
Agree wholeheartedly. Caution is called for. Computer scientists should give themselves a crash course on something mathematicians, chemists, and biologists have been aware of for some time: self-inducing structures. How little it takes for them to "trigger" and how quickly they can form.
Yes, something needs to be done about the risks of LLMs, and signing that letter, however imperfect it may be, is a good way to bring attention to the severity of the problem. Another problem is that LLMs, by their ability to suck up attention and resources, are a detriment to real progress toward solving AGI, at least by mainstream researchers.
On the other hand, the LLM craze might make it impossible to recognize the arrival of true AGI on the scene. A number of independent researchers, by virtue of their contempt for mainstream ideas, may strike the mother lode, so to speak. Keep in mind that AGI does not have to be at human level to be extremely powerful. My fear is that anyone who is smart enough to crack AGI while no one in the mainstream is paying attention, may also be smart enough to use it surreptitiously for their own private goals that may not coincide with those of the mainstream. Knowledge is power and power corrupts.
We live in interesting times.
Thanks for speaking up on this subject, Gary, and for maintaining your realism about AI's limitations while noting its dangers.
I don't have any helpful suggestions for you, but I am very interested in hearing your take on the impact AI may have on education. For the past decade or so, many teachers and educators have asked, "why teach it if you can google it?". Cognitive science provides an answer to that question -- because the knowledge you build in your head is essential to acquiring new knowledge.
Now, teachers and students are stampeding toward ChatGPT, and it's only a matter of time before they ask, "why do it if AI can do it?", where "it" may mean writing or math problems or any number of things that constitute formal education.
Are we at risk of entering the End of Knowledge?
In the spirit of moving the conversation forward constructively and quickly:
- It would help for you and others to begin identifying useful analogs for this situation, as you see it, to help communicate to the public the risks, urgency, and consequences (known and unknown) involved.
And then suggest a range of options for evaluation, assessment, risk scoring, risk rating, and disclosure of various AI efforts.
One thing that'll come up is distrust of any institutional oversight effort, especially if it results in domestic regulation. I can see this rapidly falling into a politicized debate over US competitiveness, etc.
Just a handful of analogs off the top of my head:
- FTX / crypto
- Great Financial Crisis / MBS / derivatives / shadow banking / leverage
- Nuclear non-proliferation / confidence-building measures / verification regimes / non-binding international agreements
- Academic panel / oversight / research council
- Consortium of corporations / non-binding pledge
- ESG / public pressure / corporate social responsibility
- Standards bodies / ICANN
- Federal regulation / Sarbanes-Oxley / etc.
I fully agree with your desire that the Letter should be followed up with much more tangible, focused, executable recommendations and actions. Please have a look at my comment for a simple, implementable, and potentially effective first step. Also note that regulatory jurisdiction plays a big role in the conversation, if this is not to be a purely US-centric initiative. The European GDPR and the developing EU AI Act are pertinent. Canada is working on something as well, but the EU's communication products on the topic are already much clearer. https://artificialintelligenceact.eu/
Very helpful indeed. The rate of progress and change in AI is somewhat unique - or at least it accentuates an aspect of risk mitigation that's unusual. It's difficult both to anticipate unknown developments and to anticipate their resulting downstream effects and impacts. (As noted in the EU's comment on the law's inflexibility.)
The IPCC might be a guide here - it's kept up to date as new data and measurements are observed and modeled, and as impacts are forecast. In fact, the mitigation aspects of the IPCC are interesting as well. Could be there's more to borrow from the climate-change space.
It's sad that half-informed people with fully armed keyboards get such a huge say in what is considered public consensus on a topic, and that instead of looking at what normal, everyday people think about this, the mad town square of Twitter is being used as a proxy.
I don't think we disagree actually, but I'll post some of my criticisms of the letter here anyway. I'm a lowly Master's student, so I may lack research perspective in matters relevant to the argument.
A frustration I share with a lot of folks is the letter's conflation of long-term and short-term risk, particularly because the letter's proposal seems exclusively relevant to the latter and near-useless to the former (the broad show of support might help with the former, but the moratorium likely won't).
Secondly--and I think this is a slightly more original view, as far as I can tell--ideally a moratorium would be paired with a plan of action, but the letter reads as something of a "vibe check". That is, it's kind of vague, but it imagines that the 6-month period is one in which researchers gain ground on relevant problems: e.g., the identification of AI spam, the installation of social-network guardrails, etc. This seems highly critical to the letter's project, but how it would be organized is left largely implied.
I would like to make clear that the intuition I outlined above--pausing research to target specific problems--is reasonable. If the pandemic taught us anything, it's that placing pressure on scientific institutions in moments of crisis is not a hopeless endeavor.
My interest is ethics, in relation to morals, and LLMs. And I do not mean whether or not AI is used ethically, but what ethical modelling would be for an LLM. The problem is distinguishing between ethics as an abstract enterprise, which I think LLMs can do well, albeit entirely thoughtlessly, and moral reasoning, which remains entirely beyond the ability of mere optimization and pattern recognition. How would a deontologist or a utilitarian justify killing a baby, or even one's neighbor, or one's self? An LLM could easily come up with a slew of logically plausible explanations. The problem with the distinction between AI ethical reasoning and AI moral reasoning is whether the decision matters. Do you care about the outcome? It is entirely different to add up the number of children in a statistical family and to count your own children. The number might be the same, but the differences are absolute. I guess it comes down to the Category Mistake that has plagued Philosophy of Mind from the start. Is thought an epiphenomenon of the brain, or is it just a way to talk about brain activity? Is the moral thing to do simply following the best logical manipulation of ethical principles, or is it something entirely different? I would bet a dollar no LLM could make that distinction, not now, not ever. My explanation is that AI is absolutely stupid and absolutely stubborn, and no matter how much data you feed it, you only feed its stubbornness, not its intelligence.
Very good questions, and good advertising for the relevance, nay, indispensability, of Philosophy to this topic. We have been studying Mind, Sentience, Self, Consciousness, etc. for a long time and bring a unique perspective to the table. I enjoyed your bringing up Ryle.
I think the problem is one of intentionality in the existentialist sense. We human thinkers are never not intentional. We care about everything whether we want to or not. Consequently we see intentionality in nonsentient LLMs; it is the way we are made, not the way the LLMs are made. So how do we make our machine-learning dance partners such that we are protected from our own fetishistic predisposition to expect them to be like us? The human being is a thing that desires; at our very core we are desiring creatures. Will an LLM or any AI ever desire? (Lux Umbra Dei!!! yes)
Redacting my previous reply: my feeling is that whatever processes are occurring will remain somewhat occult to us. I think of Thomas Nagel's little essay. Translating those processes to human terms may have the advantage of allowing us to talk about them, but it also carries the risk of being misleading. Both a spinning gyroscope and I resist falling over on our sides, but we can't say the spinning gyroscope "desires" to do so. In any case, you raise excellent points that I have no ready answers to.
The letter threatens lots of people's vested interests. Of course they're going to push back!
(BTW I also signed it. It doesn't matter if it's imperfect. We're at the point at which action is required.)
My concerns about it are: I do not trust Musk (he is also by far the biggest funder of the Future of Life Institute and has proven himself untrustworthy time and again) or Altman, so this stinks of being a PR stunt so that they can say they tried but the industry didn't play along. It only concerns itself with AI more powerful than GPT-4, which is already powerful enough to do significant damage. It will be basically impossible to get everyone to play along even if a few do, but it might be enough to forestall more effective legislation being proposed. It also promotes AI hype, which is neither helpful nor an indication of good faith.
This is ironic, because at heart I am both an anarchist and a libertarian. However, I do believe in the genuine existential risk that AI poses, and, even after speaking to my elders who lived through the Cold War, I do firmly believe that this threat is of an entirely different character & magnitude than that posed by nuclear, biological, etc. (traditional WMDs & EoWs). THAT SAID:
I *firmly* support any and all means to s..l..o..w.....d..o..w..n.. the relentless march of the AI beast. And no, I don't expect all the labs and nation-states to suddenly say: "Oh, 1,000 people signed a letter? Great! Time for a vacation!". But I do believe that a letter like this, with our collective social & reputational power, could motivate governments (ugh!) into regulatory & legal action, and that clusterf*ck of red tape (and inane senate hearings) would effectively slow global progress on this front.
Might Musk & China & others use the opportunity to "catch up" with OpenAI & Google in a Machiavellian way? Sure! So? AI represents a genuine material threat to our species, culture, and civilization, and ANYTHING that might slow its trajectory at this point is warranted (there is a hilarious meme on Twitter of the COVID "slow the spread... flatten the curve" messaging re-applied to AI dev).
The letter is imperfect. Sure.
Gary, you want to know what we should actually do?
Congressional subcommittee on AI, leading to rapid deployment of laws and legal frameworks, including risk assessment, safety certifications, and clear liability for harms. Yes, it will be a cluster. Yes, it is anti-free-market-capitalist. No, it is not the "spirit" of Silicon Valley cowboy-ism / pirate-ism... and so?
Slow down, and breathe. Live to see another day. We'll be OK.
Hi Gary, I would laud the blending of a condemnation of a potentially dangerous and unsustainable trend with a more hopeful and scientific manifesto on how to address the challenge - wisely getting the service-innovation benefits while avoiding the harms (e.g., "tech for good"). See, for example and inspiration, "David Attenborough: A Life on Our Planet" - and keep in mind rewinding, rewilding, and resilience. Also for inspiration, consider the Marie Curie quotes: 'We must believe that we are gifted for something and that this thing must be attained.' 'Nothing in life is to be feared; it is only to be understood.' 'I am one of those who think like Nobel, that humanity will draw more good than evil from new discoveries.' Best regards, -Jim
Hi Jim,
Yes, humanity will draw more good than evil from new discoveries, agreed. But that's thinking from the time of Curie.
The challenge we face today, on a growing number of fronts, is that the awesome scale of emerging discoveries means that the evil that does exist increasingly poses a threat to erase all the good that has been and will be accomplished. As the discoveries grow in number and scale, the room for error steadily shrinks.
We should have learned this a generation or two ago. One bad day with nuclear weapons is all it takes. We're ignoring that threat from the 1940s, while steadily piling more threats on top of it.
Much has been said on this letter controversy, so I have little to add. The one thing I've seen that bothers me is that the general press, and some who should know better, are confusing the short-term AI safety issues (fake news, malicious use, risks to health) with long-term ones (the AI apocalypse, turning us all into paper clips, etc.). I haven't read the letter closely, but my gut feeling is that if it makes this distinction, it does so weakly. Obviously, the short-term and long-term risks are related, but the former is very real and the latter is in sci-fi territory.
The short term risk that is most concerning is that AI chatbots pollute the only well we have.
Has the horse already left the barn, and what controls are currently in place? Carl Bergstrom raised this (https://fediscience.org/@ct_bergstrom/110071929312312906), asking "what happens when AI chatbots pollute our information environment and then start feeding on this pollution. As is so often the case, we didn't have to wait long to get some hint of the kind of mess we could be looking at. https://www.theverge.com/2023/3/22/23651564/google-microsoft-bard-bing-chatbots-misinformation."
Are we already inhaling our own hallucinating AI fumes, and what is to stop this from becoming an irreversible "tragedy of the information commons" due to poisons we cannot filter out?
Yes, I am very concerned and tweeted the latter link a few weeks ago.
Thanks -- I have not seen the implications of this issue given due weight in the press. It seems far more deeply dangerous in the near term and more in need of immediate action than most of the other concerns.
Do you have any reason to think it is not already happening at a scale that will be increasingly hard to reverse? Are there any effective strategies to quarantine this self-pollution?
Effective strategies for chatbots? Don't know, probably not.
What I do know is that we have the option to back up, zoom out, and think more deeply about where AI and chatbots have come from. What are the assumptions and processes that have brought us to this moment with chatbots, and to whatever else of concern may arise from AI?
If we can't, or won't, identify and address the source of such threats, then new threats of various types will keep emerging. While we're scratching our heads over chatbots and the future of AI, well-intentioned people working in other fields are creating new threats as fast as their budgets will allow.
https://www.tannytalk.com/p/the-logic-failure-at-the-heart-of
The focus in these conversations is too narrow, and too technical. The challenge we face is not fundamentally technical, but philosophical, a failure of reason.
I wrote a post looking at some of the not-unreasonable criticisms of the letter but arguing that it's still worth signing. Gary Marcus makes a cameo appearance.
https://open.substack.com/pub/myaiobsession/p/critics-are-battering-that-ai-pause?r=l3r4&utm_campaign=post&utm_medium=web
After the role played by spreadsheets in the subprime debacle was uncovered, financial institutions started paying closer attention to them, at least in theory. ChatGPT is the spreadsheet here... it's a tool, and a far more dangerous tool given the network effects that come from leveraging social media. The "AI" part of the discussion is a red herring; it's no different than, say, pesticides or food additives at that point. Of course, we're not very good at pre-emptive anything...
That being said, I don't know how you regulate it. Two examples spring to mind. First, software engineering itself as a discipline has struggled with calls for certification and ethics. The problem is, anybody who goes to a coding bootcamp can call themselves a programmer and businesses do not generally have an incentive to enforce standards the same way hospitals must for, say, doctors and nurses. For ethics, as a software engineer, I cannot say to my employer, "this is unethical, if I withdraw my services, your website will no longer be designed by Certified Software Engineers". My employer will say, paraphrasing, the door is that way.
Second, in the United States, there are moral objections to cloning and other kinds of stem-cell research. Some countries have no such qualms. As a result, two things have happened... in other countries, they have kept researching; in the US, we have found ways around the constraints. So if just one country, company, whatever, has lower standards... it all falls apart. And unlike what might be required for a wet lab, poking around an LLM isn't very expensive, relatively speaking. And who would want to lose the advantage if you thought that your competitor wasn't following the terms of the standard? Since all notion of nuance, and of spirit over letter, has left social discourse and public policy... :/
On a completely different but not unrelated topic, one of the things I find with NLP researchers is that they are very likely to read between the lines... supplying semantics where none exist. It seems to me that even if you did something as simple as ROT13 on your training data, the experiment would become blinded. You'd need to do the same on your evaluation data. You could have the model generate a prediction, look at the score, say "that's a good score," and then re-apply ROT13 and see what was actually done.
Of course, is the fact that the entire thing would still work if we applied ROT13 to all the training data evidence that this is all just probabilistic smoke and mirrors? ¯\_(ツ)_/¯ I haven't finished the thought experiment yet ;)
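For what it's worth, here is a minimal sketch of the blinding step described above. The toy strings and the commented-out `model.evaluate` call are placeholders of my own, not anyone's actual pipeline; the point is only that ROT13 is trivially reversible, so you can score blind and unblind afterwards.

```python
import codecs

def blind(text: str) -> str:
    """Apply ROT13 so the experimenter can't read semantics into the text."""
    return codecs.encode(text, "rot13")

def unblind(text: str) -> str:
    """ROT13 is its own inverse, so unblinding is the same operation."""
    return codecs.encode(text, "rot13")

# Hypothetical toy data standing in for real training/evaluation sets.
train_data = ["the cat sat on the mat", "dogs chase cats"]
eval_data = ["the dog sat on the rug"]

blinded_train = [blind(s) for s in train_data]
blinded_eval = [blind(s) for s in eval_data]

# Train and score on the blinded text only, e.g.:
# score = model.evaluate(blinded_train, blinded_eval)  # placeholder, not a real API

# Only after recording the score do we unblind to see what was actually predicted.
print(blinded_eval)                        # ROT13'd, unreadable at a glance
print([unblind(s) for s in blinded_eval])  # back to plain text
```

The same transform has to be applied consistently to training and evaluation data, as the comment notes, or the "blinding" leaks.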