In related news: https://www.bloodinthemachine.com/p/how-a-bill-meant-to-save-journalism
Here, big tech (in this case Google) was able to turn proposed checks and balances (on their somewhat predatory business model) in a Californian bill completely around. With some 'AI' thrown in for good measure.
Money is power. Power corrupts. Hence money corrupts.
While I applaud the efforts to corral AI before the beasts escape, I am resigned to the fact that it isn't going to happen. As is virtually always the case with regulating industry, bad stuff needs to happen before any preventative regulations can be passed. There are several reasons for this:
- No one is really sure what the bad things look like or how the scenarios will play out. This makes it hard to write effective regulations and there's nothing worse than ineffective regulations.
- Regulators are deathly afraid of restricting a possible economic powerhouse. After all, no one gives out awards for bad stuff avoided.
- When there are, say, 10 potential bad things predicted, it is hard to take the predictors seriously. They are hard to distinguish from people who simply want to thwart the technology. Gary Marcus constantly gets accused of this. The accusations aren't justified but it's still a problem.
- There's the feeling that even if US companies play by some new set of rules, other countries or rogue agents will not and the bad stuff will happen anyway.
I mostly agree, but there are bad things that have already happened and remain unaddressed. The copyright issue with training content is a biggie.
The big established media companies are now going after the AI companies who rip them off. Sony, Universal, and Time Warner have real money. They don't have the valuations of Google and Microsoft, but they also have decades of experience protecting their IP, better relationships with politicians, and more public goodwill than the tech companies.
Right now they're going after smaller players. But it's a warning shot to the big ones.
Those are two unrelated groups, so it's not a "both sides" situation.
At this point it's fair to say OpenAI has really, truly lived up to its name. It is openly unabashed about its desire for power and economic domination. I fully and completely expect them to do everything in their power to attain it, the world be damned. The question then becomes, do we let them do it?
OpenAI: Open(ly) Autocratic Instincts
🧵* Dear IPI**,
Thank you very much. I've heard you a lot via Harry Shearer's Le Show, so I'm reading you here. I don't know if you have a paid membership, but I would pay.
Thank you.
——
* I use the bobbin of thread in the general sense of all conversations being part of one thread & one garment of humanity.
Also, the simple blue color & blocky look make it a good identifier for me, and for those whom I’m writing to.
——
** “Indispensable Public Intellectual”
You can upgrade to paid if you like. I will be back on Le Show soon!
I'll put a plug in for Le Show as well. I've been a listener since the 1980s. Shearer has always been entertaining and on the cutting edge of important issues.
Luckily, human-level AGI is *hard* - really, really hard. What we have today is easily sufficient to cause societal harm at global scale, but way too dumb to cause catastrophic or existential harm. Meanwhile, the idea that anyone motivated by short-term self-interest will "self-regulate" as they race (necessarily via a sequence of low-hanging fruit) towards what they perceive to be infinite money, fame, and power is utterly ridiculous. It will be a very long time (decades) before those following the low-hanging-fruit path to human-level AGI will get anywhere close, and along the way there's quite likely to be some kind of high-profile "AI event" - such as a global AI cyberattack, or maybe even a large number of civilians killed by a swarm of rogue autonomous weapons - that forces people to realise the scale of harm that can occur when powerful AI is developed in an insufficiently regulated way. With any luck, such events will finally persuade governments (including the US, UK, EU, and China) to enact and enforce appropriately strong AI regulation, legislation, and international treaties classifying powerful AI systems as safety-critical systems, requiring comprehensive evidence-based safety cases before being licensed for deployment.
This is a great point regarding what a path to AGI would look like. I'm skeptical that "human level AGI" is even possible, but if it is, a lot of intermediate-level iterations will have to come first. It's not like tech companies are going to keep their technology under wraps until they invent AGI, and then unleash it. Quite the opposite: they're so eager to get every new model to market that they often do so prematurely. I don't know what AI that's halfway between AGI and what exists today would look like, but it would definitely freak people the hell out.
I agree with everything here on the issues of governance, unaccountable decision making by CEOs, the futility of self-governance, and overly broad non-disclosure agreements.
But the scene-setter at the beginning makes this about existential risks, explicitly invoking the comparison with global nuclear war. And that is the usual problem of focusing on highly implausible 'extinction' risk to a degree that leaves too little room in public discourse for actual risks like important decisions being based on confidently wrong answers, discrimination and bias being baked into models through biased training data, social disruption through undermining of intellectual property, the political impact of deepfakes, and drowning information and communications in a firehose of AI-generated spam.
And this is a widespread phenomenon. I recently sat incredulously in a talk where a self-proclaimed AI safety expert nonchalantly pronounced that we were all agreed that there would be super-human AI by 2040, and if we don't support his work, it may kill us all. No, we aren't agreed. In fact, not only is there no evidence yet that that kind of AI is even possible in principle, and not only is there no evidence yet that if it could be built, it could be done without using up 800% of the global electricity supply, but there is good reason to believe that the kind of scenarios these people cook up in their heads are physically impossible.
In the end, if your AI does something scary, how about you just press the off button or pull the plug? The answer always amounts to: if it is smart enough, a mind can do magic. But sorry, magic is impossible. The AI won't copy itself onto my smartphone before somebody pulls the plug, because my smartphone can't store and run a model of that complexity, and also, data limits, firewall, etc. The AI won't create a super-virus that kills us all without the humans in the lab noticing what they are doing, because biology doesn't work like in a Lego movie. Likewise, a benevolent super-AI won't solve cancer in five minutes no matter how super-human it is, because even if it has a great idea, it will then have to request funding for a five-year experimental study to see if the idea works in real life and has no major side-effects.
This is all magical thinking, and a better analogy here is his 1940s counterpart worrying that a single nuclear bomb will burn up the entire atmosphere while dismissing the impact it could have on Nagasaki as negligible, or his 19th-century counterpart worrying that riding a train at 80 km/h will kill the passengers while ignoring the dangerous work conditions of rail construction workers as some kind of unavoidable background noise that just has to be accepted. AFAIK, some people did think like that, because most people don't understand physics and have zero sense of plausibility. Hysteria repeats itself, one might say.
"Current AI us not all that scary."
It isn't in the AGI apocalypse sense, but since 1995 the resources put into the arguably losing battle of preventing, detecting, addressing, and recovering from efforts of bad actors and unintended consequences of good actors have become extraordinary. Today's limited capabilities are expanding this problem, threatening social cohesion, democratic processes, and mental health. Will SB-1047 help at this level?
Do we really need an AI company “insider” (current or former) to tell us things like “one or just a few people at AI companies shouldn’t be making decisions for humanity” and “there should be external governance”?
Those hardly seem like profound conclusions.
But maybe I just think these things are obvious because I am ignorant and they actually ARE profound.
The last person we need on an AI world governance board is Sam Altman. I'm all for Wiener's SB-1047 as a good starting point. "Regulation stifles innovation" has got to be the biggest bull-crap IT commandment. The implication that undefined innovation is always good, and that anything (vendors are selling) that enables or encourages innovation is similarly good, is nonsense.
Of course, this doesn’t mean you never make changes in non-differentiating areas, just that it’s about finding the right balance between standards and discipline on the one hand, and the freedom to explore and experiment on the other.
Agreed. This is especially rich coming from a company that has, time and again, displayed a penchant for recklessness. Between the ChatGPT roll-out, the shameless hyping and dishonesty about GPT-4's abilities, "hey y'all, we invented a way to spoof someone's voice using only 15 seconds of audio, do you think we should release it?", desperately courting Scarlett Johansson's permission for something they'd already done without her permission, and the Altman firing/re-hiring shitshow, these guys deserve zero benefit of the doubt.
AIs will never be dangerous as instigators because they don't have free will.
Depending on AIs will of course be dangerous just like depending on everything from friends to lovers to the stability of the rock ledge you're clinging to when rock climbing.
AI in the next two years is very likely to finally convince humans to be more human, because AI is so prolific at showing humans how icky pseudo-human behavior is.
For some time I thought that the current hype cycle in AI would at least have the benefit of prepping us for when the real thing arrives, basically something like a dress rehearsal. I am not so sure any more. Due to the ridiculous claims and end-of-the-world scaremongering from the likes of Altman, the world has become numb to the danger and possibility of a true AI and may not react at all when it arrives. It's like the story of the shepherd who cried "wolf" to scare his mates, and when a real wolf came, they thought he was joking again and didn't answer his cries for help.
The issue is centralized governance. Maybe a public blockchain DAO with smart contracts and information immutability is the model. Everything else can be corrupted for power reasons. An example is the AICYC project governance model. https://aicyc.wordpress.com/2023/07/26/aicyc-governance/
AICYC is built on a semantic net, the kind Gary talks about in his book Rebooting AI. It is much easier to validate a semantic AI model than a large language model. More about the project at aicyc.org.
States usually pass laws and regulations before the federal government does; that's how we find broad enough consensus for some legal framework to actually pass federally. Look at slavery, civil rights, gay rights, prohibition of alcohol, feticide. Even something as simple as marijuana has conflicting state and federal laws.
Companies that run the line of argument that "this should only be regulated federally, if at all" are worried that their actions, which impact the whole country, will be regulated by the actual people they're impacting, instead of a handful of representatives who can be bought with campaign contributions and coerced with frequency bias via social media bots and privately owned "mainstream" media (who don't really do news, and admit freely that it's entertainment when sued).
We already have laws against making false claims about products that lead to public harm. When they deploy crummy chat bot tech to "solve" problems it's not equipped for, and it causes massive harm, then the consequences will fall on the companies involved and most likely squarely on Sam himself because of his coercive and authoritarian management style.
What benefits has any of the "scary tech" brought so far? I'm not really seeing anything that outweighs the harm that has already come. From scaring the crap out of stupid people, facilitating election manipulation and dark arts of social engineering, to reversing efforts at reducing carbon emissions... If this is "effective altruism" then it's safe to say that's a misleading term that actually embodies "greed is good" and "better to ask forgiveness than permission", we already know the effects of that philosophy and it isn't anything good for "humanity".
“What former [and current] OpenAI employees are worried about”
Losing their vested equity in the company if they spill the beans
I see no dishonesty in OpenAI testifying in front of Congress that AI regulation is necessary while opposing a proposed, state-specific regulation. Letting 50 states and the District of Columbia regulate AI in 51 different ways will slow innovation. This should be a federal matter.
As much as I distrust OpenAI, I agree with you here. I suspect OpenAI's apparent support for regulation is cynical and insincere, but that doesn't mean they're contradicting themselves when they oppose some specific piece of legislation.
If we believe the risks are real - and I do - then inevitably something will get through whatever regulatory scheme is established. It only needs one for Pandora's box to be opened. And regulation, while it should happen, is a very blunt instrument. Murder is illegal, with very severe sanctions and a whole infrastructure for enforcing them. It still happens! Even in medicine, with what is mostly a fantastically successful code of ethics, bad things happen. And so it will be with AI.
Even so, press on, because that diminishes the risk, but realise it's not eradicated. Work out what to do when the goblins escape the box.
I don't know, and suspect no one does, but I wonder if AI might itself be part of a solution.
Love your substack, Gary, and hope you’ll maintain your expert Cassandra prognostications. It must get lonely.