I am an artist and a mother, and I am feeling extremely worried about the current state of the world. As much as I hate to admit it, I feel powerless when it comes to making a difference. I know there are many others like me who care deeply about the issues at hand, but we don't know where to begin when it comes to advocating for change.
I have posted the PauseAI Discord link below. It's a start.
The reality of everything you've been writing hit me the other day when I was looking into buying a common household item—bug spray, what with it starting to get hotter where I am. As I always do before buying anything, I tried to do a little research first, searching for reviews or information on what people say works best.
I was halfway through a list of reviewed items that popped up in the search results when I realised that what I was reading was unmistakably written by ChatGPT, its characteristic style repeating through each "review", none of them written by a human, with sentences like: "When considering to purchase a X, there are a few things to consider. First..."
Amazed, I went back to look at the other results, only to realise that almost all of them had to be brand-new AI spam sites pushed up the rankings by SEO tactics, or perhaps by copying other websites' metadata to displace them, as I recall seeing one CEO accused of doing (probably in this publication). Not long ago every result was written by a human; now I could scarcely find a single one that was.
I had slightly more luck finding human reviews when looking for a vacuum cleaner. On the other hand, when looking for some new cat toys, I found AI-generated cats (never labelled, of course) mixed into the video results and marketing images (formerly photographs).
What is the world coming to when there is so much junk AI "information" to wade through just to make everyday purchases? And if businesses can so quickly pollute the Internet with junk about things that don't matter much, what are better-motivated actors, like states, intelligence agencies, their stakeholders, and other political players, doing that I haven't noticed? If business is doing it, they must be too.
The consequences of LLMs are not a future issue but a present one, and the way they're being used is mortifying, drowning the Internet in a sea of crap.
Consumer Reports is still around, and still written by humans. Be careful, though -- there are lots of sites with similar-sounding names but without CR's integrity, trying to sound like they're the same thing. https://consumerreports.org
Thanks, I had heard of them, but I'm embarrassed to admit I assumed for whatever reason they were something from my parents' time, not something still going!
It's incredible that we now have to place a premium on anything being done by humans. It is hard to avoid the output of LLMs these days, whether searching the Internet, trying to reach customer service, or whatever. I've even seen obvious ChatGPT-written social media posts from people who are—bizarrely—human but apparently felt like spitting out an AI reply for whatever reason. Certainly not the good AI outcome we all hoped for.
I agree, and I wish I had any confidence in Washington to actually be competent enough to serve “the people” well… but I don’t. At all. Either party :(
There is basically only one party when it comes to war and the regulation of industry. The only meaningful differences are on cultural issues.
When I read the announcement for the Artificial Intelligence Safety and Security Board and saw the names of Sam Altman, Dario Amodei, Jensen Huang, Satya Nadella and Sundar Pichai as members, I had to double check the whole thing wasn't a dark humour internet joke.
Any chance that the book could come out sooner?
I so desperately wish
It could come out today if you would post the current draft on arxiv.org, or some other open website. Why aren't you doing that?
Complicated tradeoffs and contractual requirements. I write a lot here that is free and public, but there is value in having MIT Press publish a peer-reviewed book.
I get it, and I agree that it will be great for your book to be published by MIT Press. Does your MIT Press contract prevent you from discussing the content of the book prior to publication? Would they sue you or cancel publication if you wrote brief descriptions of the laws/regulations you would like to see enacted? Given that this is a crucial election year and that September is a long way off relative to the election, such a restriction far overvalues the monetary loss MIT Press might suffer compared to the good such a discussion might facilitate.
This is why I think we need to organize and get our voices heard, and at least try:
https://pauseai.info/2024-may
Please find information on the upcoming protests and coordinate with us on Discord; it helps:
https://discord.com/invite/vcUByb5F
I like activism. I like research. I like regulation that is asymmetrical and meaningful, especially with regard to training data and ownership.
But an outright pause policy just won't work, least of all a policy that restricts access to public research. Aside from arbitrary computing limits, there is no universally agreed-upon definition of "better than GPT-4"; benchmarks are trivially gamed in either direction (to make something seem stronger or weaker than it is). Weights don't tell the story of inference capability (modern models converge on replicating their original data set across multiple sizes and architectures). Hell, we would be hard-pressed to explain how such models or AI differ from the numerous other statistical or networked systems deployed in our daily lives.
The core issue has and will always be THE DATA. Where did the data come from, was consent or compensation provided, and is that data source made clear and public?
That single issue broadly and cheaply cuts right to the core of power, ownership, and wealth inequality.
Don't let the perfect be the enemy of the good.
This is why we want oversight committees to pretest new frontier models, which I think is already part of Romney's proposed framework.
I agree that we need more data transparency too.
Is anyone else concerned that these AI models may also ingest the "comments sections" of Internet content? I am completely behind requiring all AI training to be based on paid access. It's a simple test: if any of us must pay to access content behind a paywall, then the AI trainers should too. Beyond that, it should also be clear that this opinion I'm writing today should NOT be included in any AI training. I am not an expert on AI, though I have various IT skills acquired over 40+ years of experience. There has never been a computer-related technology that wasn't exploited, and I maintain that principle applies to all technology.
Timely. An uphill battle, though, as it generally is these days. Fifty years (at least) of the conviction that market forces always produce the best societal outcomes (they do not; they are the best for profits, and the bad — and the good, which is there too — arrives as a side effect of the profit drive) has produced generations of people who think that regulation is 'bad'. This plays out in a lot of complex situations, be it IT strategy in organisations (where 'silver bullets' reign supreme at the top level, if the issues aren't simply neglected) or in politics. You're not up against tech bros. You're up against deep cultural convictions. The same sort that gave us the roaring twenties, the Great Depression, and then war. That outcome is of course not certain, but the situation is so far out of stability that anything can happen.
I feel like this post is from some kind of mirror universe. My whole life it has seemed like anti-market fundamentalism is the deep cultural conviction, and that pro-science economists are constantly fighting a desperate rear-guard action against it. Every single time something has gone wrong with the economy, regardless of whether it was caused by too little regulation, too much regulation, or the wrong kind of regulation, the majority of people have been ready to pounce on it and declare that "market fundamentalism" and "neoliberalism" have been discredited and that we need more regulation. I've become convinced that market fundamentalism is like cow-tipping, Satanic cults, or rainbow parties: something everyone swears is common but that doesn't actually exist.
Maybe Terry Gilliam could script another black comedy: “Fear & Loathing in Silicon Valley”? The only problem is that it’s t’other way round… yes, I too fear the technocratic CEOs.
"But nobody did anything about it.”
Indeed. As I noted earlier, I wrote (and talked) about this stuff in 1991 after researching AI, philosophy, neuroscience, and politics at the university.
Reflecting back, the reasons no one cared:
• Too scary to even think about; switch topics, end of conversation
• Too esoteric
• Too futuristic
• Too hard to understand / too techie
• Inevitable, too big, nothing I/we can do, powerless, others are handling it
• Narrow self-interest, cynicism
• Pro-AI/tech judgement: it's good, you're just a Luddite activist hippie
• Belief in some grand, shiny, AI-made-better future: a quasi-religious faith/hope
• No mental bandwidth, time, or resources to deal with social and political issues, nor willingness to risk the professional repercussions/backlash
As with many things in life, laws will only be enacted or changes made after a huge disaster, one big enough to overcome the inertia, the ignoring, the inconvenience, and the self-interest. Perhaps that is the only way to get attention? But as with social media, it's more the proverbial frog in the slowly heating pot: a general creeping malaise and mental/social rot rather than an explosive disaster.
Let's hope it won't take such an event.
Bruce Schneier is a good resource on the risks and mitigations of unregulated AI:
https://www.schneier.com/tag/artificial-intelligence/
Raising the alarm is hard for us cybersecurity professionals. Most of my colleagues have insufficient application-security background to understand the difference between a programmed machine and a trained one, with opaque representations (if that is even the correct term) central to the resulting architecture. I have a background in unconventional computing and the philosophy of science and technology, so I saw the 1980s-1990s debate over eliminative materialism, and the introduction of ANNs into that debate, as crucial as well. This is part of the puzzle, and it too is poorly understood. Our host is making a yeoman effort to communicate some of these matters, but putting it all together and getting more voices for legitimate appeals to authority (so that, in my case, executives in a Canadian public-service context listen) is hard.
Trying to explain to people that the tech CEOs are not necessarily the ones who best understand how their technology works is a nearly impossible task. The Ds want the tech industry to be their besties and the Rs want to either complain that it’s biased against them or ensure that it reinforces their own side of the culture wars. All short sighted. All style and no substance, just like ChatGPT.
Only ever met one tech billionaire, but we all agreed that he didn't seem very smart.
Might be a good businessman, but probably just in the right place at the right time.
1) U.S. lawmakers have jurisdiction over about 5% of the world's population. About the same for the EU.
2) We live in a globalized world interconnected by the Internet and other means, therefore...
3) If Silicon Valley were to vanish completely, the march towards an AI future would just continue elsewhere.
4) Therefore, getting all hysterical about AI is pointless, because we don't have the power to determine the future of this technology.
It's like the weather. We don't yell at the sky on rainy days because that wouldn't accomplish anything. The rainy day is bigger than us. Our only option is to adapt to the rain.
AI seems less like rain and more like a neighbor who, after installing a big expensive sprinkler system that ends up blasting water right at your front door, insists that at this point nothing can be done. And then tries to convince you that it's beneficial.
Thank you for your thoughtful post!
The fundamental premise of the AI community, that it should be allowed to regulate itself, is false. Negative feedback control systems are ubiquitous in nature and are the shared characteristic of all stable and effective human-constructed systems. Regarding the latter, there are legitimate arguments to be made about the details, but to my knowledge no human-created entity has ever succeeded in regulating itself. Once again our elected officials are guilty of dereliction of duty for failing to enact meaningful legislation setting guidelines for the use of AI in the media, with financial and criminal penalties for violations.
We also need to enact legislation declaring that personhood is DNA-based. Otherwise we will see a move by the wingnuts of the SCOTUS to declare chatbots people, and therefore protected by the First Amendment, inter alia. Needless to say, the use of AI in the autonomous exercise of life and death decisions should be outlawed, both domestically and internationally.
As that Pinko Commie Wokeist Adam Smith said, "people of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the publick, or in some contrivance to raise prices."
Or, as George Carlin put it https://m.youtube.com/watch?v=VAFd4FdbJxs
It's the same problem we see with the military-industrial complex, big oil, the pharmaceutical industry... and pretty much every other major lobbying group in Washington. Government was captured by special interests decades ago because we haven't gotten money out of politics. That's how you end up with things like Citizens United and policies designed to benefit big corporations and the wealthy donor class (e.g., private equity firms being able to buy up vast quantities of residential real estate).
In the meantime, everyone else gets screwed, and we're seeing it in all facets of our civilization right now.
From your lips to God's ears. History is repeating itself in the lack of AI regulation: it's exactly the playbook Silicon Valley used to block all attempts at data and privacy regulation, and it's why the US is the only first-world country without such regulation. Silicon Valley has paid off the regulators open to their bribes... ahem, "donations," and threatened to "primary" those politicians who won't take their money. When the lack of regulation merely allowed unbelievable concentration of wealth and heaps of online abuse, well, that was bad, but survivable. When it's about whether a potentially disastrous technology gets developed with no guardrails, we're talking about Silicon Valley getting carte blanche on the most important industrial policy in history! Make no mistake: Silicon Valley will not stop pushing to be virtually unregulated in AI development. It will have to be MADE to stop, by politicians fearing their voters more than they fear Silicon Valley. This will be a long and ugly fight.
Meta actually pay the ex-UK Lib Dem leader to keep things sweet. For anybody who knows the policies he and Cameron subscribed to, the hypocrisy is off the scale.
The revolving door exists between every regulator and the industries they purport to oversee. Heck, that's among the biggest problems, I believe, with any gov't oversight. Whether it's the oil & gas industry, the environmental industry, the healthcare industry, or any other, the claim is always that only a small number of people truly understand the issues the industry faces, that industry insiders are the only really qualified folks for the job... until, of course, their term is done and back to industry they go. It helps to have nicely stacked the deck for their future employers (or in some cases, the employers they took a sabbatical from to serve their country before returning ;). I wouldn't expect things to be any different in this industry, hence why I don't see gov't and regulators as the solution either. Legislators will never do the right thing when it comes to regulation because they depend on these companies for electoral fundraising. It's a corrupt and rigged system, and until money is no longer considered speech and the Citizens United ruling is overturned, I see no way out of this regulatory mess in any industry.