The number one reason I hear for not regulating AI is that if we do China will beat us, and if China beats us, we are all screwed, because they would use AI to different ends than we would. Like, presumably, surveilling their citizens. We would never do that. Right? Right?
"Have we learned nothing?"
The tl;dr but completely accurate answer: No.
I used to work at a civil liberties nonprofit and this is extraordinarily frustrating yet wholly unsurprising. I’m too exhausted to even finish this thought so I’ll leave it at this:
1. The amount of surveillance the government does on us is unreal, and they will jump on any tech bandwagon that they think will help them keep the surveillance machine running.
2. Seeing the way the government uses technology against its own citizens is a large reason why I’m a bit of a Luddite and certainly an AI hater. Our government has huge checks and balances compared to much of the world, and we have enormous constitutional protections, yet the shoddiest of “science” is used against us if it can be shoved under the “national security” blanket. When autocratic governments start turning to “AI” to control their citizens... big yikes.
But that Nvidia stock looked great this week, am I right!? Full steam ahead fellas!
As always, I would urge us to step back a bit from the details of specific events, circumstances, technologies, people, companies, etc., to focus on the bigger picture all such particulars inhabit.
The foundation of all such real world events is our relationship with knowledge. If we continue to seek as much knowledge as possible as fast as possible, the resulting pace of change is most likely going to outstrip our ability to adapt. Because...
Knowledge development feeds back upon itself, leading to an ever accelerating pace of knowledge development. The more we learn, the faster we learn more. Our ability to adapt, however, is not just a function of technology; it also depends on our biology, and on social agendas installed in us by evolution millions of years before we were even human. Thus our ability to adapt to change is incremental, not exponential.
The point here is that we can talk about the details all day long, but until we find a way to align the pace of knowledge development and our ability to adapt to change, things are simply going to get weirder and weirder.
This challenge can't be met by focusing on details and trying to address particular troubling situations one by one as they arise. That is a loser's game, because an accelerating knowledge explosion will continue to present new challenges faster than we can figure out how to meet them.
1) Either we find a way to limit the pace of knowledge development...
2) Or we find a way to dramatically enhance our ability to adapt...
3) Or we find a way to make peace with the price tag for failure.
I'm not entirely clear on what, exactly, we're concerned that China will "beat" us to. Domestic surveillance? That's their deal. Strategic intelligence gathering? What is the mechanism by which "AI" advancements create an edge there? General technological innovation? That's speculative "superintelligence" stuff. More powerful LLMs? Have at it; why should we care?
On the other hand, if China is able to dramatically augment its military capabilities using AI, that would be cause for concern. And it would be catastrophic if it were able to make an AI super-hacker and hijack our power grids and financial institutions.
Also, note that I constructed those last two sentences by describing well-established existing concerns and then inserting "AI".
I might also wonder how the lo mein at my favorite Chinese restaurant will taste after being enhanced with AI, or how much more dominant the Chinese diving team will become once their coaches start using AI. Will K-Pop and J-Pop fade into obscurity after the arrival of AI-powered C-Pop? And will the top European football leagues still be home to the world's most talented players if the Chinese Super League wins the AI race? So much to worry about!