The number one reason I hear for not regulating AI is that if we do, China will beat us; and if China beats us, we are all screwed, because they would use AI to different ends than we would.
Like, presumably, surveilling their citizens.
We would never do that. Right? Right?
§
I don’t know what’s worse: the fact that the US is apparently using these tools; the fact that they aren’t likely to be reliable, almost certainly report false alarms regularly, and are likely full of the insidious bias that so many previous AI systems have shown; or the fact that the systems being used presumably aren’t transparent about what data they are trained on, leaving us wholly in the dark as to how pernicious those biases might be.
Have we learned nothing? Buolamwini, Gebru, Birhane, Mitchell, Raji, Sweeney, Zuboff, Whittaker, and many others have been warning us about this for years.
Gary Marcus really does want to get to a positive AI future in which humans thrive, but he increasingly sees that as an uphill battle. More discussion of bias in the forthcoming final episode of Humans versus Machines, with Alondra Nelson and Brian Christian.
"Have we learned nothing?"
The tl;dr but completely accurate answer: No.
As always, I would urge us to step back a bit from the details of specific events, circumstances, technologies, people, companies, etc., to focus on the bigger picture that all such particulars inhabit.
The foundation of all such real world events is our relationship with knowledge. If we continue to seek as much knowledge as possible as fast as possible, the resulting pace of change is most likely going to outstrip our ability to adapt. Because...
Knowledge development feeds back upon itself, leading to an ever-accelerating pace of knowledge development: the more we learn, the faster we learn more. Our ability to adapt, by contrast, is not just a function of technology; it also depends on our biology and on social agendas installed in us by evolution millions of years before we were even human. Thus our ability to adapt to change is incremental, not exponential.
The point here is that we can talk about the details all day long, but until we find a way to align the pace of knowledge development and our ability to adapt to change, things are simply going to get weirder and weirder.
This challenge can't be met by focusing on details and trying to address particular troubling situations one by one as they arise. That is a loser's game, because an accelerating knowledge explosion will keep presenting new challenges faster than we can figure out how to meet them. As I see it, that leaves three options:
1) Either we find a way to limit the pace of knowledge development...
2) Or we find a way to dramatically enhance our ability to adapt...
3) Or we find a way to make peace with the price tag for failure.