The US and China want different things, right?
The number one reason I hear for not regulating AI is that if we do, China will beat us, and if China beats us, we are all screwed, because they would use AI to different ends than we would.
Like, presumably, surveilling their citizens.
We would never do that. Right? Right?
I don’t know which is worse: the fact that the US is apparently using these tools; the fact that they aren’t likely to be reliable, almost certainly reporting false alarms regularly and likely full of insidious bias, as so many previous AI systems have shown themselves to be; or the fact that the systems being used presumably aren’t transparent about what data they are trained on, leaving us wholly in the dark as to how pernicious those biases might be.
Have we learned nothing? Buolamwini, Gebru, Birhane, Mitchell, Raji, Sweeney, Zuboff, Whittaker, and many others have been warning us about this for years.
Gary Marcus really does want to get to a positive AI future in which humans thrive, but he increasingly sees that as an uphill battle. More discussion of bias is coming in the forthcoming final episode of Humans versus Machines, with Alondra Nelson and Brian Christian.