Could we all be doomed, without ranked-choice voting?
As I wrote the other day, p(doom) is somewhat-tongue-in-cheek shorthand for the probability that we are all going to die from some AI-induced extinction event. I rather doubt that will happen, but I am not sure that it won’t.
What I am sure of, though, is that our chances of avoiding serious consequences aren't great if we control neither AI nor the companies that make it. The goal of the companies is returns to shareholders, not returns to humanity.
The default right now is regulatory capture: the companies will make the rules, telling the governments what they are and are not comfortable with. The government will go along. Indeed, what we have seen on the AI front so far in the US, for the most part, is voluntary guidelines – stuff that the companies are comfortable with.
That has meant, for example, no tangible progress on getting the big companies to commit to revealing what data their models were trained on; we have transparency in name, but not in deed. Without data transparency, we can't make sure our models are free of bias, or even know the sources of the biases hidden within them; we can't truly understand the scope and limits of their generalization (which is central to anticipating what they might do in unexpected circumstances); and we are hamstrung in our ability to mitigate their harmful consequences: what is unknown is hard to fix. If you shouldn't fix what ain't broke, you can't fix what you don't know.
Fact is, we have not seen enough independence from Big Tech in the current government, or in any previous government in recent memory. For fifteen years we have known that social media is a problem, and yet we can't even get a basic privacy bill passed. Section 230 has been a disaster.
If we are to get the right regulatory regime for AI – one that protects consumers, without stifling innovation – we are going to need government to step up.
I wish I could say I was more optimistic.
Powerful corporate lobbying is clearly a huge problem. Maybe the two-party system is another.
Andrew Yang made some pretty good arguments against the two-party system last week in a debate hosted by John Donvan and the recently rechristened Open to Debate (formerly Intelligence Squared). He also made some powerful arguments for ranked-choice voting.
The most compelling example he gave concerned what happened recently in Alaska, which abandoned its traditional party-based primary system in favor of a nonpartisan open primary with ranked-choice voting. The Republican Lisa Murkowski, a voice of independence never fully aligned with either Democrats or Republicans, managed to avoid being primaried, unlike nearly every other Republican who has tried to maintain a measure of independence in the Trump era. Ranked-choice voting was probably also key to Mary Peltola's upset victory, which made her the first Alaska Native member of Congress. Yang's discussion left me wondering whether ranked-choice voting might be a way to empower outsiders who could stand up to Big Tech to participate more directly in government.
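For readers unfamiliar with how the tallying actually works: Alaska's general election uses instant-runoff counting, in which the last-place candidate is repeatedly eliminated and their ballots transfer to each voter's next surviving choice. Here is a minimal sketch of that mechanism (the candidate names and vote counts are hypothetical, purely for illustration):

```python
from collections import Counter

def instant_runoff(ballots):
    """Tally ranked ballots by instant runoff: repeatedly eliminate
    the candidate with the fewest top-choice votes, transferring those
    ballots to each voter's next surviving choice, until someone holds
    a majority of the remaining votes."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot's highest-ranked surviving candidate.
        tallies = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in candidates:
                    tallies[choice] += 1
                    break
        total = sum(tallies.values())
        leader, leader_votes = tallies.most_common(1)[0]
        if leader_votes * 2 > total or len(candidates) == 1:
            return leader
        # No majority yet: drop the last-place candidate and re-tally.
        candidates.remove(min(tallies, key=tallies.get))

# Hypothetical three-way race: B leads on first choices (40%),
# but once C is eliminated, C's voters transfer to A, who wins 60-40.
ballots = (
    [("A", "C", "B")] * 35 +
    [("B", "A", "C")] * 40 +
    [("C", "A", "B")] * 25
)
print(instant_runoff(ballots))  # → A
```

The point of the toy example is that a candidate with broad second-choice support can beat a plurality front-runner, which is exactly the dynamic that can reward independents like Murkowski.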
As Yang pointed out, ranked-choice voting might also be our best shot at avoiding an authoritarian turn, something absolutely essential to a positive AI future.
To be fair, I don't know any of this for sure; I am not an expert in politics or voting, and there are other worthy variations on the same theme of alternative voting approaches. But I think ranked-choice voting (or some similar alternative) is very much worth serious consideration. As Miles Taylor, former chief of staff at the US Department of Homeland Security, put it to me in a text message, "ranked-choice voting is … one of the wonder drugs for democracy."
If it can help us get to a good place with AI policy, I am all in.
p.s. Want to know more about the intersection of AI and politics? Want to ask questions about this essay? Please join Andrew Yang and me in conversation about topics like this on Wednesday in a Twitter (X) Space, hosted by Holden Culotta, at 5pm Pacific/8pm Eastern.
Gary Marcus is CEO and co-founder of CATAI.org, co-author of Rebooting AI, and host of the podcast Humans versus Machines.