29 Comments
Sherri Bergman:

It has always seemed to me that a big part of the appeal of AI, whether it’s drone warfare, algorithmic discrimination in hiring, or deepfakes in elections, is to make it hard to assign responsibility and liability.

Celeste Garcia:

I see no future where deep fakes aren’t playing a significant role in elections. They will only get better and more sophisticated. It’s so disheartening.

Joy in HK fiFP:

"Who indeed will pay the price if GenAI destroys democracy? Who will be held accountable?"

No one will be held accountable. We have spent decades creating an unaccountable society, where democracy means whatever the highest bidder wants. I don't know if it's too late to look for ways to bring accountability into our world. But the answer is not confined to the AI world, or even the tech world. It is a systemic failure, and fixing it will require everyone to join together to bring accountability into being. And that will include stopping what's behind its absence in the first place.

Do we even know how to begin?

direwolff:

No one, of course, because just like “blockchain” and DAOs, the entire point is for no one to be in charge or responsible even when someone is. With AI, part of the trick is to anthropomorphize the heck out of it, to tell us it has feelings and to make people believe it deserves rights, so that the companies behind it can claim ignorance and insist that its unpredictable emergent qualities, not its makers, are responsible for the decisions this “technology” makes 😉

Oleg Alexandrov:

GenAI won't destroy democracy. Any more than Facebook or the internet did.

One could argue that the tape recorder brought down Nixon's presidency. It surely did, and it was the hot new tech of its time. No tape recorder companies were held liable.

Tim Marcus Moore:

The tape recorder brought down a president by revealing the truth, not by manufacturing a lie.

Oleg Alexandrov:

Yeah, the analogy was not great, but it serves to illustrate the point.

Blaming the tech is not the solution. Gross intentional negligence on the part of the manufacturer is punishable. Otherwise, people misusing tools is not the manufacturer's fault.

This is important to remember, because when it comes to AI's power, we've seen nothing yet.

Doug Poland:

On the contrary, I think it an excellent analogy. The tape recorder has no more agency than does AI, which is why nobody then argued that the tape recorder acted to bring down a President. The analogy is the mis-assignment of agency to the tool rather than to the actors; the nature of the act isn’t relevant here.

Oleg Alexandrov:

If you want a different analogy, note that nobody blamed the Post Office for delivering the Unabomber's packages.

Tim Marcus Moore:

The US Postal Inspection Service absolutely took responsibility for investigating the Unabomber, assisting in Kaczynski's arrest, and is generally responsible for the safety of the postal service.

Oleg Alexandrov:

One does not blame the Post Office. The FBI worked with the Post Office. One does not call for banning shipment of packages.

It was not the Post Office's fault that some nut sent explosives by mail.

Sherri Bergman:

Also not a good analogy. The Unabomb Task Force, a group of postal inspectors and law enforcement, was created to try to take down the bomber, and the USPS currently employs over 1,600 inspectors to try to keep that type of thing from happening again. What is the AI equivalent?

Tom N:

I think that an AI-free certification would be much more useful than a GMO-free certification.

Patrick McGuinness:

I don't think it's right to talk of "GenAI destroys democracy." It reminds me of headlines like "SUV kills 5 pedestrians," as if a disembodied machine were responsible. It's the driver, not the vehicle, doing the destroying. In this case, it is human abusers of GenAI, exploiting the tech for malicious ends, who are responsible for harming us, our political systems, and society. It's wise to distrust that which can be abused, but also wise to assign ultimate responsibility.

When we start sending people to jail for malicious abuses of technology, e.g. deepfakes, theft of copyrighted material, and maliciously false misinformation, perhaps the lesson will be learned.

atomless:

But the framing there is all wrong. It should be: "Who indeed will pay the price if Capital destroys democracy? Who will be held accountable?"

At least you can then ask how Capital secures its impunity, invisibility, and continuity, and note how it deploys GenAI as a mechanism thereof.

David:

Curious to learn more, I searched the topic and found articles going back about 6 years, with quite a few from last year, concerning the use of deepfakes in Argentina's elections and politics. The threat is real, but if there's any silver lining (that "if" is working hard), perhaps voters are learning to be appropriately skeptical.

Uncanny Valley:

We can't even get Meta to monitor their own content; ain't nobody got time for monitoring AI... conveniently. We need better tooling to flag AI content, for sure.

mel borneu:

If you think we have democracy today, I've got a beautiful beachside property with sweeping ocean views in Kansas you may like to buy. Democracy died decades ago.

Doug Poland:

You left "been used to" out of the title between "just" and "influence". As you know as well as anyone, AI, while it is artificial, is not a form of intelligence and is not capable of agency.

Also, thank you for the post and your continued efforts!

Mikhail Mimic:

Your article amounts to “I can’t verify anything about its truth, or offer an expert analysis, but it fits my narrative so who cares if it’s true.” I do respect your expertise and have been reading you for a while as a counterbalance to the overwhelming hype from Silicon Valley. Increasingly, it just feels like you want to “dunk” on opponents, real or manufactured. You are a respected scientist, and that’s why I read you. If you don’t have time to read or “vet” something, why post it at all?

Ian Douglas Rushlau:

"Who indeed will pay the price if GenAI destroys democracy?"

Any of us who prefer pluralistic democracy to fascism.

"Who will be held accountable?"

Until shown otherwise, it's safest to assume no one, ever.

Scott Burson:

Hey Gary, you should see this paper (“Vision-Language Models Do Not Understand Negation”): https://kumailalhamoud.netlify.app/publication/negbench/

keithdouglas:

I noticed problems with logical constants in ordinary language fairly early on with the current hyped systems. Our host has discussed why; I had originally guessed there would be a problem because they are all very equivocal in ordinary language. In fact, I routinely refer to work in the philosophy of logic when discussing the problems with these systems for that reason. I wondered, particularly, about conditionals, but it turned out that negation was a problem too. It seems that sometimes the models just do not regard the "not" (or similar) as relevant, because the rest of the text is a "high % match" without it.

Scott Burson:

Imagine for a moment what it would take to train a VLM about negation. You'd show it an image, along with a list of sentences like: "There is no elephant here. There is no orchid here. There is no 1957 Chevy here." The problem, obviously, is that the set of things that are not in the image is infinite.

The only compact way to model negation is as a meta-level operator. But how is a model going to learn that? It's only looking for correlations between things that actually occur in its input.
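A toy sketch in Python makes the point concrete (illustrative only; the captions and the bag-of-words stand-in are assumptions for the sketch, not anything from the linked paper): if a caption is represented purely by which tokens occur in it, swapping "a" for "no" barely moves the representation, even though it inverts the meaning.

from collections import Counter
import math

def bag_of_words(caption):
    # Crude stand-in for a correlation-driven representation:
    # a caption becomes a vector of token counts.
    return Counter(caption.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = lambda v: math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm(a) * norm(b))

pos = bag_of_words("there is a dog in this photo")
neg = bag_of_words("there is no dog in this photo")

# Prints 0.86: one swapped token barely changes the vector,
# even though it flips the truth conditions of the caption.
print(round(cosine(pos, neg), 2))

A real VLM text encoder is of course far richer than token counts, but the training signal it gets is still correlational, which is why a meta-level operator like negation has no natural slot in it.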

keithdouglas:

This relates to the meanings of "not". "There is nothing in the fridge" is not usually literally true in the narrowest sense - there's probably an electromagnetic field and a gravitational field in the fridge, for example. And if someone asks "Can I have something to eat?", the fact that the fridge contains some rustwater and a box of baking soda is not a counterexample to the response "there's nothing in the fridge". And negation only gets harder from there (predicate vs. propositional negation, maybe). It is not surprising, in light of the linguistic data, that many nonclassical logics (relevant, connexive, intuitionistic, default, etc.) can be understood as offering variant views of negation.

In conversation and class discussion more than 25 years ago with Mario Bunge, who offered the common "AI is impossible because we can ... and they cannot ..." type of example, I responded by asking how (by what mechanism) *we* do it. The same applies here: children obviously come to use negation somehow. So, can we replicate that (or something "equivalent")? This might even create a practical (so to speak) version of a topic often discussed in the logics literature - how does one compare logics, stitch them together, etc.? And what if we do it "wrong"? (Psychopathology? My sister is a forensic clinical psychologist, and we've talked about the hard case of someone who might have defects in that natural *logic*. Textbook discussions of "human natural logic" usually presuppose classicality, and that's a problem, as has occasionally been discussed in monographs, etc.)
