"First they came for the socialists, and I did not speak out—because I was not a socialist.

Then they came for the trade unionists, and I did not speak out—because I was not a trade unionist.

Then they came for the Jews, and I did not speak out—because I was not a Jew.

Then they came for me—and there was no one left to speak for me.

—Martin Niemöller (https://encyclopedia.ushmm.org/content/en/article/martin-niemoeller-first-they-came-for-the-socialists)

In recent weeks I have thought about that quote literally every day, sometimes hourly.

Nice that we are in sync, sad that it is about this sort of issue...

I agree with your argument. I have one seemingly subtle but profoundly important thing to add. You said that they were looking for “supporters of Hamas”. That’s not really true. They intentionally conflate support for Palestinians with support for Hamas. It’s very deliberate, especially for people who call Antifa (not a real group, just shorthand for anti-fascist) a terrorist group. They know many Americans and elected Democrats will happily decry someone who actually supports Hamas, and also let the bad-faith right wing conflate them with people who object to apartheid-like rule and/or genocidal actions and war crimes.

That’s probably true for some and not others, which is why due process is vital.

I have former friends who identified as members of antifa, had branded stickers and propaganda with the red and black anarchy flags and everything. Just because they don't have articles of incorporation and an official nonprofit status doesn't mean they're "not a real group". You could say the same about some religious sects or about 12 step programs. Some organizations are informal and decentralized by design. For antifa the design is explicitly for evading interference by feds - they have to waste resources to infiltrate many separate cells in order to carry out any kind of intelligence or divide-and-conquer or instigation/false flag ops. The antifa I knew were very fed-conscious and committed to security culture. They most definitely do have unifying symbols and principles of ideology and organization though.

Why are they former friends?

Because we don't have much in common and they kinda scared me.

To elaborate, I used to be more sympathetic or adjacent to their political ideology, which is how we became acquainted, but slowly I morphed into more of a run-of-the-mill libertarian/classical liberal.

Spot on

This is a perfect example of harm created by AI, even if it is an inept AI. It also smells of Elon Musk playing Fagin to his DOGE kids. They cannot add correctly, among other things, and know next to nothing about COBOL and the representation of calendar dates. A reference to trans mice was taken to mean transsexual instead of transgenic mice. I thought these were supposed to be whiz kids, but instead they are living, breathing examples of Dunning-Kruger. To quote Ripley from Aliens: "Did IQs just drop sharply while I was away?"
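
One widely reported explanation for legacy-system date weirdness of this kind: old COBOL databases sometimes substitute a reference date when a birthdate field is empty, so a missing value reads back as a person born in the 1870s. A minimal Python sketch of that failure mode, assuming the often-cited 1875-05-20 default (the ISO 8601:2004 reference date); the function and records here are illustrative, not actual SSA logic:

```python
from datetime import date

# Assumed legacy default for a missing birthdate (ISO 8601:2004 reference date).
LEGACY_EPOCH = date(1875, 5, 20)

def apparent_age(birthdate, as_of=date(2025, 3, 1)):
    """Age as a naive report would compute it, falling back to the
    epoch when the birthdate field is empty."""
    return (as_of - (birthdate or LEGACY_EPOCH)).days // 365

print(apparent_age(None))              # ~149: a "150-year-old" appears in the report
print(apparent_age(date(1950, 7, 4)))  # 74: a normal record behaves normally
```

Misread that fallback and a data-quality quirk becomes a fraud headline.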

Actually, this is a case where good AI would have realized what was being requested and stopped it. Instead, I guess Big Balls from DOGE just saw Rocky Horror for the first time. Perhaps he was smitten with Dr. Frank-N-Furter:

"I'm just a sweet transvestite from Transexual Transylvania"

Surprised they didn't also pull the transmissions out of the vehicles in the motor pool.

It would be slightly less egregious if the AI technology could be relied on, or even worked.

As it is, it is so error-prone that large numbers of people will be erroneously identified through this process as undesirable.
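
The scale of the misidentification follows from base-rate arithmetic alone. A back-of-the-envelope Python sketch; every number below is an assumption for illustration, not a measured property of any deployed system:

```python
# Screening a large corpus for a rare trait with an imperfect classifier.
posts = 1_000_000           # assumed volume of posts scanned
prevalence = 0.001          # assumed: 1 in 1,000 posts actually has the trait
sensitivity = 0.95          # assumed: 95% of true cases get flagged
false_positive_rate = 0.05  # assumed: 5% of innocent posts get flagged too

true_cases = posts * prevalence
correct_flags = true_cases * sensitivity                      # 950
erroneous_flags = (posts - true_cases) * false_positive_rate  # 49,950

share_wrong = erroneous_flags / (correct_flags + erroneous_flags)
print(f"{erroneous_flags:,.0f} people wrongly flagged ({share_wrong:.0%} of all flags)")
```

Even with generous accuracy assumptions, roughly 98 of every 100 flags point at the wrong person, because the innocent vastly outnumber the guilty.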

I’m of the mind that at this point LLMs being unreliable should be considered a feature, not a bug. What I personally find troublesome is that the type of people who will be in charge of sifting through the data will not do their due diligence, whether through negligence (over-reliance on the systems) or through harboring an explicitly ideological agenda.

DoD is deleting photos of the Enola Gay. I vote ideology.

I don't know how anyone in the military, whether vet or active duty, can support this.

America is turning more into a dark mirror of China all the time.

One of my favorites was Mark Zuckerberg publicly changing his tune as if to stave off being disappeared by the government for months…

Literally why we need legislation like the EU AI Act here.

The problem is, this is the whole reason the public sector is pushing for AI development: surveillance and military uses. The private sector, of course, just wants to put all human workers in every industry on the unemployment list. Don't listen to anyone who tells you they're investing in this to "boost productivity," "solve scientific problems," "cure disease," "improve people's lives" or any of that. Anyone who tells you that is either absurdly naive or actively lying. AI engineers go to work every day with the goal of putting every family in the country on the street. That is why they do what they do.

That might be true of the company owners, but I doubt that it's generally true of the engineers. They are trying to solve a problem and likely don't spend too much time thinking about the direct or indirect implications if they succeed.

Then they are absurdly naive and probably not smart enough to do anything meaningful. It's so obvious why it's being invested in that I can't imagine you'd work for these companies unless driving humans to extinction was just a passion project of yours. To me, there's little difference between working for OpenAI or Anthropic and working for ISIS. They have pretty similar goals.

Well, check out this article and tell me more about what you think:

https://open.substack.com/pub/ethicsandink/p/eu-ai-act-upholding-fundamental-rights?r=5201nb&utm_medium=ios

I'm all for the regulation of AI seeing as I think it has zero potential to help regular people, but the issue is regulating doesn't do anything. If they actually create what they're trying to, laws won't matter. The government won't exist. Courts won't exist. The people who control that technology will be the unkillable rulers of all mankind. The fact that I see so many people who don't understand this makes me feel insane.

Tommy, you’re an interesting person.

Like, I'm sure there are plenty of people who do want to use AI to do stuff like curing diseases. Apart from anything else, disease cures are profitable!

More than a slippery slope…

You guys are correct. I run a small charity in Australia, selling second-hand stuff.

Thanks to Google Customer Reviews, I already live in the Black Mirror world. It's hell. I wear a name badge, and I deal with the public. To help donors and buyers, the charity puts details on Google My Business. That site offers visitors a Customer Reviews facility; it's public, and they get to publish anything they feel like, anonymously. Anyone with a grudge can do this. So I literally do get assessed and star-rated on every interaction.

A few lessons. 99.5% of clients are decent humans; they declined Google's option to star-rate me. Of the 0.5% who did rate, most are decent too; the reviews are good. They feel they got good service and are kind enough to write a testimonial. Thank you!

That leaves 0.2%, 1 in 500, who are hateful, or just plain wrong. They go public with their complaints before taking part in our resolution process. What they publish is what the public sees; it comes up when people google us, and the public judge us on it, so it affects our standing. And the way people work, Google transforms that 0.2 percent of clients into 10 percent of comments.
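
That amplification is plain selection-effect arithmetic: when almost nobody leaves a rating, the rare hostile client dominates what the public sees. A quick Python sketch treating the figures above as rough assumptions:

```python
clients = 10_000      # assumed client volume for illustration
review_rate = 0.005   # 0.5% of clients leave a star rating
hostile_rate = 0.002  # 0.2% of clients (1 in 500) post a hostile review

visible_reviews = clients * review_rate   # 50 ratings the public sees
hostile_reviews = clients * hostile_rate  # 20 of them hostile
print(f"hostile share of visible reviews: {hostile_reviews / visible_reviews:.0%}")
```

With these rates a 1-in-500 minority could supply up to 40% of what a Google search surfaces; even the 10 percent estimated above is a fifty-fold amplification.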

That hits our charity's public standing, in terms of our good name and how much the public trust us to handle their donations respectfully and honestly. Charities rely on public trust; that's something the national charities regulator recognises.

So you are right about that world: it is hell. Now, when interacting with the public, there's always a fear at the back of my mind. That impacts my mental well-being and my ability to volunteer.

Yes, shop assistants have always been rated. In past times, sales staff were on commission; unhappy customers could "speak with the manager" or tell their friends (word of mouth). Managers monitored staff performance. If you misbehaved, that affected your pay packet, but it stayed within the shop. And with your next customer you could learn and make a fresh start, without it being recorded forever, visible to everyone who entered the shop or googled it.

Thanks for reading.

Thank you, Gary, for bringing this up. It's pretty clear that the tyranny playbook has always targeted the most vulnerable among us, often intentionally vilified or dehumanized, to use as a wedge toward the goal of total control.

Anyone who is aware of what's going on should be actively opposing this nefarious move with every tool, and skill, they have.

I fear there is a deep sense of passivity in the public that reminds me of those moments in a dream when we see the monster coming and we are paralyzed. We who can speak out need to do so now, with the clear understanding that the next one they come for will be you.

From outside the US, I'd actively oppose it if I knew what I should do. Even if I were inside, I wouldn't know what to do. What should people do? Whatever we do, it should probably involve groups meeting in person!

I'm pretty sure the government's AI doesn't treat Netanyahu as a war criminal. Biased subjective AI will be used as a weapon against the government's enemies but not its friends, even if these friends are a million times worse.

Yes, and this principle remains true even when our preferred party is in power. It's just as important to uphold limits on government then, because someone else will wield those same tools in the next administration. Unfortunately the partisanship of our age has led many to abandon these general principles. Each administration relies more on executive power than the one that came before it.

Yikes, that's reminiscent of Minority Report.

Yeah, that was the most chilling Black Mirror episode I've seen. Not a good use case for AI.

I agree with you, Gary, but this kind of thing has been going on since the Bush admin post-9/11. See, for instance, "Total Information Awareness" (https://en.m.wikipedia.org/wiki/Total_Information_Awareness).

I worked for a company that was one of many contractors for US intelligence agencies using ML to analyze people's social media posts for sympathy with terrorism as early as 2014 or 2015, under Obama. One of the master's thesis projects available in the data science program I attended was using ML/NLP to detect extremist speech for another such government contractor, in 2015. The federal government has been dragnetting civilian communications data for at least two decades and using modern ML to detect potential threats for at least a decade.
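
For anyone wondering what 2014-era "ML/NLP to detect extremist speech" looks like mechanically, the core is usually no more exotic than a bag-of-words classifier. A minimal sketch with scikit-learn; the posts, labels, and threshold are placeholders, and this is not any contractor's actual pipeline:

```python
# Generic text-flagging pipeline of the kind described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data; a real system would have thousands of labeled posts.
posts = ["benign example post", "hostile example post",
         "another benign post", "another hostile post"]
labels = [0, 1, 0, 1]  # 1 = "flag for human review"

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Every scraped post gets a score; a threshold decides who gets a second look.
score = model.predict_proba(["a new post scraped from social media"])[0, 1]
print(f"flag probability: {score:.2f}")
```

The word "probability" flatters it: on out-of-context text, scores like this are exactly the error-prone signal discussed elsewhere in this thread.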

Thank you for adding a dose of reality.

It bugs me when people only criticize something when they perceive that it's their enemy doing it. It displays a lack of principles. I would expect better from Gary, whom I find to be otherwise clear-thinking.

You think I am a supporter of Hamas?

Not saying this is entirely new, but it’s next level.

I definitely did not mean to imply that you are a Hamas supporter. It's very clear that you are not. I just meant that what you're describing is not really a new phenomenon, except for the fact that the govt is being more honest and open about its application. The contractors I mentioned in my comment, by comparison, did most of their work under cloak of TS/SCI security clearances. Students who worked on the above mentioned project had to sign NDAs. I never had a security clearance so I don't even know what was ultimately happening behind the scenes, but just the public-facing parts of it seemed no less dystopian than what we're seeing here, if said techniques were resulting in governmental action against individuals. And we have no reason to believe they weren't, if any of the people still detained at Guantanamo without formal charge are any indication (I would say being detained without charge at gitmo is more dystopian than being denied a job or visa). Because of need-to-know, many of the nerd types doing the ML and software engineering probably never even knew if their inferences were actually being acted on - that was a job for the hardened men in the agencies they contracted with.

Another thing the company I worked for was working on (again, an intelligence and military contractor) was real time detection and classification of humans and vehicles in drone footage. Let the implications of that sink in. Again, this was all the way back in 2015, under Obama. Some of the DC area companies who work on this kind of stuff, just for documentation: Booz Allen Hamilton, Battelle, GA-CCRi.
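
Detecting and classifying humans and vehicles in video stopped being exotic long ago; it is now an off-the-shelf capability. A generic PyTorch/torchvision sketch standing in for whatever the contractors actually built (those details are classified); the frame here is random data rather than real footage:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# COCO category ids for the classes of interest.
TARGETS = {1: "person", 3: "car", 8: "truck"}

# Pretrained, publicly downloadable detector.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

frame = torch.rand(3, 480, 640)  # stand-in for one decoded video frame
with torch.no_grad():
    detections = model([frame])[0]  # dict of boxes, labels, scores

for box, label, score in zip(detections["boxes"], detections["labels"],
                             detections["scores"]):
    if label.item() in TARGETS and score > 0.8:
        print(TARGETS[label.item()], [round(v) for v in box.tolist()],
              round(score.item(), 2))
```

Run that per frame on a drone feed and you have the core of the capability described above.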

Thanks again. I would just add that your usage of "ML" is essentially equivalent to Gary's usage of "AI" in this context, but "AI" sounds so much scarier than "ML".

On the political side, I really would NOT like my government to be handing out free entry passes to my country without scrutinizing the purpose of that entry. So I'm not particularly concerned about this issue, certainly not at Gary's “urgent warning” level.

Yeah, I mean it's all ML, even "AI", just in more and more flexible/general architectures. Nowadays the same contractors probably brand themselves with the "AI" moniker to get those government dollars.

What is the name of the company?

I name a few at the bottom here: https://garymarcus.substack.com/p/urgent-warning-black-mirror-has-entered/comment/99505328

I'd prefer to keep my anonymity within reason when criticizing these companies in public so I'll just leave it as a list. You can search any of them and find their websites and other coverage of what they do. But much of the military and intelligence stuff is classified. And they don't just come out and say right on their site stuff like "we train ML models to put bounding boxes around humans in drone video", even though that's one of the things they do, because, reasonably, most people would be kind of turned off by that. They usually speak in more abstract terms.

It seems certain that the data extracted by Musk et al. will be consolidated with data from FB, X, and the rest of the personal data we've handed over. It will be used to do things like root out leakers by matching data items such as IP addresses. I would look for some high-profile imprisonments soon unless some 1st or 4th Amendment issues can be raised successfully. I wouldn't be surprised to see forged posts on social media sites as well.
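
Mechanically, "matching data items such as IP addresses" across consolidated datasets is just a join between tables that were never supposed to meet. A toy pandas sketch; every record and column name here is invented for illustration:

```python
import pandas as pd

# Invented data: anonymous posts with logged source IPs...
posts = pd.DataFrame({
    "post_id": ["p1", "p2"],
    "source_ip": ["203.0.113.7", "198.51.100.9"],  # RFC 5737 example addresses
})
# ...and a personnel dataset from an entirely different system.
personnel = pd.DataFrame({
    "employee": ["A. Example", "B. Example"],
    "vpn_ip": ["203.0.113.7", "192.0.2.44"],
})

# One merge and an anonymous post has a name attached.
matches = posts.merge(personnel, left_on="source_ip", right_on="vpn_ip")
print(matches[["post_id", "employee"]])
```

Consolidate enough datasets and one of these joins eventually hits, which is exactly the leaker-hunting scenario described above.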

One of my employers interacted with Palantir (owned by Thiel and Karp) around 2011 or 2012. They touted their ability to make connections across disparate data sets (my employer's product would have widened their reach). One demo showed predictive crowd control: e.g., as a protester (they called them rioters in the demo, IIRC) was newly identified, Palantir would search related information to suggest which other leaders might be involved, or perhaps suggest potential destinations for a moving crowd of protesters. Ultimately they were simply figuring out how to build their own version of our product.

It appears they're using AI to help lock down the channels of protest before protests can start, because the goal has always been to invoke the Insurrection Act of 1807. When that happens, Palantir will be locked and loaded. We had better have the support of the military and police when it happens.

This is almost as frightening as the news that AI will be used to monitor nuclear weapons systems. Using software that is unpredictable, and whose correctness has probably not been proven, for such critical tasks sounds like a recipe for disaster.
