33 Comments
Jun 7 · edited Jun 7 · Liked by Gary Marcus

Asking any industry to police itself is lunacy pushed by the industries themselves: we have plenty of examples from chemicals, oil, social media, nanotech, etc. I recall reading that only 10,000 of the 150,000 chemicals in use have ever been safety-tested.

Even Pharma and Food skirt the rules as often as possible, and their products are directly life-threatening. And they do so even where they're better regulated, as in Europe.

Greed is powerful :(

The only kind of regulation they'll support is the kind that either widens their moat or straight up hands them cash. Lobbying used to be a crime in this state, and it should be again. Criminalizing the lobbyist is the kind of progress I'd love to see. Let the people elect leaders and let the leaders lead. While I'm eyeing that pie in the sky, how about a little public campaign finance as well? Now where is that llama tutorial?

Jun 7 · Liked by Gary Marcus

It is deplorable that Big Tech is claiming that this is "the most powerful technology in our lives" while at the same time demanding that it be put into a special category and face no regulation whatsoever.

But it is also deplorable that WE are doing nothing as citizens. We're going to let Big Tech get its way unless we do something. Here are actions you can take:

1) Write to your representatives: your congressmen and your senators. Support the bill and explain why it is important that AI be subject to basic regulation.

2) Join organizations that fight for regulation in the public interest. I'm a member of PauseAI, and I think we can work together to put the brakes on this speeding trainwreck.

https://discord.gg/kh9MSGb8

3) If Gary has any ideas, please share. We shouldn't just let this happen to us. You matter. We matter.

author

This is EXACTLY why I wrote Taming Silicon Valley: getting citizens involved and organized is our best hope!

Jun 7 · Liked by Gary Marcus

https://x.com/shakeelhashim/status/1799085543835513293

"Interesting to see Anthropic joining TechNet, the trade group opposing SB 1047.

That means OpenAI, Anthropic, Google, Meta, Amazon, Apple, IBM, and Andreessen Horowitz all now belong to orgs opposing the bill.

Hardly looking like regulatory capture!"

The industry and their e/acc lapdogs just keep repeating "regulatory capture" like stochastic parrots, unable to update on new information, even as every single major company fights the bill. What kind of "regulatory capture" is it if all of Big Tech opposes the bill?

All the more reason that we should fight for the bill, and regulate Big Tech.

Anthropic are a joke. They are looking to employ people with experience in Python but have no interest in low-level languages like C. They just want code kiddies, or people who can't code at all: just another bunch of conmen.

Externality Capitalism in full swing.

Jun 7 · Liked by Gary Marcus

Sounds like the gun manufacturers. "We can't be held liable for how people use our product. If someone's kid dies, that's the cost of having a free society and a strong Second Amendment." Same arrogant crap. Grease a few palms and walk away.

Pains me to say it, but the problem of regulatory capture should have been addressed decades ago, before we found ourselves in a situation where critical new technologies could threaten human survival.

But the story of humanity is the story of greed and corruption... and here we are yet again, in a new kind of Gilded Age.

Thanks for this much needed clarity on the California bill!

Jun 7 · Liked by Gary Marcus

Curious how Big Tech can be so brilliant when it comes to the technical build of its products, which requires such depth of logical thought and engineering skill, yet so dumb when it comes to interpreting the text of a proposed bill. You would think the lack of critical thought there was intentional or something.

Great reporting!!!

I agree with Andrew Ng on the hazardous capability clause. It should either be removed or clarified in much more detail. There are already laws governing company liability when a product does not function as claimed or directly causes great harm. EVERY product can be misused by its users. Is the pencil factory liable when John Wick kills three men in a bar with a pencil? No, it should not be. The hazardous capability clause is not reasonable; it is the equivalent of asking the pencil company to ensure the pencil can never be used to cause harm. And this is especially complicated for AI, where we simply have no idea how people might choose to use it. Yes, society absolutely should bear its portion of responsibility. I'm not against regulation, but it must be practical, detailed, and not narrative-driven.

You’ve cherry-picked and twisted the passages from Andrew Ng’s blog post that you respond to here, Gary.

He hasn’t in fact said he’s against regulation. In other commentary, he’s actually come out in support of it, as long as it’s reasoned and logical.

How would you respond to this excellent piece of logic from that same Andrew Ng blog post, which actually forms the basis of his argument against the bill…

“Regulators should regulate applications rather than technology…

For example, an electric motor is a technology. When we put it in a blender, an electric vehicle, dialysis machine, or guided bomb, it becomes an application. Imagine if we passed laws saying, if anyone uses a motor in a harmful way, the motor manufacturer is liable. Motor makers would either shut down or make motors so tiny as to be useless for most applications. If we pass such a law, sure, we might stop people from building guided bombs, but we’d also lose blenders, electric vehicles, and dialysis machines. In contrast, if we look at specific applications, like blenders, we can more rationally assess risks and figure out how to make sure they’re safe, and even ban classes of applications, like certain types of munitions.”

I am beginning to appreciate that I live in the EU :)

America is setting itself up for a fall if it doesn't get its act together. If comic-book conmen and people who can build a website can make billions, there is no hope.

Offer the vulture capitalists like Altman and Ng a trade: total self-regulation, but in exchange they and their families/cronies up and move to East Palestine and can never go more than 10 miles from the train crash site, their food comes from dollar stores, and their water comes from Flint.

(Oh, and no cheating by moving their apocalypse bunkers and equipment there, either... Why do they have apocalypse bunkers again? It sure sounds like it's for when self-regulation implodes.)

I listened to a very recent a16z podcast on SB 1047 that made a persuasive case to me that the bill is a net negative. See what you think. Very smart people on both sides.

https://a16z.com/podcast/californias-senate-bill-1047-what-you-need-to-know/

They reckon:

- It's misguided, making LLM developers personally liable for any harmful future use of their models by anyone.

- It's going to cause AI companies to leave California in droves and will sabotage the US's ability to innovate and stay at the forefront of this world-changing technology. (If it passes in August, it'll pave the way for other state/federal laws, as a lot of California legislation does.)

- It would cement Big Tech's cartel/monopolistic power, since new entrants would face massive regulatory costs, and it would effectively cut off open source.

- It's badly drafted and would soon apply to everyone, not just Big Tech, as AI model improvements accelerate.

I like a16z and Marc Andreessen because they are PRO small tech and startups and clearly do not want Big Tech monopolies. Their take is that this bill actually benefits Big Tech under the guise of safety.

- This act is the equivalent of asking the Ford Motor Company to guarantee that the Model T will never cause harm in any circumstance, and to be liable for how it is used by everyone who drives one.

- It is the equivalent of asking Thomas Edison to guarantee that electricity will never harm humans, and to be liable for every use of the lightbulb and every future innovation created by others.

The most prosperous countries in the world are the ones that best took advantage of the Industrial Revolution. AI innovation is of the same caliber. We have to give talented people the freedom to innovate, not hand control over to the elite.

Yes, regulate AI and protect against harm. But regulate AI misuse, not the developers.
