59 Comments
Aug 20 · Liked by Gary Marcus

Notice the astroturfing even here, where the objectors to SB-1047 have consistently written no other comments, follow no subscriptions, etc.

My objection to this bill (CA SB-1047) is not that it would be fatal, or even significant, to the companies producing LLMs; it is that it is silly. How many LLMs does it take to change a light bulb? Yeah, that's right: LLMs cannot change light bulbs. They can't do anything but model language. They don't provide any original information that could not be found elsewhere. They are fluent, but not competent.

The bill includes liability for "enabling" catastrophic events. The latest markup revises that to "materially enable," but that is still too vague. Computers enabled the Manhattan Project. Was that material? Could it have been foreseen by their developers?

The silliest provision is the requirement to install a kill switch in any model before training, "the capability to promptly enact a full shutdown" of the model.

The risks that it seeks to mitigate might be real for some model, some day, but not today. The current state-of-the-art models do not present the anticipated risks, yet the criteria for what constitutes a "covered model" are all stated relative to current models (e.g., the number of FLOPS or the cost of training). They would not necessarily apply to future models, for example, quantum computing models, or they may apply to too many models that do not present risks. Future models may be trainable for less than $100 million, for example, and they would be excluded from this regulation. That makes no sense: the bill applies today's criteria to models that do not yet exist, and those criteria may not be relevant to the models that do turn out to present risks.
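To make the threshold problem concrete, here is a minimal sketch of the bill's static "covered model" test, assuming the figures widely reported for the amended text (a compute floor of 10^26 training operations combined with a training-cost floor of $100 million); the function name and example numbers are illustrative, not the statute's language:

```python
# Minimal sketch of SB 1047's static "covered model" test, assuming the
# amended text's reported figures: more than 1e26 training operations AND
# a training cost above $100 million. Names here are illustrative.

COMPUTE_FLOOR_OPS = 1e26        # integer/floating-point operations used in training
COST_FLOOR_USD = 100_000_000    # training cost in dollars

def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
    """Return True if a model would meet the 'covered model' criteria."""
    return training_ops > COMPUTE_FLOOR_OPS and training_cost_usd > COST_FLOOR_USD

# The scenario above: a future model trained for under $100 million falls
# outside the definition entirely, however capable it turns out to be.
print(is_covered_model(training_ops=5e26, training_cost_usd=90_000_000))  # False
```

Hard-coded constants like these are exactly the problem: as training costs fall, the same capability slides under the floor, and the thresholds say nothing about what a model can actually do.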

What this bill does is respond to, and lend government certification to, the hype surrounding GenAI models. It officially endorses the claim that these models are more powerful than they are. Despite industry protestations, this bill is a gift to the industry: if today's models are not genuinely (and generally) intelligent, they will be within a few years (or so the bill presumes), and so this specific kind of regulation is needed now. The state is contributing to marketing based on science fiction. That is silly.

Finally, the bill creates a business model for "third party" organizations to certify that the models are safe. For the foreseeable future, those parties will be able to collect large fees without actually having to do any valuable work.

Today's models do not present the dangers anticipated by this bill, and it is dubious whether any future model ever will. The California legislature is being conned, and that is why I object to this bill. Stop the hype.

Aug 20 · Liked by Gary Marcus

This is a simple transparency and liability bill which encourages safety. If none of the harms caused by AI turns out to be novel, then that will be a valid defense in court.

But overall, this merely holds Big Tech to its own voluntary commitments to the White House. That the companies are objecting to it so strenuously now calls their sincerity into question.

We do know that AI models can create novel threats, including through deception (Anthropic, Apollo), and it only makes sense that there is fledgling, light-touch regulation of a transformative technology.

So far the biggest harms from AI are misinformation and copyright violation, or at least abuse of copyright. Are we now in the business of pre-crime, punishing companies for crimes that haven't been committed yet? I have yet to see how AI is a bigger threat than any other product, aside perhaps from its ease of use and scale. We are all freaking out about things people think will happen that haven't happened yet, and may not, or which the industry or an innovative startup may resolve. If we're going to regulate AI, we should regulate all the other products and businesses that present similar risks, or only act on the things we have tangible evidence upon which to act.

Author

This is misinformation: nowhere does the text punish "precrime."

The text of SB 1047 actually says that a developer cannot use or release an AI model if it might "enable a critical harm."

Quoting:

"A developer shall not use a covered model commercially or publicly, or covered model derivative for a purpose not exclusively related to the training or reasonable evaluation of the covered model or compliance with state or federal law or make a covered model or a covered model derivative available for commercial or public public, or foreseeably public, use, if there is an unreasonable risk that the covered model or covered model derivative can cause or materially enable a critical harm."

Elsewhere in the text, the bill attempts to define "critical harm" as a weapon resulting in mass casualties, a cyberattack resulting in $500 million of damage, or "other" harms comparable in scope.

To me this looks like picking on AI. I'm not saying we shouldn't have regulation, but this is so comically general as to be pointless and onerous.

Why shouldn't we have a similar bill for libraries, or the web itself, since they could enable such critical harm?

The bill carves out some conditions, such as when the information is reasonably publicly accessible to an ordinary person, when the model is combined with other software that doesn't really use the model to do its harm, and harms not caused by the model.

Glad they have that last point cause otherwise I would totally blame a model for things it didn't actually do.

This actually protects libraries and the internet. But why should it? What's the point if your goal is to prevent mass casualties and costly events? It would seem, then, that the goal is something else.

The phrase "enable a critical harm" seems downright Orwellian to me.

What am I missing? And how is it not an attempt to regulate something for the crimes it _might_ enable rather than the ones it actually commits? Anything can enable a critical harm.

This bill, applied to boxcutter manufacturers, would have made them responsible for 9/11.

In my opinion SB 1047 is misguided and sloppy. It is a shot across the bow of large AI companies, warning them to watch their step, but it is not especially sound or consistent.

I would like to see something better reasoned, more precise, and with more legal specificity.

Copyright violation is the big story, because that is how the big tech companies are funding their AI products. If you are not paying for the goods and services you use, that is such a big advantage. I saw this twenty years ago, when the social media companies didn't pay copyright holders and created a now trillion-dollar economy. These same social media companies now tax you with ads and use their algorithms as a weapon.

We regulate car manufacturers for product harms; this is really no different.

But we don't hold the auto manufacturer responsible for "enabling critical harm" if someone uses the car to transport a bomb. This is actually different.

Certainly we do; manufacturers have been held liable for side-saddle gas tanks, for example. We want to encourage safety as a whole, especially as AI is far less understood and much closer to an autopilot than to a fully controlled thing like a car.

Steven bunchofnumbers - no likes - no posts - no reads.

Indeed, the risks are "still poorly understood". That's why premature regulation is unhelpful.

In particular, allowing the [Attorney General] "to sue companies for negligent safety practices before a catastrophic event occurs" would have been a very bad idea. Give a government a lot of weaponry, and it will use it in no time.

Nothing "catastrophic" is in the pipeline. Deep fakes, etc, are what plenty of other tools do.

If AI becomes more powerful, which I fully expect it will rather than plateau, then we can see.

Aug 20 · Liked by Gary Marcus

If AI is harmless as you believe, then the regulations will never cause any issues. No worries about liability at all!

AI is not harmless, of course. Nothing is. Regulation must be appropriate. So far it is premature.

Light touch liability enhances transparency and promotes safety and innovation.

One day. So far chatbots are too clueless and regulation is premature.

No regulation is harmless. It slows down innovation. Regulation must be for a reason. When there is a reason, such as for airplanes, then yeah, regulation is good.

I would support specific measures regarding privacy, copyright, etc., but at least the last one would be sorted out in court anyway.

If people just continually asked ChatGPT questions they know the answer to and saw how consistently wrong it is, the hype would die overnight.

Alas, I would not be so sure. Pentesters like me have been pointing this sort of stuff out to people, so maybe it requires getting people to try it themselves? The pure "Here's the crazy crap it does" from someone else does not seem to give certain folks pause.

Gary Marcus, last week: the AI bubble is over! This is it! AI is collapsing! It was all BS!

Gary Marcus, this week: If this bill doesn't pass, AI is going to become an existential threat.

Talk about speaking from both sides of your mouth!

To be clear, I support this legislation. But man, you sure do change your narrative to suit your needs.

Gary has addressed questions like yours numerous times. By "bubble bursting and AI collapsing" he referred to the funding front, not necessarily the tech or research fronts. By "threat" he referred to dumb-AI threats such as deepfakes, scams, disinformation, and more, which require no AGI. You don't need AGI to produce spam and phishing emails, after all. With LLMs (dumb AI), deepfakes, scams, and disinformation just became readily available at large scale to everyone. It is precisely because dumb AI like LLMs cannot tell truth from falsehood, yet can present a humanlike conversational appearance, that bad actors can use them to harm society easily, rapidly, and at large scale.

Furthermore, I don't remember "existential threat" appearing often in Gary's AI warning list, if ever. That is because AGI at this moment still seems like a far-off fantasy, and if it is ever realized, it will be based on architectures unrelated to LLMs. Warning about an "existential threat" from LLMs is simply a marketing gimmick and delusion of grandeur from Big Tech and the tech bros: delusional, disingenuous, and sinister at the same time.

GenAI won't bust, except for the weakest companies. Meta may give up; so may X's Grok, even Anthropic.

OpenAI is big and brings in cash, and in the worst case they will cut the less promising directions.

Google's and Microsoft's investments will go on, and for Google GenAI is a natural fit, as it improves search and the assistant. Google also has long timelines.

Then there is the faulty assumption that GenAI won't improve, or that LLMs trained on internet text are all we will get. Architectures will get better.

So pessimism about the future ability of chatbots and excessive zeal for regulation are both misguided.

I respectfully disagree on the GenAI/LLM improvement (to AGI?) part. Time will tell.

I don't think chatbots alone will bring us to AGI. That will take many more iterations and architecture work. But chatbots can and will improve when it comes to solving reasonably simple and frequently encountered problems, while being able to check their work and run tools. Hallucination will go down too. Then we'll see what happens after that.

Regulation is written in blood.

These days it feels like billionaires can wade through lakes of real blood (thinking of self-driving cars here) without consequence. Linking the abstract algorithms that feed the social media monsters to political violence or suicide, and regulating them properly, seems impossible. It's very disheartening.

Aug 20 · Liked by Gary Marcus

And they are literally pouring millions right now into advertising falsehoods and astroturfing opposition to a highly popular bill (77% of Californians favor it).

Based on history, we won't see any real regulation until the political and financial elites are harmed. Take radium: they didn't regulate the stuff when dozens of watch-company workers died. It was only when a wealthy socialite named Eben Byers died that they finally cracked down on the use of radium.

Once someone hacks an autonomous car to crush a billionaire or two, we should see the law get really serious about regulating software systems.

Don't Look Up.

I look up. I see chatbots. Yawn.

This is all wild speculation. Tegmark is a known alarmist.

Hinton, Yoshua Bengio, and others have also chimed in. On the object level, all of Tegmark's points are completely true.

This is all very long-term speculation. Once the tech gets better, we'll see.

Speculation?

https://www.forbes.com/sites/emmawoollacott/2024/06/19/top-ai-chatbots-spread-russian-propaganda/

I can also show deception, self-awareness, reward hacking, etc.

Any advice for us here in England? The government has been speaking to a number of orgs about regulating AI. My charity aims to be involved from a disability perspective.

I think there are UK-based AI safety organizations; I may be able to connect you with them if you want to DM.

“Fairy Tales”

The AI Tale
Is too big to fail
Too big to nail
And too big to jail

Too big for facts
And too big to tax
We have to be lax
Lest China attacks

“Agog about a Gig”

A gig economy
Current biz
Gen AI, you see
All it is

And just a gig is no gig deal

“Too big to derail” too

Cuz then you’re only left with “The AGI G”

“The AGI Grail
Is too big to fail
….”

is better

Sorry, supposed to be “fAIry Tales”

“AIry Tales” also works

But what about China?

Frankly, they AI’n the place

Thank you for being a voice for what is good for the public as well as for ongoing innovation!

Don't you dare require us to be responsible. That would thwart innovation.

It would hurt AInovation

Definition of AInovation: scrape up more copyrighted data to train on

And when there is no more copyrighted (i.e., good) data to be had, feed the AI its own tail (MAID - Mad AI Disease)

I would like to see the people who weakened the bill, and their online allies, have their names remembered. Willingly endangering the public shouldn't be a consequence-free game, however much of a temporarily embarrassed millionaire you are.

This is commonplace when it comes to politics, whether in Sacramento or here in Washington, where I live. Effectively delaying comprehensive regulation not only protects the major players; it also gives them time to develop deeper relationships with legislators and their staffs via higher-level lobbying, which enables long-term strategies to establish regulations that continue to protect them while making it harder for smaller players to enter the market and thrive.

They, of course, have every right to do this. The problem is that most of the public will not be aware of what's going on NOW and will find out later, when something happens to them, their lives, their jobs, their industries, or their communities.
