52 Comments

"You claim that "SB-1047 will unduly punish developers and stifle innovation. In the event of misuse of an AI model, SB-1047 holds liable the party responsible and the original developer of that model" and in this connection that "It is impossible for each AI developer—particularly budding coders and entrepreneurs—to predict every possible use of their model."

We need to remember that there is no divine right to innovation at all costs. Innovation should always work within the surrounding social contract and laws. Even budding coders need to be careful with their code and, to take one example, design with security by design and by default.

In addition, there is today far too much trivial "innovation" that serves no purpose other than attracting funding, fame, and wealth.


Every time somebody extols "innovation in the abstract" I remind them that the application of Zyklon B to the problem of mass murder checked every box (reduced cost, increased speed, fewer staff) for an innovative step. The principal use of "innovation in the abstract" today is to provide cover for concrete damage to individuals and society. It is that damage in the here and now that the drafters of SB-1047 are attempting to address. If you think they got something wrong, give us an argument based on specifics and spare us the fuzzy generalities.


Methinks "innovation" long ago, sadly, became just another (admittedly four-syllable and vaguely impressive-sounding) corporate cliché in the global business 🐂**** bible. (Speaking from experience here, having studied Design & Innovation for a Sustainable Future, O.U., 2008, with a dissertation on closed-loop systems in Danish industrial estates.) Spot the second current corporate cliché! Clue: it starts with "Sus...". Unfortunately, when these words lose their scientific or technical meaning, we see babbling, insane results and really shoddy research and design. I include software development and engineering here, particularly much contemporary A.I. Examples can be provided.


Maybe "innovation" is a corporate cliché.

I am more concerned with "regulation", which do-gooders use to prematurely control which research directions are worthwhile and how businesses must spend their money.

We are still in the exploration phase here, and chatbots are dumb as a brick. Regulation is premature.


I think that this regulation is not necessarily for that, but to get "innovators" to think more carefully about the way they develop and distribute their new shiny toys. The EU AIA is also mainly aimed at getting "innovators" to think more carefully.

In both cases, if you don't want the stringent regulations to affect you, there is a very simple answer: develop within the spirit and intent of the regulations and attempt to limit the damage caused by your "innovation". In other words, do not move fast and break things.


"Regulation" has almost become an international governmental abstract concept, has it not? Surely corporations have the upper hand in the game of cards, no? I.e., more wonga, and higher-paid, highly trained and qualified lobbyists and public-relations teams (Harvard, Yale, MIT, Oxford, Cambridge, et cetera). Not mixing with that esteemed milieu myself, my opinions are ultimately (a) lowly and (b) amateur. Respectfully, I am not incredibly (a) bright or (b) qualified.


Innovation is mostly a corporate cliché in the finance sector and the rest of the FIRE trio. Innovation at any price seems to be a mantra in gain-of-function research. Everyone knows what the price was. Unfortunately, more is likely to come.

Aug 10 · Liked by Gary Marcus

It seems to be a favorite game of public figures: write a piece for mainstream media righteously arguing for protection of a technology (or policy, or whatever), gaslighting the reader with a twisted interpretation of a bill. Those who are good with words know that words can be twisted into a rope you can hang just about any issue with.

Good on you Gary to keep them on their toes.


This “public figure” happens to be widely viewed as “the Godmother of AI”, not some random famous person.


There are no deities among humans.


You always get more engagement when you approach someone reasonably, and this piece does that. Of course it is not as viscerally satisfying as interrogating Fei-Fei Li with questions like: how much would you be against AI regulation if you didn't have a huge financial self-interest in AI being unregulated? Isn't it a big problem for intellectual honesty if you have a big conflict of interest because you hold equity in, or sit on the board of, the AI companies that would be regulated, and you fail to mention that giant conflict while you try to subvert reasonable regulation? EVERYBODY thinks their conflict is okay. Doctors think it is okay to get kickbacks from drug companies based on the volume of prescriptions, and pretend that personal interest never impacts what they prescribe; but PHARMA wouldn't do it if it didn't pay off. If people could magically subtract their self-interested bias from the things they advocate for, we wouldn't need rules around conflicts of interest, and the people WITH conflicts of interest would not fail to mention them. When they fail to declare a conflict, take everything they say with a grain of salt. They've got an agenda, and quite frequently they themselves are the beneficiaries of that agenda. https://www.theverge.com/2024/7/17/24200496/ai-fei-fei-li-world-labs-andreessen-horowitz-radical-ventures


Many thanks! I was not aware that Fei-Fei Li had an interest in a company financed by Andreessen Horowitz.

founding
Aug 10 · Liked by Gary Marcus

Can't we at least wait until it solves the goat boat problem?

author

Ha ha, but of course it can already cause harm despite failing at such nuance.

founding

But not too many chatbot suicides, right?

author

So far, one that I am aware of.

founding

Whew! One huge danger crossed off the list.

Aug 10 · Liked by Gary Marcus

Well put, Gary. The days of Silicon Valley's supposed "do no harm" righteousness are gone. The emperor is naked, and doesn't like it being noticed outside of its echo chamber.

Aug 10 · Liked by Gary Marcus

Speed limits stifle the economy: if trucks could go as fast as they wanted, we would be able to turn over inventory quicker and hold less in warehouses for just-in-time processing.

Income taxes stifle workers, especially single people with no children and high enough income to do innovative things. If there were no income tax, people would be able to do about 20 to 30% more economic activity and we could chuck trashbots in the bin.

Requiring people who give investment advice and who sell securities to get licensed, and then regulating their activity, stifles innovative sales techniques that would increase investment in everything. Imagine that: no fiduciary responsibility, no disclosure, just delightful grift to raise capital for every moonshot proposed by every unaccountable greenhorn developer who couldn't possibly care... er, PREDICT that their cobbled-together trashbot might not be great at aiming missiles and heart surgery... YET. But if they promise to do better next time (repeatedly) and pivot into building a terrible MMORPG for defi web3 larpers who don't mind floating with no legs, then I'm positive the world will forgive the "accidental" friendly fire that just happens to scare the general public into generating enough corporate profit to make a breakthrough! (Or fund a middle-aged man's dream of becoming the Karate Kid so he can totally whoop a 50-year-old in a cage match.)

We stifle "innovation" and "progress" in thousands of different ways, usually around activities that impersonate those things while actually generating profit through harming the public. Just look at the pro-slavery arguments before the Civil War: they're full of nonsense about stifling productivity and self-proclaimed declarations of "superiority" and "divine right" that totally turned out to be bullshit, to the surprise of nobody.

The last thing we need is another layer of bullshit between the C-suite and where the work happens, standing in front of productivity doing nothing except waving its arms, screaming "look at what I did", and stealing credit and salary from everyone. The grift has begun to back off from declaring imminent superintelligence and dog-whistled divine right to mere super-effectiveness (still dog-whistling), when the reality is that it is an error-prone calculator operating in new areas for which we don't have well-defined rules of calculation: it's a guess generator being touted as an übermensch while we watch it try to slam square pegs through round holes with a hammer. As always, the evangelists throw a shit fit when we say things like: maybe we should write a law that says "don't write a check with your mouth that your ass can't cash".

🤣 I could keep going, but there are regulations on speech that stifle all the innovative alternative solutions I have to regulation. We should probably let the lawmakers do the work. 🧠🤷‍♂️

Aug 10 · Liked by Gary Marcus

Indeed :-) "stifling" is a guard against recklessness. Not everything needs to be more "efficient".


Exactly, "move fast and break things" didn't pan out with a chatbot running fast food drive through window because of course it won't, and nobody needs the experiment run again OGAS style. Hubris and societal collapse also moves fast and breaks things, but never gets stifled by the "innovators" who triggered it because they usually hide and then die. Regulations prevent all of that at the expense of stifling the vices of the ignorant. 😅


"if trucks could go as fast as they want" ... have you seen what the human body looks like after it's been hit by a speeding tractor trailer?

I have.


I've seen much worse. It's good to know that you get my point that "stifling" some carelessly harmful activity is a good idea. Thanks Birgitte 😎


This is pretty funny when the advocates for little tech are Andreessen Horowitz. Not sure if this bill is what we need, but at least you've read it with a practiced eye.


I’ve often thought that a federal agency for AI safety, akin to the FDA, would be a great model for consumer protection in this case. It puts the burden of proof on the company that stands to benefit and keeps (or at least used to keep) the standard of proof focused on downside risks.

I’m not sure how effective a government agency, or any group, could possibly be at limiting distribution, though.


A follow-on point I’d like to make is that if AI companies were serious about AI safety, they wouldn’t hire engineers to evaluate safety but would instead hire experts in the fields of human safety (i.e., physicians, psychologists, etc.) who would advocate for society at large rather than solely evaluate the technology. This would be similar to the FDA, which has statisticians, physicians, and chemists all working to evaluate drug efficacy and safety.

-and again, love the blog!!! Keep it up!!!

Aug 10 · Liked by Gary Marcus

Gary, the link to the SB-1047 article is sending me in a loop back here. Would love to read the article by Fei-Fei Li to put your response in context.


Let's just hope your message reaches her.


This is local US politics, so I don't know all the details, but from the above I gather that the issue might be that in GenAI it is technically impossible to discriminate between your 'grave' risks and 'any' risk. The AI tech companies know that, without actual understanding, it is impossible to do this. Hence, they fear it will shut them down.

As such, the opposition to such regulations is a tacit acknowledgment of what many (who do understand what is going on, such as prominently yourself) have been telling the world in the first place: these are powerful but ultimately very dumb systems.

By the way, the blanket "this stifles innovation" argument is shallow. Sometimes the change is one we try to limit as much as we possibly can. E.g., what if the innovation is 'a better nerve gas'? Or the innovation is a wonderful new battery, but its use or construction is an ecological disaster? Or it is against norms and values, such as cloning humans? "This stifles innovation" often should be read as "this limits my profits". Regulations can be real innovation drivers.

Note, this doesn't solve the issue that if someone builds a weapon of mass destruction using Walmart tools, you can't hold Walmart responsible, unless such tools have no other legitimate use.

I am getting curious about this proposal, but alas, no time.


SB-1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (2023-2024)

This bill is silly. It is based on a fictional view of artificial intelligence and will do nothing to improve safety.

The bill seeks to regulate something (autonomous superintelligence) that does not exist and likely will not exist anytime in the foreseeable future. It is based on AI promoters’ hype and a fictionalized view of artificial intelligence as portrayed, for example, in the “Terminator” movies. It is not clear whether artificial intelligence models with the capabilities attributed to them in this bill will ever be possible or, if they are possible, whether they will present the projected risks.

The “Terminator” movies are entertaining, but they are not a forecast of the future. They, and the other fictionalized accounts of future AI running amok, are based on fundamental logical flaws and should not be the basis for legislation.

Contrary to Sec. 2(c), current models do not “have the potential to create novel threats.” They are language models, not autonomous thinking machines. They are word guessers: they do not reason, they do not plan, they have no autonomy. Autonomous intelligence, so-called artificial general intelligence, requires computational methods that have yet to be invented.

There is a great deal of hype from promoters of the current group of AI systems claiming that the systems are already at the level of high school students and that complete general intelligence will be achieved in the next several months. This prediction is groundless. There has been no comprehensive, let alone coherent, analysis comparing the performance of these machines with human intelligence. Current benchmarks are logically flawed and of dubious validity.

Today’s models are trained on massive amounts of text to predict the next word, given a context of preceding words. The longer the context, in general, the more fluent the models are at predicting the next word.

A model is a summarization function that represents a simplified prediction given an input. It consists of three sets of numbers: numbers representing the inputs, numbers representing the model (the relationships between the inputs and the outputs) and numbers representing the outputs. The model’s numbers are called “parameters” (Sec. 3. (m)).

Some researchers call these models “stochastic parrots.” They are parrots because they repeat what they have been fed and stochastic because there is some variability in the words that they produce.
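
To make the "word guesser" point concrete, here is a deliberately toy sketch in Python: a simple bigram counter, nothing remotely like a real transformer. The tiny corpus and the function name are invented purely for illustration.

```python
import random
from collections import defaultdict

# Toy "training" corpus; real models ingest trillions of words.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Record which words follow each word (a bigram table).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def guess_next(word):
    """Guess the next word: repetition of the training data, with some variability."""
    candidates = following.get(word)
    return random.choice(candidates) if candidates else None

# In this corpus "the" is followed only by "cat", "mat", and "fish"; the guesser
# can only ever re-emit one of those, chosen at random.
print([guess_next("the") for _ in range(5)])
```

Nothing in this sketch understands cats or mats; scaled up enormously, the same "predict the next word from seen patterns" recipe is what produces the fluency described in the next paragraph.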

Current language models produce very fluent language that is often similar to what a human might produce, but that does not mean that they have similar human-level understanding. The larger the language model (more computing capacity, more data), the more fluent its produced language. Fluency should not be confused with competence. When a current generation model appears to be reasoning, for example, it is repeating, with some variability, a language pattern in the training text, produced by a human who may have been reasoning.

Many AI researchers fail to recognize this fluency/competence distinction because it is in their interest not to. It is not consistent with the hype that they have been using to promote their work. It is much more exciting to claim that one is on the verge of a breakthrough in AI than to say that one has built a great word guesser.

Sec. 3(b) states: “‘Artificial intelligence’ means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.”

This is a definition of a thermostat. I think it means a system with any level of autonomy, rather than a system that varies its autonomy. The former is consistent with thermostats and every computer or electronic device ever built; the latter rules out all systems ever built. Systems do not vary their own autonomy, though they may choose to cooperate.
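
For what it's worth, here is a minimal sketch of a system that seems to satisfy that wording: a plain thermostat takes an input (a temperature reading) and, for an implicit objective (hold the setpoint), generates an output that influences a physical environment. The class and the numbers are invented for illustration, not taken from the bill.

```python
class Thermostat:
    """A trivial controller that nonetheless appears to meet Sec. 3(b)'s wording:
    it infers from the input it receives how to generate outputs that can
    influence a physical environment."""

    def __init__(self, setpoint_celsius=20.0):
        self.setpoint = setpoint_celsius  # the "implicit objective"

    def step(self, measured_celsius):
        # Decide, from the input, how to act on the environment.
        return "HEATER_ON" if measured_celsius < self.setpoint else "HEATER_OFF"

print(Thermostat().step(17.5))  # -> HEATER_ON
```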

Sec. 3(e) tries to put a dollar or computational threshold on the definition of a covered model. That is a fool’s errand. First, these numbers are consistent with the method used to train today’s models (massive amounts of data and massive amounts of computing), but today’s models do not present any of the risks with which this bill is concerned, and the numbers may not be relevant to future AI systems. Second, these variables may not even be calculable for quantum computers. I expect that the transition to quantum computing will come much sooner than the autonomous intelligence envisioned by this bill.

Critical harm (Sec. 3(g)) is worthy of regulatory prevention. Surely we would want to protect the public from critical harm caused by a model, but the section also tries to protect against harms “enabled by” models. The word “enabled” seems to include just about anything: a pocket calculator, a watch, or just about anything else might enable a harm. Primitive IBM computers enabled the Manhattan Project. If they had met the other criteria (for scale and cost) in this bill, those computers would have been prohibited.

The bill seeks to exclude from “Critical Harm” any “harms caused or enabled by information that a covered model outputs if the information is otherwise publicly accessible” (Sec. 3(g)(2)). That would exclude from the definition anything that could be produced by a current language model. As stochastic parrots, they can only produce text that follows (repeats with variation, or combines) the text on which they have been trained. That would also exclude any future models built on a similar architecture. They do not originate information; they parrot it.

Whether a computer can be designed that will be able to do its own research and create its own facts on its own initiative is, at this point, still speculation. In any case, such a computer would probably be part of some larger institution, and, like the computers used in the Manhattan Project, the degree to which it directly enabled such harm may be tenuous.

The bill would require a developer to determine (22603(c)(i)) “That a covered model does not pose an unreasonable risk of causing or enabling a critical harm,” or that derivatives of that model will not pose an unreasonable risk. But these are impossible tasks. Could the developers of the IBM computers have anticipated that they would be used in the Manhattan Project? Could anyone anticipate all of the uses of any invention?

In some ways, the silliest of the requirements in this bill is that for an AI “kill switch” (22603(a)(2)), which comes straight out of the “Terminator” movies. In the movies, a computer system designed to protect national security becomes large enough to suddenly become sentient. It is simultaneously smart enough to reinterpret its instructions and stupid enough to get that interpretation wrong. Once it has interpreted its purpose, it blocks attempts to shut it down. Regulations should be based on reasonably foreseeable facts, not on movie tropes.

As well-intentioned as this bill may be, it is a boogeyman regulation seeking to protect the public from computer systems that may never exist. It requires the establishment of a new bureaucracy, the Frontier Model Division, to receive reports that would be impossible for model developers to prepare adequately. It serves to exacerbate the current AI hype by certifying that these models are so powerful that government regulation is needed to protect the public from them. It would also serve to protect current large enterprises by adding a regulatory burden that smaller organizations may not be able to meet.


I have heard that before, that the bill makes open-source developers liable. That would indeed be of grave concern. But you say that this is not the case. Is there a definitive answer to this question?


Open-source developers need to be liable; otherwise, they have no accountability for their errors and omissions.

Aug 10 · Liked by Gary Marcus

I took Gary's point to be that the original developers of open-source software would not be liable for what other developers did with it, not that those subsequent developers would not be liable. Further clarification is needed.


FOSS is given away for free. It makes sense that it is the user who is accountable for using it.

Aug 10 · Liked by Gary Marcus

We still outlaw bomb-making, even if the bombs are given away for free with a warning to the user.


Here is a more interesting question. I agree that AI is a potentially dangerous technology. If one takes measures against FOSS AI, we are left with proprietary AI only, owned by Big Tech. Is that the right path towards safe AI? Do you want to live in a society in which only governments and multinational corporations are allowed to own AI?


I suspect everyone agrees that we generally want corporations and FOSS projects that make their wares available to the public to follow the same rules. However, we can also imagine that governments and their contractors will want to create AIs that don't follow those rules, to be used as weapons. Of course, countries can also agree on rules for weaponized AI, just as they do (or don't) for nuclear arms, land mines, and biological and chemical weapons.


"I suspect everyone agrees that we generally want to have corporations and FOSS that make their wares available to the public to follow the same rules."

I don't understand. Corporations typically do not make their software available to the public. FOSS is in the public domain; corporate software is proprietary.


Are you in favour of making gun manufacturers liable for killings committed with the guns they sell? (They are not currently.)


Not a fair comparison. US society has decided guns are ok. That wouldn’t be my choice but that’s an entirely different discussion. We have lots of laws against creating dangerous things. The fact that we don’t have laws that cover all dangerous things, guns for example, is neither here nor there.


But bombs are a fair comparison? Why is that?


I don't think that the bill distinguishes between for-profit and open-source development, but it does impose thresholds.

Quoting from the draft bill:

(e) (1) “Covered model” means either of the following:

(A) Before January 1, 2027, “covered model” means either of the following:

(i) An artificial intelligence model trained using a quantity of computing power greater than 10^26 integer or floating-point operations, the cost of which exceeds one hundred million dollars ($100,000,000) when calculated using the average market prices of cloud compute at the start of training as reasonably assessed by the developer.

(ii) An artificial intelligence model created by fine-tuning a covered model using a quantity of computing power equal to or greater than three times 10^25 integer or floating-point operations.

(B) (i) Except as provided in clause (ii), on and after January 1, 2027, “covered model” means any of the following:

(I) An artificial intelligence model trained using a quantity of computing power determined by the Frontier Model Division pursuant to Section 11547.6 of the Government Code, the cost of which exceeds one hundred million dollars ($100,000,000) when calculated using the average market price of cloud compute at the start of training as reasonably assessed by the developer.

(II) An artificial intelligence model created by fine-tuning a covered model using a quantity of computing power that exceeds a threshold determined by the Frontier Model Division.
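
To get a feel for those numbers, here is a rough back-of-the-envelope sketch. It leans on the common approximation that training a dense transformer costs roughly 6 × parameters × training tokens floating-point operations; that rule of thumb and the two hypothetical model sizes are assumptions for illustration, not anything stated in the bill, and the sketch ignores the separate $100,000,000 cost condition.

```python
# Rough sketch: would a hypothetical training run cross the 10^26 FLOP line?
# Assumes the common ~6 * params * tokens estimate for dense-transformer training.
COVERED_MODEL_FLOPS = 1e26  # compute threshold quoted above (before January 1, 2027)

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

runs = [
    ("hypothetical 7B-parameter model on 2T tokens", 7e9, 2e12),
    ("hypothetical 1.8T-parameter model on 10T tokens", 1.8e12, 1e13),
]

for name, params, tokens in runs:
    flops = training_flops(params, tokens)
    print(f"{name}: ~{flops:.1e} FLOPs -> covered by compute test: {flops > COVERED_MODEL_FLOPS}")
```

Under that approximation, today's widely deployed open models sit well below the line, and only the very largest frontier-scale runs would cross it, which appears to be the point of the threshold.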


This law is vastly premature. It will indeed serve no purpose other than to throw cold water on the industry.

I believe the tech will get better, rather than plateau. In another iteration or two, such a law may make sense.


Marcus needs to take this one step further: tell us who's writing the code to decide what an AI or LLM is capable of.

Answer: it's going to be unacceptable government employees to whom the legislature hands the power to regulate.

Driving AI and LLMs to TX and FL.

author

that’s not actually how the law is constructed, but you might want to read it


I read it. Just did again. From TFA: "The bill would require a developer, beginning January 1, 2028, to annually retain a third-party auditor to perform an independent audit of compliance with the requirements of the bill, as provided."


Clearly AI must be controlled by elected representatives. In fact, I think that anything which, and anybody who, uses the internet must abide by rules set up by elected reps.
