Dear Fei-Fei,
The stakes are really high here, so please forgive this open letter.
We've known each other for a long time and I very much respect your work. But your Fortune letter opposing SB-1047 seems off the mark to me, in part because it doesn't fit my understanding of what SB-1047 actually calls for, and in part because it does too little by way of offering a legitimate alternative.
Here are some of my concerns:
• You claim that "SB-1047 will unduly punish developers and stifle innovation. In the event of misuse of an AI model, SB-1047 holds liable the party responsible and the original developer of that model" and in this connection that "It is impossible for each AI developer—particularly budding coders and entrepreneurs—to predict every possible use of their model." But SB-1047 does not require predicting every use.
Rather, it focuses on specific, serious "critical harms," such as mass casualties, weapons of mass destruction, large-scale cyberattacks, and AI models autonomously committing serious felonies. Those seem like reasonable things to guard against, and I don't understand what would justify an exemption there. Even then, developers are required only to implement "reasonable safeguards" against these severe risks, not to fully mitigate them. And much of what the bill would require is already something companies committed to voluntarily, in discussions at the White House and in Seoul. None of this is really conveyed in your Fortune essay.
• You argue that SB-1047 risks "stifling innovation," on the grounds that the bill could harm open-source AI development because of its "kill switch" requirements. But as I understand the latest version of the bill, the "kill switch" requirement does not apply to open-source models once they are out of the original developer's control.
• You claim that the bill will hurt academia and "little tech" and put others at a disadvantage relative to tech giants. But you don't make clear that most of the bill's requirements apply only to models with training runs costing $100 million or more. Companies that can afford such runs, presumably valued in the billions, are not exactly "little tech."
• You say that you favor AI governance, but you offer no positive, concrete suggestion for how to address risks such as mass casualties, weapons of mass destruction, large-scale cyberattacks, and AI models autonomously committing serious felonies. With no other serious proposal on offer, I personally favor SB-1047, though I would welcome discussion of alternatives.
Lastly, asking for standards is not unique to AI; it's common across many industries to require that companies evaluate the safety of their products against set standards: just look at pharmaceuticals, aviation, or automobiles. As Bengio, Russell, Hinton, and Lessig observed, "There are fewer regulations on AI systems that could pose catastrophic risks than on sandwich shops or hairdressers." Your letter doesn't really grapple with this.
I'm sure that your argument against SB-1047 was made in good faith, and with the best of intentions. But as noted above, there seem to be some inaccuracies in your essay, and I wonder if you would be willing to reconsider in light of these clarifications.
Sincerely,
Gary
Professor Emeritus, New York University
Founder and CEO, Geometric Intelligence (acquired by Uber)
Author, Taming Silicon Valley
"You claim that "SB-1047 will unduly punish developers and stifle innovation. In the event of misuse of an AI model, SB-1047 holds liable the party responsible and the original developer of that model" and in this connection that "It is impossible for each AI developer—particularly budding coders and entrepreneurs—to predict every possible use of their model."
We need to remember that there is no divine right to innovation at all costs. Innovation should always work within the surrounding social contract and laws. Even budding coders need to be careful with their code and, for example, follow security by design and by default.
In addition, there is far too much trivial "innovation" today that serves no purpose other than attracting funding, fame, and wealth.
It seems to be a favorite game of public figures: write a piece for mainstream media righteously arguing for the protection of a technology (or a policy, or whatever), gaslighting the reader with a twisted interpretation of a bill. Those who are good with words know that words can be twisted into a rope from which you can hang just about any issue.
Good on you, Gary, for keeping them on their toes.