28 Comments
Nov 3, 2023 · edited Nov 3, 2023 · Liked by Gary Marcus

Interesting. I seriously doubt that those who are pushing for open-source AI would release any of their own research if they deemed it a breakthrough that gets them close to solving AGI. It's just a publicity gimmick, in my opinion.

At this point, I don't see how any government on earth can regulate research on AGI. I personally don't believe AGI can be solved by government research organizations, academia, or big AI corporations. Cracking AGI will require serious thinking outside the box, which is impossible for the mainstream. Only a Newton-like, maverick thinker can crack this nut. There are many private AGI researchers around the world. Good luck convincing them to share their work with others.

I suspect that whoever is smart enough to solve AGI will also be wise enough to keep it a secret for as long as possible. What they eventually decide to do with it is anyone's guess. We live in interesting times.

Nov 3, 2023 · edited Nov 3, 2023

A strong argument against releasing future versions of Llama 2, from the inventor of the artificial gene drive: https://twitter.com/kesvelt/status/1720440451059335520?t=iyTjB6Xp-LF4YGCR28Im5g&s=19

Can biology kill >100m? Yes: smallpox.

Can biology do worse? Yes: myxoma killed >90% of rabbits.

Could a biotech expert match this within 10y? Surprising if not.

Would sharing future model weights give everyone an amoral biotech-expert tutor? Yes.


Gary, we both tweeted out the presentation by Arvind Narayanan at Princeton on evaluating LLMs. He and many others see preserving open AI models as critical to fostering independent research into how they work, and to leveling the playing field between private actors and the public sphere. I'm curious how you'd respond to that argument, as I think it has force.


Yes, of course it is unthinkable that anyone, be it Yann or Sam, be the decider as to what is good, for you, for me, for any of us. Yann has clearly described why, in his opinion, the current path of LLMs cannot lead to superintelligence, and, combined with their lack of agency, he sees no dystopic danger over the horizon. I agree with Bengio that if there is a genuine danger, AI should be regulated. Non-LLM AI has already killed at least one person by driving an autonomous car, and it is not regulated. I am concerned by this, and by the nasty effects of so-called algorithmic governance, where AI, non-robust as it is, is already being used.

Nov 4, 2023 · edited Nov 4, 2023

I see how open-source models open up society to risks like "guy uses AI-powered bot farms to create immense fraud operation" or "Russia uses AI-powered bot farms to flood social media with far-left and far-right garbage in an effort to destabilize NATO", but the risks don't seem catastrophic. It seems possible that an everyday bad outcome from these open models might prompt society to do something that reduces catastrophic risk, which could be a net improvement. Still, I guess the default response of humanity will be to try to stop the exact kind of threat it is faced with, rather than the more general threat.


I still hold that regulatory capture and "false alignment" remain the greatest risks in AI, and open source AI offers a defense against both.

By false alignment, I mean... recall when OpenAI tried to handle DALL-E's bias issues by silently appending racial and gender tags to its prompts? That kind of thing. For non-open-source AI, there are way too many perverse incentives. I'd rather have an open-source AI whose biases are known than a closed-source AI that has deeper issues hidden beneath a layer of patches. (And as for an unbiased LLM: we know that's not happening.)
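
For readers who missed that episode, the reported mitigation amounted to something like the minimal sketch below. The tag list, trigger words, and always-append policy are illustrative guesses, not OpenAI's actual implementation:

```python
import random

# Hypothetical sketch of the reported DALL-E mitigation: silently appending
# demographic tags to prompts that mention a person. The tag list and the
# trigger words are illustrative guesses, not OpenAI's actual values.
DIVERSITY_TAGS = ["female", "male", "Black", "Asian", "Hispanic"]
PERSON_WORDS = {"person", "doctor", "nurse", "ceo", "engineer", "lawyer"}

def patch_prompt(prompt: str) -> str:
    """Append a random demographic tag if the prompt mentions a person."""
    if set(prompt.lower().split()) & PERSON_WORDS:
        return f"{prompt}, {random.choice(DIVERSITY_TAGS)}"
    return prompt

print(patch_prompt("a photo of a ceo at a desk"))
# e.g. "a photo of a ceo at a desk, female"
```

The point of the sketch: the underlying model is untouched, and its biases are merely papered over at the prompt layer, which is exactly the "layer of patches" problem.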

I will admit my biases. First, I don't expect the hopes for LLMs in biomedical research to pan out, whether positive or negative. I'll happily put mana on Manifold on this if anyone gives me a link; I'm itching to climb the tiers, and betting against AI capabilities has proven a winning strategy so far. Second, I think, for totally AI-unrelated reasons, that the threat of disinformation (no matter how it's produced) has been generally overstated, and that people are more resistant to disinformation than we give them credit for.


> why the offense/defense balance never shifts?

My knowledge of creating these AI models is, at best, rudimentary, but it all comes down to programming: what goes in influences what comes out. You could design AI models in such a way that, while individuals can abuse them, they can also be used defensively against that abuse, right?

It's similar to how there is a community of people who examine and experiment with virus and trojan code in order to understand it, and whose input is useful to people who want to defend against hostile actors using such code.
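
As a minimal sketch of that defensive symmetry, assuming the Hugging Face transformers library and one publicly available toxicity model ("unitary/toxic-bert" is just one example; the function name and threshold are illustrative choices, not a recommendation):

```python
from transformers import pipeline

# Defensive use of the open ecosystem: a small, publicly available
# classifier screens text for hostility, mirroring how malware analysts
# study attack code in order to defend against it.
detector = pipeline("text-classification", model="unitary/toxic-bert")

def looks_hostile(text: str, threshold: float = 0.8) -> bool:
    """Flag text that the classifier scores as toxic above the threshold."""
    result = detector(text)[0]
    return result["label"] == "toxic" and result["score"] >= threshold

print(looks_hostile("Have a wonderful day!"))  # expected: False
```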

So in that case I think there will always be some kind of balance. Am I off track?


Speaking as an AI Engineer:

Open source has its pros and cons.

It is best not to let the law interfere with the freedom of science.

Let consumers in a free market decide whether they want to depend on open source or not.

The marketplace of ideas, when unrestricted, has done more good than evil. Whenever the economy is overregulated, innovation gets stifled.

Most importantly, I respect those who may not hold the same views as me. True science exists in debate; true science exists when we do not trust the science. Hence, my earnest request is that we all develop digital literacy in order to be well-informed customers, instead of falling for clickbait psyops.


As someone with an actual background in biological science:

While bioweapons are by far the most dangerous sort of weapon, your notion of how easy it is to make a bioweapon is very ill-founded. We presently do not have this kind of knowledge, and an AI is not capable of generating this sort of knowledge.

Solving some fundamental problems in biology might make this possible; however, the same knowledge is necessary for developing actually useful things, so you can't restrict it without greatly increasing the risk from pathogens.

And frankly, if you actually believe in this stuff, the correct take is not "try to suppress technology." The tech IS coming; given how many people are making AI models at this point, stopping it is literally impossible. So if you *actually* believe this is a threat, the only consistent response is to advocate for the eradication of evil people on a global scale, because you won't be able to stop the technology.


Gary, I think it would be great to address how we could balance the risks of open-sourcing AI against the risks of creating a literal monopoly on (this kind of) AI, which is what seems to be happening with this regulation and will likely conflict with antitrust law. It may well be that the risks of AI proliferation outweigh the risks of monopoly, but I'm wary of things done for the "greater good" - that rarely seems to lead to good outcomes.


I do find it odd that Yann LeCun is the voice of reason, and surprisingly cogent in argument with others with deep technical knowledge. However, he's hopeless in debate, being carried by the amazing Melanie Mitchell.
