24 Comments

If this isn't corporate capture, big tech is spending a whole heap of Euros just to keep those lobbyists in beer and moules-frites. https://www.euronews.com/my-europe/2023/09/11/tech-companies-spend-more-than-100-million-a-year-on-eu-digital-lobbying.

Always essential reading, Gary, thank you.

The regulations are overreaching, so this is good. People are coming back to their senses. I think you need to update your definition of “regulatory capture”.

Kosi, so put ethics aside. Do you have a technical solution to model collapse for when the information environment you rely on for training inevitably degrades? You're stuck in an arms race against people whose incentive is to fool your curators, so curation will keep getting more expensive and less successful, making your job no longer fun. What's your proposal, if you consider an information equivalent of the Environmental Protection Agency to be overreach?

Politicians deal with many real problems. I assume they weigh the worst things people have made with the models they could regulate against bad things in other domains. There were some Midjourney-generated pictures of Trump in prison, and a lot of students cheated on their homework using ChatGPT. Some people might have lost jobs, but the impact on the job market may even be positive if it makes the EU economy more competitive. Compared even to cryptocurrencies, that isn't scary.

I'm really not sure it's a crisis that the AI Act might not include the stuff it never really needed. I've long suspected the release of an unready-for-prime-time ChatGPT was regulatory interference. I mean, I may be missing something, but this is what I posted on LinkedIn:

If you're still on Twitter, here's a decent entry point into some discussion of the latest drama regarding the AI Act. (If you're not, summary below.) https://lnkd.in/eCisf-7Y

tl;dr: I don't think we need that much specific handling of "foundation models," but I do think we should finish the AI Act in 2023, or at least the bulk of it. If necessary, we could maybe try the trick the DMA attempted: split out the harder-to-pass parts into their own act. Here, that would be the generative AI (e.g. LLM) parts, which I'm not sure are needed anyway.

I do think we should maybe update existing copyright legislation to handle generative AI, just as we did with the liability act, but that doesn't need to be in the AI Act (just as we don't need to legislate hiring or sustainability there; those are their own special problems with or without AI). Cf. my commentary with Meeri Haataja from earlier this year on the drafts going into the trilogue: https://lnkd.in/ek6yEB7v

I can see both sides of the argument. There are those who say we need even more regulation now, and they may be right for the right applications (medicine, finance, autonomous vehicles, etc.). But then there are those who say that if we allow only the big companies like OpenAI to "pass" whatever bars are set by the regulation they helped create (such as the limits on model size in the recent EO), then we have effectively made a monopoly (and thus the proverbial "regulatory capture"), and possibly stifled true innovation (which probably won't come from model size, but still). I think we need a balance between these two sides of the equation, both for short- and long-term risks. It's a very tricky subject, and far from obvious how to do it right, yet a lot of the current conversations on the topic seem to gravitate toward one end or the other, with very few discussions where everyone has a voice.

Governmental AI regulation? What, with the help of the likes of Musk offering his childish musings to Rishi Sunak, or Kamala Harris being appointed AI czar? This is a joke. Yes?

Hey, Gary, what do you think of Biden's EO?

Thanks for calling this out.

France and Germany fear that with heavy regulation on their side, European AI R&D will fall behind its Chinese and American counterparts. They want to keep up the pace in order to share in the potential benefits and maintain geopolitical equilibrium. This dangerous race can only be stopped if the regulations are global. An agreement on regulating AI system development should be treated at the same level as global agreements on weapons or trade, and should be handled by an organization comparable to the WTO.

How is “not regulating foundation models” regulatory capture? Isn't this a good thing, especially for players that aren't as big as Google and OpenAI?

Per usual, the EU will again end up on the wrong side of history.

Long-term, we need regulation. For now, chatbots and art generators are mostly just clever versions of Google search and Photoshop.

Some bad content will be created, yes, but it is impractical to police that.
