24 Comments

If this isn't corporate capture, big tech is spending a whole heap of euros just to keep those lobbyists in beer and moules-frites: https://www.euronews.com/my-europe/2023/09/11/tech-companies-spend-more-than-100-million-a-year-on-eu-digital-lobbying

Always essential reading, Gary, thank you.

But in this case it's not 'big tech' but the supposedly responsible European alternatives that are stalling the regulatory process, no?

The regulations are overreaching, so this is good. People are coming back to their senses. I think you need to update your definition of “regulatory capture”.

You can’t have “regulatory capture” without regulation.

Kosi, so put ethics aside. Do you have a technical solution to model collapse for when the information environment you rely on for training inevitably degrades? You're stuck in an arms race against people whose incentive is to fool your curators, so curation will keep getting more expensive and less successful, making your job no longer fun. What's your proposal, if you consider an information equivalent of the environmental protection agency to be overreach?

Model collapse happens only if almost all the data put into the model is artificial to start with, and the model is then fed ever more degraded outputs of itself, many times over.

Sufficiently diverse input images, including a healthy proportion of real images, will result in robust representations. We will also never have art-gen work dominate. It is a niche industry, useful for illustrations, etc.
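
Since the mechanism is easy to misread, here is a minimal, purely illustrative sketch of that claim: recursively training a toy "model" (an empirical distribution) on its own samples steadily loses the tail of the distribution, while mixing in even a modest share of real data largely preserves it. Every parameter here (vocabulary size, samples per generation, the Zipf-shaped ground truth) is a made-up assumption for illustration, not anyone's actual training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = 1_000      # distinct "facts" a model could learn (hypothetical)
SAMPLES = 1_000    # training examples per generation (hypothetical)

# Ground truth: a long-tailed, Zipf-shaped distribution over facts.
truth = 1.0 / np.arange(1, VOCAB + 1)
truth /= truth.sum()

def next_generation(dist, real_fraction):
    """Train the next 'model' (an empirical distribution) on a mix of
    the previous model's own samples and fresh real data."""
    n_real = int(SAMPLES * real_fraction)
    draws = np.concatenate([
        rng.choice(VOCAB, size=SAMPLES - n_real, p=dist),  # model output
        rng.choice(VOCAB, size=n_real, p=truth),           # real data
    ])
    counts = np.bincount(draws, minlength=VOCAB)
    return counts / counts.sum()

for real_fraction in (0.0, 0.2):
    dist = truth
    for _ in range(50):
        dist = next_generation(dist, real_fraction)
    survivors = int((dist > 0).sum())
    print(f"real data {real_fraction:.0%}: {survivors}/{VOCAB} facts "
          f"still represented after 50 generations")
```

Run as-is, the 0% line loses a large chunk of the rare "facts" within a few generations (once a category draws zero samples it can never come back), while the 20% line keeps reseeding the tail, which matches the point that a healthy proportion of real data is what prevents collapse.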

Politicians deal with many real problems. I assume they weigh the worst things people have made with the models they could regulate against bad things in other domains. There were some pictures of Trump in prison generated with Midjourney, and a lot of students cheated on their homework using ChatGPT. Some people might have lost jobs, but the impact on the job market may be good, making the EU economy more competitive. Compared even to cryptocurrencies, that isn't scary.

I'm really not sure it's a crisis that the AI Act might not include the stuff it never really needed. I've long suspected the release of an unready-for-prime-time ChatGPT was regulatory interference. I mean, I may be missing something, but this is what I posted on LinkedIn:

If you're still on twitter, here's a decent entry point into some discussion of the latest drama wrt the AI Act. (If you're not, summary below) https://lnkd.in/eCisf-7Y

tl;dr I don't think we need that much specific handling of "foundation models", but I do think we should finish the AI Act in 2023, or at least the bulk of it. If necessary, we could maybe try the trick the DMA attempted: split out the harder-to-pass parts into their own act. Here, maybe the generative AI (e.g. LLM) parts, which I'm not sure are needed anyway.

I do think we should maybe update existing copyright legislation to handle generative AI, just as we did the liability act, but that doesn't need to be in the AI Act (just as we don't need to legislate hiring or sustainability there; those are their own special problems with or without AI). Cf. my commentary with Meeri Haataja from earlier this year on the drafts going into the trilogue: https://lnkd.in/ek6yEB7v

I can see both sides of the argument. There are those who say we need even more regulation now, and they may be right for the right applications (medicine, finance, autonomous vehicles, etc.). But then there are those who say that if we allow only big companies like OpenAI to "pass" whatever bars are set by the regulation they helped create (such as the limits on model size in the recent EO), then we have effectively made a monopoly (and thus the proverbial "regulatory capture"), and possibly stifled true innovation (which probably won't come from model size, but still). I think we need a balance between these two sides of the equation, for both short- and long-term risks. It's a very tricky subject, far from obvious how to do right, and a lot of the current conversations on the topic seem to gravitate toward one end or the other, with very few discussions where everyone has a voice.

Governmental AI regulation? What, with the help of the likes of Musk offering his childish musings to Rishi Sunak, or Kamala Harris being appointed AI czar? This is a joke, yes?

Hey, Gary, what do you think of Biden's EO?

Thanks for calling this out

France and Germany fear that with heavy regulation on their side, European AI R&D will fall behind its Chinese and American counterparts. They want to keep up the pace in order to share in the potential benefits and maintain geopolitical equilibrium. This dangerous race can only be stopped if the regulations are global. Agreement on regulating the development of AI systems should be considered at the same level as global agreements on weapons or trade, and should be handled by an organization comparable to the WTO.

How is “not regulating foundation models” regulatory capture? Isn't this a good thing, especially for players that aren't as big as Google and OpenAI?

[Author] This is literally making the rules that Europe's largest AI players are insisting on, at their request: dropping a section they find objectionable.

Perhaps I need to update my definition of regulatory capture, but I didn't consider a lack of regulation to be beneficial to a particular corporate party.

Regardless, we don't typically regulate general-purpose tools (i.e., computers or programming languages) to prevent bad things, and I'd consider foundation models to be general-purpose tools. My fear is that regulating foundation models might unnecessarily restrict smaller players and serve only to line the pockets of specific players already in the game. Forming regulation around things like explainability, especially in high-risk areas like medicine, makes more sense to me. I also don't quite understand how you'd even regulate foundation models effectively: parameter counts or compute limits seem very superficial and ineffective, because a smaller model can reasonably do significant damage.

Per usual, the EU will once again end up on the wrong side of history.

Long-term, we need regulation. For now, chatbots and art-gen are just cleverer versions of Google Search and Photoshop.

Some bad content will be created, yes, but it is impractical to police that.

How can we regulate models that do not yet exist, for a world whose shape we cannot know? The funny thing about technology is that when we speculate, we most often predict the wrong use cases and, of course, the wrong harms. This is a case of “regulation looking for a problem”. As for the information environment, it comes down to distribution and distillation, i.e. most of the burden will fall on the social media companies, and will therefore require a “technical” solution.

That is why I wrote "long-term". The industry is moving fast, and who knows what the next innovations will be. We need to be watchful, but regulation should cover only what we have for now.

You guys are long on objections but short on solutions. "Will require a technical solution" is conveniently written with no agent, like "mistakes were made". What solution? What regulation? Regulatory capture means whittling down regulation to give the appearance of protecting the public interest while in fact only protecting incumbents from competition. That is precisely what you are arguing in favor of right now. If you truly expect any of these problems to get solved long-term, then you're going to need to make current practitioners a bit unhappy. That's what long-term means: you don't always get to do whatever you want today.

I am saying: please explain the problem. And please explain the solution. Regulation must exist for a reason.

Yeah, I agree with that.

So, on the problem. You mentioned AI-generated content not dominating, but there are two problems with that. One is that the economics are against it: it's much cheaper to generate low-quality content than high-quality content, and there are significant business opportunities in ultra-low-quality content, so it will keep growing until opposed. The second problem is that low-quality content doesn't need to dominate to ruin the training environment. Hallucinations won't converge to zero (which is what trusting AI content would require) if there's nontrivial AI-generated content in the training data. Not dominating isn't a path to trustworthiness, and the technology will need to become trustworthy or see its value collapse.
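
To make the "won't converge to zero" point concrete, here is a toy recurrence of my own devising (a purely illustrative assumption, not an established result): each generation learns from a mix of human data with error rate e_human and the previous generation's output, and learning from synthetic output adds a small extra error delta. The error settles at a nonzero floor whenever the synthetic share is nonzero.

```python
# Toy recurrence, purely illustrative:
#   e[t+1] = (1 - s) * e_human + s * (e[t] + delta)
# where s is the synthetic share of the training mix and delta is the
# extra error introduced by learning from model output. All numbers
# below are made up for illustration.
def error_floor(e_human: float, delta: float, s: float,
                generations: int = 1_000) -> float:
    e = e_human
    for _ in range(generations):
        e = (1 - s) * e_human + s * (e + delta)
    return e

# The fixed point works out to e_human + delta * s / (1 - s): the extra
# error vanishes only when s = 0, and the floor grows with s.
for s in (0.0, 0.1, 0.3, 0.5):
    print(f"synthetic share {s:.0%}: error settles near "
          f"{error_floor(0.01, 0.02, s):.4f}")
```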

On the solution, look to environmental law as a model, in terms of pollution and hazardous-materials handling, and to genetically modified organisms as an analog for AI in the wild. All those laws grew in reaction to stunts that people actually pulled, and they have the same character of balancing the interests of the commons against the interests of business, with much more history behind them. For example, revisit Section 230 and, further, hold whoever hosts a model accountable for its speech.

The internet has a lot of good stuff, and it is also awash in spam, lies, propaganda, porn, etc. You have to be selective about your sources. Good content (human- or machine-created) tends to bubble up.
