12 Comments

Gary, great breakdown of the White House's AI executive order. It's encouraging to see the US government tackling AI's vast challenges. However, you're spot on about the need for clarity, especially around foundational models and what constitutes a "serious risk." Your FDA comparison is on point – we need more than just risk notifications. Also, keen to see how the UK AI summit stacks up after this. Here's hoping for more concrete steps in AI governance globally!

It is good that regulation is being taken seriously. I don't anticipate too many "teeth" in the final version; historically, regulation has been reactive, not proactive.

That's how it should be. That does not mean waiting until people die, but each individual rule must be carefully reasoned and, hopefully, based on some data.

That Musk has been able to get away with misleading claims about Tesla's self-driving capabilities for many years does not inspire much hope in the US regulatory process.

I compiled a list of all the deadlines for the various reports that are to be submitted as a result of the Executive Order. I thought it could be useful for people who want to track the aftermath of the order: https://valentinsocial.substack.com/p/bidens-ai-executive-order-all-the

I’ve read that there are a few passages in the EO that effectively make it apply to very large models (and companies) only, which is a good thing. But I wonder how, in any regulation that’s considered, the open-source market will stay protected, especially in light of https://www.businessinsider.com/andrew-ng-google-brain-big-tech-ai-risks-2023-10 . This is a more immediate concern for the (probably upcoming) semi-open models like Llama 3, which could possibly fall under this order while being in the best public and academic interest, unlike the fully proprietary models.

"Frontier" models are not like drugs designed for a specific use case, so (perhaps intentionally) they can't really be tested against any specific claims. I don't think there is much by way of 3rd party usage in terms of safety and regulation, other than implying "fine tuning" doesn't fall under compute restrictions or requires an additional level of red teaming. That certainly sounds like a liability loophole (basic model can't be tested on specific case, but tuning for specific task also won't require testing).

It looks like non-neural-network systems are considered a priori safe.

I wonder if any future regulations will address data privacy concerns wrt training data. Will there be incentives for companies to source the data ethically and legally?

One major problem: it is (apparently) all voluntary.

author

The part I referred to says "will require," but also hit refresh for an update from Missy Cummings.

red-teem // red-team

specific predicts // specific products

Wuth // With

the risk threshold, and // the risk threshold. And

author

Updated online, thanks! And see the online version for an update from Missy Cummings.

also rishexecutive // executive
