13 Comments

Agree on not polarizing into camps that can't compromise. Compromise is essential for most great endeavors.

As a former regulator (of IT risk in finance), I'd note that keeping regulation targeted in scope, rather than over-broad, keeps it from swamping small companies that can't afford compliance costs and won't (and shouldn't!) be training novel frontier foundation models.

For now, the ability to train a frontier foundation model (aka afford the compute for training) is a key bottleneck/chokepoint. Bottlenecks like this are the best places to apply regulations in a targeted way to a smaller number of better-resourced entities.

And if compute costs fall precipitously in the future, it would probably be good to have regulation in place on frontier foundation models specifically, since it could help ensure frontier models aren't developed by tiny companies without sufficient controls.

Nov 28, 2023 · edited Nov 28, 2023 · Liked by Gary Marcus

Very strong economic arguments here, thank you for laying them out so clearly! I was previously a bit more ambivalent about this whole topic, but all of your points make 100% sense as presented. Could hardly agree more now.

This whole approach is very akin to the “shift left” approach in modern computer security: it is much easier (and cheaper!) to treat a problem at its source than to deal with the downstream consequences.


From The Guardian Techscape:

"And so, quickly: Dan Milmo’s rundown of what exactly happened last week [with OpenAI — GW] is worth a read if events sped past you too fast to be perceived. What conclusions should we draw? I think the most important one is that we already have an inhuman system that is more powerful than any individual human and fundamentally incapable of being prevented from carrying out its own goals, and it’s called capitalism."

And we can easily imagine what capitalism without regulation is capable of. Come on, even Adam Smith warned us about the fundamentally egoistic nature of entrepreneurs, and that it is important not to take anything they claim is 'best for all' (i.e., political suggestions regarding business and society) as necessarily 'best for all', but simply as 'best for their own profit'.


What we can do at this point is put in place laws that make firms accountable for consequences, and add some enforcement of those laws and rules. As we have learned from the cryptocurrency fiasco, governments follow the same rule for protecting the public that is used for putting up stop signs: wait until someone is killed at the corner (maybe even several people). It was well known that cryptocurrencies were mostly worthless (outside of a few) and that a lot of the transactions were for criminal enterprises, money laundering, and illegal arms sales. It is only now, years later, after billions in losses and massive fraud, that both FTX and Binance are being held to some level of accountability.

This is sad, of course, because the consequences of misuse of AI could be far deadlier, more widespread, and more expensive. But until we see the dead bodies, very little will be done, regardless of how many warnings Gary and other thinking people put forward. The same was true of cryptocurrencies, where many people pointed out the weaknesses and worthlessness of most of the coins.

Some review of existing consumer-protection and other laws that can be applied to AI products and services is needed. With those, active consumers and lawyers might be of some help. We will need billion-dollar cases, though. The Binance penalties are paltry, from what I understand from readers of the NYT.


Moloch is in the building.


I still have not seen a convincing argument for regulating LLMs separate from their applications. If I train a foundation model (or rent access to one) and then build an application (e.g., based on prompting or on supervised fine tuning for my task), it is the application that should be regulated. Even in your note, you refer to ChatGPT, but this is an APPLICATION of an LLM, not the LLM itself, and I totally agree that it should be regulated. The regulation should also take into account the feasibility of safety testing. I don't see how I can test an LLM for safety without knowing the specifics of the application. Indeed, as I have written elsewhere, narrow applications are much easier to test than broad-scope applications. And I don't see any practical way to test something like ChatGPT, which claims to cover all of written human knowledge.

Furthermore, when LLM-based systems are deployed in open worlds, it is impossible to anticipate all harms, and therefore, it is impossible to perform traditional reliability engineering with guarantees. Instead, engineers must build meta-cognitive layers that monitor performance and look for anomalies, near misses, and failures (harms), diagnose the underlying causes, modify the system appropriately, and perform regression testing. It is this entire structure that needs to be evaluated for safety, not just the LLM.

In short, the LLM is the wrong unit of analysis for regulation.
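To make that concrete, here is a minimal sketch (in Python) of what such a meta-cognitive layer around an LLM-based application might look like; the class names, the incident categories, and the `_looks_harmful` heuristic are all hypothetical placeholders, not anyone's actual implementation:

```python
import logging
from dataclasses import dataclass, field

@dataclass
class Incident:
    prompt: str
    response: str
    kind: str          # "anomaly", "near_miss", or "failure"
    diagnosis: str = ""

@dataclass
class MetaCognitiveMonitor:
    """Wraps an LLM-backed application with monitoring and regression checks."""
    incidents: list = field(default_factory=list)
    regression_suite: list = field(default_factory=list)

    def record(self, prompt: str, response: str, kind: str, diagnosis: str = "") -> None:
        # Log anomalies, near misses, and outright failures for later diagnosis.
        self.incidents.append(Incident(prompt, response, kind, diagnosis))
        logging.warning("Recorded %s for prompt: %s", kind, prompt[:80])
        # Every incident becomes a regression test for future releases.
        self.regression_suite.append(prompt)

    def run_regression(self, app_call) -> bool:
        # Re-run all previously observed incidents against the modified system.
        return all(not self._looks_harmful(app_call(p)) for p in self.regression_suite)

    @staticmethod
    def _looks_harmful(response: str) -> bool:
        # Placeholder heuristic; a real system would use task-specific safety checks.
        return "UNSAFE" in response
```

The point of the sketch is that incidents feed a growing regression suite, so what gets evaluated for safety is the whole monitor-diagnose-modify-retest loop, not the LLM in isolation.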


When you say "engineers must build meta-cognitive layers" for various purposes, do you mean engineers of the application or engineers building the LLM?


I think both the LLM and the application will require meta-cognitive layers. For example, the LLM will need the ability to detect when it is not competent to answer a query from the application. And the application will also need to handle out-of-distribution cases for aspects that do not rely on the LLM.
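A toy illustration of that split, assuming a hypothetical LLM wrapper that returns a confidence score alongside its answer (the threshold and all function names here are invented for illustration):

```python
from typing import Callable, Optional, Tuple

CONFIDENCE_THRESHOLD = 0.7  # illustrative value, not a recommendation

def llm_answer_with_competence(query: str,
                               llm: Callable[[str], Tuple[str, float]]) -> Optional[str]:
    """LLM-side meta-cognition: return None when the model judges itself incompetent."""
    answer, confidence = llm(query)
    return answer if confidence >= CONFIDENCE_THRESHOLD else None

def application_handle(query: str,
                       llm: Callable[[str], Tuple[str, float]],
                       in_distribution: Callable[[str], bool],
                       fallback: Callable[[str], str]) -> str:
    """Application-side meta-cognition: route out-of-distribution cases away from the LLM."""
    if not in_distribution(query):
        return fallback(query)          # handled without relying on the LLM at all
    answer = llm_answer_with_competence(query, llm)
    return answer if answer is not None else fallback(query)
```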


Current chatbots already do a lot more than generate text. There is a whole event loop, in which the LLM is just one of the cogs. That's the nice thing about the LLM: it need not be a black box. It plays nicely with external memory, retrieval, and external tools. A very extensible architecture.
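A stripped-down sketch of such an event loop; `call_llm`, the message format, and the tool dictionary are placeholders rather than any real chatbot's internals:

```python
def run_chat_turn(user_message, call_llm, tools, memory):
    """One turn of a chatbot event loop: the LLM is just one cog among
    external memory, retrieval, and other tools."""
    memory.append({"role": "user", "content": user_message})

    while True:
        reply = call_llm(memory)                 # reply assumed to be a dict
        if reply.get("tool") in tools:
            # The LLM asked for an external tool (retrieval, calculator, etc.).
            result = tools[reply["tool"]](reply.get("arguments", ""))
            memory.append({"role": "tool", "content": str(result)})
            continue                             # feed the result back and loop again
        memory.append({"role": "assistant", "content": reply.get("content", "")})
        return reply.get("content", "")
```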


Sensible regulation is good. Yes, current systems have a "lack of robustness, explainability and trustworthiness".

There's precious little in terms of good rules one can come up with, however.

At this stage, all one can likely do is ask companies to use due diligence in testing their systems, react quickly if there's a mishap, submit a plan for how they intend to ensure safety, etc.

Mandatory quality standards make no sense now. Nor do caps on compute, licensing requirements, mandatory disclosure of training data, etc.

This may change when systems become more advanced.


Gary, I agree with you; however, the arguments for AI safety are way too technical. Ordinary folks don’t understand the “language”, so it doesn’t connect. Furthermore, guys like Hinton have no credibility; he comes across as disingenuous, and he only cares about humanity “now” that his neural network nonsense has crashed and burned.

author

His student’s company was just valued at $86B, so the crash and burn is not universally appreciated … yet.


Europe must take its part of the responsibility for AI development control policy. Europe may be at the forefront, but it cannot stand alone for long; other states and blocs of states have to follow and support these efforts. European politicians are concerned with safety and societal impacts, but they are also worried about economic competitiveness and growth, about technological progress, and about geopolitical rank. They are afraid of regulating their R&D on AI’s foundation models because they don’t want to be left behind by other countries without such regulations. The EU’s Act should be accompanied by an American Act and an Asian Act. As I have already said, power is at stake with future advanced AI systems. The issues over AI development will soon become sources of conflict between states. There is a risk of a struggle for domination.
