71 Comments

"...Section 230 should not protect platforms from misinformation that their own tools generate. Regulations that spur greater accuracy might actually spur greater innovation."

As always, the question becomes: Who decides what is accurate and what is misinformation? The regulators? What shields them from industry capture, or from pursuing their own perverse incentives and political motives? Who watches the watchmen?

Also, since next-token predictions are based on training sources, will the regulators be picking and choosing those sources? If so, how transparent would that process be? What if something is considered misinformation one day and found to be accurate the next (e.g., Hunter Biden's laptop)? What would the retraining process look like? And suppose there isn't always a clear line between facts and lies (spoiler: there isn't)?

If the ultimate goal is to make GPT and its ilk abandonware, maybe such censorship routines would do the trick, but at the cost of rigging up yet another powerful and unaccountable bureaucracy: a Ministry of Truth by any other name, one that would assuredly follow the same pattern of cancerous growth and mission creep as the rest.

---

Point #1: The conventional wisdom these days seems to be that AI regulation will stifle innovation, and that it should be avoided at all costs.

AI is yet another 55-gallon drum of jet fuel about to be poured on an already overheated knowledge explosion. Nobody in AI land seems capable of looking past the details of one particular technology to the larger picture in which all emerging technologies reside.

The "more is better" conventional wisdom you refer to is at least a century out of date. The technical brilliance of these innovators is obscuring the fact that they are backward looking philosophical children.

Simple common sense, available to high school kids, should be sufficient to inform us that if we insist on pushing the knowledge explosion forward faster and faster without limit, it's only a matter of time until society at large can no longer successfully adapt to the changes.

Nuclear weapons have been my generation's inexcusable crime. AI will be yours.

---

A very nice collection of points.

---

Great post! Another example of regulation that made things better for both us and the industry: the airline industry (in terms of safety, not pricing).

---

I understand the need for governmental regulation, but I have no fear that the regulation of LLMs in China or elsewhere might lead to any breakthrough that gets us closer to AGI. In fact, I believe that the LLM phenomenon is a serious detriment to AGI research because it is sucking up all the attention and resources that should be applied elsewhere. LLM and deep learning practitioners have never been on the highway to AGI, and those who believe they took an on-ramp are deluded.

This does not mean that LLM funding should cease. It is a very interesting and valuable technology, but there is nothing intelligent about statistics. Humans are intelligent but lousy at estimating probabilities, something that lottery and casino operators bank on. You, Marcus, favor a neuro-symbolic approach, and this is fine and should be supported. There are other promising approaches as well. In my estimation, only a tenth of the current worldwide investment in AGI research should go to LLM/DL research.

---

The problem is the same as in the cancel culture discussion. When should an utterance be held to a standard, and who gets to decide the standard? Without defining "truth," one opens up everything to political propaganda. For example, if we say ChatGPT must be anti-racist, much as we are against racism, how is that different from China? It feels as if you are calling for AI censorship too. Are you?

Given that the training set involves all accessible human utterances, it seems likely that ChatGPT will make all the mistakes humans do. I can't see how that is ever fixed in humans, or in AI.

That would lead me to surmise that the real use of AI is not for facts or opinions but as an aid to human endeavors, and that we should train humans how to use it, trusting that they can and will.

It wrote some excellent SQL code for me last week, twice, each time solving a problem that would have taken me a long time to fix. It also did the same with an Excel macro.
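For a sense of the kind of task involved, here is a hypothetical reconstruction of that sort of query (the orders table and its columns are invented for illustration, not the commenter's actual problem):

-- Hypothetical example: keep only the most recent order per customer,
-- the kind of window-function query that is fiddly to write by hand.
SELECT customer_id, order_id, order_date
FROM (
  SELECT customer_id, order_id, order_date,
         ROW_NUMBER() OVER (PARTITION BY customer_id
                            ORDER BY order_date DESC) AS rn
  FROM orders
) ranked
WHERE rn = 1;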

If the real fear is that humans will misuse it, then of course that is right, just as they misuse every tool. But banning tools, or seeking to regulate them for all of us, seems the wrong step compared with education and software controls (like anti-spam software or fraud-detection software).

As for China, I suspect their government is not a good role model for regulation.

PS: I just re-read this and it is a bit more argumentative than I intended. Just trying to discuss real issues.

---

Interesting twist on possible unintended consequences. But I also doubt that the Chinese government would support an AI that tells the truth; that is a far worse risk for them than insulting the party leaders.

---

I'm with Alan Turing on this. In his 1950 paper he said he believed it would be more efficient to design machines that could learn rather than programming them explicitly with every detail. Symbolic AI can never compete with deep learning on a topic like NLP: to do so, you would have to hand-code the major cases covered by the trillions of multi-dimensional connections in a neural net, which is far beyond human capability.

We may think the brain works by rules, but I'm personally convinced that's just a high-level conscious artefact we humans experience, and the majority of the underlying brain "processing" is, in fact, more similar to the stochastic inference carried out by LLMs except using washes of chemical neurotransmitters rather than electrons.

Having said that, I don't think we need to mimic the human brain exactly to create intelligence, any more than we copied the way birds fly to build an aeroplane. There are obviously creative shortcuts around what evolution took hundreds of millions of years to create. Ironically, using human intelligence to design AI now feels like a form of "creationism", except with us acting as "god" :)

We may be a few iterations away from perfecting LLM designs to reduce hallucinations, but it's only been since 2017 that LLMs became a real AI thing. At the moment, LLMs also feel a little too 'brute force' and energy-intensive, but I think this too is going to change rapidly as the research around algorithms evolves. And who knows what new capabilities advances like quantum computing may bring to the field of AI in the not-too-distant future?

With respect to China:

At some point, I think each of the major G7 countries will have its own AGI version of the "Manhattan Project." You could say China already has its own, with the Party saying it wants to be the "AI world leader" by 2030. Which is a terrifying thought.

However, I'm not sure that China can get to AGI first, because it has suffered huge brain drains over the decades, meaning many of its brightest and their children have left to work in the US (mostly). The requirement that its AI toe the Party line may also cause it long-term problems. What if a China-made AGI turns around and tells the Party that its method of governing is immoral and unethical?! I'm sure some Party members are sweating over this issue as we speak.

Regulation is no bad thing, as you say, Gary, but it must be appropriate to the technology. Restricting something that already helps millions of people do their work more productively would be a fool's errand in my mind, especially when the West is so in need of productivity gains to help ease inflation.

---

Real AGI has a much higher chance of occurring in China, due to the centralization of funding and attention to regulation. And it's not as if we don't already know that. "We see three at the absolute forefront," Brad Smith (of Microsoft) said in an interview in Tokyo with Nikkei Asia: one is OpenAI with Microsoft, the second is Google, and "the third is the Beijing Academy of Artificial Intelligence."

If you think OpenAI has a stranglehold on AGI, you really don't understand how first movers work in technology: they are usually just catalysts, not eventual winners.

---

Ethical AI is an oxymoron. As you observed, so far AI alignment is just the regurgitated politics of its developers. The Chinese might impose their political point of view on AI, but it will be to their detriment. We will get the same thing if we ask the Federal government to regulate AI. Consider the current state of regulation based on our Climate religion.

Truthfulness is not the issue. The real issue is that AI doesn't know when it is lying. What's more, opinion must be separated from facts. Our media has blurred this distinction to the point where we tend to lose sight of that important difference.

Are we going to abandon "opinion"?

ChatGPT and its LLM relations are a bright, shiny, dangerous toy. They seem to have value. But will they fail simply because they can't be trusted? Would you buy a machine tool that produced parts you had to mic (check with a micrometer) all the time? Precision and accuracy are not accidental.

More frightening is the idea that our culture no longer demands truthfulness. In which case our AI is going to toss us into the cesspool of the Internet.

---

I for one am not falling for this "Make Cold Wars Great Again" line, as if a generation of tech geeks feel they missed out on the Manhattan Project. This is just recycling Kurzweil's singularity religion. As if the US never won the space race after Sputnik's first mover advantage. As if the Web should have been classified as a munition of national security concern.

We're currently experiencing a full-blown Groundhog Day for all the novices who have never heard of the Eliza effect. Not only is AGI very far off, humans don't even understand basic human intelligence and consciousness in the first place, let alone are we able to "simulate" it. "If only our LLM had 10,000x the corpus, it would thus become artificial life that will smite our enemies!"

So before we rally the masses to bloodthirsty defenses of digital national borders with our AGI golems, maybe we should first pay attention to the camel's nose already peeking under the tent with TikTok and the like.

---

At the moment, China's LLMs still can't match OpenAI's ChatGPT (GPT-4); we acknowledge the gap. That said, I personally feel the GPT-4 experience is already not very friendly, in terms of both speed and cost of use. Also, if you have translation software like DeepL, you can try reading this article: https://mp.weixin.qq.com/s/VE-ea6RH7-Wwla0U5ssgVw

---

So just as countries plan to roll out their own CBDCs, now there's a new movement for each country to create its own sovereign ChatGPT-style AI.

For countries trying to navigate corporations that are nearly more powerful than they are, this will be the solution. China and the UK are already on board.

---

I keep seeing a conflation of regulation with law. "My best candidate? Placing very high standards around truth, with strong penalties for LLM-induced defamation and the wholesale spread of harmful misinformation." Universal laws against misrepresentation, defamation, fraud, and so on can be applied to AI. No need for regulations.

---

It may be equally likely that a heavy regulatory approach that mandates certain technologies ends up as a kind of Lysenkoism at worst, or a Fifth Generation Computer Systems project at best. It would be interesting to consult someone with knowledge of Chinese regulatory regimes on what approach they might take. Maybe Ian Bremmer would be a good person to talk to?

---

The technocrats in the PRC have, in my view, forfeited credibility post-Covid. Their theatrical handling of the pandemic was constrained by no scientific information. Videos of workers spraying lye on surfaces destroyed my belief that the state can update its actions to reflect a reality contrary to Xi's edicts.
