71 Comments

"...Section 230 should not protect platforms from misinformation that their own tools generate. Regulations that spur greater accuracy might actually spur greater innovation."

As always, the question becomes: Who decides what is accurate and what is misinformation? The regulators? What shields them from industry capture, or from pursuing their own perverse incentives and political motives? Who watches the watchmen?

Also, since next-token predictions are based on training sources, will the regulators be picking and choosing those sources? If so, how transparent would that process be? What if something is considered misinformation one day and found to be accurate the next (e.g., Hunter Biden's laptop)? What would the retraining process look like? Suppose there isn't always a clear line between facts and lies (spoiler: there isn't)?

If the ultimate goal is to make GPT and its ilk abandonware, maybe such censorship routines would do the trick, but at the cost of rigging up yet another powerful and unaccountable bureaucracy. A Ministry of Truth by any other name, one that would assuredly follow the same pattern of cancerous growth and mission creep as the rest.


Point #1: The conventional wisdom these days seems to be that AI regulation will stifle innovation, and that it should be avoided at all costs.

AI is yet another 55-gallon drum of jet fuel about to be poured on an already overheated knowledge explosion. Nobody in AI land seems capable of looking past the details of one particular technology to the larger picture in which all emerging technologies reside.

The "more is better" conventional wisdom you refer to is at least a century out of date. The technical brilliance of these innovators is obscuring the fact that they are backward-looking philosophical children.

Simple common sense, available to high school kids, should be sufficient to inform us that if we insist on pushing the knowledge explosion forward faster and faster without limit, it's only a matter of time until society at large won't be able to successfully adapt to the changes.

Nuclear weapons have been my generation's inexcusable crime. AI will be yours.


Well said, I agree. The same group of people seem to be involved in both.


A very nice collection of points.


Great post! Another example of regulation that made things better for us and for the industry: the airline industry (in terms of safety, not pricing).


I understand the need for governmental regulation but I have no fear that the regulation of LLMs in China or elsewhere might lead to any breakthrough that might get us closer to AGI. In fact, I believe that the LLM phenomenon is a serious detriment to AGI research because it is sucking all the attention and resources that should be applied elsewhere. LLMs and deep learning practitioners have never been on the highway to AGI and those who believe that they took an offramp are deluded.

This does not mean that LLM funding should cease. It is a very interesting and valuable technology but there is nothing intelligent about statistics. Humans are intelligent but lousy at estimating probabilities, something that lottery and casino operators bank on. You, Marcus, favor a neuro-symbolic approach and this is fine and should be supported. There are other promising approaches. Only 1/10th of the current worldwide investment in AGI research should go to LLM/DL research in my estimation.


LLMs are doing great and their greatest asset over AGI is that they exist and perform, and they're improving every year. LLM bashing doesn't make sense.


Lol. No. LLMs are the future. All AI work should be on LLMs. Artificial intelligence is a field of broken promises. Some so-called experts have never built the product, only pulled it down. Gary Marcus is salty because neuro-symbolic shit (which no one will ever build this century) is not getting attention.


You're just as dumb with your "no one will ever build it in this century." One guy could make it tomorrow; it's kind of obvious that whatever neural nets do can be reproduced using higher-level data structures and algorithms.


The problem is the same as the cancel culture discussion. When should an utterance be held to a standard and who gets to decide the standard? Without defining "truth" one opens up everything to political propaganda. For example if we say ChatGPT must be anti-racist, much as we are against racism, how is that different than China? It feels as if you are calling for AI censorship too. Are you?

Given that the training set involves all accessible human utterances it seems likely that ChatGPT will make all the mistakes humans do. I can't see how that is ever fixed in humans, or in AI.

That would lead me to surmise that the real use of AI is not for facts or opinions but as an aide to human endeavors, and that we train humans how to use it. Trusting that they can and will.

It wrote some excellent SQL code for me last week, twice, each time solving a problem that would have taken me a long time to fix. It also did the same with an Excel macro.

If the real fear is that humans will misuse it, then of course that is right, just as they misuse every tool. But banning tools or seeking to regulate them for all of us seems the wrong step compared with education and software controls (like anti-spam software or fraud-detection software).

As for China, I suspect their government is not a good role model for regulation.

PS. I just re-read this and it is a bit more argumentative than I intended. Trying to discuss real issues.


Hmmm. You sound like someone who's never studied any social science. You raise the issue of racism and then say "I can't see how that is ever fixed in humans." I can only take that to mean 'racism can't be fixed in humans'---which is flaming nonsense.

For example, there are historical examples of societies that weren't racist---such as ancient Rome. There are also more recent examples where profoundly racist institutions, such as the US armed forces, were forced to become significantly less racist by government regulation.

Social problems aren't binary. They are generally on a curve, and regulation works by making a specific behaviour more or less prevalent---it's turning a dial, not flicking a toggle switch.

Moreover, human societies have a lot of inertia. That means many changes take lifetimes to achieve. Consider the fight against slavery, which took hundreds of years to eliminate in the USA and which still exists in isolated populations---both within and outside America.

You also suggest that "banning tools or seeking to regulate them for all of us seems the wrong step compared with education and software controls". When I was young you could still buy dynamite and Zyklon B in the village hardware store without a permit. Now you have to have a permit, and all dynamite sold in Canada comes with markers to help detectives trace its source if it is used in a terrorist attack. I am really happy that it is now very hard for an ordinary citizen to buy either product, for what I hope are obvious reasons.

I have a real concern about the over-sized social influence of executives who control large tech companies. They may be expert coders. But they often seem to be below average when it comes to understanding history, politics, and how societies in general work. It's a standard problem: capitalism selects for leadership using invalid criteria at the macro level, and you get people way outside their area of expertise manifesting the Dunning-Kruger effect.


Interesting twist on possible unintended consequences. But I also doubt that the Chinese government would support an AI that tells the truth; that is a far worse risk for them than insulting the party leaders.


That's the problem, no government would be willing to support AI that always told the truth.


I think they'd be OK with systems that are designed to solve narrow problems, and that probably adhere closely to dialectical materialism in their discourse.


They absolutely would, so long as they control what the AI considers facts. Which is essentially what Gary is saying they would do and something that no LLM can guarantee.


Any relation to the late Marvin? He and Seymour Papert co-taught an AI course in the 60s.


I'm with Alan Turing on this. In his 1950 paper he said he believed it would be more efficient to design machines that could learn rather than to program them explicitly with every detail. Symbolic AI can never compete with deep learning on a topic like NLP. To do so in symbolic AI, you would have to code for the major cases of the trillions of multi-dimensional connections in a neural net, which is way beyond human capability.
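
To make the contrast concrete, here is a toy sketch, invented purely for illustration (the word lists and four-example dataset are made up, and it assumes scikit-learn is installed): the symbolic version only handles the cases someone has explicitly coded, while the learned version induces its behaviour from examples.

```python
# Toy contrast between hand-coded rules and a learned model.
# Invented word lists and a four-example dataset; requires scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Symbolic approach: every case must be anticipated and coded by hand.
def rule_based_sentiment(text: str) -> str:
    positive = {"great", "excellent", "love"}
    negative = {"terrible", "awful", "hate"}
    words = set(text.lower().split())
    if words & positive:
        return "positive"
    if words & negative:
        return "negative"
    return "unknown"  # silently fails on anything not hand-coded

# Learned approach: the mapping is induced from examples instead of enumerated.
texts = ["great movie", "terrible plot", "I love it", "awful acting"]
labels = ["positive", "negative", "positive", "negative"]
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(rule_based_sentiment("the pacing was sluggish"))  # -> "unknown"
print(model.predict(["the pacing was sluggish"])[0])    # still a guess; more data, better guesses
```

Scale the second half up by many orders of magnitude and you get the asymmetry Turing was pointing at: enumerating the cases by hand simply doesn't keep up.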

We may think the brain works by rules, but I'm personally convinced that's just a high-level conscious artefact we humans experience, and the majority of the underlying brain "processing" is, in fact, more similar to the stochastic inference carried out by LLMs except using washes of chemical neurotransmitters rather than electrons.

Having said that, I don't think we need to mimic the human brain exactly to create intelligence, any more than we copied the way birds fly to build an aeroplane. There are obviously creative shortcuts around what evolution took hundreds of millions of years to create. Ironically, using human intelligence to design AI now feels like a form of "creationism", except with us acting as "god" :)

We may be a few iterations away from perfecting LLM designs to reduce hallucinations, but it's only been since 2017 that LLMs became a real AI thing. Also, at the moment LLMs feel a little too 'brute force' and energy-intensive, but I think this is also going to change rapidly as the research around algorithms evolves. And who knows what new capabilities advances like quantum computing may bring to the field of AI in the not-too-distant future?

With respect to China.

At some point, I think the major G7 countries will each have their own AGI version of the "Manhattan Project". You could say China already has its own, with the party saying it wants to be the "AI world leader by 2030". Which is a terrifying thought.

However, I'm not sure that China can get to AGI first, because it has suffered huge brain drains over the decades, meaning many of its brightest and their children have left to work in the US (mostly). Also, the issue of its AI having to toe the party line may cause it long-term problems. What if a China-made AGI turns around and tells the Party that their method of governing is immoral and unethical?! I'm sure some Party members are sweating over this issue as we speak.

Regulation is no bad thing as you say Gary, but it must be appropriate for the technology. Restricting something that already assists millions of people to do their work more productively would be a fool's errand in my mind, especially when the West is so in need of productivity gains to help ease inflation.


About your "major cases of the trillions of multi-dimensional connections", every time you go up a step on the abstraction ladder, the number of connections summarized by one coding token goes up. You thus have no idea what it takes to make a neurosymbolic AGI.


And ...


You have no idea if it's beyond human capability? Like, your intuition is wrong and you'd need to dig deeper before making such baseless assertions?


I hardly think they are "baseless assertions". If I was wrong, we'd all be talking about the progress made by symbolic AI right now instead of LLMs, which have all but mastered natural language by almost every metric you care to measure them by. After over 70 years, Turing's 1950 paper is just as relevant ...


Symbolic AI hasn't progressed in the past 40-50 years because not enough people are thinking about solving intelligence using symbolic, i.e. non-ML, paradigms, because the symbolic paradigm was sent to storage by some guys who failed to make it real decades ago, and most of their descendants somehow blindly trusted their conclusions. Almost the same happened with deep learning before it took off; it took a single guy, or a few, to revive the half-dead body of neural nets.

So you base your assertions on the idea that the field of symbolic AI has been neatly explored, but machine learning was in (almost) the same state before it took off.

If you started looking at how you could solve AGI using symbolic means, you would notice that a lot hasn't been done; it's completely obvious. There's at least one huge structure that doesn't exist but should exist if symbolic AI had really been tried. It hasn't. There's still huge room for exploration.

At bare minimum, the real bare minimum, symbolic AI should be able to deliver a flawed proto-AGI today. It's a straight path with few hurdles along the way. Where is it?


I am well aware of past AI winters. However, I must disagree with your premise that GOFAI can add anything significant to AGI. I believe that intelligence is a "hot mess" problem more suited to deep learning.

The idea of "codifying intelligence" by trying to explicitly define all the rules and edge cases is too ambitious, given the limitations of the human mind, although it's possible a hybrid solution of the two could work. For example, LLMs could be used to advance symbolic AI through exploring, writing and maintaining the huge rule bases. But then you have to ask yourself, why not just use the LLM in the first place? (...I remain open to possible reasons, repeatability being one).

I'm a strong advocate of the view that the human mind does not work through logic - it has no ALU as a computer does. Logic is layered over the abstract chemical neurotransmitter networks through social and physical interactions, most notably education. If you accept this premise, there's no grounding reason why symbolic AI should work - it's just another approach - certainly not one based on the functioning of the human brain.

LLMs are more analogous to how the human mind works. First they are trained on the raw contents of human knowledge from the Internet, and subsequently fine-tuned to learn new skills like math (albeit not brilliantly so far; it's far better to use LLMs in an "agent" context, letting them decide what to do to solve a problem, such as outsourcing to a math tool like Wolfram, as OpenAI plugins do, or via LangChain if you are a developer).
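
Here is a minimal sketch of that agent pattern, under loose assumptions: call_llm is a hypothetical stand-in for whichever chat-completion client you use (it is not OpenAI's or LangChain's actual API), and a tiny arithmetic evaluator stands in for a tool like Wolfram. The model either answers directly or asks for a calculation, and the exact result is handed back to it.

```python
# Minimal sketch of the "agent with tools" pattern described above.
# `call_llm` is a hypothetical placeholder for any chat-completion client
# (this is NOT OpenAI's or LangChain's real API); the "tool" is a tiny
# arithmetic evaluator standing in for something like Wolfram.
import ast
import operator

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call. Assume it returns either 'ANSWER: <text>'
    or 'CALC: <arithmetic expression>' when it wants the exact tool."""
    raise NotImplementedError("wire in your preferred LLM client here")

def calculator(expr: str) -> float:
    """Exact arithmetic on +, -, *, / over numeric literals only."""
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in ops:
            return ops[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval"))

def answer(question: str) -> str:
    # Let the model decide: answer directly, or delegate the math.
    reply = call_llm(
        "Reply 'ANSWER: <text>' if you can answer directly, or "
        f"'CALC: <expression>' if arithmetic is needed.\nQ: {question}"
    )
    if reply.startswith("CALC:"):
        result = calculator(reply[len("CALC:"):].strip())
        # Hand the exact result back so the model can phrase the final answer.
        return call_llm(f"The exact result is {result}. Now answer: {question}")
    return reply[len("ANSWER:"):].strip() if reply.startswith("ANSWER:") else reply
```

The design point is the one above: route the sub-problem the model is bad at (exact arithmetic) to a tool that is good at it.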

LLMs are still young, since 2017 as I'm sure you know, yet they can already write code and reply to your messages with great alacrity. It's the "Turing Test" in action, something that no symbolic AI could get close to ;)


Real AGI has a much higher chance of occurring in China due to the centralization of funding and attention to regulation. And it's not like we don't already know that. "We see three at the absolute forefront," Brad Smith (of Microsoft) said in an interview in Tokyo with Nikkei Asia. One is OpenAI with Microsoft, the second is Google, and "the third is the Beijing Academy of Artificial Intelligence."

If you think OpenAI has a stranglehold on AGI, you really don't understand how first movers work in technology; they are usually just catalysts, not eventual winners.


Agreed, Google is also hiding their most helpful AI technology from the public it spies on.


Ethical AI is an oxymoron. As you observed, so far AI alignment is just the regurgitated politics of its developers. The Chinese might impose their political point of view on AI, but it will be to their detriment. We will get the same thing if we ask the Federal government to regulate AI. Consider the current state of regulation based on our Climate religion.

Truthfulness is not the issue. The real issue is that AI doesn't know when it is lying. What's more, opinion must be separated from facts. Our media has blurred this distinction to the point where we tend to lose sight of that important difference.

Are we going to abandon "opinion"?

ChatGPT and its LLM relations are a bright, shiny, dangerous toy. They seem to have value. But will they fail simply because they can't be trusted? Would you buy a machine tool that produced products you had to mic (check with a micrometer) all the time? Precision and accuracy are not accidental.

More frightening is the idea that our culture no longer demands truthfulness. In which case our AI is going to toss us into the cesspool of the Internet.


I for one am not falling for this "Make Cold Wars Great Again" line, as if a generation of tech geeks feel they missed out on the Manhattan Project. This is just recycling Kurzweil's singularity religion. As if the US never won the space race after Sputnik's first mover advantage. As if the Web should have been classified as a munition of national security concern.

We're currently fully experiencing Groundhog Day for all the novices who never heard of the Eliza effect. AGI is not only very far off, humans don't even understand basic human intelligence and consciousness in the first place ... let alone are we able to "simulate" it. "If only our LLM had 10000x the corpus, it will thus become artificial life that will smite our enemies!"

So before we rally the masses to bloodthirsty defenses of digital national borders with our AGI golems, maybe we should first pay attention to the camel noses peeking under the tent with TikTok and the like.


"AGI is not only very far off, humans don't even understand basic human intelligence and consciousness in the first place "

Man, this sounds so dumb. You don't have a better idea of when AGI will come than the specialists in the field, and humans have quite a good understanding of intelligence and consciousness because, well, intelligence and consciousness permeate their existence.


That's a bit like saying fish are experts on water. We barely know how the brain works when it comes to cognition, consciousness, and decision-making.

Even current AI network models are based on how we thought the brain worked *in the 1950s*. We're way more clueless than you think.


The fish aren't general intelligences; we are. We are experts on water without even living in the water. Do you think the smartest of us would be clueless about something they use, monitor, question, and observe every single day of their lives? Can you look at my avatar and parrot this baseless claim once again?

Notice that I haven't even used the millions of scientific papers on cognition out there; I don't even need them. Books that are 2,000 years old already give good sketches of consciousness and cognition.


We don't know its mechanics well enough to engineer it. Our understanding of cognition and sentience, when it comes to recreating them, is still black-art mojo. Personally, I subscribe to the theory that intelligence isn't intrinsic but rather extrinsic: it's a property that exists only in relation to other things. None of that has been modeled.

AI is no more intelligence than a Tesla is "autopilot". The terms are completely wrong and are fooling people unaware of the Eliza effect.

Instead, all we have to date are people who have confused infinite computing capacity and infinite LLM sizes as a proxy for sentience. Which is simply the argument that you don't need to know where you're going as long as you can run faster. It's farcical.


It's a matter of opinion, I guess, as to how you quantify how much we know and don't know. I can unfold almost all of my thought processes and explain why I came to my conclusions; to me, that's enough to engineer an AGI.

I don't know who would make a theory that intelligence isn't intrinsic but rather extrinsic; it's obvious it's a mix of both.

But Teslas are autopilots, better pilots than humans along many metrics.

It's not farcical; you have no idea whether LLMs can achieve AGI. As has been written everywhere, the words we write on paper are a projection of the real world we live in; LLMs develop a theory of mind through text alone, for example. As you have more text, there are more patterns you can extract from it. It's totally obvious if you think about it for a minute.

To make the claim that LLMs can't achieve AGI, you'd have to prove that there isn't enough in text to understand the world. That hasn't been proven yet. I thought along your lines too, but I have to admit LLMs have exceeded all our pessimistic anticipations and still have room to grow.


Right now, China's LLMs still can't match OpenAI's ChatGPT (4); we acknowledge the gap. But personally I feel the GPT-4 experience is already not very friendly, in terms of both speed and cost of use. Also, if you have translation software like DeepL, you can try reading this article: https://mp.weixin.qq.com/s/VE-ea6RH7-Wwla0U5ssgVw


So just as countries plan to roll out their own CBDCs, now there's a new movement for each country to create its own sovereign ChatGPT version.

For countries trying to navigate corporations that are nearly more powerful than they are, this will be their solution. China and the UK are already on board.


I keep seeing a conflation of regulation with law. "My best candidate? Placing very high standards around truth, with strong penalties for LLM-induced defamation and the wholesale spread of harmful misinformation." Universal laws against misrepresentation, defamation, fraud, and so on can be applied to AI. No need for regulations.


It may be equally likely that a heavy regulatory approach that mandates certain technologies ends up as a kind of Lysenkoism at worst or a fifth-generation computing project at best. It would be interesting to consult with someone with knowledge of Chinese regulatory regimes on what approach they might take. Maybe Ian Bremmer would be a good person to talk to?


In a reply I'll make on my own blog, I am also going to make the point about MITI and the Fifth Generation failure. Fans of industrial policy and centralized control of innovation used to go on about MITI as better than a non-centralized system -- until it failed, and now it's conveniently forgotten.

Apr 25, 2023 · Liked by Gary Marcus

The counterpoint is that lack of a democratically-created industrial policy is ceding industrial policy to the VC community. And their failures in the last few years are all around us: crypto scams, surveillance capitalism, healthcare fraud, etc.

We have a defense industrial policy on the brink of irrelevance as well. The domestic industrial base we attempted to preserve isn’t equipped for getting artillery shells to Ukraine, their biggest need, unless you count South Korea as part of it.

My point is that a public debate around policy is a good way to get what we need, because our current VC-based policy is a disaster. China won't have one, so we need insight into whether their bureaucracy will make the right moves.


The technocrats in the PRC have, in my view, forfeited credibility post-Covid. They were using no scientific information to constrain their theatrical handling of the pandemic. Videos of workers spraying lye on surfaces destroyed my belief that the state can update its actions to reflect a reality that's contrary to Xi's edict.
