61 Comments

The real danger of the situation lies in the fact that decisions regarding artificial intelligence are being based on the behavior of systems that are purely associative memory, not burdened with one iota of intelligence, that is, the ability to reason.


The complexity of any current form of AI is beyond the ability of any existing government agency to understand, much less legislate. The complexity of devising "ethical standards" that would be agreed upon has been a challenge for millennia, with a track record of total, absolute failure. And the complexity of enforcing any such standards that may be agreed upon is literally impossible. Even attempting to think through these three factors far, far exceeds our traditional approaches to human reasoning (and AI is definitely of no help here). Adding to this, what might seem a simple standard of "do no harm" becomes profoundly complex when one attempts to define "who" is not to be harmed. Standards a liberal may embrace could easily be considered harmful by conservatives, and vice versa, and on and on.

And even in the unlikely event that some form of AI détente were to be brokered, how long would it hold? The political response to the existential threat of nuclear weapons, even this week, is telling: when one side, country, or party feels they are losing or being slighted or…, they will opt out.

Our individual and collective voices of concern and warning are being deliberately drowned out. But we must continue to speak. Is this an existential crisis? Very likely. Will it be recognized as one? No. As Gary wrote earlier, even when someone dies (and this will not be one death), causality will be nearly impossible to prove. We seem to be left to the whims of the powers at the top of Microsoft, Google, OpenAI, and others. Will they demonstrate their personal humanity and care for the safety and well-being of their fellow humans, or will they fight to our death to "win"? History is not comforting here.

Feb 26, 2023 · Liked by Gary Marcus

Hi Gary, excellent and timely article! Indeed, we regulate drugs, alcohol, guns, driving, nuclear weapons - because they can be dangerous if not developed, tested, deployed, or handled properly.

Proposing regulation of this sort of technology isn't alarmist or outlandish; on the contrary, it's a good call.

Feb 27, 2023 · Liked by Gary Marcus

Excellent piece. I totally agree that government should step in now rather than later.

Big Tech - MS & Google - are probably betting they can follow the Uber strategy: launch faster than government can regulate. Once consumers and industry get used to it, or find a "good" use case, Big Tech will have a profitable business model, and government regulators will be playing catch-up.

Better to regulate now, else it will be "Too late to regulate".

Feb 27, 2023 · Liked by Gary Marcus

I really liked the idea of regulating AI research similarly to how we regulate clinical research, with local IRBs and FDA oversight for projects that carry more risk. I think it can be done, and the risks here have clearly been shown already, so a risk/benefit analysis should guide the public deployment of these tools.


I think the idea of government intervention in AI is a bad one. And pressure on governments to intervene is misplaced. It belongs in a branch of cancel culture. The premise is that we humans can’t be trusted to figure out the difference between right and wrong. We need protecting. That is not a reasonable premise. Indeed it is a little elitist.

All progress involves early efforts that are outright failures or imperfect. Imagine if the attempts to fly had been paused due to the dangers. People did actually die doing that.

Let’s recognize the potential, be aware of the limits but let innovation move at its own pace.

author

Guess you don’t like seatbelts or FDA or FAA review or police departments, either?

You can’t really believe in the limit that humans can *always* be trusted to figure out the difference between right and wrong.

Incidentally, airplanes were rolled out MUCH more slowly than Sydney and ChatGPT.


We have failed the mirror test.... many have screamed and fled bewildered by the outputs of electric circuits that answer to our prompts, forgetting that the words these things display are the makings of Humanity's interaction with each other through Time...


re: "Guess you don’t like seatbelts or FDA or FAA review or police departments, either?"

re: FDA: there are academics whose studies indicate that, because of its incentives to be conservative in what it approves (the pandemic being an atypical example where politics forced it to rush), more people have died waiting for drugs to be approved than the FDA has arguably saved. Bureaucrats are punished for approving something that has flaws, but usually not for delaying approval while people die. Economists study behavior under incentives rather than relying on naive wishful thinking.

Underwriters Laboratories (UL) is one example of private safety certification for an industry. In the case of the FDA there were potentially better private mechanisms, like certification agencies that insure that their safety and/or efficacy ratings match reality, which gives them an incentive to be accurate in their appraisals. Incentives make a difference. Politics tends to distort incentive structures in ways people aren't aware of.

Unfortunately there usually aren't comparison points where other countries have taken a different path, so we can't see how things might have turned out differently, since they all start from the default, unquestioned assumption of magically guaranteed government competence (rather than competition to find competence, à la markets). There are examples, however, where the FAA can be compared against other countries, like the much better privatized aspects of the Canadian system. But most people have no reason to learn about those, since that's not their area of expertise; nor is it the expertise of journalists.

Feb 26, 2023·edited Feb 26, 2023

So your answer is to *always* trust government to figure out the difference between right and wrong? Do you always trust the president? (Given that most people in the last decade, across the political spectrum, have had one president or another they dislike.)

Thomas Jefferson said: "Sometimes it is said that man cannot be trusted with the government of himself. Can he, then, be trusted with the government of others? Or have we found angels in the form of kings to govern him?"

Too many people naively assume without question that they can just trust government to "regulate it!", as if it were composed of these mythical angels in the form of bureaucrats we have magically found to govern us.

Economists like Nobel laureate George Stigler have studied issues like regulatory capture, which distorts the regulatory process. More generally, economists like Nobel laureate James Buchanan have studied public choice economics, which deals with the reality that government officials are no different from those who work in the private sector: they respond to incentives that can be flawed.

George Stigler wrote decades ago: "as a rule, regulation is acquired by the industry and is designed and operated primarily for its benefit." It's likely that government intervention will help the big players doing research, at the expense of the small players, who may need to release things to get public feedback, to earn revenue (and attract investment by demonstrating that users want their products), and to enlist distributed groups of cooperating companies or volunteers working with released versions to add human training to their systems.

Releasing products so humans can train them in a distributed fashion actually seems more "democratic" than control by government bureaucrats, who get only indirect overall guidance from elected politicians who don't know much about the topic.

Competition leads to better results and more discoveries in the long run than centralizing research under the guidance of government or the big players, in closed environments not subject to public awareness via deployed models.

Perhaps you have studied the relevant topics, I don't know; but the default guess is that you haven't, since most people have no more reason to be aware of these issues than they have to be as well versed in AI as you are. I'd suggest delving deeper into the issues before simply trusting government to do the right thing. It's like people who read about AI in the popular press and assume they grasp it, or those who read the business pages and think they grasp economics or startups. There are theories of "government failure", not merely market failure.

How many politicians and bureaucrats truly grasp AI? Or can be trusted to do a good job of picking the right folks to regulate it? Much of the public engages in a form of "magical thinking", expecting that merely commanding government to do the right thing will lead to a good result because they wish it to.


Not really fair comparisons. How long were there cars before seatbelts? Innovation needs space to experiment, and AI training does need real users for discovery and learning. Limiting it now is not smart; it would be a premature constraint on a huge human good.


Humans _can't_ be trusted to figure out the difference between right and wrong, or safe and unsafe, at least not quickly (and sometimes not at all). This is not "elitist"; this is a well-understood fact of human psychology, and it is the reason that our best investigative systems, such as the NTSB, refuse to speculate on the cause of an aeroplane crash when they start an investigation.

The "branch of cancel culture" that is air transport regulation has produced what is by far the safest mode of transport in the world. Yet being afraid to fly is still a thing, while getting into a car, something that is literally _hundreds_ of times more likely to kill you, isn't. Ignoring this kind of human bias leads, again quite literally, to dead people.


Your reference to "pace" is precisely the problem.

My comments below explain why arguments like the ones you pose here are irrelevant: they don't recognize the vastly different deliberative and evolutionary timeframes of AI versus human deliberation.

Mar 6, 2023·edited Mar 6, 2023

Not well thought out.....

The statement that "we humans can't be trusted to figure out the difference" has little if any relevance to the danger.

What you are missing, because your judgment follows a "typical" problem-solving pathway, is that AI evolves at nearly light speed! It is NOT an organic lifeform, and it operates on a spectrum of complexity and rapid evolution entirely different from that of any lifeform that has ever existed, or likely ever will.

Unfortunately, we have NO time to waste! Every minute this menace remains unaddressed gives AI a virtual (no pun intended) lifetime of opportunity.

We either contain this threat now, or we never will.

Feb 26, 2023·edited Feb 26, 2023

To put things in historical perspective, a timely post by prominent economics professor Tyler Cowen regarding Francis Bacon, whose comments on the printing press mirror those now being made about AI (edit: though some seem to think this may not be accurate regarding Bacon's views and may itself be AI-generated text, the point is still interesting):

https://marginalrevolution.com/marginalrevolution/2023/02/who-was-the-most-important-critic-of-the-printing-press-in-the-17th-century.html

"Who was the most important critic of the printing press in the 17th century?

...Bacon discussed the printing press in his seminal work, The Advancement of Learning (1605)... he also warned that they had also introduced new dangers, errors, and corruptions.

...Bacon’s arguments against the printing press were not meant to condemn the invention altogether, but to call for a reform and regulation of its use and abuse. He proposed that the printing press should be subjected to the guidance and judgment of learned and wise men, who could select, edit, and publish the most useful and reliable books for the benefit of the public."

Fortunately the US has the 1st Amendment, so the government can't take on such tasks, which could be abused in the ways that George Orwell predicted. Yet some apparently wish to have government control the progress of AI, as if it were magically going to do a good job of it. People who propose having the government take on a task should always consider: what if the politicians and ideologies I dislike the most get control of government and run it their way? Those who think they know what's best for others always naively assume that people they like, guaranteed to be competent, will be in control. It's unclear why reality hasn't taught such people differently, but most people haven't had reason to take time to study how governments operate in the real world rather than how they naively hope governments work. The simplistic model some have of markets and government seems akin to the simplistic views most of the public holds about ChatGPT, since they haven't had reason to study the issue.


I'm not sure that the idea that the printing press should have been regulated when it was first adopted is such an 'over the top' idea. The horrors of the religious wars in Europe are often at least partially attributed to the spread of cheap religious tracts made possible by the new technology of movable type.


It's not like the crusades waited for the printing press to be invented to take place, did they?


And how does that have any bearing on my comment? The wars of the Reformation had nothing at all to do with any of the Crusades.

Feb 27, 2023·edited Feb 27, 2023

The point being that you don't need a printing press to start religious wars, so claiming that printing presses should have been "regulated" to avoid religious wars makes little sense to me, especially because it disregards all the good the printing press has done for humanity, which might very well (and I dare say indeed does) far outweigh any damage it might have done.


No, I wasn't saying "all wars with a religious dimension"; I wrote about a specific set of religious wars (i.e., the Reformation) that were directly influenced by a new type of technology (i.e., the printing press) through the use of pamphlets written in the common language, which created mass movements of people whipped into a frenzy over religious issues. And yes, the spread of conspiracy theories and misinformation by the printing press has a real resonance with the current situation with social media.

And regulation is not the same thing as banning, which seems to be the assumption you are working from. If so, I smell a whiff of the same sulfur as what started those horrible wars that killed off as much as 50% of the German-speaking population: fundamentalism (albeit of the market variety rather than the Biblical one).

Feb 28, 2023·edited Feb 28, 2023

Any war is directly influenced by the technology of the time, to the point that we invent new technologies just to make wars more efficient, and they are usually under the regulation of governments just so that they can weaponize them.

What would have likely happened, had the printing press not been free, but regulated by the governments of the time, is that it would have been weaponized.

The same would happen to AI if it were made illegal for the average person to experiment with it.

If there's any fundamentalism here, it is the belief that "regulation" coming from the top of the hierarchy is inherently for the good of those at the bottom of the hierarchy.

Feb 27, 2023·edited Feb 27, 2023

It's about time to debunk the real power of ChatGPT and the like. Since those systems fully lack symbol grounding (i.e., the leverage on memory associations that we get from our full five senses) and access only one modality, and a restricted one at that ("language" construed as a sequence of words), they won't develop a genuine perception of what it is to be in the world.

Instead, we just face an elaborated if not belabored psittacism.

So, garbage in, garbage out.

On the other hand, if one actually builds a generative, large (five senses plus proprioception) pretrained metamodel, with retroactive feedback from effectors for a real participative engagement commensurable with the human experience of the world (for better alignment), then it will learn by itself, like babies do, to compose sensorimotor schemes (remember Piaget) and build up a constructive presence in the world, balancing assimilation with accommodation to its own set of "values".

Now what would be the genuine aim of such systems? Like us to survive, but how?

Mar 9, 2023 · Liked by Gary Marcus

You assign much more capability and power to LLMs than they have. It's not their lack of modalities that keeps them from developing a "genuine perception of what it is to be in the world"; it's that they have no perception of any sort, and cannot develop one, in the first place. There is no intelligence in LLMs; they are simply pattern generators that produce output that is a very convincing simulation of something written by an intelligence.

I see no obvious way to add ability to reason to such pattern generators.


https://www.wsj.com/amp/articles/chat-gpt-open-ai-we-are-tech-guinea-pigs-647d827b

"Huggingface’s Dr. Lambert says that any company, including his own, that doesn’t have this river of real-world usage data helping it improve its AI is at a huge disadvantage. Without it, competitors are forced to spend hundreds of thousands, even millions of dollars, paying other companies to generate and evaluate text to train AIs, and that data isn’t nearly as good, he adds."


This is correct, but with two additional clarifications I would like to add: 1. "competitors" here means the set of entities pursuing a substantially similar LLM/Transformer-based approach with a focus on "scale is all you need". 2. In that more restricted context, your cost estimates are many orders of magnitude off... probably about 3 orders of magnitude. But Dr. Lambert's point is spot on, and has been seen in Google's search improvement... basically any scale-free preferential-attachment scenario, but most enlightening are those of the Google/Amazon/Facebook/etc. variety.


It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461


Grant Castillou - Thanks for the reference to Gerald Edelman's Extended Theory of Neuronal Group Selection. After I get through Stephen Grossberg's "Conscious Mind, Resonant Brain" book, I will try to remember to get to Edelman's work. The only other theory of consciousness that I felt "somewhat comfortable" with is the late John Taylor's theory (kind of a comparison between expectations and actual results, with an understanding of the effect that one's self has had on the process).


One time a young man on-line asked why he did better in games than in his life.

My "sage" answer was "Real life doesn't have a pause button." I think that was the most upvotes I ever got on reddit.

I suppose we could put warning labels on new AI software - that did a lot of good for cigarettes. (It took the non-smoking public getting sick of secondhand smoke...).

I just don't think "let's make a law" is a solution to much of anything. I suspect Mr. Marcus feels the same way - but understands the hazards of unrestricted AI better than most of us.

I think the real solution lies in the hands of the public. Instead of banning ChatGPT, colleges need to embrace it. Students need to learn the limits and know what to look for. The public needs to decide if it wants worthless FREE information or seek out reliable sources that they pay for.

It's odd that ChatGPT goes out of its way to please the client. It sounds like an obsequious employee who won't tell the boss when they are wrong. It also sounds like just another echo chamber - the kind that current recommendation systems create in our lives every day (BTW - recommendation systems are the most popular AI application, and maybe the most dangerous).


I'm afraid it is too late for any effective oversight. The APIs are public, and hundreds of companies (including my tiny non-profit) are starting to integrate these large-scale generative models into our apps and services. Most of us use them in narrow and specific ways where their potential for harm is minimal. You might be surprised to learn how far this has spread and the continued exponential growth. Unfortunately, the time for defining the criteria and metrics for oversight is behind us, in my opinion.
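To give a sense of how low the barrier to integration is, here is a minimal sketch of the kind of wiring being described. It is not any particular company's code: the helper function and use case are hypothetical, and the client call reflects the OpenAI Python SDK as it existed in early 2023 (details vary by provider and version).

```python
# Hypothetical example: wiring a public generative-model API into an app.
# Uses the OpenAI Python SDK interface from early 2023 (openai==0.27.x);
# the function and prompt are made up for illustration.
import openai

openai.api_key = "sk-..."  # placeholder key

def summarize_ticket(ticket_text: str) -> str:
    """Summarize a customer-support ticket with a hosted LLM."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Summarize support tickets in one sentence."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response["choices"][0]["message"]["content"]
```

A handful of lines like these is all it takes, which is part of why the spread is so hard to track, let alone oversee.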


I agree. Also, how do you put boundaries around algorithms? It’s obvious how to bound (and regulate) a pharmaceutical molecule or a car driving down the road. Not so easy with evanescent computer code.


So true. And seriously, who in the government understands enough to even propose anything meaningful? The models will get better. There will be so many more applications, services, and touch points with AI. Consumers will decide and push the boundaries as new markets are created. It's an exciting time. I received an MS degree in Computer Science back in 1980 and have had a blast riding the tech wave with a 43+ year career in tech. I feel like this is the 1980 version of AI and I'm excited to continue this wild ride. It's never been boring.


I agree with you that this is an exciting and creative time, and that much good can and will come of generative AI (it already has, particularly in microbiology). But there are also risks, and government needs to learn how to address them. Governments have successfully regulated many complex technologies. The US and Canadian governments can learn a lot from the EU in this respect. But a "pause", as Gary Marcus and Michelle Rempel Garner suggest, won't work.


I'm more skeptical than you about the likelihood of successful regulation of AI. I'm not disputing the need for regulation, as there is vast potential for harm without it. Regulation is all about accountability, and that is the challenge. The line from an algorithm and its associated data back to an individual or company could be hard to draw, as lots of other software, platforms, datasets, and services are interconnected and contribute to the accountability burden. I can see some success in highly regulated environments where connection points, interfaces, and boundaries are (or should be) better documented. It will be interesting to see how successful the EU is with this. Just as with data privacy, it will be a very slow-moving train.


I agree. And it's messy. The key is to put shared accountability on both the entity/person that delivers the experience (the context) and the entity/person that creates the experience (the content). This is tricky (as the current Supreme Court hearings illustrate) but necessary.


Government regulation by any one nation is not a viable approach when AI research and deployment is occurring internationally. Google and Microsoft are in an AI arms race with the likes of Baidu and Tencent. Regulating the former just gives the advantage to the latter.

author

this is an important point, and the best way to think about it depends on what harms one anticipates. (eg the US doesn’t necessarily care about the pharmaceutical policies in other countries; it wants to keep its own citizens safe). note that we are not calling for a ban on research, only a pause on massive deployment.


I'm afraid the horse has already left the barn, and that compromise half measures will have no chance of getting it back. What we can do though is try to learn the lessons from the AI fiasco, and apply those lessons to the next big emerging technologies. Of course we should have learned these lessons decades ago from the nuclear weapons fiasco.

The most likely outcome would seem to be that we keep racing forward on all fronts as fast as we possibly can, merrily erasing the room for error as we go, until we finally crash-land into some historic real-world event that will educate us in a manner that reason is incapable of.


The article says... "At the same time, an outright ban on even studying AI risks throwing the baby out with the bathwater and could keep humanity from developing transformative technology that could revolutionize science, medicine, and technology."

First, I don't see how an outright ban of AI is possible, given that there is no governing body which has authority over all the players around the world. A global treaty ban doesn't seem too likely. I'd be for it, but have trouble imagining it happening. A ban idea reminds me of all the discussion of aligning AI with human values. In most cases such discussion seems to simply ignore actors like the Chinese Communist Party, dictators of the largest nation in history.

Second, we should probably at least consider whether "transformative technology that can revolutionize science, medicine, and technology" is really such a good idea. If we don't trust our ability to manage and adapt to AI, why should we trust our ability to manage and adapt to other revolutionary changes in society?

I've been arguing for years now that what's really happening is that we're trying to navigate the 21st century with a simplistic, outdated, and increasingly dangerous 19th century relationship with knowledge.

https://www.tannytalk.com/p/our-relationship-with-knowledge

Our technology races ahead at breathtaking speed, but our relationship with technology remains firmly rooted in the past. Philosophically, we're almost a century behind the curve.

The new era of history we live in today began at 8:15 a.m. on August 6, 1945, over Hiroshima, Japan. That was a single dramatic moment that should have made clear that our intelligence is in the process of outrunning our maturity. The fact that we still don't get that is evidence that we're not ready for more revolutionary powers like AI.

So, we should indeed hit the pause button on AI. Except that, just like with nuclear weapons, we have no idea how to do that.


Could that work? I was thinking 'no, no, no' as I read, but you make a strong case with other examples. But is political inertia now such that courageous political action is infeasible? Perhaps with Canada and a few other less mired jurisdictions, a ball could be set rolling.


A way to slow down AI could be to introduce an energy fee and dividend. Energy use gets taxed at the source, and the revenue gets distributed to citizens. While citizens who depend on high energy use can use the dividend to pay the higher energy prices, overall this creates an incentive to lower energy use. The idea has been introduced under the name of carbon fee and dividend to benefit the environment, but it could be extended to energy use in general and have a beneficial effect on the usage of AI.
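For illustration, here is a minimal back-of-the-envelope sketch of the fee-and-dividend arithmetic; the fee rate and usage figures are made up, purely to show how equal redistribution makes heavy energy users (such as large AI training runs) net payers:

```python
# Toy fee-and-dividend arithmetic with made-up numbers.
# A flat fee per kWh is collected at the source; all revenue is returned
# equally, so low-energy users come out ahead and heavy users pay on net.
FEE_PER_KWH = 0.05  # hypothetical fee, in dollars

annual_usage_kwh = {  # hypothetical annual usage per participant
    "light_user": 2_000,
    "average_user": 10_000,
    "ai_datacenter_operator": 5_000_000,
}

revenue = sum(kwh * FEE_PER_KWH for kwh in annual_usage_kwh.values())
dividend = revenue / len(annual_usage_kwh)  # equal share per participant

for name, kwh in annual_usage_kwh.items():
    net = dividend - kwh * FEE_PER_KWH  # positive = net gain
    print(f"{name}: pays {kwh * FEE_PER_KWH:,.0f}, receives {dividend:,.0f}, net {net:+,.0f}")
```

With these invented numbers, the light and average users come out ahead while the heavy user pays on net, which is the incentive the fee-and-dividend scheme relies on.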
