61 Comments

The real danger of the situation lies in the fact that decisions regarding artificial INTELLIGENCE are based on the behavior of systems that are exclusively associative memory, without one iota of intelligence, that is, the ability to reason.


The complexity of any current form of AI is beyond the ability of any existing government agency to understand, much less legislate. The complexity of devising "ethical standards" that would be agreed upon has been a challenge for millennia, with a track record of total, absolute failure. Enforcing any such standards that may be agreed upon is literally impossible. Even attempting to consider these three factors far, far exceeds our traditional approach to human reasoning (and AI is definitely of no help here). Adding to this, what could be considered a simple standard of "do no harm" becomes profoundly complex when one attempts to define "who" is not to be harmed. Standards a liberal may embrace could easily be considered harmful to conservatives, and vice versa, and on and on.

And even in the unlikely event that some form of AI détente were to be brokered, how long would it hold? The political powers' response to the existential threat of nuclear weapons, even this week, is telling: when one side, country, or party feels they are losing or being slighted, they will opt out.

Our individual and collective voices of concern and warning are being deliberately drowned out. But we must continue to speak. Is this an existential crisis? Very likely. Will it be recognized as one? No. As Gary wrote earlier, even when someone dies (and this will not be one death), causality will be nearly impossible to prove. We seem to be left to the whims of the powers at the top of Microsoft, Google, OpenAI, and others. Will they demonstrate their personal humanity and care for the safety and well-being of their fellow humans, or will they fight to our death to "win"? History is not comforting here.

Feb 26, 2023 · Liked by Gary Marcus

Hi Gary, excellent and timely article! Indeed, we regulate drugs, alcohol, guns, driving, and nuclear weapons - because they can be dangerous if not developed, tested, deployed, or handled properly.

Proposing regulation of this sort of technology isn't alarmist or outlandish; on the contrary, it's a good call.

Feb 27, 2023 · Liked by Gary Marcus

Excellent piece. I totally agree that government should step in now rather than later.

Big Tech - MS & Google - are probably betting they can follow the Uber strategy: launch faster than government can regulate. Once consumers and industry get used to it or find a "good" use case, Big Tech will have a profitable business model, and government regulators will be playing catch-up.

Better to regulate now, else it will be "Too late to regulate".

Feb 27, 2023 · Liked by Gary Marcus

I really liked the idea of regulating AI research similarly to how we regulate clinical research, with local IRBs and FDA oversight for projects that carry more risk. I think it can be done, and the risks here have already been demonstrated, so a risk/benefit analysis should guide the public deployment of these tools.


I think the idea of government intervention in AI is a bad one. And pressure on governments to intervene is misplaced. It belongs in a branch of cancel culture. The premise is that we humans can’t be trusted to figure out the difference between right and wrong. We need protecting. That is not a reasonable premise. Indeed it is a little elitist.

All progress involves early efforts that are outright failures or imperfect. Imagine if the attempts to fly had been paused due to the dangers. People did actually die doing that.

Let’s recognize the potential, be aware of the limits but let innovation move at its own pace.

Feb 26, 2023 · edited Feb 26, 2023

To put things in historical perspective, here is a timely post by prominent economics professor Tyler Cowen regarding Francis Bacon, whose comments on the printing press mirror those now being made about AI (edit: some think this may not accurately reflect Bacon's views and may itself be AI-generated text, but the point is still interesting):

https://marginalrevolution.com/marginalrevolution/2023/02/who-was-the-most-important-critic-of-the-printing-press-in-the-17th-century.html

"Who was the most important critic of the printing press in the 17th century?

...Bacon discussed the printing press in his seminal work, The Advancement of Learning (1605)... he also warned that they had also introduced new dangers, errors, and corruptions.

...Bacon’s arguments against the printing press were not meant to condemn the invention altogether, but to call for a reform and regulation of its use and abuse. He proposed that the printing press should be subjected to the guidance and judgment of learned and wise men, who could select, edit, and publish the most useful and reliable books for the benefit of the public."

Fortunately the US has the First Amendment, so the government can't take on such tasks, which could be abused in the ways George Orwell predicted. Yet some apparently wish to have government control the progress of AI, as if it will magically do a good job of it.

People who propose having the government take on a task should always ask: what if the politicians and ideologies I dislike the most get control of government and run it their way? Those who think they know what's best for others naively assume that people they like, who are guaranteed to be competent, will be in control. It's unclear why reality hasn't taught such people differently, but most people haven't had reason to study how governments operate in the real world rather than how they naively hope governments work. The simplistic model some have of markets and government seems akin to the simplistic views most of the public have about ChatGPT, since they haven't had reason to study the issue.

Feb 27, 2023 · edited Feb 27, 2023

It's about time to debunk the real power of ChatGPT and its like. Since those systems fully lack symbol grounding (i.e., the memory associations we build with all five senses) and access only one modality, and a restricted one at that ("language" construed as a sequence of words), they won't develop a genuine perception of what it is to be in the world.

Instead, we just face an elaborated if not belabored psittacism.

So, garbage in, garbage out.

On the other hand, if one actually builds a generative, large, pretrained metamodel over all five senses plus proprioception, with retroactive feedback from effectors for a real participative engagement commensurate with a human experience of the world (for better alignment), then it will learn by itself, like babies do, to compose sensorimotor schemes (remember Piaget) and build up a constructive presence in the world, balancing assimilation with accommodation to its own set of "values".
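A toy sketch of that Piagetian loop (the agent, actions, and one-line "world" are all invented for illustration; a real system would have rich multimodal sensors and effectors):

```python
# Toy sketch of a sensorimotor agent balancing assimilation and accommodation.
# Everything here is illustrative; a real system would be multimodal.
import random

ACTIONS = ["reach", "grasp", "look"]

class SensorimotorAgent:
    def __init__(self):
        # Schemes: learned situation -> (action, expected outcome) mappings.
        self.schemes = {}

    def step(self, situation, world):
        if situation in self.schemes:
            action, expected = self.schemes[situation]
        else:
            action, expected = random.choice(ACTIONS), None
        outcome = world(situation, action)        # feedback from effectors
        if outcome == expected:
            return "assimilation"                 # the scheme fits; keep it
        self.schemes[situation] = (action, outcome)
        return "accommodation"                    # revise the scheme to fit

# A trivial "world": grasping something soft succeeds, everything else fails.
world = lambda situation, action: action == "grasp" and situation == "soft"
agent = SensorimotorAgent()
print([agent.step("soft", world) for _ in range(3)])
# -> ['accommodation', 'assimilation', 'assimilation']
```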

Now, what would be the genuine aim of such systems? Like us, to survive, but how?


https://www.wsj.com/amp/articles/chat-gpt-open-ai-we-are-tech-guinea-pigs-647d827b

"Huggingface’s Dr. Lambert says that any company, including his own, that doesn’t have this river of real-world usage data helping it improve its AI is at a huge disadvantage. Without it, competitors are forced to spend hundreds of thousands, even millions of dollars, paying other companies to generate and evaluate text to train AIs, and that data isn’t nearly as good, he adds."


It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The leading group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461


Once, a young man online asked why he did better in games than in his life.

My "sage" answer was, "Real life doesn't have a pause button." I think that was the highest upvote count I ever got on Reddit.

I suppose we could put warning labels on new AI software - that did a lot of good for cigarettes. (It took the non-smoking public getting sick of secondhand smoke...).

I just don't think "let's make a law" is a solution to much of anything. I suspect Mr. Marcus feels the same way - but understands the hazards of unrestricted AI better than most of us.

I think the real solution lies in the hands of the public. Instead of banning ChatGPT, colleges need to embrace it. Students need to learn the limits and know what to look for. The public needs to decide if it wants worthless FREE information or seek out reliable sources that they pay for.

It's odd that ChatGPT goes out of its way to please the client. It sounds like an obsequious employee who won't tell the boss when they're wrong. It also sounds like just another echo chamber - the kind that today's recommendation systems create in our lives every day. (BTW, recommendation systems are the most popular AI application - and maybe the most dangerous.)
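For what it's worth, that feedback loop is easy to caricature in a few lines (a single invented "topic" axis and a greedy nearest-item rule stand in for a real recommender):

```python
# Toy echo-chamber loop: a recommender that always serves the item nearest
# to the user's click history, whose output then becomes more history.

def recommend(history, catalog):
    center = sum(history) / len(history)          # the user's current bubble
    return min(catalog, key=lambda item: abs(item - center))

history = [0.6]                                   # a single initial click
catalog = [0.1, 0.4, 0.7, 0.9]
for _ in range(5):
    pick = recommend(history, catalog)
    history.append(pick)                          # the click feeds the loop

print(history)  # [0.6, 0.7, 0.7, 0.7, 0.7, 0.7] - preferences lock in
```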


I'm afraid it is too late for any effective oversight. The APIs are public, and hundreds of companies (including my tiny non-profit) are starting to integrate these large-scale generative models into our apps and services. Most of us use them in narrow and specific ways where their potential for harm is minimal. You might be surprised to learn how far this has spread and how fast it continues to grow. Unfortunately, the time for defining the criteria and metrics for oversight is behind us, in my opinion.
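For a sense of how low the barrier is, here is a minimal sketch of such an integration, assuming the pre-1.0 openai Python client; the model name, prompt, and support-ticket use case are invented for illustration:

```python
# Minimal sketch of wiring a generative model into an app via a public API.
# Uses the pre-1.0 openai Python client; model and prompt are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def summarize(ticket_text: str) -> str:
    """Narrow, app-specific use: condense a support ticket to one sentence."""
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Summarize this support ticket in one sentence:\n{ticket_text}",
        max_tokens=60,
        temperature=0.2,
    )
    return response["choices"][0]["text"].strip()
```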


Government regulation by any one nation is not a viable approach when AI research and deployment is occurring internationally. Google and Microsoft are in an AI arms race with the likes of Baidu and Tencent. Regulating the former just gives the advantage to the latter.


The article says... "At the same time, an outright ban on even studying AI risks throwing the baby out with the bathwater and could keep humanity from developing transformative technology that could revolutionize science, medicine, and technology."

First, I don't see how an outright ban on AI is possible, given that there is no governing body with authority over all the players around the world. A global treaty ban doesn't seem too likely; I'd be for it, but I have trouble imagining it happening. The ban idea reminds me of all the discussion of aligning AI with human values, which in most cases simply ignores actors like the Chinese Communist Party, dictators of the largest nation in history.

Second, we should probably at least consider whether "transformative technology that can revolutionize science, medicine, and technology" is really such a good idea. If we don't trust our ability to manage and adapt to AI, why should we trust our ability to manage and adapt to other revolutionary changes in society?

I've been arguing for years now that what's really happening is that we're trying to navigate the 21st century with a simplistic, outdated, and increasingly dangerous 19th century relationship with knowledge.

https://www.tannytalk.com/p/our-relationship-with-knowledge

Our technology races ahead at breathtaking speed, but our relationship with technology remains firmly rooted in the past. Philosophically, we're almost a century behind the curve.

The new era of history we live in today began at 8:15 a.m. on August 6, 1945, over Hiroshima, Japan. That was a single dramatic moment that should have made clear that our intelligence is in the process of outrunning our maturity. The fact that we still don't get that is evidence that we're not ready for more revolutionary powers like AI.

So, we should indeed hit the pause button on AI. Except that, just like with nuclear weapons, we have no idea how to do that.


Could that work? I was thinking 'no, no, no' as I read, but you make a strong case with other examples. Still, is political inertia now such that courageous political action is infeasible? Perhaps with Canada and a few other less-mired jurisdictions, a ball could be set rolling.


A way to slow down AI could be to introduce an energy fee and dividend: energy use gets taxed at the source, and the revenue gets distributed to citizens. Citizens who depend on high energy use can put the dividend toward higher energy prices, while everyone gains an incentive to lower energy use overall. This idea has been introduced under the name carbon fee and dividend to benefit the environment, but it could be extended to energy use in general and have a beneficial effect on the usage of AI.
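A back-of-envelope sketch of the mechanism (the $20/MWh fee rate and the usage figures are invented):

```python
# Back-of-envelope sketch of an energy fee and dividend.
# The $20/MWh fee and the usage figures are invented for illustration.

FEE_PER_MWH = 20.0  # collected at the energy source

def settle(usage_mwh):
    """Each citizen's net change: fee paid minus an equal per-capita dividend."""
    total_fee = FEE_PER_MWH * sum(usage_mwh)
    dividend = total_fee / len(usage_mwh)         # revenue shared equally
    return [u * FEE_PER_MWH - dividend for u in usage_mwh]

# A heavy user (say, someone running large AI training jobs) pays in more than
# the dividend returns; light users come out ahead. Net transfers sum to zero.
print(settle([100.0, 5.0, 2.0]))  # [1286.67, -613.33, -673.33] (rounded)
```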
