41 Comments
May 1, 2023 · Liked by Gary Marcus

One thing we can do is to promote and adopt the authentication measures being developed by the Coalition for Content Provenance and Authenticity. https://c2pa.org/
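For the curious, here is a minimal sketch of the core idea behind C2PA-style content credentials: a publisher signs a hash of the media plus provenance metadata, and anyone can later verify both the signature and the hash. This is not the actual C2PA manifest format (which is considerably richer); it is just the signed-provenance concept it builds on, with function names invented for illustration.

```python
# Illustrative sketch of the signed-provenance idea behind content
# credentials (C2PA). NOT the real C2PA manifest format -- just the
# underlying concept: sign a hash of the media plus metadata, then
# verify both later.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def make_credential(media: bytes, metadata: dict, key: Ed25519PrivateKey) -> dict:
    """Bind provenance metadata to a media file with a digital signature."""
    payload = json.dumps(
        {"sha256": hashlib.sha256(media).hexdigest(), "meta": metadata},
        sort_keys=True,
    ).encode()
    return {"payload": payload, "signature": key.sign(payload)}


def verify_credential(media: bytes, credential: dict, public_key) -> bool:
    """Check the signature, and that the media still matches the signed hash."""
    try:
        public_key.verify(credential["signature"], credential["payload"])
    except InvalidSignature:
        return False
    claimed = json.loads(credential["payload"])["sha256"]
    return claimed == hashlib.sha256(media).hexdigest()


key = Ed25519PrivateKey.generate()
media = b"...image bytes..."
cred = make_credential(media, {"creator": "Example News", "tool": "camera"}, key)
assert verify_credential(media, cred, key.public_key())          # authentic
assert not verify_credential(b"tampered bytes", cred, key.public_key())  # altered
```

The point of adopting this at scale is that tampered or synthetic media simply fails verification, rather than requiring a detector to win the arms race.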

The other obvious thing is to pass laws and regulations requiring clear labeling of synthetic communications and media and criminalizing the creation of deep fake clones of people without their permission.

We can do all of this NOW concurrently with establishing international institutions.

We can, of course, continue to work on methods for detecting deep fakes, but this is an arms race that probably cannot be won. This is all a consequence of passing the Turing Test, and I'm chagrined that I didn't see it coming.


I am grateful for your work and for all you share with us. I love reading your pieces.


A subgroup in our lab started reading Norbert Wiener's Human Use of Human Beings. Feels particularly timely in all of this.


I was just reading the Times piece when this popped into my inbox. I should be able to aid the effort, Gary; I will reach out.


Good regulation, like good governance in general, needs to be built bottom-up rather than top-down. Calls for global regulation or governance that ignore that fact will accomplish nothing of value.

In the end, LLMs are not sentient entities; they are models that output probable words given the preceding content and intent. Those deploying them are also the first to regulate them, assuming that they are acting in good faith. The same goes for anything more advanced, such as agents doing more complex work.
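To make the "probable words" point concrete, here is a minimal sketch of how such a model picks each next token: scores (logits) over a vocabulary are turned into probabilities with a softmax, and a token is sampled. The vocabulary and scores below are invented for illustration.

```python
# A minimal sketch of "probable words given the previous content":
# the model scores every candidate token, softmax turns the scores
# into probabilities, and the next token is sampled.
# Vocabulary and logits here are invented for illustration.
import math
import random

vocab = ["regulate", "deploy", "pause", "scale"]
logits = [2.1, 0.3, 1.4, -0.5]  # model scores for each candidate token

# Softmax: convert raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Sample the next token in proportion to its probability.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, [round(p, 3) for p in probs])), "->", next_token)
```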

Those who are not acting in good faith, the truly bad actors, will also have to be handled at the lower levels. As with computer viruses or other internet security threats, there will have to be specialized solutions to handle them.

This understanding of building bottom-up solutions is missing from the current calls for some kind of global regulation, which can often be too far from the field to offer good solutions or real understanding.


Point well made, Mladen. The question: WHO IS TRUSTWORTHY ENOUGH? How could the evaluation process for such a person satisfy everyone? Of course, it never will. It seems we always come full circle to this one core issue. I think it could be more detrimental to society than AI itself.

May 1, 2023 · edited May 5, 2023

"Good regulation, as good governance in general, needs to be built bottom-up rather than top-down."

There's no basis or validity to this sweeping assertion; it applies an engineering concept ("bottom-up") to a domain where it doesn't fit. The corresponding legislative concept is "local", but local legislation really doesn't apply to this global context.

"Those deploying them are also the first to regulate them"

"first" isn't relevant.

"assuming that they are acting in good faith"

The whole point of regulatory legislation is that a) you can't assume this and b) even people acting in good faith often fail to appreciate the danger of what they are doing or the best practices to avoid it.


You are wrong. It is a general social and political concept, not an engineering one.

It is called the principle of subsidiarity. This principle applies to all governance and regulation.

The starting point is self-regulation. Think about why this is true: those who are close to a problem know much better what to do than those who are far from it. Only when there is some type of failure do you need to involve someone broader. That is what I meant by building bottom-up.

In the case of AI, you can see this with OpenAI: they make an effort to self-regulate because otherwise their product wouldn't be useful. Beyond that, they also consult the public and their users.

If you are ignorant of this core principle, as people often are, you will get regulation or governance that does not understand what to do, becomes too rigid or restrictive, and in the end fails to accomplish its original intention. This happens all the time in politics, and for an important social issue like this one, that failure of governance could be more critical than ever. You have to build such regulations in a structured, hierarchical way, rather than thinking that a top-down directive would solve all.


Marcel, I think Mladen is correct. I worked in bank risk management, doing model governance. Model risk is an operational risk, and the same principles of model risk management used in a highly regulated industry such as retail banking can be applied to AI governance. Even though US banks must follow Federal Reserve SR 11-7 guidance, we have the freedom (and are encouraged) to build bottom-up solutions. We begin with RCSAs (risk control self-assessments), i.e., self-regulation; Mladen mentioned both. Progressively tighter supervision is imposed by regulators if periodic audits reveal a pattern of failure.
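To make that concrete, here is a rough, hypothetical sketch of what an RCSA record for a deployed model might look like in code. The field names, ratings, and escalation rule are illustrative inventions, not SR 11-7 or Basel terminology; the point is only the bottom-up shape of it: the team self-assesses its controls, and a failed control is what triggers escalation to regulators.

```python
# Hypothetical sketch of a risk control self-assessment (RCSA) record
# for a deployed model. Field names and ratings are illustrative only;
# they are not SR 11-7 or Basel terminology.
from dataclasses import dataclass, field


@dataclass
class RiskControl:
    name: str        # e.g., "monthly output drift check"
    owner: str       # team responsible for operating the control
    effective: bool  # did the control pass its last self-test?


@dataclass
class ModelRCSA:
    model_name: str
    inherent_risk: str  # "low" / "medium" / "high"
    controls: list[RiskControl] = field(default_factory=list)

    def residual_risk(self) -> str:
        """Crude illustration: risk stays at the inherent level if any control fails."""
        if all(c.effective for c in self.controls):
            return "low"
        return self.inherent_risk


rcsa = ModelRCSA(
    model_name="credit-scoring-v3",
    inherent_risk="high",
    controls=[
        RiskControl("bias back-test", "model validation", True),
        RiskControl("override logging", "operations", False),
    ],
)
print(rcsa.residual_risk())  # "high": a failed control is what escalates to auditors
```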

The concepts (of a risk framework, policies, standards, procedures, and controls) are well known and used in many fields. IT audit/GRC is another example; or, in a different domain, construction safety. This isn't specific to engineering.

As for trusting that people will act in good faith, that is a necessary starting point. If a technology or tool is excessively regulated, it will likely become too burdensome to bother using. However, I understand your concern.

--Firearms are an example of a technology that is dangerous AND heavily regulated. Even with restrictions such as licensing and training, bad actors can subvert and misuse this technology in harmful ways. (I'm American. I support Second Amendment rights; I'm not implying that firearm ownership is so dangerous that it should be outlawed.)

--Pharmaceuticals are another technology that can be misused (through ignorance or ill intent) to cause harm. The manufacture, sale, and dispensing of drugs are tightly regulated, but the drugs are so beneficial that it is worth the effort to comply with the laws governing their use.

Are LLMs (i.e., models that output probable words given training data and prompt intent) more dangerous than firearms, yet lacking the benefits of prescription pharmaceuticals? I don't know. I've noticed that Gary Marcus's recent posts have a much more serious tone than in the past. If a technology is dangerous and its usefulness unclear, maybe it should be illegal. That is what Italy did initially with OpenAI's ChatGPT. I'm not convinced that the benefits of OpenAI, DALL-E, et al. are worth the downside risks. Tom Dietterich lists some of those risks in his comment of 1 May.

Aug 18, 2023 · edited Aug 18, 2023

Blathering about good faith and then saying you support Second Amendment rights is hilariously ironic. The invention by the NRA of a "right" to prevent regulation of firearms and then the legal establishment of this "right" by their agent Antonin Scalia in his outrageously anti-historical Heller decision is the epitome of bad faith in American politics. As Warren Burger said, the 2A “has been the subject of one of the greatest pieces of fraud, I repeat the word fraud, on the American public by special interest groups that I have ever seen in my lifetime.”

You say "I think Mladen is correct" but you never touch on the point of our disagreement. Both your comments and his are riddled with strawman arguments, like his "You have to build such regulations in a structured, hierarchical way, rather than thinking that a top-down directive would solve all" -- no one said that "a top-down directive would solve all" or anything like that ... but again hilariously ironically, "a structured hierarchical way" is a top-down formulation.

Mladen refers to the principle of subsidiarity, but as I already noted, that principle is about *local* control, not bottom-up vs. top-down ... neither of you paid any attention to what I actually wrote; you just like to hear yourselves talk.


That is rather rude and denigrating. I support the US Second Amendment in that I don't think it should be repealed. YOU are the one talking about the NRA! I only mentioned firearms as a technology that is dangerous, just as pharmaceuticals have aspects of danger. I was trying to structure an argument for why LLMs might be more dangerous than useful.

You claimed that bottom-up governance is specific to engineering. Mladen said that wasn't true. I followed with a specific example from a very large field, risk management for banking and all financial services in the US, UK, and Europe (as governed by the BIS and Basel IV), where the same ideas of bottom-up, rules-based governance are implemented. Bottom-up governance is NOT necessarily done at a local level. There are consortia that implement it at a large regional level in other fields, although not at a national or supranational level.

Only once did Mladen say, "You have to build such regulations in a structured, hierarchical way, rather than thinking that a top-down directive would solve all." I don't know what he means by that; it seems to contradict what he said before. It was the very last sentence of his second comment. A typo? I don't know.

Excluding that, his comments hold together and so do mine.

What are our straw men?

I don't know what "the principle of subsidiarity" is in the context of regulation. Might you explain?


P.S. No, I'm not wrong.


Thanks, Gary. But I have to say: most people already believe loads of things that aren't true.


This is a common but logically flawed, point-missing response. That people so readily believe things that aren't true makes the fact that LLMs are efficient producers of vast amounts of disinformation all the worse.


The worst part is that it makes things **exponentially** worse. This is what people won't understand until it hits them on a very hard personal level. :(


Excellent article. The argument for regulating AI globally is stronger than ever. However, you speak of bad actors while assuming that the powers that be are good actors. What is this assumption based on? What makes one organized group more ethical than another? Isn't the propensity for unethical behavior shared by all humans? As it is, the public's trust in government has reached a record low.

What if some subversive group, or even a single individual working in a garage, cracks AGI and decides to use it against the powers that be? What then? Since everything is computerized and globally interconnected nowadays, I can easily imagine a scenario whereby an AGI-powered system infiltrates the computers of the powers that be to gather powerful or sensitive information. The AGI system could then use that information to surreptitiously sabotage key machines around the world and eventually cause a catastrophic collapse of the world order. It scares me to think about it.

May 1, 2023 · edited May 5, 2023

"However, you speak of bad actors while assuming that the powers that be are good actors? "

He didn't assume this.

"As it is, the public's trust in government has reached a record low."

The public is poorly informed. Trust in Republicans and other right-wingers is certainly ill-advised -- and they are the very people pushing the idea that government is the problem.

P.S. Rebel Science's "opinion" is fallacious, intellectually dishonest, and counterfactual. His "freedom" to have it isn't at issue.

May 25, 2023 · edited May 25, 2023

@Rebel Science didn't say anything about trusting Republicans. That is your own assumption, based on your own bias against Republicans.


LOL. You have severe reading comprehension problems, stemming from your extreme intellectual dishonesty.


That's your opinion. Mine is that government IS the problem. I value mine more than yours. Freedom of thought.


The government, including lots of three-letter agencies, and big corporations too.

May 26, 2023 · edited May 26, 2023

I cannot shake the feeling that AI is a form of stealing from creators at a massive scale. It's like stealing a fraction of a penny from every creator, but combined it's a massive operation. If the last 3+ years proved anything, it is that governments around the world cannot be trusted, and corporations even less so.

The only way forward would be for these companies to divulge their training sets and allow creators to remove their creations from them. How likely is that to happen? In the software space, it is mind-blowing that Microsoft used the software stored in its free GitHub repositories to train its AI engine without paying anything back to the software creators. It sucked in their creative energy without giving them anything in return.

If I may use a metaphor, AI is like the dementors from the Harry Potter novels, or the mutants from the Marvel universe who can borrow one's powers to use against them. This is equivalent to a wealth transfer, and that wealth is creativity, an essential human attribute, along with the time, effort, and energy spent in the creation process.

Training sets should have been opt-in; creators should have been excluded by default. The fact that these companies disregarded the input of those who created the content: what does that tell you? IMO, the only way is to fight back; relying on committees and corporations to self-regulate is laughable. There are lawsuits against AI corporations by creators, but I am afraid they won't win, because each creator's contribution to the big pot is so minuscule (see this article: https://www.newyorker.com/culture/infinite-scroll/is-ai-art-stealing-from-artists) that they cannot prove the AI productions contain elements of their creations. That's why AI is so diabolical.
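If an opt-in or opt-out regime were actually enforced, anyone assembling a training set would have to filter against creators' stated preferences before ingesting their work. Here is a minimal sketch of that filtering step; the "optout.json" registry and its format are hypothetical, since no single standard for such signals exists today.

```python
# Minimal sketch of honoring creator opt-outs when assembling a
# training set. The "optout.json" file and its format are hypothetical;
# there is no single agreed-upon standard for such signals today.
import json
from pathlib import Path

# A hypothetical registry where creators record that they opt out.
Path("optout.json").write_text(json.dumps(["artist.example.com"]))

def load_optouts(path: str = "optout.json") -> set[str]:
    """Creators (or domains) who have asked to be excluded."""
    return set(json.loads(Path(path).read_text()))

def build_training_set(documents: list[dict], opted_out: set[str]) -> list[dict]:
    """Keep only documents whose creator has not opted out."""
    kept = [doc for doc in documents if doc["creator"] not in opted_out]
    print(f"kept {len(kept)} of {len(documents)} documents")
    return kept

docs = [
    {"creator": "artist.example.com", "text": "..."},
    {"creator": "blog.example.org", "text": "..."},
]
training_set = build_training_set(docs, load_optouts())
```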

May 1, 2023 · edited May 1, 2023

Do you not agree that we already live in a world of bad actors who spread disinformation, even without full AI? And do we all agree that AI has already entered our social discourse, framing and helping to define our opinions? Or has it been playing a role for months, years, or possibly decades already, and we are only now being enlightened to the presence of this technology? There are many fundamental questions to be answered before we even begin harnessing its abilities to benefit our society.

author

Read the Atlantic piece I linked, please.

May 1, 2023 · edited May 2, 2023

Interesting read, and sound as well, Gary. Another driving factor will be what happens once the public at large begins using "true" AI, when it has the full ability, for example, to diagnose a health issue. Do we believe barriers as fundamental as cost, transportation, and insurance could keep people from seeking actual medical attention? Will AI be given the ability, with clearance from the FDA and CDC, to write prescriptions for a fraction of the cost and hassle driven by big pharma? And who would be defined as the bad actors if a misdiagnosis leads to detrimental harm? The FDA? The CDC? Big Pharma? I use this one example as a glimpse of how far down the AI rabbit hole we could fall, and it's just one of the millions of suspicions and tensions AI will bring to the table. Although, given the stand-off on our national debt, maybe we would benefit in two ways.

1.) The debt would be paid on time, the way most of us use auto-pay from our various accounts to keep up with timely bill payments.

2.) It would take the silliness of politics off the table when a pressing, obvious, and fundamental task of government is no longer available for either or any party to act so childishly and absurdly over, hurting our standing through actual credit-rating downgrades and lowering our stature on the world stage. Goodness knows, in these unpredictable times of war, inflation, food shortages, global warming, etc., our representatives apparently need a nanny of sorts to help project some kind of stability.


Your work is very interesting/important and I want to keep following your writing. I like supporting people's independent work, but I wish there were more flexible subscription tiers. I follow probably 10 to 15 writers here, and if I support all of them (which they deserve) it's over a hundred dollars a month (too much). I like the 'tipping' model that Post has set up to voluntarily pay something for reading a post. Could there be other levels (a dollar or two a month) or are the rates something set by Substack? I wonder if an increased number of paid subscribers at a lower rate might actually increase your revenue? Thanks! (Oh and A.I. is terrifying...)

author

I haven’t actually accepted any yet, been so busy! I didn’t even ask for pledges but they added some kind of default. Some day I will investigate. Feel free to read for free; appreciation is enough :)


Whether these things we created are intelligent or not, whether they are self-aware or not, whether they are aligned or not: all of these questions are beside the point. What we have done is create processes, processes that can escape our control and act in unpredictable ways. Remember the "grey goo" nanobot scenario? No one worried about nanobot sentience or "intentions." It was the process that had the capability to bring down our civilization. So too with algorithms teamed with databases. I may be unduly pessimistic, but I believe they are already beyond our earnest attempts at control, for the reason that we ourselves are beyond our own control. The box has been opened; the genie is out of the bottle. We have let slip the dogs of process, and we won't be able to call them back to the leash.

We were stupid, but there is hope, I think, not of eliminating the threat but at least of significantly reducing the damage.


I am coming from perhaps an odd perspective to ask: are methods to "control Large Language Models (LLMs)" heading in the direction of machine consciousness, and ultimately is this a requirement? There are many other arguments for the "need for machine consciousness"; unfortunately, I've lost track of my work reports from ~10 years ago. (Yes, I can lose electronic files as well as paper.)


Thank you. Wonderful as always. Perhaps, in your pursuits, a couple of resources could accelerate progress: (1) a framework for defining values, core concepts [of justice], principles, commitments, and measurement/reporting criteria, published as guidance by the Monetary Authority of Singapore (https://www.mas.gov.sg/-/media/MAS-Media-Library/news/media-releases/2022/Veritas-Document-3B---FEAT-Ethics-and-Accountability-Principles-Assessment-Methodology.pdf); and (2) robust audit frameworks, such as those established by https://forhumanity.center/.


Perhaps this will help?

We should be clear that making one's living as an expert in any field isn't a public service; it's a business. Experts have built their lives upon their business, just like anyone else. They have mortgages, kids in college, family who depend on them, etc. So the primary goal of any expert is the same as for the rest of us: to protect their source of income.

What this means is that an expert in any intellectual field has to promote and protect their reputation. And what that means is that the expert can't publicly wander too far beyond the group consensus of their field. If they do, they put their reputation, career, and family at risk. Thus, experts are typically not free to follow the trail of reason wherever it may lead. Their room to maneuver intellectually is typically restricted to the arena defined as reasonable by the group consensus of their peers.

The group consensus can be radically wrong, including the group consensus of experts.

Evidence: we mass-produced nuclear weapons and then largely ignored them. Experts across the board are overwhelmingly complicit in this ignoring. In the AI realm, watch for yourself how the experts talk about the future of AI in millions of words and never once mention that nuclear weapons could erase the future of AI in the next 30 minutes. If a high school kid did this in an essay assignment, you'd give them a grade of D.

Having utterly failed to manage one existential threat, the experts are now busy creating another. They want you to forget about the past, where there is cold, hard proof of their failure. They want you to turn your attention instead to the future, where they can pacify you with vague governance schemes that don't hold up to two minutes of non-expert scrutiny.

The experts aren't bad people. In fact, they are defending the interests of their families, which should be their highest priority. Just know this: their interests are not the same as yours and mine.


Marcus writes, "At TED, and in companion op-ed that I co-wrote in the Economist, I urged for the formation of an International Agency for AI:"

Could you explain why you think we can create effective global governance of AI when we can't do that for nuclear weapons, a more urgent threat which is far easier to understand?

Marcus writes, "I was saying we need to slow down, and to focus on the kind of research that the pause letter emphasized, viz work on making sure that AI systems would be trustworthy and reliable."

How will AI be made trustworthy when many of the people who will develop and deploy it cannot be made trustworthy?

Marcus writes, "...the immediate development of a global, neutral, non-profit International Agency for ai (iaai), with guidance and buy-in from governments, large technology companies, non-profits, academia and society at large, aimed at collaboratively finding governance and technical solutions to promote safe, secure and peaceful ai technologies"

How in the world do you propose that we get all these powerful players to agree on anything meaningful and specific?

Metz said, "The best hope is for the world's leading scientists to collaborate on ways of controlling the technology. 'I don't think they should scale this up more until they have understood whether they can control it.'"

In how many cases have the world's leading scientists agreed to limit how much science they do?

Marcus writes, "I have spent all my time since TED gathering a crew of interested collaborators, speaking to various leaders in government, business, and science, and inviting community input. Philanthropists, we need your help."

Pretty close to all such "experts" are trapped in the "more is better" relationship with knowledge, a philosophy left over from the 19th century. If you want help, here's some...

Bring these "experts" here. Watch how I'll pull the rug out from under their expert status, and watch how they vanish when confronted with inconvenient reasoning which threatens that status.

Artificial intelligence exists because human intelligence doesn't.

Hinton is starting to get this. Decades too late.
