56 Comments
Mar 29 · Liked by Gary Marcus

It is indeed very worrisome.

"I am increasingly worried that the big tech companies are—in conjunction with a Congress that has not yet passed any substantive AI legislation—successfully architecting a grossly asymmetric world, in which the profits from the positive side flow to them, and the costs of the negative side are borne by everyone else." is very much to the point, but should we have expected otherwise? After all, even a founder of modern capitalism like Adam Smith already warned against listening to entrepreneurs when they argue political issues as they have only one motive: their own profit (definitely not the good of society) regardless how they cloak that message. Given their power in the information space, getting the genie back in the bottle seems an impossible task.

Mar 29 · Liked by Gary Marcus

Thanks for the article, very interesting and timely for the charity I work with. We are very concerned about how these applications could be used.

Mar 29 · Liked by Gary Marcus

This is very true. At first, I was a bit elated and keen to see how Big Tech would wiggle its way around this one. It's no secret that Big Tech has over the years effectively kept IP at the core of its business strategy, especially when fending off competition and innovation. So, naturally, I was curious to see how it would justify mindlessly infringing on humanity's labor and creativity without paying a dime.

I could not be more wrong. As you have pointed out Gary, the legislative picture slowly emerging is somewhat sinister. I can clearly see it going the way of separating the maker of the tool, the "tool", and the user of the tool. All copyright risk will fall on the user. And if there's more than a gazillion users, one can only assume that all policing efforts will be pointless.

I sure hope I'm just exaggerating the situation.

Mar 29 · Liked by Gary Marcus

Trust is one of the foundations of a functioning society. We had a trust problem before OpenAI, and genAI is scaling the problem up, alongside social media, with no sign of anyone hitting the brakes.

My kid’s school district has a chatbot, and I don’t see why I’d use it, given that I can’t trust the answers. Meanwhile, online registration has been down for months. I emailed the school secretary, a human who could accurately and efficiently tell me what to bring to the office.

Mar 29 · edited Mar 29 · Liked by Gary Marcus

The path to truly benevolent AI/AGI boils down to one thing -- ALIGNMENT -- but in two flavours: (a) technical alignment (aligning AI with humans), and (b) societal alignment (aligning humans with humans). Of the two, societal alignment is by far the harder problem, because it requires the many tribes (~200 sovereign, ~300 million corporate, etc.) into which humanity is fractured to abandon their own short-term self-interest in favour of the long-term best interest of the human species.

Mar 30 · Liked by Gary Marcus

I was recently at a panel where the voice actor Christopher Sabat told an amusing story about how he discovered people were using AI to clone his voice online. He quickly used ChatGPT to generate a cease-and-desist letter and sent it to them. It worked, and they were shut down. So there is some legal recourse for AI fakery, although it might be difficult in some instances.


Bothersome, very bothersome. I just did a post with this title: What are the chances that the current boom in AI will “stupidify” us back to the Stone Ages?, https://new-savanna.blogspot.com/2024/03/what-are-chances-that-current-boom-in.html

Instead of an intelligence explosion through recursive self-improvement we may be facing an intelligence implosion through recursive enshittification.


Why do you say there's no law that can address the digital clone/deepfake problem? The Lanham Act protects against false advertising, false endorsement, and unauthorized use of even unregistered trademarks, which some courts have held to include personal attributes such as image, likeness, voice, etc. And state laws protect against these wrongs, as well as defamation, false light, invasion of privacy, etc. Is the problem enforcement -- that is, we don't know who the scammers are? That will be a challenge no matter what law exists. I certainly do agree we need better legislation, better legal tools at the federal and state levels. But I don't think it's correct to say "no law" protects against this problem.


Just like ordinary crime. The criminals can always react much faster than the governments and lawmakers.

In this case, Big Tech can and does run rings around all the governments of the world, and lobbies against any form of regulation that might conceivably limit its profit-taking.


Sorry Gary, but Anna Makanju is wrong. It would be more correct to say that the race is on to find sufficiently many lies to convince us that AI has any positive applications at all so that we can distract lawmakers and the public sufficiently to strengthen the tech-hegemony and thereby gain further control of all of society.

There is simply no world where AI will improve society in any way. The belief in such a fairy-tale only exists because we have been brainwashed to such an extent that we still believe the lies of big-tech.

It's a shame the race isn't on to find new ways to dismantle and destroy AI, rather than hope beyond all logic that anything good can come out of AI.


And it will get much, much worse as more and more people gain access to open-source generative AI and can do whatever they want with these unreliable pieces of software, turning them into very dangerous weapons.


The underlying issue which requires more examination is that an accelerating knowledge explosion is producing new challenges faster than we can figure out how to meet them. Once that is understood, then focusing on particular problems with particular technologies begins to seem like an unhelpful distraction.

Imagine that you're working at an Amazon warehouse processing packages as they roll off the end of an assembly line. The packages keep getting bigger and bigger, and coming at you faster and faster. At first you can keep up by working harder and smarter. But if the assembly line continues to accelerate, sooner or later you will be overwhelmed no matter what you do. The only real solution to that situation is to stop focusing on the packages (emerging technologies) and start focusing on the assembly line (knowledge explosion).

Trying to meet these challenges by focusing on particular technologies one by one by one is a loser's game. By the time we make AI safe, five new challenges will have emerged.


Maybe someone will write some code with the capability of thoroughly confusing data mining and Internet search/consumer profiles of individuals, jumbling the data trove to the extent of meaninglessness and disrupting the predictions.

Or perhaps AI could clone my image and data and create all sorts of alternative profiles for me. For example, I wonder how the Siren Servers are interpreting the latest iteration of Michell Janse's e-dentity, which is of course in turn linked to those of her household, family, and friends.

This is only the beginning! We've barely left square one!

Mar 31 · edited Mar 31

Re: software library security, this has been a problem before ChatGPT exploded on the scene, and there has been vigorous discussion as to how to fix the problem.

It is unfortunate that the people who could solve the problem are now frantically trying to differentiate themselves in interviews, having been retrenched en masse. Competent technical leads and developers have had to suffer tremendous mental strain as every non-technical person chanted the "software development is dead" falsehood. And outsourcing is all the rage nowadays.

A powerful but underappreciated solution is to have these competent systems engineers serve as board members. Unfortunately, it is an unpopular idea, because boards are often hostile to unorthodox (but incredibly lifesaving) ideas like these (the pushback is astounding).

Meanwhile, the city burns as the emperor enjoys his grapes on the royal hilltop recliner.


« Private profits, public losses ». That's why, since the toothpaste is out of the tube, the AI behemoths are now begging governments to regulate them.


I think it, and Gary writes it with better insider knowledge than I could (which is why I'm cross-posting this for our readers tomorrow). The "attack surface" and points of failure for generative AI are huge, the methods for fixing them are uncertain at best, and as more organizations run into big obstacles trying to implement it, the shine is coming off generative AI, at least as models are currently grown. Generative AI will not only hallucinate, it is also "exploitable by default." Not my phrase, but the words of safety researcher Adam Gleave, who has been testing GPT and other models for more than a year. https://podbay.fm/p/the-cognitive-revolution/e/1711574700 What's the use case for putting "exploitable by default" tech in your business?
