56 Comments

It is indeed very worrisome.

"I am increasingly worried that the big tech companies are—in conjunction with a Congress that has not yet passed any substantive AI legislation—successfully architecting a grossly asymmetric world, in which the profits from the positive side flow to them, and the costs of the negative side are borne by everyone else." is very much to the point, but should we have expected otherwise? After all, even a founder of modern capitalism like Adam Smith already warned against listening to entrepreneurs when they argue political issues as they have only one motive: their own profit (definitely not the good of society) regardless how they cloak that message. Given their power in the information space, getting the genie back in the bottle seems an impossible task.

author

The point of my next book is that the People need to step up, or all is lost.


Which probably is THE point to make at this time.

The question is: how are people going to become convinced that they have to step up, and in which direction? Many are already convinced they have to 'step up' (core Trump supporters, for instance), but they have been convinced that more, not less, corporate freedom is for the good of all. So they're stepping up, true, but in what direction are they pushing?

The alternative is that we end up with all kinds of horrible results (societal, ecological) and that only after the catastrophe (worse than WW2, definitely) might we come to our senses (insofar as we have them, I have to add). That happened after WW2, when people decided that unfettered lies, unfettered power for 'big money', too much inequality, etc. were 'bad' (as these were at the core of what eventually led to WW2).

The trajectory we're on doesn't look good, and it is questionable whether humans have enough intelligence (other than the 'malleable instinct' that makes up most of it) to change course.


How should people act? Should they start chasing hackers? Insult their phones and the powerful entities at home (on the assumption that their metadata is being listened to or watched), in an attempt to disrupt the surveillance and Big Tech data flow, or to offend developers into programming better recommendation systems? Stop using technology? Protest more? Poison AI systems with stupid information (people are already doing this, unironically)? All attempts at advocacy seem a bit reckless or futile, and can potentially backfire (hypertargeting, career loss, being "canceled", etc.). It's hard to make real change in the real world, especially when you're dealing with big government and Big Tech.

Mar 29 · Liked by Gary Marcus

Thanks for the article; very interesting and timely for the charity I work with. We are very concerned about how these applications could be used.

Mar 29 · Liked by Gary Marcus

This is very true. At first, I was a bit elated and keen to see how Big Tech would wiggle its way around this one. It's no secret that Big Tech has over the years effectively kept IP at the core of its business strategy, especially when fending off competition and innovation. So, naturally, I was curious to see how it would justify mindlessly infringing on humanity's labor and creativity without paying a dime.

I could not have been more wrong. As you have pointed out, Gary, the legislative picture slowly emerging is somewhat sinister. I can clearly see it going the way of separating the maker of the tool, the "tool", and the user of the tool. All copyright risk will fall on the user. And if there are more than a gazillion users, one can only assume that all policing efforts will be pointless.

I sure hope I'm just exaggerating the situation.

Mar 29 · Liked by Gary Marcus

Trust is one of the foundations of a functioning society. We had a trust problem before OpenAI, and genAI is scaling the problem up, alongside social media, with no sign of anyone hitting the brakes.

My kid’s school district has a chatbot, and I don’t see why I’d use it, given that I can’t trust the answers. Meanwhile, online registration has been down for months. I emailed the school secretary, a human who could accurately and efficiently tell me what to bring to the office.

Mar 29 · edited Mar 29 · Liked by Gary Marcus

The path to truly benevolent AI/AGI boils down to one thing -- ALIGNMENT -- but in two flavours: (a) technical alignment (aligning AI with humans), and (b) societal alignment (aligning humans with humans). Of the two, societal alignment is by far the harder problem, because it requires the many tribes (~200 sovereign, ~300 million corporate, etc.) into which humanity is fractured to abandon their own short-term self-interest in favour of the long-term best interests of the human species.


I was recently at a panel where the voice actor Christopher Sabat told an amusing story about how he discovered people were using AI to clone his voice online. He quickly used ChatGPT to generate a cease-and-desist letter and sent it to them. It worked, and they were shut down. So there is some legal recourse for AI fakery, although it might be difficult in some instances.


Bothersome, very bothersome. I just did a post with this title: "What are the chances that the current boom in AI will 'stupidify' us back to the Stone Ages?" https://new-savanna.blogspot.com/2024/03/what-are-chances-that-current-boom-in.html

Instead of an intelligence explosion through recursive self-improvement we may be facing an intelligence implosion through recursive enshittification.


The intelligence implosion feels like it's already here; we've achieved artificial stupid intelligence. The future of ASI is now.

Mar 29 · edited Mar 29

Artificial Stupidity -- I'm afraid it's a very painful phase we're going to have to go through.


There will be neither recursive enshittification nor an intelligence explosion. We will gradually adapt to the new tools, as we adapted to everybody having a voice on the internet.

The real problems will show up when we have sentient agents in the service of governments with an agenda.


I think your worry is understandable, but misguided; sentience is far away - but the dangers of *seemingly* sentient agents are not.


Why do you say there's no law that can address the digital clone/deepfake problem? The Lanham Act protects against false advertising, false endorsement, and unauthorized use of even unregistered trademarks, which some courts have held to include personal attributes such as image, likeness, voice, etc. And state laws protect against these wrongs, as well as defamation, false light, invasion of privacy, etc. Is the problem enforcement -- that is, that we don't know who the scammers are? That will be a challenge no matter what law exists. I certainly do agree we need better legislation, better legal tools at the federal and state levels. But I don't think it's correct to say "no law" protects against this problem.


Just like ordinary crime: the criminals can always react much faster than governments and lawmakers.

In this case, Big Tech can and does run rings around all the governments of the world, and lobbies against any form of regulation that might conceivably limit its profit-taking.


Sorry Gary, but Anna Makanju is wrong. It would be more correct to say that the race is on to find sufficiently many lies to convince us that AI has any positive applications at all, so that we can distract lawmakers and the public long enough to strengthen the tech hegemony and thereby gain further control of all of society.

There is simply no world where AI will improve society in any way. The belief in such a fairy-tale only exists because we have been brainwashed to such an extent that we still believe the lies of big-tech.

It's a shame the race isn't on to find new ways to dismantle and destroy AI, rather than hope beyond all logic that anything good can come out of AI.

Apr 1 · edited Apr 1

I respect your opinion… but my family has been Blessed with shopping recommendations, turn-by-turn navigation, Bayesian spam filtering, camera image/video stabilisation, credit card fraud detection, and funny facial distortion filters on social media apps, all real-world AI use cases.

Thank God for these big-data-powered luxuries, which previous generations only fantasized about.


Of course, you are listing the benefits for yourself in the short term, which points to one of the key drivers of technology: people support technological growth when they are safe from its negative effects and will probably remain safe in the future. Ask someone whose job is being replaced, and the answer will be different.

By the way, we wouldn't need credit card fraud detection if the world weren't so entrenched in technology in the first place, and shopping recommendations... how consumerist. Bayesian spam filtering isn't even AI, and funny facial distortion? Continued technological development that requires the plundering of Earth's resources is not an acceptable trade just so you can have silly facial distortion filters.

I wouldn't consider any of these things luxuries at all. They are a pathway to us becoming mere bags of blood, consuming media for no reason other than to further an unsustainable society where ALL we do is consume without giving back to the planet in a symbiotic way. It's frankly disgusting.

Apr 2 · edited Apr 2

FYI, naive Bayesian spam filtering requires neither GPUs nor neural networks (so no obscene power requirements) to function effectively.

And since we are on the topic: I am figuring out how to port my TensorFlow 1.x distribution-based NB prediction classifier to the latest dependencies; if anyone has tips, I'd appreciate them 😊
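
To make that concrete, here is a minimal sketch of such a filter in plain Python (toy corpus, toy tokenizer, everything invented for illustration), with not a GPU in sight:

```python
# Minimal naive Bayes spam filter sketch: word counts, Laplace smoothing,
# and a comparison of log-space scores. Toy data, purely illustrative.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(labeled_messages):
    """labeled_messages: iterable of (text, is_spam) pairs."""
    counts = {True: Counter(), False: Counter()}
    priors = Counter()
    for text, spam in labeled_messages:
        priors[spam] += 1
        counts[spam].update(tokenize(text))
    return counts, priors

def is_spam(text, counts, priors, alpha=1.0):
    vocab = set(counts[True]) | set(counts[False])
    scores = {}
    for label in (True, False):
        total = sum(counts[label].values())
        # log prior plus smoothed log likelihood of each word
        score = math.log(priors[label] / sum(priors.values()))
        for word in tokenize(text):
            score += math.log((counts[label][word] + alpha) /
                              (total + alpha * len(vocab)))
        scores[label] = score
    return scores[True] > scores[False]

corpus = [("win free money now", True),
          ("lunch at noon tomorrow", False),
          ("free prize claim now", True),
          ("project meeting notes attached", False)]
counts, priors = train(corpus)
print(is_spam("claim your free money", counts, priors))  # True
```

(And on the TF 1.x port: TF 2.x still ships the `tf.compat.v1` shim, so `import tensorflow.compat.v1 as tf` followed by `tf.disable_v2_behavior()` is often the quickest first step before a full rewrite.)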


Dr Polak, I am with you on your lament about unchecked consumerism and the plunging of the planet into further environmental chaos, thank you for the reminder.

Prof Marcus and many concerned experts on this substack continue to tirelessly alert a rather uninterested corporate world to the serious environmental challenges that power-hungry data centres pose. LLMs, in particular, seem to be guilty of consuming unbelievable amounts of resources for a questionable ROI. But it's easy to drown in hype and lose sight of this (the April Fools post even caught some avid readers off guard).

While the AI genie left the lamp a long time ago, it can be used for good. And many effective non-GPU ML techniques require modest amounts of compute and energy. We need determination and creativity to harness the strengths of tech and steer efforts to keep environmental destruction at bay.

FWIW in my other AI biz I am evaluating the financial and operational risk of carbon sequestration efforts. It’s a long and difficult road - regulatory requirements mean that I will still need to provide services even after I’ve passed away. I can be accused of many things but not trying ain’t one of ‘em 😊


Well Simon, I certainly can't fault you for standing your ground. You make some reasoned arguments and I respect your consideration of the environment, even if we still do not agree on the overall benefits of AI. But as you said, and as Heidegger pointed out, perhaps the path to true enlightenment is contained within the danger.


Also, I find it mildly insulting and simultaneously amusing that I'm painted as someone unfamiliar with job loss due to technological disruption.

I’m gonna stop typing so I can go to the corner and have a good laugh! 😆


I don't think you are ignorant at all; I am sure you are familiar with it. I only emphasized the other point of view because you placed the technologies you mentioned in an exclusively positive light.

Apr 1 · edited Apr 1

Thank you, I appreciate your reply.

FWIW I think it is unhelpful to be painting with such broad brush strokes. I am learning that absolutist “sweeps” can come back to haunt me.


That is rather ironic, given that technophiles and technologists paint science and technological advancement in a universally positive light, often explicitly but always implicitly. Technology, even though it changes society in mixed ways (and often, where it improves things, it does so by moving through a sequence of gradually diminishing local maxima), is hailed almost exclusively as an improvement in modern media.

My brush strokes only appear broad because they go so much against the prevailing indoctrination of modern society, with its endless unsustainable progress, and because I start from radically different axioms than the typical ones of consumerism. But before you accuse me of broad brush strokes, which may in some sense be necessary because of what dialectical discussion requires to disabuse ourselves of the pathological behaviours of consumerism, perhaps you should accuse technologists of painting technology with the broad brush stroke of universal good.

Personally, I am happy to take an absolutist stand against AI, just like I believe we should take an absolutist stand against genocide.

Apr 1 · Liked by Gary Marcus

Ah, about that technologist “positive light” claim… hehe, this is _the_ place for debunking LLM myths.

(This substack is almost like a digital version of that pub from “Cheers”… and Kirstie Alley, you are sorely missed 🌷)


And it will get much, much worse as more and more people gain access to open-source generative AI and can do whatever they want with these unreliable pieces of software, turning them into very dangerous weapons.

author

I fear you are correct and hope you are wrong.


So do I, but I don't have much hope, sadly.


Making fakes is an ancient practice. When AI becomes more powerful, putting its code out there will likely not be good. But we are way too early for that.


We don't have to wait; it is already happening via the various open-source repositories and all the "fine-tuning" being done via apps, etc.


For now, the problems are very manageable.


How? I see no evidence for that statement.


The underlying issue that requires more examination is that an accelerating knowledge explosion is producing new challenges faster than we can figure out how to meet them. Once that is understood, focusing on particular problems with particular technologies begins to seem like an unhelpful distraction.

Imagine that you're working at an Amazon warehouse processing packages as they roll off the end of an assembly line. The packages keep getting bigger and bigger, and coming at you faster and faster. At first you can keep up by working harder and smarter. But if the assembly line continues to accelerate, sooner or later you will be overwhelmed no matter what you do. The only real solution to that situation is to stop focusing on the packages (emerging technologies) and start focusing on the assembly line (knowledge explosion).

Trying to meet these challenges by focusing on particular technologies one by one by one is a loser's game. By the time we make AI safe, five new challenges will have emerged.

author

not sure about the solution, but the bigger and bigger packages make a great image


Maybe someone will write code capable of thoroughly confusing the data mining behind Internet search and consumer profiles, jumbling the data trove into meaninglessness and disrupting the predictions.

Or perhaps AI could clone my image and data and create all sorts of alternative profiles for me. For example, I wonder how the Siren Servers are interpreting the latest iteration of Michell Janse's e-dentity, which is of course in turn linked to those of her household, family, and friends.
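
Half-jokingly, a toy sketch of that first idea (in the spirit of the TrackMeNot browser extension, which floods trackers with decoy searches) could look like this; the topics and the endpoint here are made up for illustration:

```python
# Toy sketch of profile-jumbling: emit randomized decoy searches so that a
# tracker's behavioural profile fills up with noise. The topic list, the
# timing, and the endpoint are all illustrative, not a real service.
import random
import time
import urllib.parse

DECOY_TOPICS = ["tuba repair", "alpaca farming", "medieval falconry",
                "antique barometer collecting", "competitive yodeling"]

def decoy_url():
    # Pick a random topic and build the query URL a real client would fetch.
    topic = random.choice(DECOY_TOPICS)
    return "https://search.example/?q=" + urllib.parse.quote(topic)

for _ in range(3):
    print(decoy_url())  # a real tool would actually request this URL
    time.sleep(random.uniform(1, 5))  # jittered timing is harder to filter out
```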

This is only the beginning! We've barely left square one!

Mar 31 · edited Mar 31

Re: software library security, this was a problem before ChatGPT exploded onto the scene, and there has been vigorous discussion about how to fix it.

It is unfortunate that the people who could solve it are now frantically trying to differentiate themselves in interviews, having been retrenched en masse. Competent technical leads and developers have had to suffer tremendous mental strain as every non-technical person chanted the "software development is dead" falsehood. And outsourcing is all the rage nowadays.

A powerful but underappreciated solution is to have these competent systems engineers serve as board members. Unfortunately, it is an unpopular idea, because boards are often hostile to unorthodox (but incredibly lifesaving) ideas like these (the pushback is astounding).

Meanwhile, the city burns as the emperor enjoys his grapes on the royal hilltop recliner.


« Private profits, public losses. » That's why, now that the toothpaste is out of the tube, AI behemoths are begging governments to regulate them.


I think it, and Gary writes it with better insider knowledge than I could (which is why I'm cross-posting this for our readers tomorrow). The "attack surface" and points of failure for generative AI are huge, the methods for fixing them are uncertain at best, and as more organizations run into big obstacles trying to implement it, the shine is coming off generative AI, at least as models are currently grown. Generative AI will not only hallucinate; it is also "exploitable by default." Not my phrase, but the words of safety researcher Adam Gleave, who has been testing GPT and other models for more than a year: https://podbay.fm/p/the-cognitive-revolution/e/1711574700 What's the use case for putting "exploitable by default" tech in your business?
