35 Comments

If we were the parent of a teenager who wanted their first car, we'd ask them to prove that they were ready for such power. So, for example, if the teenager was always crashing their moped, we'd tell them to fix that first, and then maybe we'd talk about a car. Simple common sense, right?

Here's a reason-based path forward for AI: get rid of nuclear weapons, and prove beyond question that we are going to conquer climate change.

Having thus proven that we are capable of fixing our mistakes when they occur, we would have a rational basis upon which to develop new powers of substantial scale.


Gary, you asked, so … The short-term risks to democracy and the 2024 elections cannot be overstated. But if we survive that, the long-term risks are literally beyond our ability even to begin to conceptualize.

In our “post-truth” world, it has been extremely difficult to decipher what is more true, or perhaps more accurate, than what is not true or is even a lie. Up till now, with enough time and effort, those who cared could find the ‘more true’ instead of the ‘not true,’ but that was before search purposely repositioned itself to become the ultimate delivery vehicle for the chaos. In the slightly longer term, the false bravado and fake intelligence manifested by current iterations of pretend AI will create social turmoil and upheaval, as well as mental and wellbeing injury, harming individuals, families, communities, and countries in ways that go far beyond what is being discussed today.

And there is no government, or coalition of governments, other than an authoritarian one, that can develop and enforce regulations quickly enough to even attempt to stop this. Never in human history has there been universal agreement on universal values, or any form of consensus on human values (and the human values we may imagine or desire cannot be found in biased data, and all data is biased). The bigger challenge, embedding these ‘values’ into non-reasoning technologies and enforcing adherence to them within the extraordinarily short time window required, cannot be met, except again by an authoritarian regime. Values in an authoritarian regime do not come from the consensus of the people; they are dictated, designed solely to benefit the authoritarians, who in the end may not be a government at all.


Physics has Newton's 3rd law. Do social scientists have a law of unintended consequences?

Have you noticed that in the early days of the Internet, when there was no spam or clickbait, you could search for something and get a real, helpful result? But not anymore?

Is it possible that mass generation of "misinformation" (yech, that word should be banned) - will simply cause users to look elsewhere to find information they can trust?

Consider the rise of The Free Press (Bari Weiss). Or Substack?

Isn't this a reaction to the failure of mass media to do their jobs?

I guess you are correct to worry about the consequences of AI, but what about that 3rd law?

Thanks for reading my mental wanderings.


We are also mistaken if we think that "we" have the ability to stop the work. It's happening everywhere, and the pace is so explosive that if Silicon Valley washed away in a flood, progress would still be incredibly rapid.


Speaking solely as an outsider to the industry (although I try to keep up to date where I can), it's increasingly difficult to worry about either. This comes from observing an industry that has made it seemingly clear it has no desire to self-regulate or proceed with caution, and from the fact that, as the general public, there is nothing we can do to stop it. I believe young people especially might feel crippled, as they have seen a similar example unfold in front of them with climate change, while the world carries on as if it's business as usual.


Misinformation has already undermined at least US democracy. People would literally rather die than believe something tagged "liberal."

OK, well, tribalism fed by misinformation (and racism).


SHORT TERM: If AI development were to stop now, there would be problems such as the multiplication of misinformation, but the Internet already presents these same challenges, so it's sort of just more of the same. A problem, but not a crisis.

LONG TERM: Unless the threat presented by nuclear weapons is met and conquered, there's unlikely to be a long term future for AI. Well, maybe VERY long term, like centuries from now, but nothing within range of our vision.

Any discussion of long term technological trends in any field which doesn't include reference to nuclear weapons should probably be dismissed as lacking adequate insight.


"More of the same" is not "the same" when "more" means orders of magnitude more. Elsewhere I've mentioned the comparison of junk mail via the postal system and junk mail via e-mail. (The latter is essentially unusable now without effective spam filtering.)


This might be a strange question, but is it not better to have machines that lack values and morals? Teaching a machine what is good will make it capable of doing bad, intentionally.

Current systems simply fulfill the intent that we give them, which comes with obvious flaws, but at least the blame can be put on ourselves when they are misused.


Text generated with AI should be labelled as such.


Your article in _The Atlantic_ is great, and I'm very pleased that the "information spam" problem is starting to receive more attention in the mainstream. It's just too bad that, while we started looking at potential problems with AGI long before it was likely ever to be a problem (even now, it does not appear likely to be a problem soon), we're only really starting to seriously look at the "informational grey goo" now, when it appears there's a good chance we're already on the verge of it exploding.


Yep, AI is very scary and we're only at the stochastic parrot phase, the kind of AI technology that is known to everyone. What if AGI is cracked in a garage by a small anti-establishment group or some lone-wolf, Isaac Newton type with a bright idea and an axe to grind? Intelligent machines will be fearless and highly motivated to do what they are trained to do. I'd hate to be on the receiving end of their wrath.

Brave new world.

Mar 14, 2023·edited Mar 14, 2023

From the Atlantic article: "More recently, the Wharton professor Ethan Mollick was able to get the new Bing to write five detailed and utterly untrue paragraphs on dinosaurs"

I'd suggest you aren't going to be taken seriously if you exhibit seemingly no ability to grasp the desire to use LLMs to generate creative text like this. I suspect most people will consider that either a demonstration of a robotic lack of a sense of humor :-), or a sign of being so over-the-top paranoid that you aren't thinking clearly.

I'd also suggest considering how you are coming across since many will see irony in this statement: "The goal of the Russian “Firehose of Falsehood” model is to create an atmosphere of mistrust, allowing authoritarians to step in" when you come across to many as an authoritarian who wishes to step in.

I'd also suggest that you appear not to be well informed about tactics people are exploring for things like spotting botnets on social media or the web, verifying that someone is human (Sam Altman has a separate company working on that, even if many question his approach), etc. Many of the things you are concerned about regarding misinformation at scale were issues even before the current generation of LLMs improved the quality, and people (sometimes behind closed doors in labs doing proprietary research) are working on these issues. There are already troll farms of cheap human labor in poor countries generating content.

Perhaps you might consider using LLMs to get perspective on how your opponents might view what you write, if your goal is to persuade them to consider your views. Many find your arguments superficial and poorly informed about the downsides of government intervention, given you don't take those downsides seriously enough to even address them. "Naive realism" or the "trap of certainty" might be phrases for you to look up. Then again, I guess the goal might not be productive dialog among informed people, but merely trying to scare the poorly informed into handing authoritarian politicians the ability to control the development of AI.

author

not particularly finding your condescension, strawpersoning of my arguments, and misattribution of motives helpful. jsyk


Ah: so now you know how those who disagree with you feel about your strawpersoning their arguments. I was pointing out basically that issue: that you seem very uninformed about the counter-arguments to your views and are strawmanning or ignoring them rather than considering them seriously. I suspect it's partly a case of the "curse of knowledge": those on different sides take many things for granted implicitly and aren't explicitly addressing them. As to "motives": I stated that if you wish to persuade people you need to construct actual non-superficial arguments addressing the concerns of those who disagree. I cynically stated the alternative: that you are merely trying to ignore those concerns and just raise fear.


"I'd also suggest considering how you are coming across since many will see irony in this statement: "The goal of the Russian “Firehose of Falsehood” model is to create an atmosphere of mistrust, allowing authoritarians to step in" when you come across to many as an authoritarian who wishes to step in"

I think it's a bit silly to compare Gary Marcus' caution about AI research to encroaching state authoritarianism. The former is at worst regulatory overreach, while the latter suggests immediate, far-reaching, corrosive political implications. There are plenty of critiques to make of Marcus' narrative (which I agree is rather selective in its caricature of the field), but these kinds of counter-narratives do little to convince anyone.


The essence of his arguments seems to be a desire to regulate speech that just happens to be produced by AI. There is a large segment of the tech world, from the EFF to old-style free speech absolutists like the ACLU was decades ago, who see Orwellian implications in those who don't exhibit much, if any, hesitation in their push for regulations to control content. Those who assume such regulations will be purely benign and nothing to worry about should consider how they'd react if Trump, or someone else they hated, were in control of creating them.

I suspect a large fraction of those who disagree with him will have the same reaction that I do. I suspect, for instance, that prominent voices like Marc Andreessen (prominent in the tech world, even if not an AI researcher) have similar reactions to his writings: that they give the overall sense of a moral panic akin to the religious-right types who were aghast 30 years ago at the idea that the internet would spread porn.

Mar 14, 2023·edited Mar 14, 2023 · Liked by Gary Marcus

It's not about wanting to regulate speech that "just happens to be produced by AI." The issue isn't who or what produced it: the issue is the sheer volume of it, and how that changes as the cost of producing it changes. Junk mail was a problem in the 1970s, too, but then you could easily survive without an automated filter for it; today an e-mail account without good spam prevention will be near-unusable.

Long before LLMs appeared, content farms were a big enough problem that they needed to be directly addressed by search engine providers, and at not insignificant expense.[1] But generating content was still an expensive proposition: Demand Media spent $100 M on generating content in 2010,[2] and its spin-off Leaf Media spent $80 M on generating content in 2015.[3][4]

Easily available LLMs are going to reduce the cost of content generation by at least two orders of magnitude. The likely effects of that seem to me pretty predictable.

[1]: https://www.technologyreview.com/2010/07/26/26327/the-search-engine-backlash-against-content-mills/

[2]: Demand Media's 2010 financial statement can be found in its annual report filed with the Securities and Exchange Commission (SEC) on February 28, 2011. The content cost figure is on page F-6. The report is available on the SEC's website: https://www.sec.gov/Archives/edgar/data/1365935/000119312511050765/d10k.htm

[3]: Leaf Group's 2018 financial statement can be found in its annual report filed with the SEC on March 1, 2019. The content cost figure is on page 41. The report is available on the SEC's website: https://www.sec.gov/Archives/edgar/data/1481513/000148151319000013/lfgr-12312018x10k.htm

[4]: Irony alert: both of those figures came from asking ChatGPT. Both links lead to an error message saying "key not found." The figures seem reasonable, pure logic says that it costs much more to have a human write something than ChatGPT, and even an order of magnitude change in the figure wouldn't affect my argument much, so I can't be arsed to chase these down and see how accurate they are. I assign a reasonably high probability that they're not dead accurate (though not wildly inaccurate) since I have seen plenty of other examples of ChatGPT being slightly inaccurate with figures (such as summing up numbers to 10 that actually sum to 11).
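To make the "at least two orders of magnitude" claim above concrete, here is a rough back-of-envelope sketch. Every figure in it is my own illustrative assumption (a plausible freelance rate and a plausible per-token API price), not a number from the filings, the thread, or ChatGPT:

```python
import math

# Back-of-envelope sketch of the cost drop described above.
# All figures are illustrative assumptions, not data from the reports cited.
human_cost_per_article = 25.00   # assumed freelance rate for a ~500-word piece, USD
tokens_per_article = 700         # rough token count for ~500 words
llm_cost_per_1k_tokens = 0.002   # assumed per-1,000-token API price, USD

llm_cost_per_article = (tokens_per_article / 1000) * llm_cost_per_1k_tokens
ratio = human_cost_per_article / llm_cost_per_article

print(f"LLM cost per article: ${llm_cost_per_article:.4f}")
print(f"Human/LLM cost ratio: {ratio:,.0f}x (~{math.log10(ratio):.1f} orders of magnitude)")
```

Even if either assumed figure is off by a factor of ten in my disfavor, the gap stays well above two orders of magnitude.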


In today's world it's hard to imagine, but similar arguments regarding the increased volume of poor-quality information were made at the rise of the printing press, radio, TV, and the commercial internet. I sympathize with the concerns, just not the methods.

The industry has gotten rather good at dealing with spam, and at dealing with networks of bot-created information. The issue is that humans want to see productive information and not that noise, so other humans create tools to find the useful stuff and filter the rest. It doesn't matter whether the stuff that gets filtered comes from an army of cheap trolls in a poor country or is AI-generated, and it doesn't matter whether the useful stuff is human-generated or AI-generated.

It's paternalistic to try to shield adults from information they wish to see. The 1st Amendment protects the right of people to see the information they wish, despite the desire of others to silence voices we wish to hear. It doesn't matter if these paternalistic people feel they are doing it for our own good: many who wish to control others and give in to that authoritarian impulse claim as much. Many authoritarian populists gain power by persuading a segment of the public to fear something so they can gain the right to control others and squash it, whether it's immigrants, different races, or technology.

There was misinformation and problematic information created by humans before the rise of any of these technologies, and society is still working out how to deal with that, given the unfortunate reality that people don't know everything about reality and therefore disagree over what is misinformation.

People collectively realized during the Enlightenment that free speech and differing views are the way to deal with that, whether those views come from humans or AI.

Mar 14, 2023·edited Mar 14, 2023

After doing a little more investigation, I now have considerably less confidence in the figures and footnotes above. TLDR: the figures and URLs appear to have been made up by ChatGPT.

The XML document containing "key not found" I get for both the above URLs seems to be the "404 Not Found" page for the Edgar system. That system is explicitly declared to be an archive with documents going back to 2001. The URLs for documents that do exist are of similar style to the ones that ChatGPT gave me, and I've confirmed that archive.org's Wayback Machine is archiving a very substantial number of these (more than ten thousand under that URL prefix). Neither of the URLs given to me by ChatGPT is in the Wayback Machine. All this together leads me to believe that there's a good probability that the URLs given to me by ChatGPT never served valid data.
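For anyone who wants to repeat that check, here is a minimal sketch of the Wayback Machine lookup I mean, using its public availability endpoint. The choice of the `requests` library and the bare-bones error handling are my own, and an empty result is of course only weak evidence that a URL never served valid data:

```python
import requests

def wayback_snapshot(url: str):
    """Ask the Wayback Machine whether a URL has ever been archived."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url},
        timeout=30,
    )
    resp.raise_for_status()
    # "archived_snapshots" is empty if nothing was ever captured for this URL.
    return resp.json().get("archived_snapshots", {}).get("closest")

# One of the URLs ChatGPT cited in the footnotes above.
link = "https://www.sec.gov/Archives/edgar/data/1365935/000119312511050765/d10k.htm"
print(wayback_snapshot(link) or "No archived snapshot found")
```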

You can search for the Demand Media 2010 annual report and among the results you will find https://www.sec.gov/Archives/edgar/data/1365038/000104746911001615/a2202318z10-k.htm which is indeed that annual report. There is a cash flow table on page F-6, but it mentions nothing about content cost. ChatGPT said specifically, "spent approximately $104 million on content creation, which accounted for about 37% of its total operating expenses." I cannot find "104" or "37%" anywhere in the document.

How much Demand Media really spent that year on creation of what they call "content units" isn't obvious from the report, probably because they treat it as an "intangible" asset and depreciate it over five years. The amortisation values for all intangible assets were about $33 M, $32 M and $34 M for 2008, 2009 and 2010, respectively, which seems to indicate that they were spending an average of about $165 M/year on content, software and other intangible assets in that broad time period, though don't hold me to that as I'm no expert at reading annual reports.


I actually agree that Marcus does present his cautions with disproportionate alarm, but regulating spam and astroturfing is not a new form of speech regulation; it is largely uncontroversial to ban spam and astroturfing campaigns. Marcus' argument is not that this behavior is new, but rather that innovations in text generation make such behavior more fruitful for malicious actors; I don't think that argument is unreasonable.

Where I am more critical of Marcus is his lack of engagement with astroturfing itself, and what I perceive to be an exaggeration of bad outcomes. As someone who's done (and is in the midst of publishing) work on digital astroturfing, I find very little evidence to suggest drastic Russian influence on U.S. election outcomes (campaigns did happen, but appear to have been largely unsuccessful by conventional metrics). That doesn't mean Marcus is wrong about the acceleration of generative methods, but it does place the debate in a more hypothetical space. I also think the rollout of ChatGPT exceeded my personal safeguard expectations. It's a nuisance to abuse that chatbot, even if it's not perfect, and I find it to be a tremendous study tool and summarizer.


People choose to use services that filter spam. There were bad filters at first and people sometimes switched providers to get better ones. Now most are fairly useful so people don't think about it.

Competition, not regulation, led to better tools to deal with it. Again: I've commented on prior posts regarding fields of study of regulation, like public choice economics and regulatory capture theory, whose founders won Nobel prizes in economics. Merely saying "regulate it!" is superficial, naive thinking that doesn't address the potentially high risk of varied government failures. It's merely "I fear this, so I command government to do an excellent job of dealing with it for me," as if that wish were magically guaranteed to make it happen. People should be aware of how technologically clueless politicians and bureaucrats are, and should consider the potential downsides and flawed consequences.

Assume the politicians you hate most are the ones in charge of it, as they might be in the future, rather than naively assuming perfectly competent people will magically come up with better solutions than all the good minds in the private sector working to create tools that shield people in the ways they want, rather than in the ways that political authorities, or those who push for them to take control, wish. The outcome is unlikely to be what even most of its advocates expect or want.

Mar 14, 2023·edited Mar 14, 2023

You're outright wrong that the spam problem did not lead to regulation. That you are unaware of any of the 47 examples of regulation on the "Email spam legislation by country" Wikipedia page should cause you to stop short really quickly and consider how woefully uninformed you are.

And that's just the simplest, most obvious example of how uninformed you are. You also seem to be unaware of how much work in spam prevention was co-operative, not competitive effort: things like DKIM are not the result of companies competing with each other.

Mar 15, 2023·edited Mar 15, 2023

Co-operation is part of competition. The tech world is built on vast amounts of cooperation between entities voluntarily creating standards on all sorts of things. And the fact that some countries regulate something doesn't mean it's necessary or effective: merely that it's the default, simplistic approach. Do you seriously think Gmail wouldn't have provided spam filters without regulation? Did the legislation bring any productive solution to the table for spam, any technology? Was any major email provider so dense that it wouldn't have done it without politicians doing anything about it? I've been on the net for a few decades and in the commercial net world for decades, and I've seen how misguided the vast majority of attempts at regulation have been.


I think that Clarkesworld Magazine would pretty strongly disagree with you that the debate is in a "more hypothetical space."


I am not referring to the acceleration of spam as hypothetical here: that appears to be observable. However, the claim that this spam will meaningfully sway the outcomes of elections, which is a much stronger claim, is largely anticipatory. Again, that doesn't mean we shouldn't be worried, but rather that we should be critical and alert.


Just as spam filters arose for junk email, quality filters that do an initial pass can evolve for content. It's a more difficult job given the state of AI, unfortunately, so there is the risk of throwing out the baby with the bathwater. Or they may start charging a refundable submission fee to cover the cost of reviewing submissions that make it through an initial, crude AI filter, with the intent of refunding the fee to those they accept, those they at least wish to encourage to try again, or those they deem humans who made a worthy attempt. That would cut back on spamming of low-quality submissions, and they'd have an incentive not to charge too much or refund too little, since they want content.
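Purely to make that incentive structure concrete, here is a toy sketch of the refundable-fee triage. The fee amount, the verdict categories, and the handling of submissions that fail the crude filter are all hypothetical choices of mine, not anything specified above:

```python
from dataclasses import dataclass

SUBMISSION_FEE = 5.00  # hypothetical refundable deposit, USD

@dataclass
class Submission:
    author: str
    text: str
    passed_crude_ai_filter: bool  # survived the initial automated screen
    editor_verdict: str           # "accept", "encourage_retry", "worthy_human_attempt", "spam"

def refund_due(sub: Submission) -> float:
    """Return the deposit for anything except low-effort spam."""
    if not sub.passed_crude_ai_filter:
        return 0.0  # never reached a human reviewer (one possible reading; a design choice)
    if sub.editor_verdict in {"accept", "encourage_retry", "worthy_human_attempt"}:
        return SUBMISSION_FEE
    return 0.0  # deemed spam: the deposit covers the review cost

# Mass low-quality submissions lose their deposit; genuine attempts get it back.
print(refund_due(Submission("bot42", "...", True, "spam")))                   # 0.0
print(refund_due(Submission("writer", "...", True, "worthy_human_attempt")))  # 5.0
```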

People too quickly think "I don't know how to solve this problem, therefore wise government people must know better!" rather than giving time for the masses of people in the private sector to come up with solutions. Decentralized approaches put far more people to work trying to solve a problem than centralizing it in government does.


The question is moot: the short term is that people accept AI and start buying it wholesale, which leads to the long-term problems it will spawn. Whenever it starts, it's short-term, but the consequences won't stop. Ever.

The real problem, though, isn't AI, it's humans. Most humans aren't particularly bright. But as Dunning and Kruger have pointed out, that doesn't stop them from stating things authoritatively that they flat out don't understand.

Doubt that? Just look around.


You write, "Geoffrey Miller’s lately been campaigning for an outright pause on AI, both research and deployment. I have called for something less: stricter regulations governing deployment."

Who will regulate those most likely to dominate the field of AI going forward, the Chinese Communist Party?

Regulations are like the lock on your front door. The lock keeps your nosy neighbors out, but it's worthless against anyone willing to break a window.

Casting my vote with Miller.


I'm not sold that there is any particularly great risk of misinfo from AI in the near term, much less that AI or computer science is the right discipline to make that sort of call. Seems to me that stuff like spearphishing fraud is a much larger concern.


As the most powerful Allied power, the US was responsible for fighting misinformation in Germany after WWII. It did this in many ways, but the most important one was to create public broadcasting (ARD, ZDF) that was run independently from the government, with representatives from all stakeholders on the board, and mostly free of advertising. Wouldn't that be a cheap and also tried-and-tested solution to start addressing the problem of how to defend humanity against AI-generated content?
