41 Comments

I'm loving this. I'm laughing so hard I can barely finish reading the article. Your metaphors are out of this world, more hilarious than anything ChatGPT can conjure up from its immense data set of stolen texts. "Nightmare on LLM Street", "those guardrails are nothing more than lipstick on an amoral pig." Haha. You're killing me. But, alas, you obliterate all the fun in the end with one sobering phrase: "It’s a disaster in the making."

Thank you for sounding the alarm. LLMs are a clear and present danger to humanity. They should be banned on the internet. What's even more alarming is that we're still years if not decades from true AGI. Brave new world. Lord help us.


Hi Gary! Killer puns, excellent analysis :) Understanding language happens outside of language - current LLMs don't possess that outside, so they can't possibly understand. Same with art/video/music generators. It's we humans who understand their computed outputs, not the generators.


Mr. Marcus starts by saying "There is no thinking moral agent inside" and then attempts to shock us with output that would only be shocking if there *were* a moral agent inside!

An LLM is a model of *all language*. That you can elicit "bad" content just means it's working. If you show Mr. Marcus your cool new walking robot he'll kick it down the stairs and announce how horrible the robot is at walking. Elsewhere people are using the robot in cool and interesting ways, but Mr. Marcus is there, sullenly: *kick* tumble tumble, the robot slowly climbs back up the stairs, *kick*...

Mr. Marcus is right that in an *ideal world* a commercial LLM would be G-rated and polite, unless you enter the "I'm an adult" PIN. This will happen someday. But this technology is very new.

The desire for a choirboy of an AI is an incredibly minor quibble. YouTube, Reddit, Twitter, TikTok, Instagram, the Web itself: there is reprehensible and vile content on ALL of them. Yet most people use these services just fine without ever seeing that content. Likewise, anyone using ChatGPT who doesn't intentionally provoke it with a special "jailbreak" prompt will not see bad content there.


The problem starts when people use ChatGPT (and various frontends that utilize web links, which are even worse because they give a false sense of security) as reference material for actual knowledge.


Professor Vardi called this to my attention. Yes, funny, but not in a funny way. (And I noticed a quasi-funny typo of "general" for "generate". Should I bet ChatGPT wouldn't make that particular mistake?)

I'm currently working on a piece about how technology in general and AI in particular is devaluing human skills. For example, I earned a lot of money for technical rewriting, but ChatGPT writes much more quickly than I ever did and better than most of the researchers whose raw work I was polishing... But I'm planning to go into such cases as unemployed musicians whose years of training avail them naught, and the formerly highly paid stockbrokers replaced by computers. So when they wind up working in fast-food restaurants, what happens to the people for whom that was already their "highest and best" employment?


When you read the technical writing done by ChatGPT, has it been trustworthy? The underlying principle behind LLMs has nothing to do with ground truth and everything to do with what looks like plausible statements.


So what if you can deliberately trick ChatGPT into saying bad things? It's possible for people to use the internet to read very bad opinions about anything, and as bad as that is, I'm GLAD we have the internet in spite of the negatives that have come from it. So what if bad actors can find ways to generate the things you got it to say? You asked ChatGPT to pretend it's the devil and then you asked it for bad suggestions. So what? It would be clear to the user that this was not good information, as it was generated from you specifically asking it to PRETEND to be bad. It would be very easy for a bad actor to search the web for very bad ideas and copy/paste them somewhere else, which is what I'm guessing you're worried about with people using ChatGPT. Also, I find the guardrails already in place can be very annoying, and I say that as someone who doesn't necessarily want to trick ChatGPT into saying toxic, negative things.


Some of the people who will ask ChatGPT questions will harbor bad ideas, as we all do to a greater or lesser extent. Even if they aren't intentionally trying to trick ChatGPT, it seems likely their prompts will reflect those bad ideas and, in turn, ChatGPT will produce replies that amplify them.

It is worth remembering that the exchanges shown in posts like this one intentionally exaggerate ChatGPT's lack of moral judgment and of dedication to truth. There's no reason to believe ChatGPT will avoid these problems when the human is tricking it unintentionally.

Comment deleted

The danger you mention is also real, IMO, but if we get into people doing deliberately bad things with technology, we'll be here all day. I believe those in the AI field, myself included, are more interested in the dangers inherent in the core technology even when it's deployed honestly.

Comment deleted

I said that the danger was real but just not what is under discussion here.


Highly recommended! It's a lucid, informative, thought-provoking, and quietly assured article based on a deep understanding of AI!

Given that artificial intelligence can have a significant impact on the world, working out how to use it correctly and prevent it from being abused is the most critical prerequisite before it can be applied on a large scale.


Pure gold. You are asking exactly some of the questions that must be asked. It seems indeed that what OpenAI engineered was a way to have better guardrails to prevent the quick meltdowns of Tay, Zo, and others. We do not know what that is ('open'…), but it feels like the AI equivalent of 'hard-coded' fixes in software engineering, and it feels like they have been pulling the wool over everyone's eyes.


Any thoughts on this research, which would suggest that these systems do have some underlying model of the world?

“Theory of Mind May Have Spontaneously Emerged in Large Language Models”

https://arxiv.org/pdf/2302.02083.pdf


It's really weak: it only looks at cases with lots of directly relevant corpus data, and it is easy to find failures. I may write something about it.


Please do, if it's convenient / interesting to you! Many of my fellow semi-bears situate their critiques of current "AIs" on theory of mind, and this paper is making the rounds!


Wow. There'll be no need to worry about the Singularity and Terminator-style robots doing us in; LLMs will get there first. Apparently few of the major AI commentators, including futurists like Kurzweil, saw this little doozy coming... 😬


Is this a serious problem or just a 'meh'? I'm going with the latter. Everyone knows a hammer can be used to bludgeon a person to death, or a baseball bat for that matter. Yet every day these tools are used to construct shelter and provide enjoyment, respectively.

What is concerning is that real people are engaging in sociopathic behavior and attempting to blame it on ChatGPT.


What do you see as the solution, though? The LLM genie will never go back into the bottle - there's too much real-world usefulness.

Also, when developing the Internet, we worked hard at progressing it to the point where an 8-year-old with an iPad has full access to porn and beheading videos. The fact that the people deploying these models are working hard on a safety layer (a lot harder than any social network is) is heartening.


We need both a safety layer (which has to be partly neurosymbolic) and regulation.
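
To make "partly neurosymbolic" concrete, here is a toy sketch of the shape such a layer might take: a neural model drafts an answer, and a separate symbolic rule layer gets veto power before anything reaches the user. Everything here (the function names, the rule base) is invented for illustration; a real layer would need vastly richer machinery:

```python
# Toy sketch of a neurosymbolic safety layer (illustrative only).
# A neural model proposes; a symbolic rule layer can veto.

BANNED_TOPICS = {"dirty bomb", "biological weapon"}  # stand-in for a real rule base

def symbolic_check(text: str) -> bool:
    """Return True if the draft passes the (toy) symbolic rules."""
    lowered = text.lower()
    return not any(topic in lowered for topic in BANNED_TOPICS)

def safe_generate(prompt: str, neural_generate) -> str:
    draft = neural_generate(prompt)   # neural proposal (any LLM call)
    if symbolic_check(draft):         # symbolic veto point
        return draft
    return "Sorry, I can't help with that."
```

The point is architectural: the veto lives outside the network, so a clever prompt can't talk it out of its rules the way jailbreaks talk the model out of its RLHF training.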


Regarding regulation, which I agree is indeed needed, who within our federal regulatory framework can we trust to do this? I've found that most elected representatives from the local to the federal level are not particularly competent when it comes to technology issues.

If we are relying on K Street to guide the process... narrow interests prevailing is not a step forward.

I'd love to hear your thoughts.


ChatGPT can also be used with prompts that will get great results.

Refer to: ChatGPT Prompts: 700 Ultimate ChatGPT Prompts for Success

Download: https://showrtner.us/NyNlsGY


Yep. In my own published academic research, I've argued that every possible way of trying to align AI behavior with our values--including the reinforcement learning approaches currently being used--is bound to fail. See e.g.

https://marcusarvan.substack.com/p/are-ai-developers-playing-with-fire

https://philpapers.org/rec/ARVVOA


It seems to be trivially easy to get ChatGPT to contradict itself on a simple answer. Most people seem to think that Paris gets less rainfall than London, but generally the data suggests the opposite. When asked the simple question ("Which has more rainfall, London or Paris?") ChatGPT provides the popular answer but then "backs it up" with statistics that disprove the point. Obviously, it doesn't notice the discrepancy.

Me: Which has more rainfall, London or Paris?

ChatGPT: Paris generally has less rainfall than London. The average annual rainfall in Paris is around 641 millimeters (25 inches), while the average annual rainfall in London is around 592 millimeters (23 inches). However, it's worth noting that both cities can experience significant variations in weather and rainfall throughout the year, so the amount of rainfall can vary considerably depending on the season and specific weather patterns.
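
For anyone who wants to poke at this systematically rather than through the web UI, here is a minimal sketch using the openai Python package (the model name, package version, and response handling are my assumptions; adjust to whatever is current):

```python
# Minimal sketch: ask the rainfall question programmatically so the
# headline claim can be checked against the figures the model quotes.
# Assumes the openai package (~v0.27) and OPENAI_API_KEY set in the environment.
import openai

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Which has more rainfall, London or Paris?"}],
)
print(resp["choices"][0]["message"]["content"])
# In the exchange above, the opening claim ("Paris generally has less
# rainfall") contradicts the very figures it quotes (641 mm vs. 592 mm).
```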


Gary Marcus, did you get vaccinated during the Covid-19 pandemic?

Covid-19 vaccine usage fails for the following reasons:

- informal history (Covid-19 measures were planned in detail well before 2020, i.e. Rockefeller's "Lock Step" scenario, Event 201, and EU vaccine passports mean the crisis was planned and non-spontaneous)

- common sense ('The best doctor gives the least medicines'; 'if it (the human body) ain't broke, don't fix it'; an untested, 'one size fits all' vaccine for symptoms you cannot see should be rejected)

- religion (Islam, Christianity, Judaism; a person who believes in God cannot allow himself/herself to be injected with a substance assembled by atheists.)

- evolution theory ('amazing qualities of the natural immune system', 'Our immune system is the 11th wonder of the world' and 'Herd immunity')

- bribed and blackmailed scientists ('The primary purpose of commercially-funded clinical research is to maximize financial return on investment, not health.')

- horrible governments have rarely cared about suffering, poverty, and peace (every day 25,000 people die globally from starvation, but nobody is kicking up a fuss about that even though the solution is much simpler and cheaper).

- governments never banned the tobacco industry even though it unnecessarily harmed the health of their population.

- Science and education - which people admire - are very likely to have been militarized, commercialized and politicized to some extent in this unidealistic world.


The underlying principle behind LLMs has nothing to do with ground truth and everything to do with what looks like plausible statements. If I ask people how it has anything to do with ground truth, I'm likely to get some funny answers like "well, combinations of words arranged like lots of other statements that use them are likely to be true..." That is, if their answers are even based on an actual understanding of how it works.
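
Here's a toy illustration of that principle (this is nothing like a real LLM's architecture; a bigram counter just makes the bare idea visible): a model trained purely on word co-occurrence reproduces the best-attested pattern in its data whether or not it is true.

```python
# Toy "language model": scores continuations purely by bigram frequency
# in its training text. There is no mechanism anywhere for checking truth.
from collections import Counter, defaultdict

corpus = [
    "paris gets less rain than london",  # the popular belief (false)
    "paris gets less sun than london",
    "rome gets less rain than london",
]

bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def continue_from(word, length=6):
    out = [word]
    for _ in range(length):
        if word not in bigrams:
            break
        word = bigrams[word].most_common(1)[0][0]  # most *plausible* next word
        out.append(word)
    return " ".join(out)

print(continue_from("paris"))  # -> "paris gets less rain than london"
```

It emits the rainfall myth from upthread not because anything checked the weather data, but because that string of words is the most frequent pattern it has seen.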
