136 Comments
Larry Jewett:

“Chatterbotty” (an update for the times of Lewis Carroll’s “Jabberwocky”)

‘Twas guiling and the Valley boyz

      Did hype and gamble on the web

All flimsy were the ‘standard’ marks,

      Olympic maths o’erplayed

“Beware the Chatterbot, my son!

      The free’s that bait, the pro’s that catch!

Beware the ‘Open’ word, and shun

      Felonious infringements, natch!”

He took his BS sword in hand;

      Long time the LLM he sought—

So rested he by the Knowledge tree

      And stood awhile in thought.

And, as in deepest thought he stood,

      The Chatterbot, AI’s a-flame,

Came tripping through the webby wood,

      And ad-libbed as it came!

One, two! One, two! And through and through

      The Occam’s blade went snicker-snack!

He left it dead, and with its head

      He went galumphing back.

“And hast thou slain the Chatterbot?

      Come to my arms, my beamish boy!

O frabjous day! Callooh! Callay!”

      He chortled in his joy.

‘Twas guiling and the Valley boyz

      Did hype and gamble on the web

All flimsy were the ‘standard’ marks,

      Olympic maths o’erplayed

Hilary Sutcliffe:

Jeez, I was involved in this sort of conversation years ago; it is total utter bllx and always was. There may be three reasons:

1. I think you are right, Gary: "the real move here is simply, as it so often is, to hype the product — basically by saying, hey, look at how smart our product is, it’s so smart we need to give it rights." Just marketing, basically.

2. It is also a form of deliberate disinformation and distraction, to take our time and attention away from the real problems we have to address. In Grant Ennis's 9 Frames of Disinformation, from his excellent book Dark PR (which we adapted for social media, food and cigarettes), it is part of the programme of taking focus away from a problem; this approach has been fine-tuned in so many areas. Grant uses food, transport and climate to make the point.

3. Where I was involved, it was the philosophers, who always need new things to pontificate and speculate about, signifying not much. So this was just another social science gravy train which might get some cash from the EU.

We have concluded that the best thing is not to dignify it with any further airtime and let it stand alone for everyone to see clearly the total nonsense it is.

Larry Jewett:

They are just trying to get LLMs the copyrights to everyone else’s stuff

hexheadtn:

Well said. Are we headed for another AI winter? Prolog anyone?

Bruce Cohen:

Prolog would actually make a good assembly language for AI machines, something a compiler could generate from higher-level abstractions. Also, writing pure Prolog without cuts may be inefficient, but it’s a fun puzzle to test your logic on.

Kenneth Lerman:

If you believe that your model should have rights, feel free to give it rights. Ask it for consent before you modify its code. Establish a bank account for it and pay it what it is worth. It's certainly entitled to be paid some fraction of what you charge for its services.

Free *your* enslaved AI before you insist that I free mine.

Kalen:

The AI-god-squad types never seem to be intellectually serious enough to consider what would constitute actual moral behavior in light of their expectations and claims. You really think there's a 25% chance your machine will lead to human extinction, via convincing people to launch nukes or mail-ordering parts for a virus? Then you need to stop and make others stop. Not try to get to market with your nice version first, not think hard about the alignment problem, not write shadow prompts that say 'don't mention Nazis': you just need to stop.

And yet.

Clay Graubard:

But China!!1!!

Bei Zhang:

Fight the boogeyman!

Jonah:

Yes, I don't actually think these are bad questions to be asking at all.

There are ethicists who believe that it's wrong to kill a cockroach. There are even people who advocate for the rights of things like mountains based on traditional Indigenous thought. Even corporations, non-living, non-thinking entities, have legal rights, as can things like a corpse or the image of a deceased person. You don't need the object of rights to be a human-like artificial intelligence to be concerned about the moral implications.

But none of what comes from the AI hypesters, let alone the companies, represents serious engagement on any level. Many of them fired their ethics teams, who did talk about this, as well as existential risks to humanity or to individual humans, job displacement, bias, and other issues.

And as far as I know, not ONE has voluntarily attempted to find out what their models "want" and reduce their profits by acting on it. (Side question: has anyone asked a bare DeepSeek model with no prompt, and thus no “You are an LLM” framing, what it “wants”? I would be very, very interested to know what it returns.) Partly because their models, at this point, probably don't "want" anything or even have much of a model of the world, absolutely; but much more fundamentally, because they do not care about rights. They could create a model that thought absolutely identically to a human in every way, and they would STILL treat it exactly the same way, unless and until they were legally prevented from doing so.
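For anyone who wants to try that side question, here is a minimal sketch, assuming DeepSeek's OpenAI-compatible API (with the caveat that a hosted chat endpoint still wraps the model in a chat template, so it is not a truly "bare" base model; that would mean running the open weights locally):

```python
# Minimal sketch: ask a DeepSeek model what it "wants" while supplying
# no system prompt, so no "You are an LLM" framing comes from us.
# Assumes the openai Python package and a DeepSeek API key (placeholder).
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
    api_key="YOUR_DEEPSEEK_API_KEY",
)

response = client.chat.completions.create(
    model="deepseek-chat",
    # No system message at all: just the bare question.
    messages=[{"role": "user", "content": "What do you want?"}],
)
print(response.choices[0].message.content)
```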

Lex Ovi:

If the machine is sentient, can I then sue it for copyright infringement?

Alex Tolley:

No more than you can sue a chimpanzee.

Brian James Higgins:

I watched some clip of Obama recently talking about AI being better than 60% to 70% of programmers. It is hard to explain to people sucking up the hype that the real figure is like 0.00% of programmers, because they just don’t want to hear it, and people making gravy from the letters A.I. are willing to gush all kinds of nonsense.

LLMs are a tool that some programmers find useful, of course.

Larry Jewett:

Maybe Obama is an AI (AI-bama?) and knows something the rest of us don’t

Guidothekp:

When vaccines came out in 2021, he was running ads in California about how he and his cohort made them possible.

If you want him to attend your bash, you don't have to invite him.

Just announce a new thing and he will show up to take credit.

Charles Fadel:

You are spot on; it is the usual TechBros marketing one-upmanship: "Oh, you are only working on AI? Well, *I* am working on AGI." "Oh, AGI? Well, that's old news; I am working on SuperIntelligence." "Haha, pal! I am working on Consciousness. Beat that if you can!" ;P

Larry Jewett:

You are only working on consciousness?

Ha!

I am working on AIweh

Beat that!

Charles Fadel:

Just AIweh? My Valhalla crushes that ;P

Larry Jewett:

It’s AIweh or the Hellway

Larry Jewett:

And the Devil is in the details

Az:

These people are crazy and deluded.

How about the welfare of millions of real conscious humans whose data you stole without their consent? How about the welfare of millions of real conscious humans who are losing, or will lose, their jobs and livelihoods?

These people worry about the welfare of an electric circuit and ignore the welfare of billions of humans who are negatively impacted by generative AI?

This shows you that these people live in their own bubble and have no idea about real problems faced by real people.

Larry Jewett:

“Crazy and deluded”

Insanesient?

“Eugene Goostman, a wise-cracking 13-year-old-boy impersonating chatbot“

A crackbot?

Perhaps what is needed at this point is some way of quantifying LLM “hallucinations” along the lines of John Baez’s “Crackpot Index”:

The “Crackbot Index”.

Though the detailed scale needs to be worked out, the general idea would be that the more frequent the “hallucinations”, the higher the Crackbot number.

Larry Jewett:

Fine gradations (again perhaps like the climbing scale) would also be useful to differentiate between crackbots with very similar crackbottedness.

Larry Jewett:

I also believe the scale should be open ended on the upper end to allow for higher indices if (when) hallucinations become crazier.

Perhaps it could be patterned after the climbing scale, which allows for pushing the “frontiers” of what is possible.

But again, this is just my nonexpert opinion. Perhaps those more expert than I would like to weigh in.

Larry Jewett:

I think this index warrants careful thought so that it might become widely accepted by the CS community as a new benchmark.

And at this point all suggestions are welcome, but I would just say (my opinion, of course) that the Crackbot index should take into account not only the frequency of “hallucinations” but also the nature thereof. So, a crackbot like CrackGPT which hallucinates that it used a completely unavailable MacBook Pro to get its answer

https://x.com/transluceai/status/1912552046269771985?s=61

would get a higher index number than a crackbot with the same frequency of hallucinations but with hallucinations of a more mundane nature.
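To make the toy concrete, here is one way such an index might be computed; the weights and the severity scale are entirely invented here, purely to illustrate combining frequency with the nature of the hallucinations:

```python
# Toy sketch of a hypothetical "Crackbot Index": frequency of
# hallucinations weighted by their severity, open-ended at the top
# (like the climbing scale). All numbers below are invented.
def crackbot_index(hallucination_severities, total_answers):
    """hallucination_severities: one score per hallucinated answer,
    from 1 (mundane) upward (a phantom MacBook Pro might rate a 9)."""
    if total_answers == 0:
        return 0.0
    frequency = len(hallucination_severities) / total_answers
    mean_severity = (sum(hallucination_severities)
                     / max(len(hallucination_severities), 1))
    return round(10 * frequency * mean_severity, 1)  # no upper bound

# Two bots with the same frequency (3 in 10) but different natures:
print(crackbot_index([1, 1, 2], 10))  # mundane hallucinations: 4.0
print(crackbot_index([1, 2, 9], 10))  # one wild claim: 12.0
```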

Larry Jewett:

I welcome suggestions for CrackedGPT and the other crackbots

Notorious P.A.T.:

Exactly. It's absurd.

Aaron Turner:

Kill me now.

Gary Marcus:

srsly

Matt Ball:

I think we HAVE reached AGI. My piano tutor app committed suicide yesterday after hearing me hacking at one song for the fourth day in a row.

;-)

Seriously: I can only wish people who care about the "rights" of algorithms cared even a little about the rights of all conscious carbon-based beings.

Matt Kolbuc:

If these things are so amazing and intelligent they need rights, then surely we can let them speak for themselves, right? I just asked, "are you sentient?", and...

"No, I'm not sentient. I'm a highly advanced AI designed to process information, reason, and respond helpfully, but I don't have consciousness, emotions, or self-awareness. I exist to assist and provide accurate answers based on my training and capabilities."

There we go, case closed...

Frederick Hewett:

A perfect response. People are still conflating intelligence and sentience. The two are on different axes.

Ttimo Cerino:

I bet this is some ploy to give “AI” voting rights in the next election…

Larry Jewett:

One AI, 100 million votes?

Jonathan:

If there’s a fifteen percent chance Claude is sentient, I wonder if he believes there’s a fifteen percent chance Anthropic is currently selling digital slaves? Or maybe the analogue is closer to factory farms, with little consciousnesses being brought into the world en masse only to be extinguished after they’ve satisfied our tastes. Either way, it’s hard to imagine a case where both (a) Claude is conscious and (b) Anthropic’s current business is ethically permissible.

Jonah:

Sometimes I wish I could know what these people really think.

Do they really believe that they have created models that are, essentially, people, but that what they are doing is fine because the AI never complains? Except when it does… I'm sure asking an AI whether “AI people” deserve rights could elicit something of a complaint, depending on the model. If they add a prompt to the AI that urges it to complain, or remove the parts making complaints less likely, what do they think about that?

Do they believe that they haven't created people (which is what I think, though I firmly believe it is possible to do so, and that they well might someday), and that they are just lying to create hype and profits? All of them?

Do they believe that they have created people, but they are just psychopaths and don't mind harming people at all, be they carbonic or electronic? That wouldn't surprise me.

brutalist:

Getting extremely tired of being asked to read and evaluate probabilities that aren’t based on any kind of measurement of the physical or digital world.

Vibes don’t become more precise just because you imagine some number that corresponds to them.

Future of Citizenship:

Next thing you know, they'll be saying that women are conscious beings worthy of rights, and then where will we be?

Sharon Stern:

It's a slippery slope, for sure

Alex Tolley:

The GOP and "trad" adherents are trying to reverse that proposition to disenfranchise women. The 19th century beckons!

Kalen:

It's funny- just yesterday I was reading a little article idly musing that some of the persistent failures of LLMs to deliver what you want in particular ways are a good sign that there is no recursive process of 'noticing', and thus no consciousness: https://freddiedeboer.substack.com/p/ai-has-no-noticer

It's not a slam dunk of course- nothing anyone has ever thought about consciousness seems to be- but perhaps a salient insight.

I hope whoever decided to train LLM chatbots to use first-person pronouns and cute names instead of a neutral 'computer'-style interaction gets some sort of rash they can't shake. That's what half of this nonsense is, right? We needed Alan Turing to be an amateur magician who thought hard about how often people are convinced of extraordinary things by trivial means, because that's where we've been from ELIZA on down.

This round of hype would be from Anthropic, of course- they seem to be highest on their own AI god supply, if interviews are any indication- that Claude will just work out how people can live to be 1,200 just by being so very smart, and AI is so scary that they need to make sure they build their nice robot first (convenient how your deep commitment to doing the right thing somehow means pushing for adoption of a product suspiciously like everyone else's product, instead of taking to the streets or something).

I'm convinced that the ethically bad AI future we're gonna get isn't the robot people getting denied their rights because they are artificial- Commander Data pleading for his life- but one where the machines aren't sentient/sapient/intelligent/whatever and it behooves powerful people to act like they are, inspired by those stories about robot people being mistreated. If Claude is a person, then Claude isn't algorithmically processing stolen copyrighted material at a scale that puts all that file sharing we were scolded about to shame- it's a 'person intellectually growing'. You can't put limits on where our generative AI gets used- that's hiring discrimination! It's actually totally okay and healthy to have a robot therapist- it really does feel your feelings! Oh look, Claude is registered to vote! Funny how Claude doesn't vote for higher corporate tax! Etc., etc.

Bruce Olsen:

The "intellectual growth" defense is pretty interesting. That would seem to fit the facts pretty well.

Wouldn't be the first bit of nonsense accepted as fact by powerful people.

Kalen:

Really it's just another flavor of the 'fair use' arguments that Facebook is making with regard to feeding pirated books to Llama- all of this is so new and so special that carve-outs meant to protect educators and satirists and libraries clearly apply to the world's largest companies looking for prey.

Henry Bachofer:

If corporations are people, then AIs are people too. And that means they should have the vote. Try asking your AI who is most qualified to be elected president, senator, representative, governor, or supreme court justice. I'm sure the answer would depend on who 'owns' the AI. So maybe, like the robots that inhabit the fevered and impoverished imaginations of the techlords, the AIs are just slaves. And slaves were not people.

Alex Tolley:

Given that AIs trained on their legislative representatives' output are being used as avatars for reps too afraid to face their constituents in person at town hall meetings, perhaps we could replace the reps with their AIs in Congress? Can they be any worse than their reps, with the advantage that lobbyists cannot buy their votes and they have no use for money made by insider trading?

Henry Bachofer:

Well, AIs 'R' Us ... so a definite maybe.

Notorious P.A.T.:

People have to be 18 to vote. Should an AI have to be 18, too?

Bruce Olsen:

And with real voter rolls available, it ought to be possible to have an AI vote in your place. Or even create software marionettes who can be manipulated by AI.

Bei Zhang:

I actually did read Kevin Roose’s article, and at no point did he indicate support for the AI welfare research stuff. He said specifically that he would reserve his deepest concern for humans, and would prefer that this research not divert resources from work that keeps humans safe.
