30 Comments

If your actions have no consequences, your representations have no meaning.


I hope this sort of debacle also prompts some careful reflection on the whole idea of “guardrails” — because it’s such a terrible metaphor.

The prominence given to the word betrays technologists thinking about safety too late and too poorly.

“Guardrails” dominates every AI safety discussion. Often the word is the solitary mention of protective measures.

But think for a minute: what is a guardrail?

Real-world guardrails save drivers from catastrophic equipment failure or personal failure (like a heart attack).

They are the safety measure of last resort!

But in AI it’s all they talk about — as if it’s the only way to guard against bad AIs!

Holistic safety-in-depth tries to account for bad weather, poor roads, design errors, and murderous drivers. But with AI they seem to expect failing models to … what … just bounce around between imaginary barriers until they come to a stop?

And don’t get me started on the physics of real guardrails. They’re designed by engineers with a solid grasp of the material properties it takes to stop an out-of-control lorry. Yet AIs don’t obey the laws of physics. We have scant idea how Deep Neural Networks work, let alone how they fail.
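
For what it's worth, here is a minimal sketch of what "guardrails" usually amount to in practice, assuming the common pattern of a post-hoc output filter; the function names and blocklist below are invented for illustration and are not taken from any real system:

```python
# Hypothetical illustration: a "guardrail" as a last-resort output filter.
# Nothing here reflects a real vendor's pipeline; the point is what's missing.

BLOCKED_PHRASES = ["how to hotwire", "dangerous chemicals"]

def generate(prompt: str) -> str:
    """Stand-in for an opaque model whose internals we barely understand."""
    return f"model output for: {prompt}"

def guardrail(output: str) -> str:
    """The safety measure of last resort: inspect the finished output and
    swap in a refusal if it trips a keyword check."""
    if any(phrase in output.lower() for phrase in BLOCKED_PHRASES):
        return "Sorry, I can't help with that."
    return output

def respond(prompt: str) -> str:
    # Note what is absent upstream: no safer training objective, no input
    # checks, no monitoring, no human review. One barrier at the very end.
    return guardrail(generate(prompt))

if __name__ == "__main__":
    print(respond("tell me about motorway guardrails"))
```

Everything before that final keyword check is left to the model itself, which is exactly the defense-in-depth gap described above.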


I'm no graduate of Tumblr U, but: the AI indicating a rabbi is Native American by giving them buckskin and a war bonnet is just hilariously "problematic". The kind of image you'd expect to see as the punchline of a cringe comedy.

Great going, Google. Ordering your kinda-racist AI to be racially diverse may not have been the cure you thought it was.

EDIT: A friend pointed out that Blazing Saddles may have been to blame.
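
If the "ordering" described above took the common form of silently rewriting prompts, a minimal sketch shows where it misfires; the function names and marker list are invented for illustration and are not Google's actual pipeline:

```python
# Purely illustrative: "ordering the AI to be diverse" by rewriting prompts.
# Names and lists are invented; this is not any vendor's real code.

DIVERSITY_MODIFIERS = ["racially diverse", "of various ethnicities"]

def rewrite_prompt(user_prompt: str) -> str:
    """Blindly append diversity modifiers to every image prompt."""
    return f"{user_prompt}, {', '.join(DIVERSITY_MODIFIERS)}"

def is_historically_specific(user_prompt: str) -> bool:
    """The check the rewrite above never makes: some prompts pin down a time,
    place, or group where 'diversifying' the depiction falsifies the request."""
    markers = ["1943", "nazi", "rabbi"]
    return any(m in user_prompt.lower() for m in markers)

if __name__ == "__main__":
    prompt = "a German soldier in 1943"
    print(rewrite_prompt(prompt))            # modifier applied regardless of context
    print(is_historically_specific(prompt))  # True: exactly the case that misfires
```

The failure is not the intent to diversify but the unconditional rewrite: one rule applied to every prompt, with no notion of when it changes the meaning of what was asked for.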


gemini the cringe comedian lol


This is downright insulting. Insulting to Caucasians, as well as Native Americans and Black Americans. Those races have greatly suffered and are now being openly mocked by Google in the form of memes.

AI is the biggest troll on the planet — and should be blocked.


A tool that twists history to fit today's virtuous flickering lamp is useless. But what should we expect from a tool trained on the cesspool of the internet? If the LLM were a person, the results would be the same: GIGO amplified, and not even interesting fiction. As the internet is loaded with more LLM-generated material, it will be like the feedback in an audio system that eventually causes deafness (a toy sketch of that feedback loop follows the Churchill quote below).

Even with original material, carefully curated, it is exceedingly difficult to establish a valid context of past events.

Churchill said it really well (quote from thefp.com):

"It is not given to human beings, happily for them, for otherwise life would be intolerable, to foresee or to predict to any large extent the unfolding course of events. In one phase men seem to have been right, in another they seem to have been wrong. Then again, a few years later, when the perspective of time has lengthened, all stands in a different setting. There is a new proportion. There is another scale of values. History with its flickering lamp stumbles along the trail of the past, trying to reconstruct its scenes, to revive its echoes, and kindle with pale gleams the passion of former days."


Thanks for the Churchill quote. And yes, when you stand back AI is merely a symbolic blip of our times... there's nothing there.

Consciousness has four states, like the seasons and the Universe. Every year we humans experience four seasons. In the context of the Universe, however, we are in a seasonal winter that will last thousands and thousands of years. Look at tiny creatures in their seasons versus humans. Everything scales...


>Jamie Oliver

Note: it is John Oliver


aargh. fixed online but can’t fix the email. thanks for spotting


The thought of Jamie Oliver pontificating on your tweet made my day.


I think that was an hallucination


Boom tish! 😂


I had Midjourney create images of black female doctors treating white kids a couple of months back. Only half the doctors were white. That's a start.

But none were female.

I guess the training material had been selected to prevent being labeled racist. But not sexist.

https://rnaea.files.wordpress.com/2023/11/gctwnl_doctor_black_female_treating_male_white_kids_b9e6f736-5f44-41d1-b651-148a11abb4d2.png


Those examples from last year remind me of the silly middle school riddles like, “what was the color of Napoleon’s white horse?” 🤣. I guess on the bright side, given that those riddles were a thing and middle schoolers often got it wrong, a case could be made that ChatGPT and Gemini are at a middle school level 😂


I find the whole idea of steering a system that should be a reflection of the world as it is, warts and all, a rather unpalatable and, quite frankly, dangerous prospect.


The primary issue here is purported to be a lack of representation in outputs for groups of people and cultures that lack sufficient information and historical context on the internet.

I guess the first question we all should contemplate is: “Do these people and groups actually care?”

I’m all for people and groups that desire representation being provided with opportunities in the correct context. But we always have to take into account the actual desire and the accuracy of the context. As AI begins to integrate with more value-add segments of the economy, we as a society need to decide what we actually value. Do we value surface-level things like this, or do we value the world-changing discoveries that AI will power?

I for one believe we need to get over our petty squabbles and shift our energies to the surpluses of tomorrow.


Marcus is truly frightening. According to him, it's just a problem of technical immaturity, and given time, it will do a much better job of lying to us, putting words in our mouths, and using the ethos of the builders of the tool to propagandize their particular ideology. Goebbels would be proud.


will be interesting to see if Gemini Pro does the same


Well, ok, so it's easy to poke fun at an emerging industry in its infancy. But, you know, all of us probably made mistakes when we were 3 years old. Let's put AI failures into a larger context.

As America and Russia raced to see who would be the first to step upon the surface of the moon, they blew up quite a few rockets as they tried to figure out how to make it work. In one case, a test space capsule on the ground caught on fire and incinerated the astronauts inside.

Tens of thousands of people continue to die in automobiles every year, in spite of a century of development and very real progress in auto safety.

The tobacco companies continue to deliberately kill hundreds of thousands of Americans every year, while the rest of us yawn.

Close to nobody is interested in the thousands of massive hydrogen bombs which can at any time erase the modern world without warning in just a few minutes.

AI bugs don't begin to compare to human bugs.


I don't think it is relevant how many people die in car crashes, from tobacco smoke, etc. Some of these are difficult to prevent; think of accidents, tobacco addiction, etc.

The issue here is that a new software program is going all woke on us. I am a minority myself, and I do not appreciate AI generating black Nazi soldiers. That's woke going crazy.


These people would never have used the social media platforms and search engines they control to bias against Trump, right? I mean that's just a crazy conspiracy theory, right?


Hi Gary! What these results show over and over is this: reality doesn't result merely from statistics-driven, numerical interpolation of other realities :)
