Discussion about this post

TitaniumDragon:

As someone who uses MidJourney a lot, I'm not surprised by this.

These AIs aren't actually intelligent in any way. They are good at producing "plausible" output, but they don't understand anything they're doing, which becomes obvious the more you dig into them.

The art AIs produce really nice art, but their mistakes are less obvious because art is more open to interpretation to begin with. Extra fingers, extra limbs, and the like are the "obvious" errors; the subtler (and larger) failure is the model's inability to draw exactly what you want. You quickly discover that it isn't actually smart enough to intelligently interpret writing: it can produce a beautiful image, but it is hard to get it to produce something specific without feeding in an existing image.

The thing is, the Clever Hans effect makes people accept "close enough" content as correct, until they try to directly wrangle the thing and discover that it doesn't actually know what they're telling it to do; it is just making a "plausible" image that the words in the prompt might describe. Once you get too specific, it becomes clear it was faking it all along.

Sean McGregor:

Hi Gary and Ernest!

I lead the AI Incident Database [0], and we are preparing to roll out a new feature [1] for collecting incidents that share substantially similar causative factors and harms as "variants" [2]. The feature is meant to be lightweight, so that incident data can be collected en masse without being gated by our full editing process. If/when your GPT inputs produce outputs consistent with our variant ingestion criteria, would you mind us mirroring the data?

Best,

Sean McGregor

[0] https://incidentdatabase.ai/

[1] https://github.com/responsible-ai-collaborative/aiid/pull/1467

[2] https://arxiv.org/abs/2211.10384

