52 Comments

The real problem here is your normative notion of what an ant body is *supposed* to look like. Sora chose to show us a day in the life of an ant who is differently-abled, which I think is commendable. #RepresentationMatters


Definitely an homage to the differently abled. Notice the two-headed, bidirectional ant at 0:06.


#angryupvote


It appears Sora is a woke generative AI that overrepresents creatures that are differently abled. lol

Comment deleted (Feb 18)

Just to be clear, I was kidding.


Have you considered writing satire? I think you might have a career there.


Is it too much to hope that the proliferation of this kind of trash will result in improved critical thinking? Perhaps a willingness to subscribe to material with verified information?

FWIW, in 2024 I spend more time reading sources I pay for.


"Is it too much to hope that the proliferation of this kind of trash will result in improved critical thinking?" Thanks to Gary's substack (and others) at least we have places where we can meet and discuss. Maybe we can take this as a starting point and also find other ways to organize?


Given that social media misinformation has my pediatric dentist asking whether it's okay to give my kids fluoride, which she highly recommends (parents now think this element on the periodic table is unnatural and therefore dangerous; logical fallacies, oh my), my optimism is low.


I'm not sure. People are incredibly easily fooled. I'm still waiting for someone else to spontaneously notice that the bow and stern of one of the ships in the "pirate ships in a coffee cup" video silently swapped places. But despite being in multiple forums where people were discussing this very example, all I saw from industry people was hype about a "world simulator" or "physics engine", and all I saw from normies was slack-jawed ogling.

Also I think it is a bit naive to speak of "sources with verified information". Who does the verifying? Every publisher has an angle and a bias, and all of them are populated by the same primates who are fooled by these videos.


"Every publisher" may "have a bias" (I don't know.) But whatever the case, traditionally, the best of them have sufficient respect for time-bound knowledge that they don't just hoke up and confabulate whatever they imagine might suit their prejudices, or some agenda. Hard copy print media are difficult to counterfeit with no accountability at all, the way it's possible for unscrupulous Users to conjure up AI text, images, audio, video, etc. on a Screen nowadays, simply by making a general request of an AI algorithm.

One thing about the exclusively hard-copy text medium era- archived print journals in library Reference sections, with the Search function carried out by bound volume compendiums like those of the Reader's Guide To Periodical Literature: the Provenance was there. Articles typically had to go through a lengthy process of negotiating and editing in order to be published.

That's all still around, in bound volume form. It's merely more Inconvenient and time-consuming to be required to visit a library's reference section and use it than it is to do a keyword search.

That's much of the advantage of a computer, for many of us: it streamlines research capabilities with extraordinary quickness, compared to the Before Time. But if the online results are infected with an overly high quotient of bad data- like confabulated mockups assembled by an oblivious AI task savant- that streamlined research capability is rendered much, much less reliable. Disadvantages like that eventually begin to outweigh the time-saving convenience. We may well end up in an era where computers are viewed as quick and dirty first-draft guides to information sources whose provenance can only be reliably determined by the existence of hard-copy, bound, printed material verified by card catalog numbers, as entries in the Reader's Guide to Periodical Literature, the annual article index volumes associated with a given publication, etc.

Parallel processing means of verifying Provenance are destined to assume increased importance, for those of us who aren't inclined to be hoodwinked by Glamour and Illusion.


OK, it seems you're more focused on the peer-reviewed academic literature. I was referring more to news media. I think we agree that the quality level w.r.t. truth is on average higher in peer-reviewed academic lit than in news media. But even there we have big systemic problems, especially in certain subfields. See for instance the replication crisis in psychological research, or the proliferation of pure ideology unmoored from evidence in the "-studies" fields, as evidenced by the Sokal hoax and, more recently, the Grievance Studies hoax. Multiple instances now exist of researchers being hounded to withdraw their own papers when they reach data-driven conclusions, or even just ask questions, that may be considered at odds with the orthodoxy of critical social justice. Even in pure math there was recently a scandal of citation cartels being used to game index rankings among researchers in China and Saudi Arabia (not that this proves the untruth of the papers being cited, but it does provide evidence of powerful motives other than truth-seeking). My takeaway is that we all must remain vigilant: "peer review" should be a never-ending process, and no publication can be taken for granted as a purveyor of settled truth on the basis of prestige or reputation alone.


I agree about the replication crisis. And I realize that the days of print media hegemony were not a Golden Age; a great many dissenting viewpoints simply went unheard, including arguments later shown to have the weight of facts, logic, and integrity on their side.

The most important feature was that it was possible to confirm facts and the veracity of events and their unfolding, with some research effort. Whereas nowadays I could run an entire Fake News Bureau from my laptop, with the help of a few "bulletins" and images mocked up by ChatGPT and DALL-E at my request. I can fake not only journalistic reports, studies, and articles, but also the fake sources cited in their endnotes and footnotes, to provide a fake aura of authority.

This is what makes Provenance imperative, along with the confirmation capability conferred by Parallel Processing.

I realized back around 1999 that--between the ubiquity of computers and the personal information stored on them, increasingly worldwide Internet access, and the Hackability of standard levels of privacy protection by anyone with sufficiently elite skills, motivation, and diligence--anyone could potentially be their own Spy Bureau. For anyone of upper-middle class means or above, the capabilities were available as a line item in the household budget (particularly if they possessed the skill set themselves, or knew the right person). Then miniaturized video cameras began being made available as a consumer technology for the general public. And then smartphone networking, and then aerial drones...well, we see what's happened. It got crazy.

Now we can all run our own Disinformation Bureaus.

The counterintelligence against that is to develop an immune system against propaganda trickery, and against the default of believing what we want to hear. We also need fact-checking and report confirmation, which is where the parallel processing of hard-copy journals has a role to play. Pixels on a screen are cheap, and they possess infinite plasticity. Bound and printed journals are not so effortless to produce.

One of the most important factors in my loss of respect for the perspicacity and integrity of the New York Times was when they announced, around 10-15 years ago, that they were destroying their own hard-copy archive of their newspaper editions. Saving money on warehouse expenses was more important to the owners than preserving the time-bound information of the acclaimed Newspaper Of Record in its original fidelity.

And now, c. 2024, it's a simple matter to get ChatGPT to confabulate fake news reports from earlier years or decades under a digital counterfeit of the NYT masthead, frame the graphics as if the articles had been archived on the Wayback Machine (or some similar negligently maintained digital archive), and post the fake as a screenshot on Twitter...and it's entirely possible that eventually not even the NYT archivists will know for sure about its provenance and authenticity.

The NYT "scanned and copied" all of their daily print editions into digital form before destroying them, of course...of course! How reassuring is that? Not very. Not really.


Really interesting points. Thanks for taking the time to lay them out. I'm now appreciating the social importance of archival functions. Speaking of the NYT on this front, I found the recently documented occurrences of "stealth editing" to be really disturbing. And as you say it is hard to really publicly prove they happened without a trustworthy archive. And they're just not possible outside of an infinitely malleable digital medium.

This could take us along a whole other tangent, but: there _are_ ways to maintain an un-tamperable archive of what's been written in the past nowadays by using cryptography and distributed consensus mechanisms that are robust to adversarial attacks. That is, you can publish to such a platform but provably cannot retroactively mutate what you have published without expending enormous resources. And you can sign what you have published in a way that proves no one else could have published it. But these technologies require institutions to opt in voluntarily.
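For anyone curious what that looks like in practice, here is a minimal sketch of the hash-chaining idea in Python, using only the standard library (the entries are invented for illustration). Each entry commits to the hash of the previous one, so retroactively editing anything breaks every hash that follows. Real systems layer public-key signatures and distributed replication on top of this; the toy below omits both.

    import hashlib
    import json

    def append_entry(log, text):
        # Each entry commits to the hash of the previous entry, so
        # mutating any past entry invalidates every later hash.
        prev_hash = log[-1]["hash"] if log else "0" * 64
        body = {"prev": prev_hash, "text": text}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        log.append({**body, "hash": digest})

    def verify_chain(log):
        prev_hash = "0" * 64
        for entry in log:
            body = {"prev": entry["prev"], "text": entry["text"]}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev_hash or entry["hash"] != expected:
                return False  # tampering detected
            prev_hash = entry["hash"]
        return True

    log = []
    append_entry(log, "Edition of 2024-02-17, page A1 ...")
    append_entry(log, "Edition of 2024-02-18, page A1 ...")
    assert verify_chain(log)
    log[0]["text"] = "stealth-edited copy"  # a retroactive "stealth edit"
    assert not verify_chain(log)            # ...is immediately detectable

The design point: the publisher can always append, but can never quietly rewrite, and anyone holding a copy of the log can check it independently.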


Minor (?) detail: The prompt says "pov footage", doesn't it? Am I wrong in thinking this means "from the ant's point of view"? The video is unlikely to be from another ant's pov, the view's from too high above. Ironically, if Sora has actually generated the ant's point of view, we would not have known the ant had a couple of legs missing.


Smart comment? More like from the flashlight’s pov


AI is like sugar.

Humans craving sugar was good as long as calories were important for survival. In times where we are swamped by industrial sugar, this craving is a danger to our health (and to healthcare more generally). At least, we agreed on laws that the sugar content of food and drinks must be labelled.

Humans craving attention was good as long as this craving forced us to seek attention from other humans. In times where we are swamped by AI generated content, this craving is a danger to mental health (and to the fabric of our society more generally). Shouldn’t we agree on laws that AI generated content must be labelled?


"Shouldn’t we agree on laws that AI generated content must be labelled?"

Absolutely, but who is going to enforce the rules? Who will have time to look at all the hours of footage to find cheaters? Will even a specially built AI be able to keep up with the task, and would it work well (without too many disruptive errors)? And in any case, do we all trust whoever runs the police AI?

Counterfeits exist of every product, so counterfeit "informative video" will now be made too. And eventually (maybe too soon) some hostile party with an AI and a big budget could flood the zone with many thousands of copies everywhere. What then would be the chances that anyone could locate any real footage at all (unless they knew the direct URL or such)?

Slightly scarier thought: could anyone cause this flood by accident? Or just to see if they could, only to find that they have?


"Absolutely, but who is going to enforce the rules?" I agree that laws can only be part of the solution. Social norms also need to change. We need to build a movement. People need to start defending themselves as humans against AIs. That doesnt mean that we shouldnt be using AI. Sure we should. But on our terms.


So would you please define the rules on what is allowed and what is not allowed on the internet?

Should these rules be determined by Russia, Germany, the EU, China, Canada, Saudi Arabia, the UAE, Iran, the UN, people at Harvard, or the people in the Texas government? I am confused about whose rules on what is false or fake you want to use. For example, some people might say that everything Trump says is false or fake, regardless of how he creates these messages. Should people be able to block this as well?


Isn't this more fundamentally an issue of trust? Can I trust what I see, hear, and read? Can I trust myself to discern what is real? How do I know what is true? At bottom, who is this I really? Who is this "I", this "me", that's apart from and independent of this outside world, that I can objectively judge and evaluate? How do I know what I know? What is this sense of knowing anyway? Examining the examiner is the first step; without it, you have no way to know the quality of the product. In a crucial way the observer is actually part of the observed.

This prior step is missing here and in most places, living assumed and unspoken in the background.


"Isnʼt more fundamentally this an issue of trust?" Yes, absolutely. Trust is crucial. From a software engineering point of view, part of the problem is that we dont really know what trust is, and even less how to engineer for trust ... I love all your questions and I agree with your last sentence as well.


“…part of the problem is that we don't really know what trust is…” Yes, very good, and it's worse than that. If you'll step off a cliff with me: we don't really know what/who WE are. Sounds like an attack, I know. It's not meant to be an attack but rather an opening. Anyway, from my ontological studies, the best I can put into words is that we are that which can't be put into words. Sorry, Mind, this is the one party you cannot attend, given your need for thingness and your recoil at even the idea of nothingness.

If a dialog goes really well and the mind quiets enough, nothing presents, and there’s a lightness of being that disappears the instant we go to notice. Being is incredibly elusive. And some are holding on to the hubris of creating and harnessing that in a box. HA!


The problem with generative algorithms right now is that they don't really contain good mechanisms to self-correct. They improvise, iterate, and interpolate based on a brief prompt and on the data they are trained with. It's a brainstorming / spitballing exercise, where the machine creates something out of nothing. There are no brakes. Integrating user feedback and self-checks will be the next great challenge.
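To make "brakes" concrete, here is one shape such a loop could take, sketched in Python. Everything here is hypothetical: generate and passes_checks are placeholder stand-ins for a generative model and a verifier, not real APIs.

    import random

    def generate(prompt, feedback):
        # Placeholder for a real generative model; here it just tags the
        # draft with how many critiques it has absorbed so far.
        return f"draft of '{prompt}' after {len(feedback)} critique(s)"

    def passes_checks(draft):
        # Placeholder verifier; a real one would check physics, anatomy,
        # internal consistency, etc. Here it fails at random.
        if random.random() < 0.5:
            return True, ""
        return False, "ant has four legs; should have six"

    def generate_with_self_correction(prompt, max_rounds=3):
        feedback = []
        for _ in range(max_rounds):
            draft = generate(prompt, feedback)
            ok, critique = passes_checks(draft)
            if ok:
                return draft           # the "brakes": release only checked output
            feedback.append(critique)  # fold the critique back into the next try
        return None                    # or refuse to output anything at all

    print(generate_with_self_correction("pov footage of an ant"))

The hard part, of course, is that nobody has a reliable passes_checks for open-ended video; the loop structure is easy, the verifier is the research problem.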


Indeed!


I have learned recently that the present AI systems are already curated by humans, a staff of specialized workers who analyze system responses and give feedback in order to improve them. This is called “reinforcement learning from human feedback”. And evidently this extensive human feedback is not enough to make the systems entirely reliable.


In fact, careful studies indicate that RLHF has made ChatGPT in particular *less* statistically well-calibrated, reproducing human cognitive biases. Perhaps the "frequently wrong, never in doubt" phenomenon can be partially explained this way.
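For anyone unfamiliar with the term: "well-calibrated" means that stated confidence matches empirical accuracy, often summarized as expected calibration error (ECE). A minimal sketch of the computation in Python, with made-up numbers purely for illustration:

    # Expected calibration error: bin predictions by confidence, then compare
    # each bin's average confidence to its actual accuracy.
    def expected_calibration_error(confidences, correct, n_bins=10):
        n = len(confidences)
        ece = 0.0
        for b in range(n_bins):
            lo, hi = b / n_bins, (b + 1) / n_bins
            idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
            if not idx:
                continue
            avg_conf = sum(confidences[i] for i in idx) / len(idx)
            accuracy = sum(correct[i] for i in idx) / len(idx)
            ece += (len(idx) / n) * abs(avg_conf - accuracy)
        return ece

    # Made-up numbers: a model that says "90% sure" but is right half the time.
    confs = [0.9, 0.9, 0.9, 0.9, 0.6, 0.6]
    right = [1, 0, 1, 0, 1, 0]
    print(expected_calibration_error(confs, right))  # larger = worse calibrated

"Frequently wrong, never in doubt" is exactly the pattern this metric punishes: high average confidence paired with mediocre accuracy.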


Nope.

This process is also currently completely opaque to end users. That's an oversight. It doesn't need to be.


"The problem with generative algorithms right now is that they don't really contain good mechanisms to self-correct." I am not sure that this is the main problem. AI will continue to improve. But any improvement of AI will only increase our dependence on AI. If we dont want to end up living in the "Matrix", the main problem, in my view, is a different one.


This is another dimension of the Post Truth Era. As if the political dimension wasn't bad enough. Circling around the drain faster and faster...


As a photographer I spot the weird anomalies in photos and videos quickly. Most don’t. But none of these anomalies surprises me in the least, having worked in the computer field for decades.

Of note also is that it used to be shocking to us that OpenAI would be on the verge of releasing tools so obviously not entirely ready, qualitatively or ethically, for prime time (despite their PR noise), but we've grown numb (well heck, Apple made us guinea pigs in the early days too: arrogance). This is not only because of the competitive race they are in, but because these kinds of errors are deeply entrenched in the architecture and limits of this kind of system, and can't be fixed with patches, upgrades, more data, more feedback, etc. The world is insanely complex compared to the crude basis of these LLMs: a fundamentally simple set of a few mathematical equations repeated endlessly.

Hell, we can't even build a single cell. Yet some somehow believe we are close to building something like a human being’s intelligence, when we have no idea what awareness is. It's laughable. What a show...


This could look like AGI. Or like an "Applause-o-meter." Or something else entirely...

I would love to train an LLM simply to be able to distinguish facts (statements that can be proved either true or false) from the general linguistic morass.


Cannot be done. We will need a new architecture for that, based in facts rather than stats.


Tempted to actually try building out this text classification model as a first step.

https://artmeetscode.com/2024/02/18/what-is-truth/


Just to clarify, I am not sure if you are saying AGI is impossible, or that training an LLM to distinguish positive statements from normative statements (as well as from predictions and other types of linguistic expressions that cannot be evaluated as true or false) is impossible.

I would say both goals are fuzzy and highly subjective, but the second less so than the first. Without question it is possible to train a model to identify positive statements, just as a model can flag obscene or inappropriate language. This function must be kept completely separate from the task of evaluating whether those same statements are true or false.


I never said it would be easy...


It is not just about not being easy. It is that currently there is no agreement on what would be a good way to approach the problem (as far as I can see).


The first step would be to define the problem: having an agreed-upon scientific definition of "Intelligence". Otherwise the term "Artificial Intelligence" is unfalsifiable.


I totally agree. Intelligence, consciousness, trust, identity, privacy, etc ... there is a whole range of important notions which are used by engineers and which are in urgent need of philosophical clarification.


I think it's quite possible that the "philosophical clarification" of the relevant aspects of the AI project might end up demonstrating that the goal of inducing evaluative intelligence into a machine is an illusion. A fool's errand.

Try applying the philosophical insights of Immanuel Kant to computer programs: https://plato.stanford.edu/entries/kant/

Kant's philosophical insights have value because the human perceptual and cognitive bandwidth- with its tropisms, biases, and limitations- is assumed as the default precondition for comprehending his work. (A high-functioning level of human verbal language facility is also required in order to grapple with Kant's insights when reading his work. But that's merely a technical skill, and technical skill can be learned and practiced.) There's no way for a machine to "learn" animal perception and cognition faculties, which are entirely the province of dynamic living beings. That's a crucial distinction between machines and biological organisms.

A disembodied aggregation/selection/calculation/synthesis decision path capacity derived entirely from machine programming instructions has no inborn Nature, human or otherwise. https://ndpr.nd.edu/reviews/kant-and-the-laws-of-nature/


"I would love to train an LLM simply to be able to distinguish facts (statements that can be proved either true or false) from the general linguistic morass."

The strength of LLMs is to compute statistical averages. How can one connect this with checking facts?


We have a hypothetical function:

is_provable(x)

Input: String of up to 1000 characters in length.

Output: Boolean

Goal: Evaluate whether this string fits the definition of a factual statement (e.g. one that can be verified as either true or false)

It should be possible to train a text classification model to evaluate different statements and determine whether they are likely to be provable or impossible to prove (a toy sketch follows the examples below).

Examples of provable statements:

Quantitative Expressions - Ex. "The population of the United States is 330 million people."

Comparative Statements - Ex. "The Nile is the longest river in Africa."

Direct Quotations - Ex. "John F. Kennedy told the people of Berlin, 'Ich bin ein Berliner.'"

Descriptions of Past Events - Ex. "On June 6, 1944, Allied forces landed on the beaches of Normandy."

In general, data that can be cited or attributed may be considered factual. However, this depends on trust in the methods and judgment of those compiling the information source.
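As a toy first cut, hedged heavily: the sketch below assumes scikit-learn is acceptable, and its inline training set (a handful of invented sentences) is obviously nowhere near sufficient. It only illustrates the shape of the approach.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented toy data: 1 = checkable factual claim, 0 = opinion/normative/etc.
    texts = [
        "The population of the United States is 330 million people.",
        "The Nile is the longest river in Africa.",
        "On June 6, 1944, Allied forces landed on the beaches of Normandy.",
        "Water boils at 100 degrees Celsius at sea level.",
        "Pizza is the best food ever invented.",
        "Everyone should learn to play an instrument.",
        "That movie was absolutely breathtaking.",
        "We ought to be kinder to one another.",
    ]
    labels = [1, 1, 1, 1, 0, 0, 0, 0]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(texts, labels)

    def is_provable(x: str) -> bool:
        # Clip to the 1000-character input contract from the spec above.
        return bool(model.predict([x[:1000]])[0])

    print(is_provable("The Amazon is the longest river in South America."))

Note the separation of concerns: this only guesses whether a string is the kind of statement that could be checked, not whether it is true.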


Here goes...


I'm going to lay out a strategy in the comment thread below. Feel free to tear it to shreds...


I managed to generate a very lovely looking cyborg Frankenstina by layering prompts and pushing DALL-E 3 to breaking point. It would be amazing to see a Sora version.

https://www.linkedin.com/feed/update/urn:li:activity:7163275662206693376


No kidding? Ouch. I guess a little knowledge is a dangerous thing, but even more dangerous is applying a little knowledge.

Wonder where the parents who question fluoride can find toothpaste?


My 6- and 4-year-olds could tell you that ants have six legs, like all insects. Get it together, Sora. And they love watching educational videos on YouTube Kids (the quality of which I already sometimes question, while knowing that genAI will scale those quality problems up, not hold them steady).


You're absolutely right, and it's a good point. But you are crucially missing something in all your critiques of AI: you don't go far enough. AI is not just an isolated phenomenon but practically a foregone conclusion of a consumerist-technological society. Due to its technical nature and low cost of reproduction, it cannot be controlled within a technological society.

In order to truly eradicate AI, we need to have a cohesive movement that goes far beyond regulating AI: we need to dismantle the entire system of large tech companies and advanced computing in general. It is taking us away from nature and from a relationship with the biosphere, and that's why we are seeing so many problems. We've got an obsession with energy usage that goes beyond living a good life, and we've got to stop that.


I admire ants. This is a copy from David Attenborough's Incredible Life of Ants and Empire of Ants.

Separately, please, please let's not attribute anything to a vague attribute, "intelligence." This trait, whatever it means to the Western world that was raised on it, does not exist. If it did (I'm guessing it means that all humans share this trait in a positive way), our world would be completely different.


You and I both read The Free Press, I guess that's a start. I agree that these days it is definitely naive to expect to find sources with verified information, but maybe that's a result of the corruption of the media. The result is that we don't trust anything; like the kid who is taught to look for "micro-aggressions", we can't trust anyone.

Admittedly the media, politicians and many corporations think that lying is perfectly acceptable. Seems like an easy way to avoid consequences. But maybe that attitude poisons and weakens their structure.

I wonder if AI will some day be used to detect lies; now that would be a real game changer.


Gary, do you realize this new capability is still being developed? It has not been released to the public for teenagers to be creative with. So your feedback on things to improve is likely to be helpful for the developers. But it seems you are in a full panic about this.

What would you suggest for controlling the creation of "false videos", however they are created? Even without a new tool, there are still FAKE videos being made and shared. How do you want to control the sharing of information that I consider fake or false?


There is no way to preempt or "control" the vulnerability of AI to hallucination, or the exploitation of AI fakery to produce bogus artifacts and unacceptably noisy or false information. All anyone can do is adopt a default of skepticism toward anything viewed on the screen, and increase skill at critical thinking and verification research. Including- and this is important- knowing when to walk away from the fourth-order remove of the digital realm, in order to encounter and interact with other information modes that are more trustworthy and reliable.
