The real problem here is your normative notion of what an ant body is *supposed* to look like. Sora chose to show us a day in the life of an ant who is differently-abled, which I think is commendable. #RepresentationMatters
Definitely an homage to the differently abled. Notice a two-headed or bidirectional ant at 0:06.
#angryupvote
It appears Sora is a woke generative AI that overrepresents creatures that are differently abled. lol
Just to be clear, I was kidding.
Have you considered writing satire? I think you might have a career there.
Is it too much to hope that the proliferation of this kind of trash will result in improved critical thinking? Perhaps a willingness to subscribe to material with verified information?
FWIW, in 2024 I spend more time reading sources I pay for.
"Is it too much to hope that the proliferation of this kind of trash will result in improved critical thinking?" Thanks to Gary's substack (and others) at least we have places where we can meet and discuss. Maybe we can take this as a starting point and also find other ways to organize?
Given that social media misinformation has the pediatric dentist asking me whether it's okay to give my kids fluoride, which she highly recommends (but parents now think this element on the periodic table is unnatural and therefore dangerous, logical fallacies oh my), my optimism is low.
I'm not sure. People are incredibly easily fooled. I'm still waiting for someone else to spontaneously notice that the bow and stern of one of the ships in the "pirate ships in a coffee cup" video just silently swapped places. But despite being in multiple forums where people were discussing this very example, all I saw from industry people was hype about a "world simulator" or "physics engine", and all I saw from normies was slack-jawed ogling.
Also I think it is a bit naive to speak of "sources with verified information". Who does the verifying? Every publisher has an angle and a bias, and all of them are populated by the same primates who are fooled by these videos.
Ok, it seems you're more focused on the peer-reviewed academic literature; I was referring more to news media. I think we agree that the quality level w.r.t. truth is on average higher in peer-reviewed academic lit than in news media. But even there we have big systemic problems, especially in certain subfields. See for instance the replication crisis in psychological research, or the proliferation of pure ideology unmoored from evidence in the "-studies" fields, as evidenced by the Sokal hoax and, more recently, the Grievance Studies hoax. Multiple instances now exist of researchers being hounded to withdraw their own papers when they reach data-driven conclusions, or even just ask questions, that may be considered at odds with the orthodoxy of critical social justice. Even in pure math there was recently a scandal of citation cartels being used to game index rankings among researchers in China and Saudi Arabia (not that this proves the untruth of the papers being cited, but it does provide evidence of powerful motives other than truth-seeking). My takeaway is that we all must remain vigilant: "peer review" should be a never-ending process, and no publication can be taken for granted as a purveyor of settled truth on the basis of prestige or reputation alone.
Really interesting points. Thanks for taking the time to lay them out. I'm now appreciating the social importance of archival functions. Speaking of the NYT on this front, I found the recently documented occurrences of "stealth editing" to be really disturbing. And as you say it is hard to really publicly prove they happened without a trustworthy archive. And they're just not possible outside of an infinitely malleable digital medium.
This could take us along a whole other tangent, but: there _are_ ways to maintain an un-tamperable archive of what's been written in the past nowadays by using cryptography and distributed consensus mechanisms that are robust to adversarial attacks. That is, you can publish to such a platform but provably cannot retroactively mutate what you have published without expending enormous resources. And you can sign what you have published in a way that proves no one else could have published it. But these technologies require institutions to opt in voluntarily.
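To make that concrete, here is a minimal sketch (in Python, assuming the third-party "cryptography" package) of the two primitives involved: a hash chain that makes retroactive edits detectable, and a signature that ties each entry to its publisher. The distributed-consensus layer, which is what makes tampering expensive rather than merely detectable, is deliberately not modeled in this toy.

```python
# Minimal sketch of a tamper-evident, signed append-only log.
# Assumes Python 3 and the third-party "cryptography" package.
# The distributed-consensus layer is NOT modeled here; this only shows
# the hash-chaining and signing primitives such a platform builds on.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class ArchiveLog:
    def __init__(self, signing_key: Ed25519PrivateKey):
        self._key = signing_key
        self.entries = []                 # each entry is chained to the last
        self._prev_hash = b"\x00" * 32

    def publish(self, text: str) -> dict:
        payload = {"text": text, "prev_hash": self._prev_hash.hex()}
        body = json.dumps(payload, sort_keys=True).encode()
        entry_hash = hashlib.sha256(body).digest()
        signature = self._key.sign(body)  # proves who published it
        entry = {**payload, "hash": entry_hash.hex(), "signature": signature.hex()}
        self.entries.append(entry)
        self._prev_hash = entry_hash
        return entry

    def verify(self, public_key) -> bool:
        """Re-derive the chain; any retroactive edit breaks it."""
        prev = b"\x00" * 32
        for entry in self.entries:
            body = json.dumps(
                {"text": entry["text"], "prev_hash": entry["prev_hash"]},
                sort_keys=True).encode()
            if bytes.fromhex(entry["prev_hash"]) != prev:
                return False
            if hashlib.sha256(body).hexdigest() != entry["hash"]:
                return False
            public_key.verify(bytes.fromhex(entry["signature"]), body)
            prev = bytes.fromhex(entry["hash"])
        return True

# Usage: publish two entries, then simulate a "stealth edit".
key = Ed25519PrivateKey.generate()
log = ArchiveLog(key)
log.publish("Original article text, v1.")
log.publish("A correction, appended rather than edited in place.")
assert log.verify(key.public_key())
log.entries[0]["text"] = "Stealth-edited text"   # simulated tampering
try:
    ok = log.verify(key.public_key())
except Exception:
    ok = False
print("chain intact after tampering?", ok)        # False
```

The point of the toy is just that appending is cheap while rewriting history is detectable; real systems add consensus so it is also prohibitively expensive.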
Minor (?) detail: The prompt says "pov footage", doesn't it? Am I wrong in thinking this means "from the ant's point of view"? The video is unlikely to be from another ant's pov, the view's from too high above. Ironically, if Sora has actually generated the ant's point of view, we would not have known the ant had a couple of legs missing.
Smart comment? More like from the flashlight’s pov
AI is like sugar.
Humans' craving for sugar was good as long as calories were important for survival. In times when we are swamped by industrial sugar, this craving is a danger to our health (and to healthcare more generally). At least we agreed on laws that the sugar content of food and drinks must be labelled.
Humans' craving for attention was good as long as this craving forced us to seek attention from other humans. In times when we are swamped by AI-generated content, this craving is a danger to mental health (and to the fabric of our society more generally). Shouldn't we agree on laws that AI-generated content must be labelled?
"Shouldn’t we agree on laws that AI generated content must be labelled?"
Absolutely, but who is going to enforce the rules? Who will have time to look at all the hours of footage to find cheaters? Will even a specially built AI be able to keep up with the task, and would it work well (without too many disruptive errors)? And in any case, do we all trust whoever runs the police AI?
Counterfeits exist of every product, so now counterfeit "informative video" will be made too. And eventually (maybe too soon) some hostile party with an AI and a big budget could flood the zone with many thousands of copies everywhere. What will the chances be then that anyone can locate any real footage at all (unless you know the direct URL or some such)?
Slightly scarier thought: could anyone cause this flood by accident? Or just to see if they could, only to find that they have?
"Absolutely, but who is going to enforce the rules?" I agree that laws can only be part of the solution. Social norms also need to change. We need to build a movement. People need to start defending themselves as humans against AIs. That doesnt mean that we shouldnt be using AI. Sure we should. But on our terms.
So would you please define the rules on what is allowed and what is not allowed on the internet?
Should these rules be determined by Russia, Germany, the EU, China, Canada, Saudi Arabia, the UAE, Iran, the UN, people at Harvard, or the people in the Texas government? I am confused about whose rules on what is false or fake you want to use. For example, some people might say that everything Trump says is false or fake, regardless of how he creates these messages. Should people be able to block this as well?
Isn't this more fundamentally an issue of trust? Can I trust what I see, hear, and read? Can I trust myself to discern what is real? How do I know what is true? At bottom, who is this "I" really? Who is this "I", this "me", that stands apart from and independent of the outside world it claims to objectively judge and evaluate? How do I know what I know? What is this sense of knowing anyway? Examining the examiner is the first step, without which you have no way to know the quality of the product. In a crucial way the observer is actually part of the observed.
This prior step is missing here and in most places, living assumed and unspoken in the background.
"Isnʼt more fundamentally this an issue of trust?" Yes, absolutely. Trust is crucial. From a software engineering point of view, part of the problem is that we dont really know what trust is, and even less how to engineer for trust ... I love all your questions and I agree with your last sentence as well.
“…part of the problem is that we don't really know what trust is…” Yes, very good, and it's worse than that. If you'll step off a cliff with me, we don't really know what/who WE are. Sounds like an attack, I know. It's not meant to be an attack but rather an opening. Anyway, from my ontological studies, the best I can put into words is we are that which can't be put into words. Sorry, Mind, this is the one party you can not attend given your need for thingness and recoil at even the idea of nothingness.
If a dialog goes really well and the mind quiets enough, nothing presents, and there’s a lightness of being that disappears the instant we go to notice. Being is incredibly elusive. And some are holding on to the hubris of creating and harnessing that in a box. HA!
The problem with generative algorithms right now is that they don't really contain good mechanisms to self-correct. They improvise, iterate, and interpolate based on a brief prompt and on the data they are trained with. It's a brainstorming / spitballing exercise, where the machine creates something out of nothing. There are no brakes. Integrating user feedback and self-checks will be the next great challenge.
Indeed!
I have learned recently that the present AI systems are already curated by humans: a staff of specialized workers who analyze system responses and give feedback in order to improve them. This is called "reinforcement learning from human feedback". And evidently this extensive human feedback is not enough to make the systems entirely reliable.
In fact, careful studies indicate that RLHF has made ChatGPT in particular *less* statistically well-calibrated, reproducing human cognitive biases. Perhaps the "frequently wrong, never in doubt" phenomenon can be partially explained this way.
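For anyone unfamiliar with the jargon: "well-calibrated" means that when a model reports, say, 80% confidence, it is right about 80% of the time. Here is a rough illustration of how calibration is typically measured (expected calibration error), using made-up confidence/correctness numbers purely for the sake of the arithmetic:

```python
# Illustrative only: expected calibration error (ECE) on made-up
# (confidence, correct?) pairs. A well-calibrated model's stated
# confidence tracks its empirical accuracy; a confidently wrong model
# ("frequently wrong, never in doubt") shows a large gap.
import numpy as np

confidences = np.array([0.95, 0.90, 0.92, 0.60, 0.85, 0.99, 0.70, 0.97])
correct     = np.array([1,    0,    1,    1,    0,    1,    0,    0   ])

def expected_calibration_error(conf, corr, n_bins=4):
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(conf[mask].mean() - corr[mask].mean())
            ece += (mask.sum() / len(conf)) * gap
    return ece

print(f"ECE = {expected_calibration_error(confidences, correct):.3f}")
# Average confidence here is ~0.86 while accuracy is only 0.5,
# so the ECE comes out large: overconfidence, in numbers.
```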
Nope.
This process is also currently completely opaque to end users. That's an oversight. It doesn't need to be.
"The problem with generative algorithms right now is that they don't really contain good mechanisms to self-correct." I am not sure that this is the main problem. AI will continue to improve. But any improvement of AI will only increase our dependence on AI. If we dont want to end up living in the "Matrix", the main problem, in my view, is a different one.
This is another dimension of the Post Truth Era. As if the political dimension wasn't bad enough. Circling around the drain faster and faster...
As a photographer I spot the weird anomalies in photos and videos quickly. Most don’t. But none of these anomalies surprises me in the least, having worked in the computer field for decades.
Of note also is that it used to be shocking to us that OpenAI would be on the verge of releasing tools so obviously not ready, qualitatively or ethically, for prime time (despite their PR noise), but we've grown numb (well heck, Apple made us guinea pigs in the early days too: arrogance). This is not only because of the competitive race they are in, but because these kinds of errors are deeply entrenched in the architecture and limits of this kind of system, and can't be fixed with patches, upgrades, more data, more feedback, etc. The world is insanely complex compared to the crude basis of these LLMs in a fundamentally simple set of a few mathematical equations repeated endlessly.
Hell, we can't even build a single cell. Yet some somehow believe we are close to building something like a human being’s intelligence, when we have no idea what awareness is. It's laughable. What a show...
This could look like AGI. Or like an "Applaus-o-meter." Or something else entirely...
I would love to train an LLM simply to be able to distinguish facts (statements that can be proved either true or false) from the general linguistic morass.
Cannot be done. We will need a new architecture for that, based on facts rather than stats.
Tempted to actually try building out this text classification model as a first step.
https://artmeetscode.com/2024/02/18/what-is-truth/
Just to clarify, I am not sure if you are saying AGI is impossible, or that training an LLM to distinguish positive statements from normative statements (as well from predictions and other types of linguistic expressions that cannot be evaluated as true or false) is impossible.
I would say both goals are fuzzy and highly subjective, but the second less so than the first. Without question it is possible to train a model to identify positive statements, just as a model can flag obscene or inappropriate language. This function must be completely separated from the task of evaluating those same factual statements to be true or false.
I never said it would be easy...
It is not just about not being easy. It is that currently there is no agreement on what would be a good way to approach the problem (afaics).
The first step would be to define the problem: a consensus scientific definition of "intelligence". Otherwise the term "Artificial Intelligence" is unfalsifiable.
I totally agree. Intelligence, consciousness, trust, identity, privacy, etc ... there is a whole range of important notions which are used by engineers and which are in urgent need of philosophical clarification.
"I would love to train an LLM simply to be able to distinguish facts (statements that can be proved either true or false) from the general linguistic morass."
The strength of LLMs is to compute statistical averages. How can one connect this with checking facts?
We have a hypothetical function:
is_provable(x)
Input: String of up to 1000 characters in length.
Output: Boolean
Goal: Evaluate whether this string fits the definition of a factual statement (i.e., one that can be verified as either true or false)
It should be possible to train a text classification model to evaluate different statements and determine whether they are likely to be provable or impossible to prove (a toy sketch follows after the examples below).
Examples of provable statements:
Quantitative Expressions - Ex. "The population of the United States is 330 million people."
Comparative Statements - Ex. "The Nile is the longest river in Africa."
Direct Quotations - Ex. "John F. Kennedy told the people of Berlin, 'Ich bin ein Berliner.'"
Descriptions of Past Events - Ex. "On June 6, 1944, Allied forces landed on the beaches of Normandy."
In general, data that can be cited or attributed may be considered factual. However, this depends on trust in the methods and judgment of those compiling the information source.
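Here is a minimal sketch of how one might prototype that is_provable classifier with off-the-shelf tools. The tiny hand-labeled set and the scikit-learn pipeline are stand-ins chosen purely for illustration; a serious attempt would need a large labeled corpus and almost certainly a stronger model, and, as noted above, this only flags checkability, never truth.

```python
# Toy prototype of the hypothetical is_provable(x) classifier sketched
# above. The handful of labeled examples and the scikit-learn pipeline
# are illustrative stand-ins, not a claim that this would generalize.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    # checkable (positive) statements -> 1
    "The population of the United States is 330 million people.",
    "The Nile is the longest river in Africa.",
    "On June 6, 1944, Allied forces landed on the beaches of Normandy.",
    "Water boils at 100 degrees Celsius at sea level.",
    # opinions, predictions, normative claims -> 0
    "This is the best movie ever made.",
    "Everyone should eat less sugar.",
    "The stock market will crash next year.",
    "That painting is hauntingly beautiful.",
]
train_labels = [1, 1, 1, 1, 0, 0, 0, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

def is_provable(x: str) -> bool:
    """Return True if the string looks like a checkable factual claim.
    Note: this flags checkability only; it says nothing about truth."""
    if len(x) > 1000:
        raise ValueError("input limited to 1000 characters")
    return bool(model.predict([x])[0])

print(is_provable("Mount Everest is 8,849 metres tall."))
print(is_provable("Pineapple belongs on pizza."))
```

Keeping the "is this checkable?" step separate from the "is it true?" step, as suggested above, is the design choice that makes the first step tractable at all.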
Here goes...
I'm going to lay out a strategy in the comment thread below. Feel free to tear it to shreds...
I managed to generate a very lovely looking cyborg Frankenstina by layering prompts and pushing DALL-E 3 to breaking point. It would be amazing to see a Sora version.
https://www.linkedin.com/feed/update/urn:li:activity:7163275662206693376
No kidding? Ouch. I guess a little knowledge is a dangerous thing, but even more dangerous is applying a little knowledge.
Wonder where the parents who question fluoride can find toothpaste?
My 6- and 4-year-olds could tell you that ants have 6 legs, like all insects. Get it together, Sora. And they love watching educational videos on YouTube Kids (the quality of which I already question sometimes, knowing that genAI makes those quality problems worse at scale, not just more of the same).
You're absolutely right, and it's a good point. But you are crucially missing something with all your critiques about AI: you don't go far enough. AI is not just an isolated phenomenon but practically a foregone conclusion of a consumerist-technological society. Due to its technical nature and low cost of reproduction, it cannot be controlled within a technological society.
In order to truly eradicate AI, we need to have a cohesive movement that goes far beyond regulating AI: we need to dismantle the entire system of large tech companies and advanced computing in general. It is taking us away from nature and from a relationship with the biosphere, and that's why we are seeing so many problems. We've got an obsession with energy usage that goes beyond living a good life, and we've got to stop that.
I admire ants. This is a copy from David Attenborough's Incredible life of Ants and Empire of Ants.
Separately, please, please let's not attribute anything to a vague attribute, "intelligence." This trait, whatever it means to the Western world that was raised on it, does not exist. If it did (I'm guessing that what it means is that all humans share this trait in a positive way), our world would be completely different.
You and I both read The Free Press; I guess that's a start. I agree that these days it is definitely naive to expect to find sources with verified information, but maybe that's a result of the corruption of the media. The result is that we don't trust anything; we are like the kid who is taught to look for "micro-aggressions": we can't trust anyone.
Admittedly the media, politicians and many corporations think that lying is perfectly acceptable. Seems like an easy way to avoid consequences. But maybe that attitude poisons and weakens their structure.
I wonder if AI will someday be used to detect lies; now that would be a real game changer.
Gary, do you realize this new capability is still being developed? It has not been released to the public for teenagers to be creative with. So your feedback on things to improve is likely to be helpful for the developers. But it seems you are in a full panic about this.
What would you suggest for controlling the creation of "false videos," however they are created? Even without a new tool, there are still FAKE videos being made and shared. How do you want to control the sharing of information I consider fake or false?