17 Comments
author

We need both cultural and technological solutions. The latter can perhaps help with the former.

Nov 11, 2022 · Liked by Gary Marcus

Twitter, being a fast and hyper-compressed medium, amplifies misinformation more than truth. But fast and compressed media also amplify fear and hate more than trust and love: Hate Makes More Money than Love, and Moves Faster https://www.fairobserver.com/region/north_america/william-softky-tech-turncoat-truths-hate-online-social-media-facebook-news-78491/


Or, we could solve this the way we always have: consider the source. Follow accounts you trust, block or don't follow those you don't. People concerned about misinformation seem mostly concerned about the effect it has on *other* people, not on themselves.


This is a Cathedral and Bazaar problem. Until Mr. Musk took over, Twitter was a Cathedral. Now he would like it to become a Bazaar. Open source software benefitted from the Bazaar - because it had a supportive community and a built-in, highly effective noise filter. At this time Twitter does not - and the threat of computer generated noise is very real.

IMHO - there are at least two parts to creating a noise filter.

First is solving the ID problem. $8 a month might be a start. Maybe there are other ways to do this - but when in doubt - follow the money.

Second is AI. But that AI doesn't have to be anywhere near perfect - not even close to what we demand out of image recognition, classification or recommendation. It simply has to be good enough to guide the human to be skeptical (yes, I know we all should be...). Something that provides the "...Yes, but..." to information.

Given those two things, a human could start to filter out sources they don't trust. If you could collect and summarize those filters, you could update the model that is evaluating the information.

You want the Bazaar to train the model and reduce the noise. But not make a Cathedral judgement. Frankly - in the long run you don't want the computer or the biased staff making a judgement - you want that left up to the human.
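The feedback loop this comment describes (individual trust or block decisions, collected and summarized, feeding back into the model that evaluates information) could be sketched roughly as below. This is a minimal illustration only; the function name, data shape, and threshold are all assumptions, not anything Twitter actually runs.

```python
from collections import defaultdict

def crowd_trust_scores(user_filters, min_votes=5):
    """Summarize individual trust/block decisions into per-source scores.

    user_filters: iterable of (source, trusted) pairs, one per user decision,
    where trusted is True for a follow/trust and False for a block.
    Returns {source: fraction_of_users_who_trust_it}, but only for sources
    with at least `min_votes` decisions, to damp small-sample noise.
    """
    votes = defaultdict(lambda: [0, 0])  # source -> [trust_count, total]
    for source, trusted in user_filters:
        votes[source][1] += 1
        if trusted:
            votes[source][0] += 1
    return {s: t / n for s, (t, n) in votes.items() if n >= min_votes}
```

The design point that matches the comment is that the resulting score stays advisory: it is surfaced to the reader as a "Yes, but..." prompt rather than used to silently remove content, so the Bazaar trains the model while the judgement is left to the human.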

Thank you for the work you do and the thought provoking posts. I've been trying to figure out how to become a "paid" subscriber - but if there's a way to do it - I haven't found it.


Marcus writes:

"The problem is about to get much, much worse. Knockoffs of GPT-3 are getting cheaper and cheaper, which means that the cost of generating misinformation is going to zero, and the quantity of misinformation is going to rise—probably exponentially."

Yes, agreed, and this phenomenon helps illustrate a key issue in the knowledge explosion more generally. For the sake of discussion, we might call this the Weak Link Theory.

The Trump phenomenon demonstrates what can happen when large vulnerable segments of the population are exposed to misinformation powered only by cable news and social media. Amplifying these forces further with AI is obviously not going to help.

The larger lesson we can learn from this is that AI coders and other technologists are typically not thinking holistically about all the factors at play in the environment they are developing for. These are highly educated people living in a world of other highly educated people. As an example, it's not likely that, as a group, these very well educated technical experts have much personal experience with the MAGA crowd, other than to snort condescendingly at them when they appear on TV.

The point here is that most of humanity is relatively poor, uneducated and vulnerable, compared to the technologist. And it's these poor, uneducated, vulnerable people who are most likely to be negatively affected by rapid changes. And to the degree they are negatively affected, or even fear being negatively affected, many of them all around the world are going to look for help from those promising a return to an earlier era (make America great again) when the world seemed more familiar, predictable and safe. This is not a prediction; it's already happening all over the world today.

So, for example, AI seems likely to put tens of thousands of truck drivers out of work at some point. Few of these laid-off truck drivers are going to go to college to become software engineers. Most of them are probably headed for jobs at Walmart and lives in dumpy trailer parks on the wrong side of the tracks. And then, in their crushing disappointment, they will present a risk to the entire society, including the technologist.

The Weak Link Theory states that, generally speaking, we can only proceed into the future at a pace that the bulk of humanity can successfully adapt to. If too many people are left behind, they will rise up and pose a threat to the entire system.

So, the fact that you and I may be able to navigate the misinformation tsunami is not sufficient. We need to design an information environment that most people can navigate. And as Marcus correctly observes, pouring AI gasoline on the misinformation fire is not going to assist us in moving towards that goal.


Almost 100% of us form our convictions through positive reinforcement. There is very little in our intelligence that makes us independently critical. https://ea.rna.nl/2022/10/24/on-the-psychology-of-architecture-and-the-architecture-of-psychology/


Disclaimer: I don't use Twitter.

It's a somewhat confusingly formulated mission though. Since when did Twitter aspire to become Wikipedia? As far as I'm concerned, this should not be their mission at all; on the contrary, embrace Twitter as it is: a chaotic whirlpool of first impressions, opinions, thoughts and emotions. News outlets (and the like) are there because people are, not the other way around. If Twitter actually had a motto even close to "the most accurate source of information about the world", it would not be Twitter. Could Instagram become the most efficient way for programmers to collaborate? I mean sure, it would take a lot of effort, but also... what?!

Yes, people (with or without AIs) spread misinformation, and this immense whispering game often masquerades as actual truth. The reason for this is more cultural than technological in my opinion (although the latter amplifies and monetizes the former). In my experience (it is mine, and it can't be properly shared here), people are worse than ever at being sceptical, a word that has almost become synonymous with 'critical' or 'anti-'; at giving/taking critique; as well as at accepting differences in opinion. Reasons for this are plenty, but possibly involve the fact that the ratio of the number of people we interact with to the number of experiences we share with them is growing at a staggering rate.

Realizing that limiting your experience of what happens in the world to social media (especially of the kind that monetizes user content) probably isn't a good thing might be a start. This is surely not aided by selling it as the "by far most accurate source of information about the world". This does not mean that we shouldn't help people cope with misinformation; we should: human verification (emphasis on 'human', not identity); some type of anonymous crowdsourced voting (cf. Stack Exchange); a mandatory delay before tweeting, retweeting, sharing, etc.; a bibliography/reference-like system - all could be effective, and I'm sure there are several measures already in use. But fostering the impression that Twitter is a reliable source of information about the world aids misinformation rather than stopping it.
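One of the measures listed above, a mandatory delay before retweeting or sharing, is simple enough to sketch. This is a hypothetical illustration with invented names, assuming only that a share can be held back and cancelled before it goes out; it is not any platform's real API.

```python
import time

class DelayedShareQueue:
    """Cooling-off queue: a share becomes publishable only after a delay,
    giving the user (or some fact-check signal) time to cancel it."""

    def __init__(self, delay_s=300, clock=time.monotonic):
        self.delay_s = delay_s
        self.clock = clock        # injectable clock, useful for testing
        self.pending = {}         # share_id -> submission time

    def submit(self, share_id):
        self.pending[share_id] = self.clock()

    def cancel(self, share_id):
        self.pending.pop(share_id, None)

    def publishable(self):
        now = self.clock()
        return [sid for sid, t in self.pending.items()
                if now - t >= self.delay_s]
```

The point of the delay is behavioral rather than technical: a few minutes between impulse and broadcast gives second thoughts (or a correction) a chance to arrive first.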


The most effective remedy against disinformation is the willingness of the information consumer to recognize disinformation.


Or we could just let Twitter die eventually and move on with our lives, as humankind has always done.


Gary, do you really believe this issue has a *technological* solution that is directed at the actual content on social media (e.g. Twitter)?
