Discussion about this post

Spherical Phil

Gary, you asked, so … The short-term risks to democracy and the 2024 elections cannot be overstated. But if we survive that, the long-term risks are beyond our ability even to begin to conceptualize.

In our "post-truth" world, it has become extremely difficult to tell what is more true, or at least more accurate, from what is not true or an outright lie. Until now, with enough time and effort, those who cared could find the 'more true' instead of the 'not true.' But that was before search purposely repositioned itself to become the ultimate delivery vehicle for the chaos.

In the slightly longer term, the false bravado and fake intelligence manifested by current iterations of pretend AI will create social turmoil and upheaval, as well as injury to mental health and wellbeing, harming individuals, families, communities, and countries in ways that go far beyond what is being discussed today. And no government, or coalition of governments, other than an authoritarian one, can develop and enforce regulations quickly enough even to attempt to stop this.

Never in human history has there been universal agreement on universal values, or any form of consensus on human values (and the human values we may imagine or desire cannot be found in biased data, and all data is biased). The bigger challenge, how to embed these 'values' into non-reasoning technologies and enforce adherence to them within the extraordinarily short window required, cannot be met, except, again, by an authoritarian regime. Values in an authoritarian regime do not come from the consensus of the people; they are dictated, designed solely to benefit the authoritarians, which in the end may not be a government at all.

macirish

Physics has Newton's 3rd law. Do social scientists have a law of unintended consequences?

Have you noticed that in the early days of the Internet, before spam and clickbait, you could search for something and get a real, helpful result? Not anymore.

Is it possible that the mass generation of "misinformation" (yech, that word should be banned) will simply cause users to look elsewhere for information they can trust?

Consider the rise of The Free Press (Bari Weiss). Or Substack?

Isn't this a reaction to the failure of mass media to do their jobs?

I guess you are correct to worry about the consequences of AI, but what about that 3rd law?

Thanks for reading my mental wanderings.

30 more comments...