50 Comments
May 4 · Liked by Gary Marcus

I was just thinking this morning how nauseating social media has become (I've felt that way about it more or less since the start tbh, but it did have more utility and courtesy in the past). The influx of AI is like a tsunami... massive amounts of water roiling with garbage, mud, and flailing victims caught in the surge (dead or alive), all coming at you at impossible speeds.


I used to have to make sure I didn't spend too much time on Twitter. Now I find myself leaving naturally after just a few minutes there. Everybody screaming at each other and reposting the same links gets old.


I last 30 seconds on Twitter... 😅


Social media is fine, provided you restrict yourself to old-fashioned social media.

You probably know what I mean, unless you are young enough to have been in high school when Facebook came out.

Old-fashioned social media lacks any attempt to increase "engagement," and is generally not supported by advertising. Sometimes it charges subscription fees; sometimes it's run by hobbyists on their own dime. It has strong controls on who sees what you post, and what posts you see. Those controls are either in the hands of a moderation team, or more commonly of each individual user.

It's never funded by vulture capital, since the best-case potential returns are never astronomical.


What would be some examples of social media of the old-fashioned kind?


Not even close.

I’m afraid I’ve used “Social Media” since the early ’80s, and I can still find my material quoted. Quite simply, the master decentralized posting-flow model is called “Usenet,” and today it’s a cesspool of despair unlike anything you can imagine.

It took about 2 years for it to become unusable after public access started at the end of the ’80s. It was hard to maintain civility and usefulness even when everyone could be personally identified at major research institutions globally. It collapsed completely in the early ’90s, and today many major ISPs will not allow its traffic to flow on their backbones.

All “Social Media” always devolves into chaos; it’s an entropic effect of investing $0 in removing noise. It won’t clean itself up spontaneously, any more than an empty parking lot will weed itself, water will spontaneously turn into ice, or ash turn back into wood.

I’m watching Substack devolve the same way, in real time, as my “feeds” fill with unmoderated cascades of mouth-foaming garbage.

I’ve watched these systems collapse one by one over the last 40 years.

May 4 · edited May 4

Most blogging platforms qualify; the direct descendants of LiveJournal are pretty good; there's also stand-alone blogging software an individual can install for themselves. Even Substack is a not-so-good example of this genre, though somewhat focused on many-to-one communication. (It may be an exception to the 'rule' above about venture capital; I don't know, but it appears to be quite profitable.)

There are still various bulletin board systems, often installed by members of groups wanting to talk to each other. Some of these groups can be huge, and the use primarily social. Honorable mention here to LibraryThing, where the common interest is reading.

Finally, there are still email-based mailing lists, set up and run by individuals or groups wanting to communicate.


Discord.

Each server is a separate and isolated forum centered on a hobby, an interest, or an IRL social group. Just like old-school forums, if the maintainers don't do a good job of policing the users, it decays and falls apart.

And you pay Discord for extra features rather than it depending on advertising, so it has a subscription model.

May 4 · edited May 4 · Liked by Gary Marcus

That's one of the best titles of an article I've seen in a long time "An epistemic clusterfuck...". :)

Yeah, the ultimate Propaganda Propagator Machine sounds like. Or PPG (sounds kind of like RPG...).

Or a new kind of WMD – Weapon of Mass Disinformation?


Shouldn't be shocking. Musk already treats Substack as a threat to Twitter, given the way Twitter shadowbans Substack links and refuses to pull up images. Twitter is struggling because of Musk's epic mismanagement, and he puts the blame anywhere but in the mirror. He wants to make up for Twitter's struggles by using AI to suck value from news organizations back onto his wavering social media platform. I don't think it will work. The error rate is too high, and the market for predigested news is not what Musk thinks it is. He gets engineering. He is also a genius at getting free marketing for his companies. That doesn't mean he or his little Grok monster will provide anything remotely useful when it comes to news. I think six months from now this will seem about as popular as rate-limiting views on Twitter.

May 4 · Liked by Gary Marcus

Why, oh why, do these tech-bros not learn something about the vulnerabilities of human intelligence? Including their own?

May 4 · Liked by Gary Marcus

A) I'm glad I haven't been on X for well over a year now

B) X is a risk to epistemic security (https://internationalsecurityjournal.com/elizabeth-seger-exploring-epistemic-security/). As Twitter, there was a level of trust that came with public bodies using it as a tool to communicate. Now, if someone started saying bombs were heading in and the AI amplified it, it's likely to cause panic. AI on X is a global risk, not just an American one.


Ugh. But at least we get to witness that fucking killer headline you just wrote. Well done sir!


This is truly disturbing.


Elon, like most people, is complicated, but given his megaphone we see those complications play out in public. I’m always struck by the dichotomy in his personality: very pragmatic about certain things on the one hand, and a techno-fetishist with sci-fi visions of grandeur on the other. These have manifested in his espoused visions for Mars at SpaceX, in FSD, and now in his vision for AI on X. What’s even more complicated is that he seemed grounded in his view of the dangers of AI, but has swept those concerns aside, believing that his own farts don’t stink around these issues. It’s like: AI is dangerous and can do stupid things, unless I develop it with all those stupid things 🤪. Making sense of X has always fallen in the bucket of human endeavor alone, as no AI in the near term will properly sort out the nuances of the cacophony of “opinions,” no matter how authoritative these may seem. I find the denial of the complexities of human language, and how many fall for what amounts to parlor tricks of technology, fascinating 😃


Determining what's "True" and what isn't is one of the hardest problems in AI (ditto philosophy). LLMs are simply not up to the task (by orders of magnitude), and (barring miracles) never will be.


In my experience ChatGPT is WAY better at epistemology than the vast majority of people I know or encounter.


Well, yes. The term Epistemic Clusterfuck is apt.

But, X is already full of shit. People who participate on X are mostly full of shit.

Why use it at all?

The simplest reason X lacks epistemic soundness isn't "source finding," because one could argue that retweeting preserves provenance, and sources seek to claim credit.

The problem is that there is no skepticism, no critique, no calls for evidence.

Musk gets angry at critique, and like Peter Thiel as well, feels like retaliation is justified, rather than questioning of arguments made or the interpretation of anecdotal evidence.

X is, it seems to me, ABOUT "winning". It's a notion of epistemic validation that assumes Debate (Rhetoric) is the highest test of truth...

Nonsense.


This emphasizes what I've considered the 2nd worst issue with these LLMs, the 1st being the hallucinations. Any AI system is useless if trained on opinions, especially of "non-experts". I'm being very charitable here. While there is considerable disagreement even amongst scientists, and other published experts, it is seldom based on the type of emotional blathering on social media today. I will now give my own non-expert opinion - business and government ventures should focus on AI expert systems where they have strictly controlled/vetted the training data. Specialized expert systems have a much greater ROI in the near to mid-term than generalized AI, and they are easier to troubleshoot and determine the logic of their results.


> Any AI system is useless if trained on opinions, especially of "non-experts".

Do you actually think this is true?


Yes; in fact I consider it to be worse than training a model solely on published fiction. You'll note that I include my own opinions. Note that I am not referring to the published works and opinions of professionals and scholars. My reference is to exactly this kind of dialog, here and in all other forms of social media, and even the comments/opinion sections of major news publications/sites. Why should an AI system trust the opinion of someone just because they have subscribed to a social media account, or even paid to access content behind a newspaper/magazine paywall? The most recent ridiculous item I've seen was the statement that some AI models were being trained on the output of other AI models! No good will come of that.


But how do you get from ~"not epistemically reliable" to "has zero utility"?


That's a good question. I can only provide a circular argument: in order for AI to extract utility from opinion forums, it must first have the ability to discern misinformation, disinformation, and uneducated ramblings, yet that is exactly what the training is intended to accomplish. For any AI to reach that level of expertise, it must first be trained on "quality controlled" input. Opinion forums, like trash dumpsters, do frequently contain valuable items, but finding them is rare and requires considerable effort.
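The "quality controlled input" idea can be made concrete. A minimal sketch in Python, where the source labels and the `filter_corpus` helper are hypothetical, invented purely for illustration: instead of asking a model to judge quality (the circularity above), the provenance of each document is checked before training.

```python
# Hypothetical source labels for a toy training corpus.
VETTED_SOURCES = {"peer_reviewed_journal", "textbook", "standards_body"}

def filter_corpus(docs):
    """Keep only documents whose source label is on the vetted allowlist.

    `docs` is a list of (source_label, text) pairs; both the labels
    and the documents here are invented for illustration.
    """
    return [text for source, text in docs if source in VETTED_SOURCES]

corpus = [
    ("peer_reviewed_journal", "Water boils at 100 C at 1 atm."),
    ("social_media_comment", "everyone knows water boils at 90, trust me"),
]
# Only the vetted document survives filtering.
assert filter_corpus(corpus) == ["Water boils at 100 C at 1 atm."]
```

The hard part, of course, is that real pipelines have to assign those source labels in the first place; the sketch only shows where the gate would sit.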

May 9 · edited May 9

AI can write code for me. How do you classify that as not utility?

I sense some irony in your basic argument.


Code is not an opinion. Nor are chemical formulas or cooking recipes. I have found code on sites, including GitHub, that is "incomplete" at best. The advantage that you and I have in this realm is code analyzers, debuggers, and compilers. You and I, and those tools, are the intelligence trained to recognize junk.
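The point about tooling can be illustrated: unlike an opinion, code can be checked mechanically. A minimal sketch using Python's standard `ast` module as a stand-in for the analyzers mentioned above (the snippets below are made up for the example):

```python
import ast

def looks_like_valid_python(snippet: str) -> bool:
    """Return True if the snippet parses as Python source.

    Parsing catches only syntax errors, not logic bugs, but it is
    an objective check that no opinion forum can offer for prose.
    """
    try:
        ast.parse(snippet)
        return True
    except SyntaxError:
        return False

# A well-formed snippet passes; a truncated, "incomplete" one fails.
assert looks_like_valid_python("def add(a, b):\n    return a + b\n")
assert not looks_like_valid_python("def add(a, b:\n    return a +")
```

Compilers and test suites extend the same idea further down the stack; there is no equivalent mechanical referee for opinions.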


The best thing you could do to improve the world is quit using Twitter.


A recipe for driving clicks through extremism, I guess. How Darth Vader-ish, or Mr. Burns-ish.


A guest for my show has a computer that only accepts USB-A inputs. Sometimes I send guests microphones to get higher quality recordings. A certain mic on Amazon says it comes with a cable that is USB-C to USB-A. Naturally the question is which end plugs into the microphone and which end plugs into the computer. The specs for the mic do not say nor does the manufacturer web site. (So, first of all, F this manufacturer.)

Amazon has replaced user Q&A with AI Q&A. Now, instead of people who bought and used the item answering FAQs, it’s an LLM. So I ask the AI: does the USB-C end plug into the microphone or the computer? It says, “The USB-C end plugs into the computer.” Because I know about the hallucination problem, I then ask, “Does the USB-A end plug into the microphone or the computer?” It says, “The USB-A end plugs into the computer.”

I try six different variants on the question to see if it can tell me what I need to know. Each time, it gives completely contradictory answers. LLMs giving false information about consumer goods on Amazon seems like very bad business. Vendors are going to demand it be removed.
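The failure mode described here, confidently contradictory answers to rephrasings of the same question, can at least be detected mechanically. A toy sketch in Python, where `ask` is a hypothetical stand-in for any Q&A model and the canned answers are invented to mirror the anecdote:

```python
def answers_consistent(ask, questions, normalize=str.strip):
    """Ask several paraphrases of the same question and report whether
    the normalized answers all agree. A crude check: it catches only
    literal disagreement, not subtler semantic contradiction.
    """
    answers = {normalize(ask(q)) for q in questions}
    return len(answers) == 1

# A canned, self-contradicting model, like the Amazon Q&A bot above.
canned = {
    "Which end plugs into the computer?": "The USB-C end.",
    "Which end of the cable goes into the computer?": "The USB-A end.",
}
# Two paraphrases, two incompatible answers: flagged as inconsistent.
assert not answers_consistent(canned.get, list(canned))
# A model that answers uniformly passes the check.
assert answers_consistent(lambda q: "The USB-A end.", list(canned))
```

A vendor-facing Q&A system could run a check like this before surfacing an answer; the sketch only shows the shape of the test, not a production design.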


Anyone who uses Xitter supports Elon Musk.

It's that simple.


X: just say "No!"
