
Good points. Yes, AI may often produce junk, but junk sells, and it soaks up dollars that might have gone to quality writing.

There's a McDonald's in the little shopping plaza near our house, and the drive-through lane is always backed up. Ninety percent of the shows on the streaming channels aren't really fit for human consumption. But they're there because lots of people like them.

So many commentators seem to be basing their analysis of generative AI on what's happening NOW, at the dawn of this industry. I wasn't such a good writer myself, when I was 2 years old.


What I imagine happening is that AI starts at the lower, junk-content end of the market, replacing human writers. This seems to be happening already. And then, as AI improves, it gradually moves up the quality ladder, replacing humans at ever higher levels.

Example: I read a philosophy professor who claimed that AI can already produce credible philosophy articles at the undergraduate level. He produced an example to illustrate, and it seemed convincing. If that's true, then at some point in time AI will probably be able to do graduate level writing too. And then maybe the professional level.

There will always be human writing, because some humans like to write. But will they be able to make a living writing in a market overwhelmingly flooded with super cheap written content of at least reasonable quality?

Will future audiences really care that much whether it was a human or machine that generated the content? Do you care that machines instead of humans made your car? Or do you just care what the car can do for you, and how much it costs?

This formula might provide an answer. Perceived value is determined by perceived scarcity.

For example, the Internet made it possible for almost anyone to be a writer with a global audience. Scarcity was destroyed, and the perceived value of written content sank. It seems AI will just take this already existing "scarcity of scarcity" situation to the next level.


Any philosopher worth the name would recognize that it is an inductive fallacy to assume that because an LLM can now regurgitate an undergrad term paper, it will inevitably produce something original and cogent that a skilled graduate student could write.


Thanks for the reply Marcus, much appreciated.

You could be right of course. There may very well be hard limits to what AI will ever be able to do. Best I can tell, nobody currently has much of any idea where those limits may lie. What we do know is that in just a few short years AI has surprised us with its new abilities. Where that ends, no one can as yet say.

I'm not sure I can accept the premise that there is much "original" work in philosophy at any level. To me, the entire field seems mostly an endless recycling of what's already been discussed for hundreds to thousands of years. At least for the philosophy pros, the people who do it for a living, the focus seems more on crafting a polished presentation to establish expert status than on original thinking. I'm not even sure that those who do intellectual work for a living are in a position to safely share original thinking.

More broadly, it's not clear to me how much original thinking or writing there is by humans more generally. It seems more a case that we absorb ideas from our environment, reshuffle the ideas a bit, our egos take ownership of those ideas, and then we restate the existing ideas in our own phrasing.

For example, when I write I tend to think of what's being written as "my ideas", as I believe most people tend to do. It's probably more accurate to describe what's happening as "my choice of words".

This perspective can be overstated of course, and I may be doing so. But to the degree it's true, to the degree that human thinking and writing is essentially mechanical most of the time, then it seems that some future version of AI may be able to successfully mimic much of what we're doing.


"As example, when I write I tend to think of what's being written as "my ideas", as I believe most people tend to do."

That's true to some extent, but at some point, writing has to reference the external world. If you say, for instance, "There is a fire in Hawaii," your statement refers to something that is occurring in the external world. Large Language Models don't currently have the ability to describe events in the real world: all they can do is reshuffle text that's already been created by humans.

How this applies to literature: a novel or a short story often incorporates the author's life experience in some way. E.g., Joseph Conrad was inspired to write *Heart of Darkness* after a real-life stint as a riverboat captain in Africa; Aeschylus drew on his experience as a soldier at the Battle of Marathon when he wrote *The Persians*; *Fear and Loathing in Las Vegas* was inspired by Hunter S. Thompson's real-life trip to Las Vegas, and so on—there are probably thousands of examples out there.

Until LLMs develop some kind of ability to interact with the physical world, they'll have a difficult time creating compelling fiction. I'm not saying this is impossible; it just means that LLMs as they're currently constructed won't be enough.


We could consider the future of Substack. What's going to happen when AI can generate an entire network like this in just a few hours? If that can happen one time, it can happen thousands of times. Thousands of networks each with thousands of fake authors with their fake personalities and their millions of generated articles.

From a business perspective the question would seem to be: how much will the broad public care about the difference between human-written articles on Substack and mass-produced AI content that has flooded the Internet because it is so cheap and efficient to produce?

EVIDENCE: Here in America, roughly half the country has voted for Trump twice, and may do so yet again. They could be watching C-SPAN, but instead they're watching Fox. How discriminating do we expect these folks to be when it comes to consuming written Internet content?

How much do you and I care that our cars are now made largely by robots? That's what I see coming to the world of content. We net writers will care when we are replaced, just as the factory workers cared when they were replaced. But the broad public is not going to care.


If the robots that build our cars had the reliability issues of ChatGPT, the auto industry would be in serious trouble.

And the task that car-building robots perform is analogous to a printer/copier, not an algorithmic text generator.
