Thanks for the reply, Marcus. Much appreciated.
You could be right, of course. There may very well be hard limits to what AI will ever be able to do; as best I can tell, nobody currently has much of any idea where those limits lie. What we do know is that in just a few short years AI has surprised us with its new abilities. Where that ends, no one can yet say.
I'm not sure I can accept the premise that there is much "original" work in philosophy at any level. To me, the entire field seems mostly to be an endless recycling of what has already been discussed for hundreds to thousands of years. At least for the philosophy pros, the people who do it for a living, the focus seems to be more on crafting a polished presentation to establish expert status than on original thinking. I'm not even sure that those who do intellectual work for a living are in a position to safely share original thinking.
More broadly, it's not clear to me how much original thinking or writing humans do at all. It seems more that we absorb ideas from our environment, reshuffle them a bit, let our egos take ownership of them, and then restate the existing ideas in our own phrasing.
As an example, when I write I tend to think of what's being written as "my ideas", as I believe most people do. It's probably more accurate to describe what's happening as "my choice of words".
This perspective can be overstated of course, and I may be doing so. But to the degree it's true, to the degree that human thinking and writing is essentially mechanical most of the time, it seems that some future version of AI may be able to successfully mimic much of what we're doing.
"As example, when I write I tend to think of what's being written as "my ideas", as I believe most people tend to do."
That's true to some extent, but at some point, writing has to reference the external world. If you say, for instance, "There is a fire in Hawaii," your statement refers to something that is occurring in the external world. Large Language Models don't currently have the ability to describe events in the real world: all they can do is reshuffle text that's already been created by humans.
How this applies to literature: a novel or a short story often incorporates the author's life experience in some way. E.g., Joseph Conrad was inspired to write *Heart of Darkness* after a real-life stint as a riverboat captain in Africa; Aeschylus drew on his experience as a soldier at the Battle of Marathon when he wrote *The Persians*; *Fear and Loathing in Las Vegas* was inspired by Hunter S. Thompson's real-life trip to Las Vegas; and so on. There are probably thousands of examples out there.
Until LLMs develop some kind of ability to interact with the physical world, they'll have a difficult time creating compelling fiction. I'm not saying this is impossible; it just means that LLMs as they're currently constructed won't be enough.