
I completely agree in theory, but... it's hard. This senseless machine is just that, a machine; I won't argue that point. Still, that human urge to regard a word-making-thing as a thinking-thing has epochs of evolution behind it, and for pretty much all of them it was a very accurate assumption. It's not going anywhere. And though I'm not smart enough to put my finger on it, I worry we might lose something very human as we adapt to this brave new world, where we cannot be sure whether what we speak to has a soul. (Literally or metaphorically, take your pick.)

I think I've managed, at least, to put LLMs in the same mental category as stuffed animals. I know they're not sapient, not remotely so. I would never prioritize an AI or a stuffed animal over an actual life. (If anything, the stuffed animal is probably the more valuable of the two, if it has emotional value even to a single toddler.) Still, in day-to-day operations, I can't help but pick up a stuffed animal more gently than I would a pile of clothes, and I can't help but be more polite to AIs than is strictly necessary or optimal.
