Knowing something is reflected in the behavior of the agent in question. If the thermostat makes decisions based on the current temperature, it knows the temperature. If a bit of knowledge is reflected in an agent's behavior, then the agent knows it. The internals matter, of course, if we want to understand how the agent's behavior is produced, but there is no promise that we will easily understand how a particular bit of knowledge is represented within a complex agent.
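A minimal sketch of the thermostat case, in Python (the Thermostat class and its names are illustrative assumptions, not anything from this thread): the current temperature feeds directly into the device's decision, and that behavioral sensitivity is the sense in which it "knows" the temperature.

class Thermostat:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint  # desired temperature

    def decide(self, current_temp: float) -> str:
        # The current temperature is reflected directly in the behavior;
        # nothing beyond this sensitivity is needed for the "knowing" at issue.
        if current_temp < self.setpoint:
            return "heat on"
        return "heat off"

print(Thermostat(20.0).decide(18.5))  # -> heat on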
I have a handy three-word reply: Chinese Room Argument. Searle demonstrated decades ago that behaviorism doesn't fly.
It's not behaviorism just because the word "behavior" appears in it. Behaviorism, at least as Skinner had it, was just a reaction to its time and was silly, IMHO. As for the Chinese Room Argument, a lot has already been said about it. It's just wrong. It has so many goofy elements that it fails completely as a thought experiment. Searle either didn't understand computers or was willing to sacrifice his professional reputation for the sake of claiming humans to be intrinsically superior to AI. That's not a position I have any respect for.
Since you are not really coming to grips with what I said in my comment, let's end it here. I have no interest in having a Chinese Room Argument.
Uh, "Searle is wrong and what he said was goofy" isn't a counterargument. Do you even know how to make a point? You're arguing by assertion. Try actually addressing Searle's argument, starting with the claim that the Chinese Room doesn't actually understand Chinese. If you disagree, explain how the room does.
The baseline referent of autonomous individual human intelligence is Corporeality.
If a machine has no Corporeality, then what kind of autonomous referent to Reality does it have?