Generative AI, and neural networks in general, don’t deal with referents, and they never truly will. Try asking one to give you a picture of a room with NO elephants in it and see what happens. They don’t deal with concepts. They pretend to, which is all machines can do. https://towardsdatascience.com/artificial-consciousness-is-impossible-c1b2ab0bdc46
I just told NijiJourney V6 to make me a digital painting of a room with no elephants.
And it gave me...
https://www.deviantart.com/stash/021g1zlbu5w8
A digital painting of a room with no elephants.
Not sure why you think this is impossible for these AIs to do. Indeed, people have been playing around with "negative space" to see what happens if you tell it NOT to make a bunch of things for like two years now; it's quite interesting. You can actually tell them to exclude things.
It isn't intelligent, but it is capable of that. All it is, ultimately, is a statistical weighting algorithm. Telling it not to include something means it actively excludes things like that thing.
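NijiJourney's internals aren't public, but open-source diffusion models expose the same idea as a "negative prompt," so here is a rough sketch of the mechanism using the Hugging Face diffusers library (the model name and parameter values are illustrative assumptions, not NijiJourney's actual setup):

```python
# Rough sketch of negative prompting with Hugging Face diffusers.
# Illustrative only: model name, prompts, and guidance_scale are assumptions.
# The negative prompt stands in for the "unconditional" text in
# classifier-free guidance, so each denoising step is pushed away
# from the listed concepts rather than the model grasping "absence."
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="digital painting of an empty room",
    negative_prompt="elephant, elephants",  # concepts to steer away from
    guidance_scale=7.5,
).images[0]
image.save("empty_room.png")
```

The exclusion is just a push in embedding space away from the listed terms, which is consistent with it being a statistical weighting trick rather than anything conceptual.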
Well, what exact prompt did you use? Because when I clicked, I got nothing but a 403 server error. Also, I didn’t say impossible… that’s not what I said. I did say these systems don’t handle concepts, and your prompt may well serve to make my point.
https://drive.google.com/file/d/1wvhxAscK_NhlM7z8quh9nMplG_vFHt3g/view?usp=sharing
It's literally just "Digital painting of an empty room --no elephants --niji 6".
Cooked prompt. Not using English grammar. Why "digital painting"? Try "Create a picture of an empty room with no elephants." ...Another demonstration would be the concept of "ouroboros". It doesn't have an actual concept of what an ouroboros is. The Webster's dictionary definition is "a circular symbol that depicts a snake or dragon devouring its own tail." I ask it to just draw me an ouroboros and it keeps messing it up.
The bot did exactly what I told it to do using very simple instructions. The fact that I didn't use "English grammar" is irrelevant. A bot not speaking English doesn't mean it can't take instructions.
You didn't think that the bot could do that, or would function in that way, so you are looking for reasons why you are right, rather than accepting that you are wrong and changing to the correct position.
It isn't intelligent but that doesn't mean it isn't useful or functional.
It's definitely not capable of producing literally anything, but that's true of anything, including people.
So did you try ouroboros or not?
“Correct position”… that’s just laughable… you’ll see why sooner or later
I was with you until your last sentence. I see no reason knowledge and understanding can't be computed. LLMs are bad at it for reasons we completely understand. On the other hand, a thermostat "knows" the current temperature and knows what to do about it. Everything else is a matter of understanding what brains do and applying it. Statistical analysis of large amounts of text is useful, but solves the wrong problem.
If it doesn't deal with referents then how is anything "understood"? Also, "understanding what brains do" is a bit of a fallacious presumption. There's not going to be a "correct modeling," period. It's underdetermined. https://plato.stanford.edu/entries/scientific-underdetermination/
Knowing something is reflected in the behavior of the agent in question. If the thermostat makes decisions based on the current temperature, it knows the temperature. If it is reflected in its behavior, then the agent knows the bit of knowledge in question. The internals matter, of course, if we want to understand how the agent's behavior is produced, but there is no promise that we will easily understand how a particular bit of knowledge is represented within a complex agent.
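To make that concrete, here is a toy thermostat; on this view, the knowledge claim rests entirely on what it does with the reading (the setpoint, hysteresis, and interface are made up for illustration):

```python
# Toy thermostat: whatever "knowing the temperature" amounts to here
# is exhibited entirely in its behavior. Values are illustrative.
from dataclasses import dataclass

@dataclass
class Thermostat:
    setpoint_c: float = 20.0   # target temperature (Celsius)
    hysteresis_c: float = 0.5  # dead band so the heater doesn't chatter
    heating: bool = False

    def step(self, current_temp_c: float) -> bool:
        """Decide whether the heater should run, given the current reading."""
        if current_temp_c < self.setpoint_c - self.hysteresis_c:
            self.heating = True
        elif current_temp_c > self.setpoint_c + self.hysteresis_c:
            self.heating = False
        return self.heating

t = Thermostat()
print(t.step(18.0))  # True: reading below the dead band, heater switches on
print(t.step(21.0))  # False: reading above the dead band, heater switches off
```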
I have a handy three-word reply: Chinese Room Argument. Searle had already demonstrated decades ago how behaviorism doesn't fly.
It's not behaviorism just because the word "behavior" appears in it. Behaviorism, at least as Skinner had it, was just a reaction to the time and was silly, IMHO. As far as the Chinese Room Argument, a lot has been said already about that. It's just wrong. It has so many goofy elements that it fails completely as a thought experiment. Searle either didn't understand computers or was willing to sacrifice his professional reputation for the sake of claiming humans to be intrinsically superior to AI. That's not a position I have any respect for.
Since you are not really coming to grips with what I said in my comment, let's end it here. I have no interest in having a Chinese Room Argument.
Uh, that's not a counterargument. "Searle is wrong and what he said was goofy" isn't a counterargument. Do you even know how to make a point? You're arguing by assertion. Try actually addressing Searle's argument, starting with how the Chinese Room doesn't actually understand Chinese. If you disagree, try explaining how the room actually does.
The baseline referent of autonomous individual human intelligence is Corporeality.
if a machine has no Corporeality, then what kind of autonomous referent to Reality does it have?
Those are ultimately kludges and band-aids. It will be a perpetual band-aiding exercise. See the famous "panda" example of how NNs don't actually identify objects, discussed in this article: https://towardsdatascience.com/fooling-neural-networks-with-adversarial-examples-8afd36258a03
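For concreteness, here is roughly what that attack looks like, sketched in PyTorch using the fast gradient sign method (the model, input, and epsilon are illustrative assumptions, not the article's exact setup):

```python
# Sketch of the fast gradient sign method (FGSM) behind the "panda" example.
# Illustrative only: a real demo would use an actual photo and its true label.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm(image: torch.Tensor, label: torch.Tensor, eps: float = 0.007) -> torch.Tensor:
    """Return a copy of `image` nudged in the direction that increases the loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Tiny per-pixel step along the sign of the gradient.
    return (image + eps * image.grad.sign()).detach()

x = torch.rand(1, 3, 224, 224)   # stand-in for a preprocessed image
y = model(x).argmax(dim=1)       # the label the model currently assigns
x_adv = fgsm(x, y)
print(model(x).argmax(dim=1).item(), model(x_adv).argmax(dim=1).item())
```

The perturbation is imperceptible to a human, yet it can flip the predicted label, which is the point of the panda example.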
As long as an NN is in there, it's going to be a kludge. NNs don't belong in a system involving any kind of legitimate epistemology; their presence de-legitimizes knowledge claims.
NNs as a technology are a distraction that's holding back AI development as a whole.
Sunk cost fallacy. No, I'm not the one standing on the wrong side of "ideology" here. https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer https://www.theguardian.com/science/2020/feb/27/why-your-brain-is-not-a-computer-neuroscience-neural-networks-consciousness