Discussion about this post

TTLX

I find two things interesting about the thinking of AI fanbois (for want of a better word).

1. They are "asymmetrically surprised". When an AI does something amazing and clever, they are rightly excited, but they downplay or ignore the same AI doing something stunningly stupid. Yet errors are surely at least as important as successes, especially if you want to figure out where all this is going.

2. They misunderstand understanding, either underestimating what a general intelligence actually is, or overestimating what can be achieved simply by using larger and larger training datasets. Do they think understanding is just a statistical artefact? Or do they suppose it's an emergent property of a sufficiently large model?

These things interrelate, because if you're not paying attention to the sheer insanity of AI's mistakes, you won't notice that it's not progressing towards general intelligence at all.

Where it's headed is perhaps more like a general *search* capability.

Gary Marcus

I appreciate the speed of your replies, but there are many confusions here. Symbols precede modern cognitive science by a century; the algorithm that performs Monte Carlo Tree Search uses symbols to track a state in a tree, and trees are pretty much the most canonical symbolic structure there is. (Standard neural networks don’t take them as inputs, but a great many symbolic algorithms do.) It doesn’t matter whether cognitive scientists appeal to MCTS or not; you are conflating cognitive science with a hypothesis and set of tools that are foundational to computer science.

And again, it doesn’t matter what AlphaFold 2 *cites*; what matters is that the representations it takes in are handcrafted symbolic representations. Poring through citation lists is not the right way to think about this. Furthermore, I didn’t say that “classic models of cognitive science” had any impact on those specific architectures (Alpha* and Google Search) at all; I am not sure where you are even getting that. Again I urge you to separate the engineering question from the cognitive modeling question. Here I was talking about the engineering question: I said that these systems are hybrids of deep learning and symbols. (You are also wrong on Google Search; as far as I know, they now use LLMs as one cue among many.)

You are also playing games by switching between current foundation models (somewhat narrow) and neural networks in general (neurosymbolic is older than foundation models and open to a variety of neural approaches); and certainly Google has been using neural networks as a component in search since at least 2016. (And Google Search, the most economically successful piece of AI in history, has used symbols from the beginning; PageRank, for example, is a symbolic algorithm.)
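[Editor's note: to make the "MCTS is symbolic at its core" point concrete, here is a minimal toy sketch, not drawn from the post or from any DeepMind codebase, of the bookkeeping Monte Carlo Tree Search performs: an explicit tree of nodes over discrete states, with visit counts and values propagated along its edges. The names `Node`, `mcts`, `legal_moves`, `apply_move`, and `rollout` are illustrative; the last three are hypothetical game-specific hooks a caller would supply.]

```python
import math
import random

class Node:
    """One node in the explicit search tree: a discrete, symbolic state plus statistics."""
    def __init__(self, state, parent=None):
        self.state = state          # a discrete game state (e.g., a board position)
        self.parent = parent
        self.children = {}          # move -> Node: the tree structure itself
        self.visits = 0
        self.value = 0.0

    def uct_child(self, c=1.4):
        # UCT rule: balance exploiting high-value children against exploring rarely visited ones.
        return max(
            self.children.values(),
            key=lambda n: n.value / (n.visits + 1e-9)
            + c * math.sqrt(math.log(self.visits + 1) / (n.visits + 1e-9)),
        )

def mcts(root, legal_moves, apply_move, rollout, iterations=1000):
    """Grow the tree for a fixed budget, then return the most-visited move from the root."""
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down the tree via UCT while nodes are fully expanded.
        while node.children and len(node.children) == len(legal_moves(node.state)):
            node = node.uct_child()
        # 2. Expansion: add one untried move as a new child node.
        untried = [m for m in legal_moves(node.state) if m not in node.children]
        if untried:
            move = random.choice(untried)
            node.children[move] = Node(apply_move(node.state, move), parent=node)
            node = node.children[move]
        # 3. Simulation: estimate the value of the reached state (random playout here).
        reward = rollout(node.state)
        # 4. Backpropagation: update counts and values along the path back to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

[In systems like AlphaGo/AlphaZero a neural network replaces the random rollout and guides selection, but the tree of nodes, edges, and counts above remains classical symbolic machinery, which is the hybrid point being made.]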

