Intelligence is not one thing, as in you have it or you don't. It is a giant collection of skills and their seamless integration.
Machinery for high-level approximate synthesis is absolutely necessary, and that is what the vendors have now.
The logic will get better in areas where there's profit. Any failures will inspire custom solutions. No fundamental limits in sight.
That depends on how you look at it. The fundamental limit is that no amount of induction (statistics) gets you deduction (symbolic logic), even if you can approximate it closely. The reverse is also true, as we found out twenty to forty years ago.
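A purely illustrative sketch of that contrast (the relation, data, and "learner" below are made up for the example): a symbolic rule such as transitivity answers any query in its domain by deduction, while a statistics-style lookup over seen cases only approximates, and drifts once a query falls outside what it has seen.

```python
# Toy contrast between deduction (a symbolic rule applied to any case)
# and induction (statistics over previously seen cases). Hypothetical
# illustration only, not anyone's production system.

# Ground truth: "ancestor" is the transitive closure of "parent".
parent = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")}

def deduce_ancestor(x, y, facts):
    """Symbolic deduction: ancestor(x, y) holds if parent(x, y),
    or if parent(x, z) and ancestor(z, y) for some z."""
    if (x, y) in facts:
        return True
    return any((x, z) in facts and deduce_ancestor(z, y, facts)
               for z in {b for (_, b) in facts})

# "Inductive" learner: memorise labelled examples and answer by lookup,
# falling back to the majority label for anything it has never seen.
train = {("a", "b"): True, ("b", "c"): True, ("a", "c"): True,
         ("b", "a"): False, ("c", "a"): False}

def induce_ancestor(x, y, table):
    if (x, y) in table:
        return table[(x, y)]
    return sum(table.values()) >= len(table) / 2   # majority guess

# Deduction handles queries it has never "seen"; the statistical lookup
# gets some unseen cases right by accident and others plainly wrong.
for q in [("a", "e"), ("e", "a")]:
    print(q, "deduction:", deduce_ancestor(*q, parent),
          "  induction:", induce_ancestor(*q, train))
```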
To add: AI vendors are not opposed in principle to deductive reasoning or symbolic methods. The problem is that, in practice, those are just a different kind of approximation to messy reality. They do not solve the problem.
AGI is an immense problem, and what is in our heads is not easy to represent properly. The industry is onto something by focusing on incremental work and leveraging data as much as possible. That will inspire new directions as need be.
Symbolic logic may be seen as part of our brain's efficiency apparatus: for instance, it prevents infinite scaling issues (e.g. the outlier problem, among others).
The symbolic logic people had it the wrong way around. Stuff like emotions doesn't emerge from large amounts of discrete facts and rules (the problem being that 'large' here is in effect 'infinite'); rather, discrete facts and rules are created out of the non-discrete, messy, chaotic stuff below. You can create Q out of R, but not the other way around.
In that sense it may be evolutionarily related to our 'conviction' efficiency apparatus, such as the conviction that 'incremental work' 'will' inspire new directions 😀. Or the conviction that these 'new directions', when mentioned now, are the equivalent of vapourware.
Symbolic logic surely has a place. When people learned to go from daily messy situations to high-level abstract rules, and then apply those rules in other contexts, that made us much smarter.
It is important to note, however, that abstractions alone cannot fix all outliers, just certain categories of them. The real work still happens at the messy detail level, where the rules you know may not apply, or where you need to know how to apply them.
Which is true. And here you are close to why systems built on massive discrete logic (like the bits and operands of digital computers) always have some trouble with messy reality. What holds for symbolic logic, that scaling it doesn't get you there, is probably true for discrete logic in general. (Just riding a hobby horse 😀.)
As I see it, anything a human can model, a machine can model.
If a machine needs higher precision in calculations, that is easy to add. The same goes for memory. In fact, machines already beat us in both by a very large margin.
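A minimal sketch of the precision point, using only Python's standard library (the numbers are arbitrary): precision is a single setting you can dial up on demand.

```python
# The same computation at 28 and at 100 significant digits.
from decimal import Decimal, getcontext

getcontext().prec = 28                 # default working precision
print(Decimal(1) / Decimal(7))         # 0.1428571428571428571428571429

getcontext().prec = 100                # dial precision up on demand
print(Decimal(1) / Decimal(7))         # 100 significant digits
```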
The issue where AI is failing so far is that the world representations we hold in our heads are outrageously immense and operate at multiple levels.
We can effortlessly imagine the whole universe, then zoom to an individual galaxy, a star, a planet, an atom, a quark.
We can switch in no time to talking about a science fiction book, historical trends, humor, and how any of these relate to anything in cosmology.
We never lose track of our train of thought as we do any of this. That suggests very clever representations.
Robots can build silicon chips at the 3-nanometer level. I worked in that industry. The precision of computer logic for physics work is not the problem.
The problem for AI is being able to operate at multiple levels of abstraction in a massively complex world, which is an unrelated issue.
Much work people do involves diligently going through steps, and checking what you get as you go. The feedback informs the next steps.
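To make the "check as you go" pattern concrete, here is a small generic sketch (bisection root-finding, not tied to any particular AI system): every step performs a check, and the result of that check alone determines the next step.

```python
# "Go through steps, check as you go, let the feedback pick the next
# step": each iteration of bisection evaluates the midpoint (the check),
# and that result decides which half-interval to work on next.
def bisect(f, lo, hi, tol=1e-9):
    assert f(lo) * f(hi) < 0, "need a sign change to start"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:   # feedback: root lies in the left half
            hi = mid
        else:                     # feedback: root lies in the right half
            lo = mid
    return (lo + hi) / 2

# Example: the positive root of x^2 - 2, i.e. sqrt(2).
print(bisect(lambda x: x * x - 2, 0.0, 2.0))   # ~1.41421356
```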
Symbolic and principled methods suffer from the same problems as LLMs unless you are able to validate and model precisely what you are dealing with.