
Grounding requires no symbols, indeed. But you have to model things somehow.

If you are arguing that none of the computational models developed over the past century has any bearing on intelligence, that claim would require a solid argument.


They don't (have any bearing on intelligence). For the past 70 to 75 years, it has all been about mimicking intelligent behavior. Whether with symbol-processing systems or ANNs, everyone has been, and still is, developing complex functions that try to mimic the input-output behavior of thinking humans. And, not surprisingly, all they have to show for it to date is narrow AI. That's why people like Peter Norvig, who should know better given their stature in the field, are claiming that AGI is already here: they don't want to believe that all they've been doing their entire careers is developing complex functions rather than making inroads toward understanding intelligence. If the AI community (or psychologists, or neuroscientists) understood the nature of intelligence, no one whose intention is to achieve AGI would continue to develop systems the way they have been and still are (except perhaps to model some function of the mind-brain).


There are serious limitations to using a neural net to fit inputs to outputs, indeed. We see that in practice. The systems do not understand what they do.

However, I think you are missing the significance of o3, AlphaProof, and the upcoming agents. In those systems, the neural net is used only to generate hypotheses. Then more rigorous tools, including formal verifiers, simulators, code execution, etc., kick in to keep the system honest.

With such an approach, the AI explores the problem space with the neural net supplying ideas and a more rigorous model of the problem keeping it on track.

We are very early in this, but the approach is sound. It is like with people: first use your imagination, then do rigorous work and adjust based on feedback.
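
To make the propose-and-verify loop concrete, here is a minimal sketch in Python. The proposer and verifier below are toy stand-ins of my own, not o3's or AlphaProof's actual machinery: a cheap, fallible generator throws out candidates, and only an exact check decides what counts as an answer.

```python
import random

# Toy "neural net": proposes candidate factorizations of n. This is a
# hypothetical stand-in for an LLM or policy network generating hypotheses.
def propose_factorization(n: int) -> tuple[int, int]:
    a = random.randint(2, n - 1)
    return a, n // a

# Rigorous verifier: unlike the proposer, this check is exact.
def verify(n: int, a: int, b: int) -> bool:
    return a * b == n and a > 1 and b > 1

def propose_and_verify(n: int, budget: int = 10_000):
    """Generate hypotheses with the cheap, fallible proposer; accept only
    those that pass the exact check (the neural-net + verifier pattern)."""
    for _ in range(budget):
        a, b = propose_factorization(n)
        if verify(n, a, b):
            return a, b
    return None  # the proposer never produced a verifiable answer within budget

print(propose_and_verify(91))  # e.g. (7, 13); the verifier keeps the system honest
```

The proposer can be arbitrarily sloppy; the guarantees come entirely from the verification step, which is the point of the architecture being described.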


But as I said in my previous post, what you describe above is still just mimicking intelligent behavior, albeit improved behavior. It is no different in principle from the AI of the latter part of the 20th century, just different processes and representations. Basically, history is repeating itself.


We are back around to symbolic AI.

http://aicyc.org/2024/11/08/revolutionizing-artificial-intelligence-the-essential-role-of-semantic-ai/

TL;DR: AICYC's Semantic AI Model (SAM) revolutionizes artificial intelligence by combining a multi-lingual knowledge graph with large language models (LLMs), ensuring users receive verifiable, accurate information in response to questions. SAM addresses the shortcomings of traditional AI models by promoting transparency, multilingual access, verifiable knowledge, and continual learning. AICYC provides a secure, decentralized infrastructure governed by the AICYC DAO to protect the privacy and rights of its users, democratizing AI for lifelong learning.
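
As an illustration of the general idea of checking generated answers against a knowledge graph (the names and structure below are my own illustrative assumptions, not AICYC/SAM's actual API):

```python
# Illustrative only: a tiny triple store standing in for a knowledge graph.
KNOWLEDGE_GRAPH = {
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
}

def verify_claim(subject: str, relation: str, obj: str) -> bool:
    """Accept a generated claim only if it is backed by a triple in the graph."""
    return (subject, relation, obj) in KNOWLEDGE_GRAPH

# An LLM might generate "Lyon is the capital of France"; the graph rejects it.
print(verify_claim("Lyon", "capital_of", "France"))   # False
print(verify_claim("Paris", "capital_of", "France"))  # True
```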


What separates mimicking something from the real thing?

Is it accuracy, predictability?

Or is it some philosophical thing?


It's not about accuracy or predictability. And there are, of course, various philosophical arguments. But think about it: you're betting that the current approach of building systems that mimic the behavior of something as complex as the human brain, using existing technology, will eventually, just by chance, match its capabilities, all without a deep understanding of how the brain does it.


How the brain does it is not the point. How the code does it is everything.


I think that, despite the hype, the vendors want to offer incrementally better automation and make a profit.

Understanding the brain may not pay off. It is a highly customized, ad hoc processing network.

I think a better bet is to model how we reason and integrate information. That insight can't be found in the brain's wiring.


It's not about the modeling - it's about conflating the model with the modeled.


What separates the modeling from the modeled? This: all models of phenomena can be undone, in the stack sense - previous states (e.g., of a fluid flowing) can be restored merely by setting the model variables back to past values, all the way to the start. In contrast, reality can never, ever, ever be undone - time doesn't flow backwards.
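
A minimal sketch of "undoing" a model in the stack sense, using a toy simulation I made up for illustration: every state is pushed onto a stack, so the model can be rewound to any earlier state, while the physical process it models cannot.

```python
history = []                       # stack of past states
state = {"t": 0, "level": 100.0}   # e.g. fluid level in a draining tank

def step(state):
    """Advance the toy model one tick: time moves forward, fluid drains."""
    return {"t": state["t"] + 1, "level": state["level"] * 0.9}

for _ in range(5):
    history.append(state)          # remember the current state before moving on
    state = step(state)

print(state)                       # model state after 5 steps
state = history[0]                 # "undo" everything: jump back to the start
print(state)                       # the model is back at t = 0; reality never is
```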


In our heads, we also run models of the world. There is no difference between organic brains processing information and acting intelligently and software-based systems doing the same thing.
