
This is a great discussion, but I believe we're missing the bigger picture. John von Neumann, in "The Computer and the Brain" (published posthumously in 1958), already identified the main items relevant to the LLM wall, which I believe is a methodological training wall, not a real limit. The key considerations are the following:

a) the most important point: lots of relatively low-precision processing nodes, slow compared to electronics, but massively NETWORKED;

b) capacity for massive parallel processing. The brain is estimated to consist of many tens of billions of neurons, each connected to thousands of others, enabling simultaneous processing across vast neural networks. This parallelism allows rapid integration of information from diverse sources;

c) biological systems are inherently robust: the loss or malfunction of individual neurons doesn't typically impair overall function. This resilience arises from overlapping functionality and the ability of neural networks to reorganize and adapt;

d) the stochastic, or probabilistic, nature of neural processes. Neuronal firing isn't purely deterministic; it's influenced by a range of variables, including synaptic weights and neurotransmitter levels, which introduce randomness into neural computation. This stochasticity lets the brain be highly adaptable: learning from experience, generalizing from incomplete data, and exhibiting creativity.
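The robustness and stochasticity points above can be sketched in a few lines of code. This is a toy illustration, not a model of real neurons: the unit count, the drive level, the logistic firing rule, and the 10% "lesion" are all made-up numbers chosen just to show that a redundant population of probabilistic units gives a stable signal even when individual units die.

```python
import math
import random

random.seed(0)

def fire(drive):
    """A stochastic unit: the input sets a firing *probability*, not a fixed output."""
    p = 1.0 / (1.0 + math.exp(-drive))  # logistic squashing of the drive
    return random.random() < p

def population_rate(drives, dead=frozenset()):
    """Fraction of surviving units that fire on one trial."""
    alive = [d for i, d in enumerate(drives) if i not in dead]
    return sum(fire(d) for d in alive) / len(alive)

# 2000 redundant units, all seeing roughly the same stimulus (noisy drive ~0.65).
drives = [0.65 + random.gauss(0, 0.1) for _ in range(2000)]

healthy = population_rate(drives)
# Kill 10% of the units at random: the population signal barely moves.
dead = frozenset(random.sample(range(2000), 200))
damaged = population_rate(drives, dead)
print(healthy, damaged)  # both hover near sigmoid(0.65), roughly 0.66
```

No individual unit is reliable, yet the averaged population rate is, which is exactly the redundancy-plus-randomness combination the comment attributes to biological systems.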

TO ME, this suggests the work needs to shift toward exploring different network structures for 'artificial neural networks': different architectures, different topologies. Use what's known about the brain's 'mechanical' architecture more wisely; really understand the brain's synaptic organization and its recurrent loops to conceptualize new architectures and new paradigms for using information. Clearly, real experience shows that people like Einstein didn't need to know everything about everything, or have infinite data. So get smart, and get busy. Of course, the lazy way is to look for more training data and more training epochs ... but that's just me, arguing that the low-hanging fruit have been picked, and now one has to get smarter. Cheers. Stay positive, great things are coming (in the AI world).
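For the recurrent loops mentioned above, here is a minimal sketch of what makes them different from the feed-forward stacks most LLMs use: each unit's next state depends on the *current* activity of the others, so activity circulates and settles rather than flowing once from input to output. The 3-unit topology, the 0.5 weights, and the tanh nonlinearity are arbitrary illustrative choices, not a claim about brain circuitry.

```python
import math

def recurrent_step(state, w_rec, w_in, stimulus):
    """One update of a tiny recurrent loop: new activity is a function of
    the other units' current activity plus the external stimulus."""
    return [
        math.tanh(sum(w * s for w, s in zip(row, state)) + w_in[i] * stimulus)
        for i, row in enumerate(w_rec)
    ]

# A hypothetical 3-unit ring: unit 2 feeds unit 0, 0 feeds 1, 1 feeds 2.
w_rec = [[0.0, 0.0, 0.5],
         [0.5, 0.0, 0.0],
         [0.0, 0.5, 0.0]]
w_in = [1.0, 0.0, 0.0]  # only unit 0 receives the external stimulus

state = [0.0, 0.0, 0.0]
for _ in range(50):  # iterate until the loop settles into a fixed point
    state = recurrent_step(state, w_rec, w_in, stimulus=1.0)
print(state)  # a stable activity pattern the loop has converged to
```

With these small weights the loop converges to a fixed point; with stronger or asymmetric weights the same structure can sustain oscillations or memory traces, which is one reason recurrent topologies are an interesting direction beyond pure feed-forward scaling.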
