Discussion about this post

Kit Farr:

The problem with LLMs, which is also why the current architecture will never reach AGI, is epistemological. LLMs are predicated on the false assumption that knowledge is essentially semantic. It is not. In knowledge theory, metaphysics precedes epistemology and provides the context in which the parts can be coherently related to the whole; metaphysics also clarifies which parts are not, and should not be, related. What is missing from LLMs is an ontological understanding of reality, that is, of its principial or pre-theoretical antecedents and structure. The metaphysical dimension of knowledge is almost entirely absent from LLMs. If hallucinations are to be solved and AGI approximated, numerous metaphysical models must be integrated into, and must guide, the construction of semantic relationships. (Otherwise, LLMs will assume that everything is related to some degree to everything else, which is false and, I think, one cause of hallucinations.) A few such models would include causality, anatomy, geography, mathematics, physics, ethics, citations, etc. AGI cannot occur without these epistemic structures that the human mind takes for granted. I am writing as a PhD student in knowledge theory. I find it astonishing (and concerning) that such expertise seems to be missing from the construction of LLMs.

alwayscurious:

Gary, can you write about the rush to build new energy resources, including the hyped small nuclear power plants, to service the "need" for the many planned data/AI centers?

If the problem of "hallucinations" is getting worse, can you foresee whether it is rectifiable, and if so, might it be wise to hold off on dedicating so much money to something so defective?

To me it's funny how climate change has been tossed in the bin now that the higher-ups are in a hurry to feed their beast of data/AI, which, they hope, will be of great assistance in running the world.

