Great insights as always. I'd also add that language models themselves, however large, are extremely limited in (a) representing reality and (b) capturing imagination. If y'all have ever experienced the feeling "I don't know how to put <insert the last 5 ineffable subjects you wanted to talk about but could not> into words," then congratulations: you have already run up against the wall of ineffability in philosophy, especially its epistemology branch.
At the end of the day, language is an abstraction of thoughts and meanings, of physical realities and imagined possibilities. Not all things can be abstracted; and for those that can, *something* is lost in the abstraction, as in lossy information compression. So however "smart" LLMs may become, they can only capture what language presents to them: an abstracted-away, stripped-down, and biased-to-the-effable-only view of the world. So yes, ten times yes to "...[A] physics engine's model of the world" for any language model.
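The compression point can be made concrete with a toy sketch. Nothing here is anyone's actual model; the `encode`/`decode` helpers are hypothetical, standing in for "putting a rich signal into a coarse code" the way language puts thought into words:

```python
# Toy lossy abstraction: quantize a continuous signal into a few coarse
# buckets, then decode. The round trip cannot recover the original,
# just as language cannot recover the full thought it abstracts.

def encode(signal, levels=4):
    # Map each value in [0, 1) to one of `levels` discrete buckets.
    return [min(int(v * levels), levels - 1) for v in signal]

def decode(codes, levels=4):
    # Best possible guess from the code alone: the bucket midpoint.
    return [(c + 0.5) / levels for c in codes]

signal = [0.12, 0.48, 0.53, 0.91]
restored = decode(encode(signal))
loss = [abs(a - b) for a, b in zip(signal, restored)]

print(restored)        # coarse reconstruction, not the original
print(max(loss) > 0)   # True: something was irreversibly lost
```

However clever the decoder, distinct signals that land in the same bucket are indistinguishable afterward; that collapsed detail is the "ineffable" remainder the comment is pointing at.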