Discussion about this post

DL:

We actually are carving up living human brains to see how language works. See Eddie Chang’s work at UCSF.

Saty Chary:

Hi Gary, interesting question from you, and an interesting response from Chomsky.

Here is how to tell that LLMs are just a house of cards. Suppose I create 50TB of text trash, by which I mean text full of absurdities about the world (the ocean is blue because it is cooling down after a giant fire, for example), and use it to train an LLM. That LLM, and any other LLM, will be unable to say "I call BS." That's because an LLM has derivative intelligence at best; it has zero actual understanding of a single word, even of 'a'.

Meaning doesn't reside in words, grammar, or language.

*Meaning comes solely from direct, continuous, physical interaction with the environment.*

I know what the passage of time, my right-hand side, nighttime, death, 'large', etc. mean on account of experience. Sure, I can read up on, hear about, watch, or imagine the rest (e.g., living on the ISS), but these are extrapolations atop fundamental, lived experience, which has no computational substitute.
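
To make the thought experiment concrete, here is a minimal sketch. It is not an LLM; a word-level bigram model stands in for one, and three sentences stand in for the 50TB of trash, but the failure mode is the same: the model can only score strings against its training distribution, so it ranks the trained-in absurdity above the truth.

```python
from collections import Counter, defaultdict
import math

# Stand-in for the 50TB of trash: a few sentences repeating the absurdity.
absurd_corpus = [
    "the ocean is blue because it is cooling down after a giant fire",
    "the ocean is cooling down after a giant fire so it looks blue",
    "after a giant fire the ocean cools down and turns blue",
]

# Count word-bigram transitions, with sentence-boundary markers.
counts = defaultdict(Counter)
for sentence in absurd_corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1

vocab = {w for row in counts.values() for w in row} | set(counts)

def log_prob(sentence: str) -> float:
    """Add-one-smoothed bigram log-probability of a sentence."""
    words = ["<s>"] + sentence.split() + ["</s>"]
    total = 0.0
    for a, b in zip(words, words[1:]):
        row = counts.get(a, Counter())
        total += math.log((row[b] + 1) / (sum(row.values()) + len(vocab)))
    return total

# The model scores the absurdity it was trained on far above the true
# explanation; nothing in it can "call BS", because nothing in it
# touches the world.
print(log_prob("the ocean is blue because it is cooling down after a giant fire"))
print(log_prob("the ocean is blue because water absorbs red light"))
```

Scaling up the model and the corpus changes the capacity, not the objective: maximum-likelihood training rewards consistency with the corpus, not with the world.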
