Discussion about this post

Mario Campos

Great article. Since Language Models only learn the distribution of words and, with certain clever hacks (like attention), how to contextualize it, the next big models will appear to learn basic compositionality on the most common examples, but will fail to generalize to complex unseen ones. There must be a better way than just building larger and larger models to approach AGI, and I think it is important for the field to start exploring alternatives to get closer before a new AI winter arrives.
