Discussion about this post

Roumen Popov:

"describe the ways in which large language models (LLMs) can carry out tasks for which they were not specifically trained" - how do they know the LLMs were not specifically trained, have they examined the terabytes of training data or the millions if not billions of instances of RLHF to be able to claim that. To declare that LLMs can do that, the first step would be for the LLM to learn simple arithmetic and demonstrate it with big numbers with a lot of digits (that can not be simply remembered from the training data). Until an LLM can be demonstrated to be able to do that, all claims of a magically emergent AGI are just bla, bla, bla. So, count me among the confused too :) Also, I think 10 years from now people will look back at the current events and claims from prominent leaders in the AI field and just shake their heads in bemused disbelief.

CFB:

Hi Gary. I basically agree with everything you said. I read the article a few days ago and was rather taken aback, especially by the condescending tone. Given the authors' stature in the field, they should know better than to make the kinds of pronouncements they did about today's systems. Along with Hinton's 60 Minutes interview, there seems to be a lot of wishful thinking going around these days. This is shades of the '70s and '80s.

41 more comments...
