I think your analysis here corroborates the results published recently (https://arxiv.org/abs/2404.04125), which you referred to in your posts from 8 April ("Breaking news: Scaling will never get us to AGI" and "Corrected link, re: new paper on scaling and AGI"). According to the cited paper, for models based on neural networks an exponential increase in the volume of training data (and hence in the volume of computation) yields only a linear increase in accuracy. That means that if data and compute are now growing only linearly, which is probably the case for current LLMs, the improvement could be very slow, close to the margin of error of performance estimation, effectively a plateau rather than a significant effect.
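To make the arithmetic concrete, here is a minimal Python sketch assuming a purely illustrative log-linear scaling law (accuracy = A + B * log10(data)); the constants A and B and the accuracy() function are hypothetical and not taken from the paper, but they show why exponential data growth gives steady gains while linear data growth flattens out.

```python
import math

# Hypothetical log-linear scaling law: accuracy grows linearly in log(data).
# The constants A and B are purely illustrative, not values from the paper.
A, B = 0.20, 0.05  # baseline accuracy and gain per 10x more data

def accuracy(num_examples: float) -> float:
    """Toy accuracy under the assumed log-linear scaling law."""
    return A + B * math.log10(num_examples)

# Exponential data growth: each step multiplies the data volume by 10.
exponential = [10**k for k in range(6, 12)]
# Linear data growth: each step adds a fixed amount of data.
linear = [10**9 + k * 10**9 for k in range(6)]

print("exponential data growth -> roughly constant accuracy gain per step")
for n in exponential:
    print(f"  n = {n:>14,}  accuracy = {accuracy(n):.3f}")

print("linear data growth -> shrinking accuracy gain per step (near-plateau)")
for n in linear:
    print(f"  n = {n:>14,}  accuracy = {accuracy(n):.3f}")
```

Under these toy assumptions, each tenfold increase in data adds a fixed 0.05 to accuracy, whereas adding a fixed amount of data per step produces gains that shrink toward zero, which is the near-plateau behaviour described above.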