Well written. Speaking as a student of technological change, this is not surprising. Nor is it surprising that legions of fans deny it!
All exponential improvement curves eventually turn into S curves (i.e., a slowing rate of improvement) as one or more fundamental limits come into play. They can last for a long time - look at multiple versions of Moore’s Law lasting 50+ years! But they cannot go on forever. (“If something cannot last forever, it won’t.” I forget which SF author coined that.)
Fortunately for technology, another approach can eventually become relevant and its performance can surpass the old one. So the overall rate of improvement can look vaguely exponential, even though it’s really a series of stacked S curves. In fact, LLMs are in some ways an example - a very different approach to AI that rocketed past earlier machine-learning approaches, at least for some applications.
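A minimal sketch of that stacking effect, with made-up ceilings and timings purely for illustration: each generation saturates on its own, but the combined curve keeps climbing in a roughly exponential-looking way.

```python
import math

def logistic(t, midpoint, ceiling, rate=0.5):
    # Classic S curve: slow start, fast middle, flattening as it nears its ceiling.
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Hypothetical technology "generations", each with its own ceiling and timing.
generations = [
    dict(midpoint=10, ceiling=1.0),
    dict(midpoint=25, ceiling=10.0),
    dict(midpoint=40, ceiling=100.0),
]

for t in range(0, 55, 5):
    combined = sum(logistic(t, **g) for g in generations)
    print(f"t={t:2d}  combined performance ~ {combined:8.2f}")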
Of course this pattern is not proof, by itself, that LLMs are slowing down. But if not now, they will eventually. Others have written about why that will happen before “General AI” levels of performance.
However, as per OpenAI’s own research, transformer-based LLMs had already started to plateau in performance around GPT-3.5. These models don’t reason; they memorize word patterns and act as extremely efficient Markov chain models. And there is no reason to believe that such a model, no matter how large the training data or parameter count, will ever be able to reason. Still, as tools and simple question-and-answer machines, they are very useful. They actually make us marvel at the awesomeness of brains and neurons, which can do so much with a fraction of the energy consumption of these constructs.
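To make the Markov chain analogy concrete, here is a toy bigram sketch with a made-up corpus. It is only an analogy: transformer LLMs condition on a long context window rather than just the previous word, but the "predict the next token from memorized patterns" framing is the point being made.

```python
import random
from collections import defaultdict

# Toy bigram Markov chain: the next word depends only on the current word.
# (Analogy only - transformers condition on far more context than one word.)
corpus = "the cat sat on the mat and the cat slept on the rug".split()

transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

word = "the"
output = [word]
for _ in range(8):
    if word not in transitions:          # no observed continuation - stop
        break
    word = random.choice(transitions[word])
    output.append(word)

print(" ".join(output))   # e.g. "the cat sat on the mat and the cat"
```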
I see the quote is credited to economist Herbert Stein: https://en.wikipedia.org/wiki/Herbert_Stein
It’s not even a question of S curves, though. Otherwise intelligent people literally believed that human intelligence was a 1950s-era neural net model, merely accelerated with enough GPU cycles and a massive enough data set.
Talk about riding reductionism off a Wile E. Coyote cliff.
Maybe on a slightly different tangent... I get the feeling it's a similar kind of operationalism that Watson, and later Skinner, introduced. The kind of operationalism that is also part of Stevens's measurement model that dominates the social sciences: as long as you map the empirical relations to numerical ones more or less consistently, you have measured. Measured what exactly? Doesn't matter, the model doesn't say, who cares? Turing's eponymous test is of the same kind, not even validated, just asserted.

As long as current AI activities live in that bubble, nothing will change. Even if the bubble bursts, nothing will change. "Let's not dwell on the past, let's move forward...". There won't be a truth commission. Current AI is a good enough pretext to fire people and hire younger ones at lower salaries in countries with weaker labor regulations. The bottom line improves, profit improves, almost everyone is happy. And for all the expenses and unused graphics cards, there will be special tax regulations to ease the burden on the industry. After all, everybody was wrong, it was only business, not personal.