Discussion about this post

Frank van der Velde:

Indeed. It seems that scaling maximalism relies on the ambiguity of terms like 'big' and 'more'. Training sets for e.g. language in deep learning are very big compared to what humans use when learning language, but they are still minute compared to the 'performance' set of human language, which is on the order of 10^20 sentences or more.

It would take about 10 billion people (agents), each producing and recording one sentence every second for 300 years, to get a training set of this size. It's fair to say we are not there yet.
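As a quick sanity check on that arithmetic (a minimal sketch; the 10-billion-agent population and one-sentence-per-second rate are the assumptions stated above):

```python
# Back-of-envelope check of the 10^20 estimate (illustrative only).
agents = 10_000_000_000              # 10 billion people
years = 300
seconds_per_year = 365 * 24 * 3600   # ~3.15e7 seconds
# One sentence produced and recorded per agent per second:
sentences = agents * years * seconds_per_year
print(f"{sentences:.2e}")            # ~9.46e+19, i.e. on the order of 10^20
```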

Also, even if we had a substantial subset, it would most likely be unevenly distributed: perhaps a lot about today's weather but not very much about galaxies far, far away (or perhaps the other way around). So even a set of this size would not be guaranteed to have the statistical coverage needed to capture all the relations found in the performance set. A rough simulation of this is sketched below.
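To make the coverage worry concrete, here is a minimal sketch under illustrative assumptions (the Zipf-like skew, the space of 10^6 'relations', and the sample size are all hypothetical, not properties of any real corpus): even a sample as large as the space itself leaves most of a heavily skewed space unseen.

```python
# Minimal sketch: coverage of a skewed (Zipf-like) space by a large sample.
# All numbers are hypothetical, chosen only to illustrate the point.
import numpy as np

rng = np.random.default_rng(0)
n_relations = 1_000_000   # hypothetical size of the 'performance set'
sample_size = 1_000_000   # a sample as large as the space itself

# Zipf-like weights: probability of relation k proportional to 1/k.
weights = 1.0 / np.arange(1, n_relations + 1)
weights /= weights.sum()

sample = rng.choice(n_relations, size=sample_size, p=weights)
coverage = np.unique(sample).size / n_relations
print(f"coverage: {coverage:.1%}")   # roughly 20%: most relations never appear
```

The frequent 'today's weather' relations show up constantly, while the long tail of 'galaxies far, far away' relations is mostly never sampled at all, which is exactly the distribution worry above.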

Deep learning is sometimes very impressive, and it could provide the backbone of a semantic system for AGI. But the fact that humans do not need training sets of deep-learning scale to learn language strongly suggests that the boundary conditions needed to achieve human-level cognition, and with them the underlying architecture, are fundamentally different from those underlying deep learning (see e.g. https://arxiv.org/abs/2210.10543).

Phil Tanny:

Let's see...

After seventy years we still have not the slightest clue how to make ourselves safe from the first existential-scale technology, nuclear weapons. And so, based on that experience, because we are brilliant, we decided to create another existential-scale technology, AI, which we also have no idea how to make safe. And then Jennifer Doudna comes along and says: let's make genetic engineering as easy, cheap, and accessible to as many people as possible, as fast as possible, because we have no idea how to make that safe either.

It's a bizarre experience watching this unfold: all these very intelligent, highly educated, accomplished, articulate experts celebrating their wild leap into civilization-threatening irrationality. The plan seems to be to create ever more, ever larger existential-threat technologies at an ever-accelerating rate, just to discover what happens. As if simple common sense couldn't predict that already.

Ok, I'm done, and off to watch Don't Look Up again.
