Discussion about this post

Dakara

Your post covers a lot of legitimate current concerns. I think we should set aside p(doom) and start talking about p(dystopia), which is far more aligned with the reality of current risks.

One of the problems with AI is that productive uses don't scale: you need human verifiers to filter hallucinations and other errors out of the output. Nefarious uses, however, scale to the limits of compute. Something I talked about recently here, FYI:

https://www.mindprison.cc/i/164514378/hallucinations-amplify-ai-nefarious-use-effects

Bill Benzon

My major worry is that the industry will get stuck in a sunk-resources trap. So much money, time, and effort is going into scaling things up that it will be almost impossible to break free of that commitment, so very few resources will go toward developing other architectures.

It's become apparent that it's possible to tinker with these things endlessly and come up with changes and improvements here and there. And, as you've written about, these reasoning models have backed into some bits of symbolic architecture. No doubt they can tweak that endlessly.

So things are just going to zig-zag around in the same space of architectures, with each zig pronounced a breakthrough. The industry is going to meander around in that space, always seeing AGI just over the horizon but never getting there.

* * * * *

Ah, just caught the term "p(dystopia)." Love it! The road to p(dystopia) is paved with sunk costs.
