Discussion about this post

Rebel Science:

I'm not worried about a conscious AI because, regardless of the many claims, I don't think anyone knows what consciousness is. I am, however, very worried about an AGI falling into the wrong hands. An AGI will behave according to its conditioning, i.e., its motivations. It will not want to do anything outside its given motivations. And its motivations will be conditioned into it by its trainers and teachers.

Peter:

What bugs me about the paper: all the indicators of consciousness could be implemented in some silly toy world built out of a 2D matrix of integers, and yet nobody would dare hypothesize that such a simple computer program is conscious.

That seems to further support the idea that complexity and/or substrate are key. Perhaps the heuristic of "if it's really smart and self-aware, it's probably conscious" is sufficient for preventing suffering.

108 more comments...
