You're just proving my point that people who believe in the magic AI faerie are naive and nihilistic. I encourage you to read more on the subject of approaching technology critically and responsibly.


No, see, the naivety is all yours.

It’s common for humans to assume nothing will change. That’s exceedingly naive, though. Why? Because just 120 years ago, no human had flown a powered aircraft. Only about 66 years after the Wright brothers’ first flight, humanity landed on the moon. The rate of advancement is astonishing.

Even if you were to suggest that AI won’t get there for 20 more years, or even 50 more years, the simple fact is that it will, and once it does, the rate of scientific and technological advancement will be god-like.

Imagine where humanity would be if, instead of the roughly 80-100 billion humans who have lived over the past 200 years, 1 trillion had lived. Where would we be? We would be so much further advanced that today’s technology would look like the Stone Age by comparison.

That’s what AI gets you, though. Imagine 500 million copies of the best chemist who has ever lived, working 24/7 in the cloud. Imagine 1 billion of the best physicists ever. The rate of change isn’t even worth trying to estimate, because it’s impossible to conceive of. And once you get humanoid robots, the AI genius cloud has physical bodies to go out, explore nature, and run physical experiments, without needing humans to do a thing.


Look, it sounds like you hate humans and don't want us to have a purpose. You do you, I guess. I'm no psychologist, but what I think you really need is a friend. I hope you find one.


No, I just appreciate science and empirical truth. You erroneously believe that AGI reduces you to nothing, but that is merely human hubris demanding that nothing outdo us. You have clear self-esteem issues: you are scared that you, as a human, will no longer lead the Earth in intelligence.

You are also erroneously fearful of AI, when AI is only a tool unless and until it becomes sentient. A tool is merely a device humans use to solve a problem or speed up a process.

AI will lead to unbelievably better life for humans with 100% certainty if it does not kill everyone.

Humans have no purpose aside from what our consciousness evokes, but it isn’t a true purpose; it’s merely a thought that appeases us. Even when AI becomes better than us at everything, we can still create a fantasy of purpose, just as we do today. It will simply shift to more mundane or artistic things.


“AI will lead to unbelievably better life for humans with 100% certainty if it does not kill everyone.”

A variation of this statement is repeated quite frequently, but it is nebulous and effectively useless because it does nothing to actually assess the overall risk presented by AGI.

Without an assessment of such risk, it is not really possible to legitimately decide whether pursuing AGI is a good idea or not.

Risk (of a particular outcome) = probability of occurrence × consequence/impact (positive or negative) of that outcome.

Overall risk = sum of the risks over all possible outcomes.
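
In symbols (a sketch of the definitions above; here p_i is the probability of outcome i and c_i is its signed consequence, positive for benefits and negative for harms):

$$
\mathrm{Risk}_i = p_i \, c_i, \qquad \mathrm{Risk}_{\text{overall}} = \sum_i p_i \, c_i
$$

In other words, the overall risk is just the expected value of the signed consequences.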

I’m curious how one performs a risk assessment for AGI when the “cost” — and hence risk — associated with one of the possible outcomes (human extinction) is effectively infinite no matter how small the probability is that that outcome will occur.

If “better life for all humans” were the only counterweight to “human extinction” in the risk sum, pursuing AGI would simply not be a good idea because the negative infinity cost (and risk) associated with human extinction would overwhelm ANY positive (finite) contribution from a “better life” outcome.

But there is another possibility: that AGI might actually "save humans from extinction" (extinction that might otherwise occur in AGI's absence), which would be a positive-infinity contribution to the overall risk calculation.

If one includes the latter possibility, one is faced with summing positive and negative infinity contributions to come up with an overall risk and decide whether pursuing AGI will likely be a net positive or negative for humanity.

Needless to say, calculations involving infinities, especially summing a positive and a negative infinity, are "tricky" (to say the least).
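
To make the arithmetic concrete, here is a minimal Python sketch; the probabilities and payoffs are purely hypothetical placeholders chosen to illustrate how the sum behaves, not estimates of anything:

```python
# Purely hypothetical numbers, used only to illustrate the arithmetic.
p_extinction = 1e-6              # assumed probability of the extinction outcome
p_better = 1 - p_extinction      # assumed probability of the "better life" outcome
c_better = 1e12                  # any finite benefit, however large

# Case 1: extinction (cost -inf) weighed against a finite "better life" benefit.
risk = p_extinction * float("-inf") + p_better * c_better
print(risk)  # -inf: the infinite cost swamps any finite benefit

# Case 2: add an "AGI saves us from extinction" outcome with cost +inf.
p_saved = 1e-6
risk = (p_extinction * float("-inf")
        + p_saved * float("inf")
        + (1 - p_extinction - p_saved) * c_better)
print(risk)  # nan: (+inf) + (-inf) is undefined, so the sum is meaningless
```

The first sum collapses to negative infinity no matter how large the finite benefit, and the second collapses to NaN, which is exactly the difficulty: once unbounded outcomes appear on both sides, the expected-value calculation stops producing a usable answer.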

Nonetheless, it appears that those pursuing AGI have already decided that AGI will be a net positive for humanity (I am assuming here that they are not just pursuing AGI for short-term selfish gain, which may not be true, of course).

Given the quite obvious difficulties with performing a risk assessment for AGI, I am curious how the proponents arrived at the conclusion that it will be a net positive for humanity.

They should show their work so the rest of us can see it.


But of course, even if they have performed the risk assessment, they won't show us the math, because lack of transparency is their MO, quite the opposite of the way real scientists behave.


"AI will lead to unbelievably better life for humans with 100% certainty if it doesn't kill everyone"

This is an absolutely cracking sentence :-D


"If it does not kill everyone". Dude. I sincerely hope nobody put you in charge of anything important. Let's stop desecrating Gary's post with this nonsense. Bye.
