No, I just appreciate science and empirical truth. You erroneously believe that AGI reduces you to nothing, but it's merely your hubristic human nature that demands that nothing outdo us. You have clear self-esteem issues wherein you are scared that you, as a human, will no longer lead the Earth in intelligence.
You are also erroneously fearful of AI, when AI is only a tool unless and until it becomes sentient. A tool is merely a device humans use to help solve a problem or speed up a process.
AI will lead to unbelievably better life for humans with 100% certainty if it does not kill everyone.
Humans have no purpose aside from what our consciousness evokes, but it isn't a true purpose; it's merely a thought that appeases us. Even when AI becomes better than us at everything, we can still create a fantasy of purpose, just as we do today. It will just shift to more mundane or artistic things.
“AI will lead to unbelievably better life for humans with 100% certainty if it does not kill everyone.”
A variation of this statement is repeated quite frequently, but it is really quite nebulous and effectively useless, because it does nothing to actually assess the overall risk presented by AGI.
Without an assessment of such risk, it is not really possible to legitimately decide whether pursuing AGI is a good idea or not.
Risk (of a particular outcome) = probability of occurrence × consequence/impact (positive or negative) of that outcome
Overall risk = sum of the risks associated with all possible outcomes
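For concreteness, here is a minimal sketch of that calculation in Python, using invented outcomes, probabilities, and impact values that are purely illustrative (they are not estimates of anything real):

```python
# Minimal sketch of the risk formula above, with made-up outcomes,
# probabilities, and impact values purely for illustration.

outcomes = {
    # name: (probability of occurrence, consequence/impact: + positive, - negative)
    "modest improvement to daily life": (0.70, +10.0),
    "large improvement to daily life": (0.25, +100.0),
    "severe harm short of extinction": (0.05, -500.0),
}

# Risk of a particular outcome = probability of occurrence x consequence/impact
risks = {name: p * impact for name, (p, impact) in outcomes.items()}

# Overall risk = sum of the risks associated with the possible outcomes
overall_risk = sum(risks.values())

for name, r in risks.items():
    print(f"{name}: {r:+.1f}")
print(f"overall risk: {overall_risk:+.1f}")
```

With finite numbers like these, the sum is straightforward; the trouble starts when one of the impacts is unbounded.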
I’m curious how one performs a risk assessment for AGI when the “cost” (and hence risk) associated with one of the possible outcomes, human extinction, is effectively infinite, no matter how small the probability that it will occur.
If “better life for all humans” were the only counterweight to “human extinction” in the risk sum, pursuing AGI would simply not be a good idea because the negative infinity cost (and risk) associated with human extinction would overwhelm ANY positive (finite) contribution from a “better life” outcome.
But there is another possibility: that AGI might actually “save humans from extinction” (which might otherwise occur in AGI’s absence), which would be a positive-infinity contribution to the overall risk calculation.
If one includes the latter possibility, one is faced with summing positive- and negative-infinity contributions to come up with an overall risk and to decide whether pursuing AGI will likely be a net positive or negative for humanity.
Needless to say, calculations involving infinities, especially summing a positive and a negative infinity, are “tricky” (to say the least).
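A small Python sketch makes the breakdown concrete, using IEEE 754 floating-point infinities as a stand-in and made-up probabilities that are purely illustrative:

```python
# IEEE 754 infinities mirror the problem described above.
# All probabilities and magnitudes here are invented for illustration only.

extinction_term = 1e-9 * float("-inf")   # tiny probability x infinitely bad outcome
better_life_term = 0.999 * 1e12          # large but finite benefit

print(extinction_term + better_life_term)   # -inf: no finite benefit can offset it

salvation_term = 1e-9 * float("inf")     # tiny probability x infinitely good outcome
print(extinction_term + salvation_term)     # nan: positive plus negative infinity is undefined
```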
Nonetheless, it appears that those pursuing AGI have already decided that AGI will be a net positive for humanity (I am assuming here that they are not just pursuing AGI for short-term selfish gain, which may not be true, of course).
Given the quite obvious difficulties with performing a risk assessment for AGI, I am curious how the proponents arrived at the conclusion that it will be a net positive for humanity.
They should show their work so the rest of us can see it.
But of course, even if they have performed the risk assessment, they won't show us the math, because lack of transparency is their MO, quite the opposite of the way real scientists behave.
"If it does not kill everyone". Dude. I sincerely hope nobody put you in charge of anything important. Let's stop desecrating Gary's post with this nonsense. Bye.
"AI will lead to unbelievably better life for humans with 100% certainty if it doesn't kill everyone"
This is an absolutely cracking sentence :-D
"If it does not kill everyone". Dude. I sincerely hope nobody put you in charge of anything important. Let's stop desecrating Gary's post with this nonsense. Bye.