“AI will lead to unbelievably better life for humans with 100% certainty if it does not kill everyone.”
A variation of this statement is repeated quite frequently, but it is nebulous and effectively useless because it does nothing to assess the overall risk presented by AGI.
Without an assessment of such risk, it is not really possible to legitimately decide whether pursuing AGI is a good idea or not.
Risk (of a particular outcome) = probability of occurrence × consequence/impact (positive or negative) of that outcome.
Overall risk = sum of the risks associated with all possible outcomes.
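To make the formula concrete, here is a minimal sketch of how such an expected-value-style calculation might be set up. The outcome names, probabilities, and impact values are purely hypothetical placeholders, not an actual assessment of AGI.

```python
# Illustrative only: hypothetical outcomes with made-up probabilities and impacts.
# Risk of an outcome = probability * impact; overall risk = sum over outcomes.

outcomes = {
    # name: (probability of occurrence, consequence/impact, positive or negative)
    "modest improvement to daily life": (0.50, +10.0),
    "no meaningful change":             (0.30,   0.0),
    "serious but recoverable harm":     (0.20, -50.0),
}

def overall_risk(outcomes):
    return sum(p * impact for p, impact in outcomes.values())

print(overall_risk(outcomes))  # 0.5*10 + 0.3*0 + 0.2*(-50) = -5.0
```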
I’m curious how one performs a risk assessment for AGI when the cost (and hence risk) associated with one of the possible outcomes, human extinction, is effectively infinite no matter how small the probability that it will occur.
If “better life for all humans” were the only counterweight to “human extinction” in the risk sum, pursuing AGI would simply not be a good idea because the negative infinity cost (and risk) associated with human extinction would overwhelm ANY positive (finite) contribution from a “better life” outcome.
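This domination is easy to see in floating-point terms, using IEEE infinities as a stand-in for an "effectively infinite" cost (the probability here is an arbitrary made-up number, chosen only to be small): any nonzero probability times negative infinity is still negative infinity, and no finite positive term can pull the sum back.

```python
# Illustration only: IEEE floating-point infinity stands in for "effectively
# infinite" cost; the probability and the finite benefit term are arbitrary.

p_extinction = 1e-9                     # however small, as long as it is nonzero
extinction_term = p_extinction * float("-inf")
better_life_term = 0.999999999 * 1e12   # any finite positive contribution

print(extinction_term)                     # -inf
print(extinction_term + better_life_term)  # still -inf
```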
But there is another possibility: that AGI might actually “save humans from extinction” (an extinction that might otherwise occur in AGI’s absence), which would be a positive infinity contribution to the overall risk calculation.
If one includes the latter possibility, one is faced with summing positive and negative infinity contributions to come up with an overall risk and decide whether pursuing AGI will likely be a net positive or negative for humanity.
Needless to say, calculations involving infinities (especially summing a positive and a negative infinity) are “tricky,” to say the least.
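Indeed, this is exactly where ordinary arithmetic gives up. In IEEE floating point, adding positive and negative infinity is not merely tricky but undefined; the result is NaN (not a number), which is one way of saying that, under these assumptions, the overall risk sum is indeterminate.

```python
# Summing a positive-infinity term (AGI averts an otherwise-certain extinction)
# and a negative-infinity term (AGI causes extinction) is indeterminate.

saves_us_term = 1e-9 * float("inf")    # +inf
kills_us_term = 1e-9 * float("-inf")   # -inf

print(saves_us_term + kills_us_term)   # nan: the sum is undefined
```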
Nonetheless, it appears that those pursuing AGI have already decided that AGI will be a net positive for humanity (I am assuming here that they are not just pursuing AGI for short-term selfish gain, which may not be true, of course).
Given the quite obvious difficulties with performing a risk assessment for AGI, I am curious how the proponents arrived at the conclusion that it will be a net positive for humanity.
They should show their work so the rest of us can see it.
But of course, even if they have performed the risk assessment, they won’t show us the math, because lack of transparency is their MO, quite the opposite of the way real scientists behave.