It's wild -- and a real indictment of our politicians -- that it took a new pope to articulate so powerfully what is at stake for human beings in this era.
Corporate whores….
Jobs for votes becomes tech manipulation for votes
To be fair, I think this is exactly the role of a religious leader -- to be attentive to the lessons of the past and the deep human themes in what appear to be new happenings.
There’s a real weight to the new Pope choosing the name Leo right now. It feels like a turning point, and I’m hopeful the Church might help more people name what’s at stake and guide the way forward. Not just for Catholics, but for all of us.
Because what’s really on the line isn’t just jobs or systems. It’s whether we still believe in human dignity.
Now more than ever, we need clear moral leadership. The Catholic Church helped build the foundations of Western civilisation - and in a time of moral collapse, coupled with growing concerns about AI and what it will mean for society, it remains one of the few institutions still holding a coherent vision of the human person. That matters.
The biggest societal change for humanity so far was the internet. Suddenly everybody can broadcast globally, and we are a hive mind.
AI so far looks more like an automation thingie.
I think that even before the internet it was the 24/7 news cycle. Without that, I think the internet might have played out very differently, as a source for in-depth research rather than one that prioritizes speed and ubiquity.
Oleg: I generally agree with you. However, more than the internet alone, I think that huge societal change came with the World Wide Web (plus search images).
Oops. "plus search _engines_"
As a Christian (Protestant) who hasn't heard much from any denomination on this, and nothing that indicated they understood the tech's true costs and impact, I am both relieved and ecstatic! With this announcement, perhaps there was some understanding all along and I just needed to actively look for it.
I felt the same way (I'm a Protestant converting to Catholicism). The Vatican’s been thinking seriously about this - just not always loudly.
In January this year, the Vatican released a comprehensive document titled “Antiqua et Nova”, which goes into the ethical implications of AI across sectors like warfare, healthcare, education, and the environment. It emphasizes that AI should complement human intelligence, not replace it, and it strongly affirms the importance of human dignity and moral responsibility in how these tools are developed and used.
ANTIQUA ET NOVA: Note on the Relationship Between Artificial Intelligence and Human Intelligence
https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_ddf_doc_20250128_antiqua-et-nova_en.html
Quiet, thoughtful work, but worth the read. Best piece on AI I've read.
Thanks, I'll check it out. I believe Pope Francis also touched on the subject:
https://www.businessinsider.com/pope-leo-xiv-ai-artificial-intelligence-speech-2025-5
I guess the name “Pope Elon XAI” was already taken?
AI and the Internet are the perfect tools for greed amplification, force-multiplying a consumerism that is threatening to destroy the planet. They didn't need to be, but to go all biblical on this one, they have been built in man's image. But never in my wildest dreams did I ever foresee the emergence of the twin demons Musk and Trump: cartoon-like villains straight from a DC or Marvel comic, with evil intent. And what has historically been a force for medieval conservatism, the Catholic church, suddenly appears as a beacon of light and hope. All praise the Pope for his enlightened view on society. 😀 PS: for those of you who don't know, LEO was the acronym for one of the first office automation systems, the Lyons Electronic Office.
It is a little more complicated than a calculator or an automated weaver, but AI is just a tool; even when it becomes AGI - like us biological AIs - it will be just a tool. How best to handle that sharpened rock has been with humans for a while, and eventually we do figure it out as we and the tool morph into a unit. It's a good thing.
Last time I checked, sharpened rocks weren’t built by underpaid data workers on the non-consensually scraped intellectual property of millions, don’t contain child sexual abuse material, aren’t biased, and don’t require the use of energy-intensive data centers built in vulnerable areas — for instance. Technologies are all different. A rock, a pen, a spinning jenny, a shoe, and a missile are not the same. Each has different ways that it fits into the world, each has different affordances, and each offers a different moral environment, to use a phrase by Peter-Paul Verbeek.
Also, unlike LLMs, calculators, weaving machines and even sharpened rocks give you what you ask for and don’t just make stuff up.
^ That ridiculous comment is from the fool who wrote “Homeostasis - Trump is the sting when the woke rubber band, stretched too far, snapped.”
“just” is a word that profoundly stupid, ignorant, intellectually dishonest people use in place of actual thinking.
Is that a typo, where in two places it says Leo XIII, and in one throws in Leo XII? You might want to check that, otherwise it makes no sense.
Let's hope the Pope can help tackle this topic, and find a way to back humanity for us all.
Love people who have to correct every typo. It makes the internet sooo much better for us all that diligent grammar guardians ensure we never make mistakes.
So you sarcastically and pointlessly whine about a perfectly valid inquiry not just once but three times.
Love people who calmly point out typos. [Joy in HK fiFP]
Love even more the authors who then fix the typos in their posts!
Love much less the commenters [UGH OH] who decry the messenger but then attribute the typo to the author, when that error actually occurs in a quote from someone else, in an image taken from other material.
The only occurrence of "Leo" X-anything in the body of Marcus's text is in the first line.
Unfortunately, in this instance, Marcus cannot correct the typo which appears to have been made by Wes Davis in the Verge text in the quoted image.
The best one could hope for would be an authorial '(sic)' to signal 'thus it had been written.' But that would be distracting - and a bit much for a blog post where the image is not fully linked.
Love people who have to correct every typo. It makes the internet sooo much better for us all that diligent grammar guardians ensure we never make mistakes.
Love people who have to correct every typo. It makes the internet sooo much better for us all that diligent grammar guardians ensure we never make mistakes.
Bot.
"the treasury of her social teaching" -- cherry on top that he seems to have the heart of a poet as well.
This is encouraging! Thank you for posting this.
While not Catholic myself, I found the Vatican’s recent paper on AI to be very well-researched, thoughtfully structured, and good at bringing some much-needed context from philosophy and history. Oh yeah, and well-written - not like GPT drivel. https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_ddf_doc_20250128_antiqua-et-nova_en.html
Have you commented on Max Tegmark et al.'s paper on the probability of the existential threat of artificial superintelligence? Is this just elaborate and bogus speculation?
link?
Article in The Guardian which has a link to the paper https://www.theguardian.com/technology/2025/may/10/ai-firms-urged-to-calculate-existential-threat-amid-fears-it-could-escape-human-control?CMP=Share_iOSApp_Other
On a skim, the arXiv paper is interesting but very assumption-laden. Kind of like a thought experiment for a specific set of scenarios rather than a real "constant" in the way that the Guardian describes.
[my emphasis]: "Tegmark said that AI firms should take responsibility for _rigorously calculating_ whether Artificial Super Intelligence (ASI) – a term for a theoretical system that is superior to human intelligence in all aspects – will evade human control."
Given that Tegmark thinks it's _possible_ to "rigorously calculate" this -- well, I don't think that I need to pay any further attention to Tegmark's thoughts.
Hopefully, the AI firms won’t use an LLM to do the “rigorous” calculation, given its issues with basic arithmetic.
The only rigor to be found in LLM math is of the mortis kind.
Such probability calculations are completely bogus. And they miss the *real* threat, which we can see every day, that the technology is dangerous because of how humans use it.
Something lots of people miss is that probability is only half of a risk calculation:
Risk = probability × IMPACT
If the negative impact is very large (even possibly infinite in the case of human extinction), the risk is substantial even with a very small probability of occurrence.
But if there is a probability that AI might also PREVENT human extinction, that must also be somehow figured into a “cost/benefit” calculation.
Such calculations are only possible under circumstances when probabilities and impacts can be reasonably estimated, but in the case of AI, where the probabilities are largely unknown and the impacts (positive or negative) potentially infinite, the "calculation" is exceedingly difficult (if not impossible).
And of course, with an infinite impact (extinction), no matter how small the nonzero probability, the risk is also infinite
Talk of infinite impact or risk is likewise bogus, and leads people to do or advocate profoundly stupid things, like longtermism. On a unit scale, risk and impact are at most 1, and by many people's assessments (which are necessarily subjective) the impact of human extinction is considerably less than 1. Given a choice between humans escaping Earth to survive but destroying it in the process, and humans perishing on Earth, many sensible people (which excludes the longtermists) prefer the latter. Consider one's personal extinction--is the impact infinite and therefore anything and everything is justifiable in order to avoid it? Of course not--especially considering its inevitability. What's the impact of losing n years of one's life? Obviously not infinite. And the end of the human race is likewise inevitable, contrary to the absurd fantasies of mentally disturbed longtermists like Tipler, Bostrom, and Kurzweil.
Talk of infinite impact and risks is just more in a long line of sloppy careless thinking from people who don't spend even 2 seconds challenging their own assumptions.
You are confusing risk with probability.
What cost/impact should we assign to “human extinction” if not “infinity”?
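For readers who want to see the arithmetic this sub-thread is arguing over, here is a minimal sketch of the expected-loss framing (risk = probability × impact). It is purely illustrative: the function and the numbers are placeholders, not anything proposed by the commenters or by Tegmark's paper.

```python
import math

def expected_risk(probability: float, impact: float) -> float:
    # The simple framing debated above: risk = probability * impact.
    return probability * impact

# Bounded impact on a unit scale (the view that impact is at most 1):
print(expected_risk(0.001, 1.0))      # 0.001 -- tiny probability, bounded loss

# Treating extinction as an unbounded (infinite) impact instead:
# any nonzero probability then makes the product infinite.
print(expected_risk(1e-9, math.inf))  # inf
```

The sketch only shows why the two sides reach different conclusions: with a bounded impact the product stays small for small probabilities, while an infinite impact swamps any nonzero probability.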
AI is an offense to horses, especially Mr. Ed
https://m.youtube.com/watch?v=Mg8oyvjFGbw&pp=0gcJCdgAo7VqN5tD
Gary Marcus brings the current information flow to his readers, enabling them to connect it to deeper and more complex formats that further deepen their understanding.
Thanks very much for this article, Gary. I think all of us who are working in the field of AI applications must work only for arrangements in which the client keeps all of their employees.
No, I’m puzzled. This publication has repeatedly pointed out the non-existence of AGI, and that we might be stuck on the 80% of LLMs for a very long time, if not forever.
Am I to take it that we are now to believe that AGI is imminent, or that LLMs, with their attendant hallucinations and need for human cross-checks, will revolutionise human society?
Samuel Altmanus est diabolus. Amen.