An epic AI Debate—and why everyone should be at least a little bit worried about AI going into 2023
A time capsule of AI thought leaders in 2022 gives us a lot to think about, going forward
What do Noam Chomsky, living legend of linguistics, Kai-Fu Lee, perhaps the most famous AI researcher in all of China, and Yejin Choi, the 2022 MacArthur Fellowship winner who was profiled earlier this week in The New York Times Magazine—and more than a dozen other scientists, economists, researchers, and elected officials—all have in common?
They are all worried about the near-term future of AI. The most worrisome thing of all? They are all worried about different things.
Each spoke last week at the December 23 AGI Debate (co-organized by Montreal.AI’s Vince Boucher and me). No summary can capture all that was said (though Tiernan Ray’s 8,000-word account at ZDNet comes close), but here are a few of the many concerns that were raised:
Noam Chomsky, who led off the night, was worried about whether the current approach to artificial intelligence would ever tell us anything about the thing that he cares about most: what makes the human mind what it is?
I, Gary Marcus, worried about whether contemporary approaches to AI would ever deliver on four key aspects of thought that we ought to expect from any intelligent machine: reasoning, abstraction, compositionality, and factuality.
Konrad Kording, computational neuroscientist at UPenn, worried about whether any of our current approaches to getting machines to reason about causality are adequate. (Spoiler alert: they’re not, at least not yet.)
Dileep George, DeepMind researcher and cofounder of two AI startups, worried that scaling alone would not be enough to bring us to general intelligence, drawing an analogy with dirigibles like the Hindenburg, which at one point seemed to be outpacing airplane development. George, like Chomsky and me, called for a greater emphasis on the understanding of human intelligence.
David Ferrucci, CEO of Elemental Cognition and director of IBM’s successful Watson Jeopardy effort, worried that current systems were “ultimately unsatisfying”, and like me called for hybrid approaches that combine neural networks with reasoning and structured representations.
Ben Goertzel, a well-known AI researcher who co-coined the term Artificial General Intelligence, the distal goal that so much of the field now aspires to, worried that there was too much intellectual imperialism focused on a single currently popular approach and not enough intellectual collaboration.
Yejin Choi, the MacArthur-winning UW/Allen AI professor named above, worried about whether we were making enough progress toward understanding what she called the “dark matter of AI”, commonsense reasoning, and raised important further questions in a second talk about value pluralism and ethical reasoning in AI.
Artur d’Avila Garcez, pioneer in neurosymbolic approaches, argued that it is urgent that we bring symbolic approaches into the mix, and emphasized the need for a richer semantic framework.
Deep learning pioneer Jürgen Schmidhuber was perhaps the least worried, feeling that all the essential tools for building AI already existed (in contrast to many others on the panel), but he nonetheless counseled an increased focus on metalearning, the (ideally automated) combination of multiple learning mechanisms with different aptitudes across different tasks.
Jeff Clune, UBC and Vector Institute professor, also advocated for metalearning, with a more evolutionary twist. In a second talk, on ethics, he expressed concerns about how bad actors might use AGI, and argued that addressing such potential misuse was among “the most important questions facing humanity.”
The Honourable Michelle Rempel Garner, a Member of the Canadian Parliament, worried about whether elected officials are prepared for what AI is about to bring, and whether those officials could work together, in a sufficiently nonpartisan way, to arrive at the policies we need.
Sara Hooker, leader of Cohere.AI, worried about whether the currently popular approaches to AI software were an accident of currently popular hardware, and whether the AI community was looking broadly enough.
Francesca Rossi, IBM Fellow and President of the leading AI society, AAAI, worried about whether current approaches to AI could bring us to AI systems that could behave sufficiently ethically. She also argued that we must bring humans in the loop, given the realities of current technology.
Anja Kaspersen, Carnegie Council Senior Fellow, worried about whether the power dynamics of the AI community were leading to the best research and best policies, or simply to further entrenchment of paths that have already proven to be perilous.
Erik Brynjolfsson, Stanford economist and bestselling author, worried about whether incentives among technologists, business executives, and policymakers were aligned well enough to bring us to a just society, and whether we might need to shift more towards human augmentation rather than automation.
Kai-Fu Lee, often far more optimistic than I am, worried about whether recent advances in AI-generated content were a net positive for society, or whether they might lead to an avalanche of misinformation [something I too am deeply worried about] and targeted advertising that might be detrimental.
Angela Sheffield, Senior Director of AI at the DC security startup Raft, worried about how decision makers and policymakers should regulate AI, particularly when the AI we actually have today is by no stretch a truly general intelligence.
Every single speaker was both articulate and pointed. Two of them, Schmidhuber and Clune, were notably more optimistic about the potential of current techniques. (Sparks flew when they expressed that optimism.) But not one speaker thought that current AI was anything like the holy grail of artificial general intelligence. (Clune thought we might get there by 2030.) Virtually every speaker thought that things were about to get wild, and not necessarily in entirely good ways.
I urge you, if you care about artificial intelligence, its future, and its impact on society, to watch the debate in full. That’s a huge commitment, 3.5 hours, but one thing I think all of our speakers could agree on is that artificial intelligence, in whatever form it currently takes, is about to have a huge impact on society.
If you want to know what a large cross section of the field’s leaders are thinking right now, there’s no better place to start: a time capsule of 2022 as we enter the remaining wild AI years of the 2020s.
(Bonus: Michael Ma has posted a detailed, moment-by-moment set of publicly editable notes intended to spark and organize post-game community discussion, and, as a peek behind the scenes, Vince Boucher and I have posted the “backstage” Zoom chat among our participants.)
Gary Marcus (@garymarcus) is a scientist, best-selling author, and entrepreneur. His most recent book, co-authored with Ernest Davis, Rebooting AI, is one of Forbes’s 7 Must Read Books in AI. His still-relevant essay Deep Learning is Hitting a Wall is one of Pocket’s Best Technology articles of 2022.
Hi Gary, thanks for co-hosting the debate, and for this excellent summary!
'Direct experience with the world' might be the key ingredient that's missing from all current approaches (including embodied ones). Such experience, which we humans acquire, complements the symbolic reasoning we do. In this view, unless AI is able to directly interact with the world to build up (represent!) a 'personal', ongoing, modifiable model of the world in a *non-symbolic* way, and use that in conjunction with symbolic reasoning, it is not going to be able to exhibit intelligence the way humans do.
Happy 2023, looking fwd to more of your 'keeping it real' posts! Cheers!!
Great summary. I saw the original debate live (well...streamed) but missed this one. Hope to watch over the weekend.
Thanks for great blogs throughout this year -- I always look forward to them. Please keep them coming in 2023. AI/AGI *will* happen but it won't be DL, ChatGPT, etc -- thanks for keeping the AI world honest.