13 Comments
Saty Chary

Hi Gary, thanks for co-hosting the debate, and for this excellent summary!

'Direct experience with the world' might be the key ingredient missing from all current approaches (including embodied ones). The experience we humans acquire complements the symbolic reasoning we do. In this view, unless AI is able to interact directly with the world to build up (represent!) a 'personal', ongoing, modifiable model of the world in a *non-symbolic* way, and use that in conjunction with symbolic methods, it is not going to be able to exhibit intelligence the way humans do.

Happy 2023, looking fwd to more of your 'keeping it real' posts! Cheers!!

howard8888

Great summary. I saw the original debate live (well...streamed) but missed this one. Hope to watch over the weekend.

Thanks for great blogs throughout this year -- I always look forward to them. Please keep them coming in 2023. AI/AGI *will* happen but it won't be DL, ChatGPT, etc -- thanks for keeping the AI world honest.

Jack Shanahan

Thank you for the excellent, pithy summary!

I realize a fairly large number of people in the AI field don’t want to talk about AI for defense and national security in these forums. But it seems to me this is not a topic that can or should be ignored any longer. I hope Montreal is willing to consider taking it on in a future debate.

Paul Topping

I agree that it is important but such a discussion would be hampered by the secrecy surrounding the subject. Whatever people would legally be able to discuss is not likely to reflect what's really happening in those fields today. How do you plan to get around these limitations?

Jack Shanahan

That’s a valid point, but there is so much that can be discussed at the unclassified level that it would be worth it — similar to the AI Track II dialogues I’m part of. A candid and healthy unclassified conversation is possible, though I expect the friction would be palpable.

David

I recall, 30 or so years ago, factory workers being laid off, losing jobs, or having their pay cut as robotic programmed machines took over. They were told, and continue to be told, "It will all be okay. Those are jobs of the past. You need to go back to a trade school or community college to learn a new trade, for the jobs of the future."

Now, with AI, you see those same "white collar" owners, managers, and supervisors across high-end industries (writers, actors, news anchors, professors, etc.) who were preaching those sentiments in a complete panic. What goes around comes around. Now you know how those miners and factory line workers felt. So we will give you "high-end" earners (for now) the same advice: go back to college or tech school and learn another trade; it will be okay. Lawyers, writers, some medical professionals — those are jobs of the past now that AI is here. And it's for the best: it can do the work cheaper and more accurately.

My employees were told the same, and you had absolutely no issue with it. Now you're about to see why they felt betrayed and lost. It was all they knew — just as your careers are all you know, after some of you invested decades of time, money, and hard work to achieve your goals. So did the factory workers and coal miners you dismissed out of hand and told to get over it and start all over. Yeah, doesn't feel good, does it? I suggest you get about retraining yourselves for the jobs of the future.

And be prepared for these same professionals to "scare" us about AI's abilities and how it could "destroy" our way of living, the same way union leaders preached about the automation of human jobs (look no further than Terminator, lol). We are going to hear from news correspondents, medical professionals, legal professionals, writers, etc.: "AI will be the downfall of the world; we can't allow it." Yeah, they can't allow their own jobs to be taken and have to go back to school for another trade — something they had no problem saying, with a smile on their faces, to those same humans in factories and on farms.

Pere Mayol 🧠🗽

Chomsky is a totem of one of the most misleading AI-guru obsessions: desperately wanting to find something very special and very different in the human brain.

Marcel Kincaid

You might be interested in Dan Dennett's comments about making AIs into persons and a legal framework to force people to take responsibility for their use of these tools: https://youtu.be/IZefk4gzQt4?t=3209

Rich Seidner

FYI, the hardcoded subtitles are not properly synced with the audio track; they run about 2-3 seconds late. I (or anyone) could easily fix this with access to the original video without subtitles.

ralph boyshline

As long as the AI in spellcheckers can't detect if it's Hindenberg or Hindenburg, there is still a lot of work to be done. Luckily, a New Year just started. ;-)

LV

Lack of causal understanding is absolutely fundamental to my understanding of the limits of AI. Behind all the complexity, AI models are correlation machines. Adding more and more inputs and more complicated functions of those same inputs doesn’t break the causation ceiling. Due to combinatorial math, models with too many inputs will not have enough cases to identify causation.

I am no specialist, but as a layperson I do wonder about ways out of this trap. One may be some ingenious way to stack AIs in a hierarchy — using AI to build AI, or using AI to build AI that builds AI — with a human at the very top rewarding the meta-AI machines that produce the most consistently sensible AI machines.

Another approach is to redouble efforts on genetic algorithms that aim to mimic evolution at light speed. After all, the human brain was created that way. The challenge is that simulating the right evolutionary environment may itself be as complicated as any AI problem.

Alexander Naumenko

An epic indeed. Thank you for the event!

With respect to the worries, I would like to point out that my model could in principle address them all except for two: my approach is symbolic (though different from the classic symbolic approach), so it is not a hybrid; and I do not address ethics, alignment, or legislative or any other control of AGI, because (1) we do not have AGI, and (2) we still have wars (I am from Ukraine), and it is not AGI (see point 1) that is responsible for them.

So far, it only covers the NLU part of AGI, so there is still a lot of work ahead even if everything in the model is reasonable, which may not necessarily be so. Anyway, I strongly believe there are valuable pieces in the model, and the collective collaboration advocated by the debate participants could polish them to perfection.

Happy New Year! Let's make it a year of AGI!

Comment deleted
Dec 31, 2022
Gary Marcus

Angela Sheffield, our last speaker, worked extensively on disarmament, as has Anja Kaspersen. But we ran out of time…