15 Comments
Dec 31, 2022 · edited Dec 31, 2022 · Liked by Gary Marcus

Hi Gary, thanks for co-hosting the debate, and for this excellent summary!

'Direct experience with the world' might be the key ingredient that's missing from all current approaches (including embodied ones). Such experience, which we humans acquire continually, complements the symbolic reasoning we do. In this view, unless AI is able to directly interact with the world to build up (represent!) a 'personal', ongoing, modifiable model of the world in a *non-symbolic* way, and use that in conjunction with symbolic methods, it is not going to be able to exhibit intelligence the way humans do.
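(For concreteness, a minimal toy sketch of the kind of loop I mean: a continuously updated non-symbolic state estimate feeding a symbolic decision rule. All names here are hypothetical placeholders, not any existing system.)

```python
# Purely illustrative: an agent that maintains a non-symbolic (vector)
# world model from direct interaction, and consults a separate symbolic
# reasoner when acting. Hypothetical names throughout.

import numpy as np

class WorldModel:
    """Non-symbolic, continually updated state estimate."""
    def __init__(self, dim=8):
        self.state = np.zeros(dim)

    def update(self, observation, learning_rate=0.1):
        # Blend each new observation into the running estimate.
        self.state = (1 - learning_rate) * self.state + learning_rate * observation

class SymbolicReasoner:
    """Hand-written rules over features read off the world model."""
    def decide(self, features):
        return "turn" if features["obstacle_ahead"] else "advance"

def agent_step(model, reasoner, observation):
    model.update(observation)                            # non-symbolic update
    features = {"obstacle_ahead": model.state[0] > 0.5}  # toy symbol grounding
    return reasoner.decide(features)                     # symbolic decision

# Example: feed a stream of observations through the loop.
model, reasoner = WorldModel(), SymbolicReasoner()
for obs in (np.random.rand(8) for _ in range(5)):
    print(agent_step(model, reasoner, obs))
```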

Happy 2023, looking fwd to more of your 'keeping it real' posts! Cheers!!

Dec 31, 2022 · Liked by Gary Marcus

Great summary. I saw the original debate live (well...streamed) but missed this one. Hope to watch over the weekend.

Thanks for the great posts throughout this year -- I always look forward to them. Please keep them coming in 2023. AI/AGI *will* happen, but it won't be DL, ChatGPT, etc. -- thanks for keeping the AI world honest.

Thank you for the excellent, pithy summary!

I realize a fairly large number of people in the AI field don’t want to talk about AI for defense and national security in these forums. But it seems to me this is not a topic that can or should be ignored any longer. I hope Montreal is willing to consider taking it on in a future debate.

Here's what I'm worried about. None of the experts listed above, or almost anywhere else, seems interested in what worries me most.

1) Nuclear Weapons: Why are we focused on what might possibly happen someday with AI, when we know for a fact that nuclear weapons could end this civilization in the next 30 minutes? The more the intellectual elites relentlessly ignore this blatantly obvious threat, the more persuaded I am that they are not people of reason, but rather highly articulate speakers who know how to work the career-advancement publicity system.

2) Knowledge Explosion: As a thought experiment, let's imagine for a moment that every issue of concern with AI was somehow resolved, so that AI was no longer something we were worried about.

That doesn't matter. It just doesn't matter.

The knowledge-explosion assembly line, which is generating ever more and ever larger powers at an ever-accelerating rate, will keep churning out new challenges, new threats, new things to worry about. The 21st century is young yet; AI is the beginning, not the end, of the story.

For every challenge we overcome, the knowledge explosion will create three new problems. Sooner or later this accelerating process will generate some set of conditions which we limited human beings, who actually are not gods, can't successfully manage. Nobody can say exactly how or when; we can only know for sure that, on the current course, this failure is coming.

The mistake experts are making is focusing on the particular products that roll off the end of the knowledge-explosion assembly line, one by one by one. This is a loser's game that is guaranteed to fail, because an accelerating knowledge explosion will generate new problems faster than we can figure out how to solve the challenges we already face. Here's the proof of that...

After 75 years we still have not the slightest clue what to do about nuclear weapons. And while we're wondering about that, the knowledge explosion has handed us genetic engineering and AI.

If there is a solution to the challenge presented by an accelerating knowledge explosion, it is to stop being distracted by every little detail of emerging technological threats, and shift our focus to the fundamentally flawed assumption which is the source of all the threats.

The “more is better” relationship with knowledge which is the foundation of science and our modern civilization is a simplistic, outdated and increasingly dangerous 19th century philosophy. The challenge we face is to update our relationship with knowledge to meet the radically new conditions created by the success of the Enlightenment.

Until we understand this, all the expert chatter is for nothing.

https://www.tannytalk.com/p/our-relationship-with-knowledge

May 1, 2023 · edited May 1, 2023

I recall, 30 or so years ago, factory workers being laid off, losing jobs, or having their pay cut as robotic, programmed machines took their jobs. They were told, and continue to be told, "It will all be okay. Those are jobs of the past. You need to go back to a trade school or community college to learn a new trade, for the jobs of the future."

Now, with AI, you see those same "white collar" owners, managers, and supervisors across high-end industries (writers, actors, news anchors, professors, etc.) who were preaching those sentiments in a complete panic. What goes around comes around. Now you know how those miners, factory line workers, etc. felt. So we will give you "high end" earners (for now) the same advice: go back to college or tech school, you know, learn another trade; it will be okay. Lawyers, writers, some medical professionals, etc.: those are jobs of the past now that AI is here. And it's for the best; it can do the work cheaper and more accurately. My employees were told the same, and you had absolutely no issue with it.

Now you're about to see why they felt betrayed and lost. It's all they knew, just like what you are doing now, after some of you have invested decades of time, money, and hard work to achieve your goals and careers. So did those factory workers and coal miners you dismissed out of hand and told to get over it and start all over. Yeah, doesn't feel good, does it? I suggest you get about "retraining" yourself for the jobs of the future.

And be prepared for these same professionals to "scare" us about AI's abilities and how it could "destroy" our way of living, the same way union leaders preached about the automation of human jobs (look no further than Terminator, lol). We are going to hear from news correspondents, medical professionals, legal professionals, writers, etc.: "AI will be the downfall of the world; we can't allow it." Yeah, they can't allow their jobs to be taken and have to go back to school to learn another trade, like they had no problem, with a smile on their faces, saying to those same humans in factories, farms, etc.

Chomsky is a totem of one of the most misleading AI-guru obsessions: badly wanting to find something very special and very different in the human brain.

You might be interested in Dan Dennett's comments about making AIs into persons and a legal framework to force people to take responsibility for their use of these tools: https://youtu.be/IZefk4gzQt4?t=3209

FYI, the hardcoded subtitles are not properly synched with the audio track. They are about 2-3 seconds late. I (or anyone) can easily fix this with access to the original video without subtitles.
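(For illustration, if the subtitles were available as a separate .srt file rather than burned in, the fix would be a small script along these lines; the 2.5 s offset and the filenames are assumptions.)

```python
# Sketch: shift every timestamp in an .srt subtitle file earlier by a
# fixed offset. Assumes soft subtitles in "subs.srt"; hardcoded (burned-in)
# subtitles would first need to be re-extracted from the original video.

import re
from datetime import datetime, timedelta

OFFSET = timedelta(seconds=2.5)  # subtitles are ~2-3 s late, so shift earlier
FMT = "%H:%M:%S,%f"

def shift(match):
    # Note: a timestamp within the first 2.5 s would wrap past midnight;
    # fine for this illustration.
    t = datetime.strptime(match.group(0), FMT) - OFFSET
    return t.strftime(FMT)[:-3]  # keep milliseconds, drop microseconds

with open("subs.srt", encoding="utf-8") as f:
    text = f.read()

shifted = re.sub(r"\d{2}:\d{2}:\d{2},\d{3}", shift, text)

with open("subs_shifted.srt", "w", encoding="utf-8") as f:
    f.write(shifted)
```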

As long as the AI in spellcheckers can't detect if it's Hindenberg or Hindenburg, there is still a lot of work to be done. Luckily, a New Year just started. ;-)

Dec 31, 2022 · edited Dec 31, 2022

Lack of causal understanding is, to my mind, absolutely fundamental to the limits of AI. Behind all the complexity, AI models are correlation machines. Adding more and more inputs, and more complicated functions of those same inputs, doesn't break the causation ceiling. Due to combinatorial math, models with too many inputs will never have enough cases to identify causation.
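(A back-of-the-envelope illustration of that combinatorial point, with made-up numbers:)

```python
# With binary inputs, the number of distinct input combinations doubles
# with each added input, so the average number of training cases per
# combination collapses, leaving no data to separate causation from
# correlation within any given cell. Numbers below are hypothetical.

n_cases = 1_000_000  # hypothetical training-set size

for n_inputs in (10, 20, 30, 40):
    n_cells = 2 ** n_inputs  # distinct combinations of binary inputs
    print(f"{n_inputs} inputs: {n_cells:,} cells, "
          f"{n_cases / n_cells:.6f} cases per cell on average")

# 10 inputs: 1,024 cells, ~977 cases per cell
# 40 inputs: ~1.1 trillion cells, ~0.000001 cases per cell
```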

I am no specialist, but as a layperson I do wonder about ways out of this trap. One may be some ingenious way to build AIs on top of one another along some kind of hierarchy: using AI to build AI, or using AI to build AI that builds AI, with a human at the very top rewarding the meta-AI machines that produce the most consistently sensible AI machines.
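(A toy sketch of that hierarchy, with hypothetical names throughout; this illustrates the idea, not any existing system.)

```python
# Two-level toy: a meta-level routine generates candidate model-building
# "recipes" (AI builds AI), and a human-supplied score at the top of the
# hierarchy selects the recipe whose model behaves most sensibly.

import random

def build_model(recipe):
    # Level 1, "AI builds AI": a recipe deterministically yields a model.
    return lambda x: recipe["slope"] * x + recipe["offset"]

def meta_build(n):
    # Level 2, "AI builds AI that builds AI": generate candidate recipes.
    return [{"slope": random.uniform(0, 4), "offset": random.uniform(-1, 1)}
            for _ in range(n)]

def human_score(model):
    # The human at the very top: a hand-chosen notion of "sensible"
    # (here, closeness to the target behaviour y = 2x).
    return -sum(abs(model(x) - 2 * x) for x in range(10))

recipes = meta_build(1000)
best = max(recipes, key=lambda r: human_score(build_model(r)))
print("most sensible recipe found:", best)
```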

Another approach is to double down on genetic algorithms that aim to mimic evolution at light speed. After all, the human brain was created that way. The challenge is that simulating the right evolutionary environment may itself be as complicated as any AI problem.
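(A minimal genetic-algorithm sketch of that idea, on a toy fitness target; real neuroevolution would evolve network weights or architectures rather than bitstrings.)

```python
# Selection, crossover, and mutation over a population of bitstring
# genomes, evolving toward a toy fitness target (all ones).

import random

GENOME_LEN, POP_SIZE, GENERATIONS = 32, 100, 200

def fitness(genome):
    return sum(genome)  # toy target: maximise the number of 1s

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.01):
    return [bit ^ (random.random() < rate) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 5]  # truncation selection
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

print("best fitness:", max(fitness(g) for g in population))
```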

An epic indeed. Thank you for the event!

With respect to the worries, I would like to point out that my model could in principle address them all except for two: my approach is symbolic (though different from the classic symbolic approach), so it is not a hybrid; and I do not address ethics, alignment, or legislative or any other control of AGI, because 1) we do not have AGI, and 2) we still have wars (I am from Ukraine), and it is not AGI (see point 1) that is responsible for them.

So far the model covers only the NLU part of AGI, so there is still a lot of work ahead even if everything in it is reasonable, which may not necessarily be so. Anyway, I strongly believe that there are valuable pieces in that model, and the collective collaboration advocated by the debate participants could polish them to perfection.

Happy New Year! Let's make it a year of AGI!
