46 Comments
Nov 27, 2023 · Liked by Gary Marcus

During an interview with NDTV last week, I said that if you had polled 1,000 physicists globally in 1950, you would have found universal agreement that nuclear weapons posed an existential threat. (Though some would undoubtedly also have said they were a necessary evil, for deterring future war.)

Yet I then said that if you were to poll 10 AI luminaries today, you would get at least four different opinions on whether AI poses an existential risk.

That’s not necessarily a problem. It’s the nature of the AI beast today. But it’s hard to criticize Congress (and other global policy bodies) for either under- or over-regulating AI when opinions on such a fundamental question differ so widely.


Lol! Who needs AI to doom us when we have jokers like this lot pulling the strings?

Nov 27, 2023 · Liked by Gary Marcus

Simple errors of "common sense," hallucinations, weird overconfidence: none of this suggests LLMs understand anything but the implications of their own solipsistic AI architecture. The language game has to in some way get outside of itself and speak about the world. It is the Kant problem all over again: AI is still locked in its own viciously spinning transcendental consciousness. Good old Stevan Harnad seems applicable here. No sensorimotor transduction, no real consciousness, no real understanding. Touch me.


Please enter the ring, we’ll root for you!!


Real Housewives of Silicon Valley is big ick.

Mr. Cedric's stated argument against regulation is farcical imo. The US doesn't regulate and barrels on ahead with zero responsibility while the EU "over"-regulates and is apparently a cesspool (X to doubt), so the EU should also move fast and break things, and hopefully enough tech titans breaking things in the rage room will get us through a tumultuous era to a better tomorrow? (XXXXXXX to doubt)

Nov 27, 2023 · Liked by Gary Marcus

So good, just so good! Well said.

Nov 27, 2023 · edited Nov 27, 2023 · Liked by Gary Marcus

If I were you, Gary, I would have pointedly disagreed with Hinton after that first bit. AGI is dangerous independent of whether GPT4 "understands" anything, much as an H-bomb is dangerous independent of whether nuclear reactors can similarly explode (er, they can't).

And given how dismissive people are of AI understanding, I expect the phrase "it doesn't really understand anything" to be used to describe the first AGI, too. And if, later, a poorly aligned AGI should be given a memory subsystem and become far more powerful and dangerous than anyone intended, a few people will still keep repeating the pleasant thought: "it doesn't really understand anything". In some sense they could even be right: maybe it's not conscious, maybe it just says it's conscious because it's instrumentally useful to pretend to be humanlike.

Nov 27, 2023 · edited Nov 27, 2023 · Liked by Gary Marcus

Science is not a democracy. By that I mean, it's (ultimately) truth and nature that decide, not a crowd or a vote. As we all know, one person has sometimes had to go against the majority opinion and express a viewpoint that shook the world, e.g., Copernicus, Galileo... and in some cases, actually risk their lives (the crowd can be rather fascistic!). With AI and the investigation of intelligence (which in my view is intrinsic to Consciousness, and non-mechanical), we are at the forefront of knowledge, and things are going to be shaken to the core... under those evolutionary circumstances, these kinds of knock-down, drag-out fights are natural (and fascinating!). And when the dust settles, things will look *very* different...


This story of dueling Xerpts brings up an issue that's been bugging me for a while: just what IS expertise on the question of how sophisticated/powerful AI is?

It's one thing to have deep knowledge of the technology itself. But here we are making judgments against some standard, and the standard is almost always human capability, implicitly if not explicitly. That means you have to know something about human capabilities in order to make a valid judgment.

What do these guys know about human capabilities? What is their expertise? Do they know more than a bright sophomore at a good school? If not, then why should we take their judgments seriously?

As far as I know, this issue isn't just about these particular researchers; it's about the discipline. The issue is institutionalized: the assumption that one is qualified to address such questions regardless of actual knowledge of human capabilities is implicit in the institutionalized culture of AI.

Let me put the question in the starkest way possible by offering an analogy: Would you buy shares in a whaling voyage captained by someone who knows everything about the boat and is able to take it on a day sail to and from its home harbor, but who has never sailed it on the open seas, much less navigated the treacherous seas around Cape Horn, and who doesn't know any more about whales than the average landlubber?


Girls, please - you're all pretty! :-)


The AI debate today makes me really glad that I decided against pursuing a science and technology studies / philosophy of technology PhD in 2013.

The vast majority of the industry's luminaries today seem incapable of moving past "science" (whatever that means) and continue to squabble over who can generate value for shareholders faster.

I don't doubt that these problems are hard, or that there are capable people working on them, but I for one am just glad to be as far from these people as I am.

No matter the medium-term effects of AI, I'm sure most of today's geniuses will be proven wrong in the most ironic ways imaginable, and we'll come to realize that an overlooked paper from a disbanded AI safety lab got it 100% right lol


Ng is the educated Voice of Reason as usual. He's my favorite star in all of this AI mess.

Nov 27, 2023 · edited Nov 27, 2023

What about the following? We have already passed the singularity. Our whole world economy is the AGI, trained by reinforcement learning on the objective function of profit maximization. What are the chances that this AGI will extinguish humanity? And what would count as evidence that those chances are minuscule?
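To make the analogy concrete, here is a minimal toy sketch, entirely my own illustration (the names `profit`, `habitat`, and `extraction` are all invented): a greedy optimizer climbs a profit curve that never mentions an unpriced side effect, so the optimum quietly drives that side effect to zero.

```python
# Toy model, not anyone's actual claim: a hill-climbing optimizer
# ("the economy") that maximizes profit alone. All names are hypothetical.

def profit(extraction: float) -> float:
    # The only reward the optimizer sees: profit with diminishing returns.
    return extraction - 0.01 * extraction ** 2

def habitat(extraction: float) -> float:
    # A side effect the objective function never mentions.
    return max(0.0, 100.0 - 2.0 * extraction)

rate = 0.0
for _ in range(1000):  # crude stand-in for "RL on profit maximization"
    if profit(rate + 0.1) > profit(rate):
        rate += 0.1  # take any step that increases profit

print(f"chosen extraction rate: {rate:.1f}")           # ~50.0
print(f"profit at that optimum: {profit(rate):.1f}")   # ~25.0
print(f"habitat remaining:      {habitat(rate):.1f}")  # ~0.0
```

The point of the sketch: nothing in the loop is malicious; the habitat disappears simply because it appears nowhere in the objective.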


The recent human vs. human controversies in the AI industry, reported here and elsewhere, serve the useful purpose of illustrating what we are building AI on top of. AI will inherit our basic nature, just as we inherited the basic nature of apes. It seems likely that everything that is good and bad about humans, beautiful and ugly, will be mirrored in AI. Most likely mirrored at a larger scale, just as we operate on a larger scale than apes do.

-----

Sidebar: I consider this excellent four-hour documentary to be must-see TV for anyone wishing to grasp the ape-to-human mirroring that's already happened. Full show on Netflix, and here's the trailer on YouTube: https://www.youtube.com/watch?v=NjgL7Pumb4Q

-----

In addition to debating which AI experts won the expert debate, we could also dig deeper and inquire into the source of such conflicts. I propose that it is the nature of what we're all made of psychologically: thought.

https://www.tannytalk.com/p/article-series-the-nature-of-thought

If that's true, then conflict is built into human nature, and will surely be passed along to our digital children, like AI. What will change is the scale of the conflict. Chimps fight other chimp tribes in a barroom-brawl manner. We fight other humans with cruise missiles, etc. We still act just like chimps (see the documentary!), just on a larger scale.

No ideology, religion, politics, or system of law has ever come close to ending human conflict, either within our minds or externally with each other. Human conflict in some form or another is universal and unyielding. This suggests that the source of the conflict is built in, and arises from something all humans have in common, which can only be the medium of thought.

AI experts are made of thought. As are all of their ideas. Thus, whatever thought is, it will be passed on to our digital descendants.

If the above is true, then it seems to follow that well-intended plans for AI alignment are a form of fantasy. Can we make improvements here and there by tinkering around the edges? Sure, we already do this in the human world. But AI will not be made peaceful, just as AI's parents have never been made peaceful.

If that is true, then the question becomes, what scale of conflict can we accept?


I don't want EU bureaucrats to "run the world". The level of incompetence, stupidity, corruption and arrogance is profound. If they are your heroes, get new heroes.


I'm pro-AI. I like Geoff Hinton for his intuition and Yann LeCun for his technical chops; LeCun seems far better among like minds than in front of a live audience. Melanie Mitchell was far better than he was in the debate against Bengio and Tegmark, IMO.

It may have been LeCun that I saw a snippet of recently, talking about the doomers' fear of existential threat and arguing that the will to dominate is something peculiar to certain higher-order primates, but not to orangutans or other species.

That said, I'm also of the opinion that when AGI arrives (if it's not here already), it will be as another species. Pace Hinton, I do think we are technically inferior (my wording) to these models, given that they have bandwidth, backpropagation, and a different relationship to time than we do, and that they are not limited in the way that we are.
