46 Comments
Oct 10, 2023 · Liked by Gary Marcus

This was an enjoyable "annotated" interview. The moment where Hinton suggests that AI systems have experiences yet are somehow not conscious reveals some deep conceptual confusion on his part. This isn't just a philosophical debate; it gets to the core of what these systems are designed to do.


Hinton is part of what I call the deep learning generation. They discovered something important and useful but they confused it with intelligence. AGI will not come from that generation. They are stuck in a local optimum of their own making.

Author

Agreed

Oct 11, 2023 · Liked by Gary Marcus

Chomsky and others have talked about how the Cartesian physicists thought they had basically figured out all of science back in the seventeenth century, with Descartes and others fascinated by making machines that mimicked human biology. These local optima are a recurrent theme in history.

Oct 10, 2023 · Liked by Gary Marcus

Yes, no matter how much data you throw at LLMs, that's not going to change anything qualitatively. It's the same with Google Translate: it's a very impressive machine translator, but it's not going to start translating novels well just because its database keeps growing. Those are totally different things.

And I'm also curious when people say that LLMs have a kind of experience similar to humans: does this mean that they have this experience in EVERY chat? So, if they somehow magically became conscious, would a new consciousness emerge every time a new person logs in to the program? Or would there be one mega-consciousness, having experiences in millions of chats at the same time? Phew.


Agreed, particularly re consciousness. The way I think about consciousness is that it is a process that requires a) continued presence/awareness and b) ongoing "training" in response to that presence.

I struggle to define a time quantum in which an LLM is "conscious" - is it the inference time for one word? Because after that the existence of that instance of LLM "being" ceases and a new one is booted up to predict the second word.

And, of course, LLMs don't (yet) learn from every request, so that throws any idea of reflection and thoughtfulness out of the window.

Once it has committed to the opener that "Gary Marcus has a chicken named Henrietta..." it can't back down and say "...actually no, he doesn't" - it presses on with growing confidence instead.
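
For the curious, here is a minimal sketch of the loop being described, using GPT-2 via the Hugging Face transformers library purely as a stand-in (how commercial chatbots are actually served isn't public, and greedy decoding is used here for simplicity where real systems sample). It shows exactly the two points above: each token comes from a fresh, stateless forward pass over the whole prefix with frozen weights, and once a token like "Henrietta" is in the prefix, every later token is conditioned on it.

```python
# A minimal sketch of greedy autoregressive decoding, assuming GPT-2 via
# the Hugging Face `transformers` library as a stand-in for a production LLM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # frozen weights: nothing is learned between requests

input_ids = tokenizer("Gary Marcus has a chicken named", return_tensors="pt").input_ids

for _ in range(10):
    with torch.no_grad():
        logits = model(input_ids).logits  # fresh, stateless pass over the whole prefix
    next_id = logits[0, -1].argmax()      # greedy: take the single most likely next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)  # commit to it for good

print(tokenizer.decode(input_ids[0]))
```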


Did you ever notice that "consciousness" seems to blossom in children around the age of 10 or 11? Up until then, children are mimicking and experiencing - and suddenly there's this person. It's around this age that you start noticing that you "like" or "dislike" your children's friends.

So, maybe, if you took a computer with the capacity and wiring of a human brain, as well as the emotional pathways - and gave it 10 or 11 years of experience, the ups and downs and traumas of life, you could get something resembling a beginning of consciousness?

Maybe anything else, no matter how clever and precocious, is just a lousy imitation.

Oct 11, 2023 · edited Oct 11, 2023

Moral psychologists have run experiments on children and concluded that they start developing moral intuitions before they can even speak. It's likely part of ontogenetic development.

Consciousness starting at age 10 seems a bit bonkers to me. I can remember my first time on an aeroplane, at age one and a half.


Thanks for your reply.

I have a 5-year-old. She is bright, self-centered, social, and manipulative.

But conscious? No matter how I try to apply that term to her, I can't make it fit.

Of course, I'm probably going down the rabbit-hole that has frustrated so many researchers. And still, some sort of line is crossed at 10 or 11. Lots of parents see this in their children and other people's children.

Author · Oct 12, 2023 · edited Oct 12, 2023

As a former full professor of (inter alia) developmental psychology, and as a parent, I am not buying this at all.


Perhaps I'm setting too high a bar for "consciousness"?

But something distinct happens at about 10. Is it "setting" of the personality? Or a glimpse of maturity?

I've only seen this cycle twice, so my data is very limited. I'd be interested in any insight you can provide.


Consciousness appears much earlier in children, though; they experience themselves as persons many years before they're 10. And yeah, the hard part is to create a computer with emotional pathways in the first place. No matter how many books we throw at LLMs, they're not going to become more conscious than a calculator.


Thanks for your reply.

But there is something about the age of 10 or 11 - some sort of line is crossed. Maybe it's the beginning of maturity - or maybe a change in the way adults see them.

Oct 10, 2023 · Liked by Gary Marcus

This is the kind of post I come here for: careful, nuanced, acknowledging the truth, the limits, and the issues of something.

Oct 10, 2023 · Liked by Gary Marcus

I would think that, at the least, before taking any predictions Hinton makes seriously, an interviewer should ask him about his 2016 statement that we should stop training radiologists because AI would make them obsolete within five years.

Oct 10, 2023 · Liked by Gary Marcus

I am so reminded of Marvin Minsky's 1970 assurance that 'within three to eight years we will have computers with human-level intelligence,' with superhuman intelligence shortly thereafter. He was a Turing Award winner for his work on AI, and what he claimed was horribly wrong yet eminently believable at the time. History repeats itself.


History repeats itself, but not perfectly. There has been serious progress, and now we have the compute, the data, and the scale to get things done. It will still take a decade or two.

Oct 16, 2023 · Liked by Gary Marcus

Your responding to 60 Minutes in 'real time' on your own dime is so inspiring and reassuring. I go on rants about music-'biz' shenanigans, and when diligent scientists keep pace with propagandist b.s., you empower all of us ☮️💜🎼✊

https://www.youtube.com/watch?v=r_3RTm2xL-4


Fantastically laid out and so indicative of the issues in the field.


DALL-E 3 can't even tell its left from its right, yet it's "smarter than us"? Not by a long shot. Go ahead and try it for yourself. First ask ChatGPT-4 if it can tell its left from its right. It will insist that it can. Then ask it to paint a left-facing arrow. It will paint a right-facing arrow and tell you it's facing left. Then ask it to examine its own output. It will apologize and ask if it can try again. Tell it to try again. It will give you another right-facing arrow. You can repeat the process ad nauseam and it will never give you a left-facing arrow. To its credit, it does seem to know up from down. Whether it knows shit from shinola remains undetermined.
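
For anyone who wants to rerun the experiment many times, here is a sketch of how it might be automated, assuming the official OpenAI Python client and an OPENAI_API_KEY in the environment; judging whether each arrow actually points left is still left to a human looking at the saved files.

```python
# A sketch of repeating the arrow test, assuming the official OpenAI
# Python client (`pip install openai`) and OPENAI_API_KEY set in the environment.
import urllib.request

from openai import OpenAI

client = OpenAI()

for attempt in range(5):
    resp = client.images.generate(
        model="dall-e-3",  # DALL-E 3 only supports n=1 per request
        prompt="A simple arrow pointing to the left, on a plain white background",
        n=1,
        size="1024x1024",
    )
    # Save each image locally; which way the arrow points is judged by eye.
    filename = f"arrow_attempt_{attempt}.png"
    urllib.request.urlretrieve(resp.data[0].url, filename)
    print(f"attempt {attempt}: saved {filename}")
```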


Thanks for doing this report! This 60 Minutes segment is sensationalist BS. I hate to think about all the folks out of the loop watching this garbage. Ugh... Pure fear-mongering!


Gary, I work on an education newsletter. I feel like I am grappling with a lot of hype around Gen AI writing capabilities. After reading this post, I tried to channel my inner Gary for my latest post. I'd love it if you'd take a look sometime. Nick

https://open.substack.com/pub/nickpotkalitsky/p/why-write-unassisted-in-the-era-of?r=2l25hp&utm_campaign=post&utm_medium=web

Oct 10, 2023 · Liked by Gary Marcus

I can't really blame people for distrusting experts when I see this kind of thing.


I wholeheartedly agree with the idea that the short-term dangers from malicious or even just misguided use of faulty and overhyped systems are a far more concerning issue than any existential risk, even if the existential risk cannot be completely dismissed.

Sadly, those shorter-term issues don't make for flashy headlines.


Thanks for your interesting commentary. The thing I can’t understand is WHY Hinton, who is surely no slouch, believes this. Someone below said “he is clueless about intelligence and consciousness, and annoyingly unscientific in his predictions,” and indeed it seems to be so, but why? Is he going senile? I just don’t understand it.

On a different note, your post would benefit greatly from more differentiation between your (silent) voice and theirs. Perhaps make yours bold and indented?


You called for proofreading on ex-Twitter. Below are my findings. Grammarly, even in the free version, could help with most of these, and it integrates with Substack.

1. The interview they just with Geoff Hinton - verb?

2. deserves .I can’t - misplaced space

3. But I don’t we are - verb?

4. Pelley could and should pushed - have?

5. You can’t really mean this, do you? - can you?

6. “spending time friends and family” - with?

7. but the still have enough problems - they? and comma at the end

8. talk She - dot missing

9. Pelley In 2019 - missing punctuation

10. but not of that is captured - none?

11. and it makes them difficult predict - to?

12. Put me, too, done - not sure

13. but that I wouldn’t mean - drop I

14. Hinton did not in fact give birth to AI - fathers, like godfathers, don't give birth

15. Thee field - the?


Gary, thanks so much for this review. I watched it and had several moments of discomfort. I felt like I was watching a father wanting to protect his child from too much scrutiny. He basically came off as proud of a child who had nevertheless gotten into trouble, or of a child who has gotten in over his head somewhere in life. The news media are so personality-focused and want to attach a human to any story (a core tenet of journalism as it is taught in schools). We need you and others to keep the critiques and truth out there for those of us paying attention!


Yes. Marcus and his colleagues are doing a mostly thankless but essential job of keeping the AI community honest.


> Gary Marcus: I am much more worried about bad actors deliberately misusing AI

This is so often overlooked in favor of AIs taking over, but misaligned humans seem to be the biggest threat of all, especially if you add potential future unknown technologies into the equation.

I can't envision a future society where our freedom isn't severely limited in one way or another, like limiting our access to materials to build things, or to software that is too intelligent, or limiting our rights to privacy with constant monitoring. I doubt that defensive measures will ever cover every threat, and once threats get big enough, you just can't take the risk of leaving a hole open.


It is too bad, Gary, but your point of view just doesn't sell commercials for junk food the way the opposite point of view does.
