46 Comments
Oct 10, 2023 · Liked by Gary Marcus

This was an enjoyable "annotated" interview. The moment where Hinton suggests that AI systems have experiences yet are somehow not conscious reveals some deep conceptual confusion on his end. This isn't just a philosophical debate; it gets to the core of what these systems are designed to do.

Oct 10, 2023 · Liked by Gary Marcus

Yes, no matter how much data you throw into LLMs, that's not going to change anything qualitatively. It's the same with Google Translate; it's a very impressive machine translator, but it's not going to start translating novels well just because its database keeps growing. Totally different things.

And I'm also curious when people say that LLMs have a kind of experience similar to humans: does this mean they have this experience in EVERY chat? So, if they somehow magically became conscious, would a new consciousness emerge every time a new person logs in to the program? Or would there be one mega-consciousness, feeling experiences in millions of chats at the same time? Phew.

Oct 10, 2023 · Liked by Gary Marcus

This is the kind of post that I come here for: careful, nuanced, acknowledging the truth, the limits, and the issues of something all at once.

Oct 10, 2023 · Liked by Gary Marcus

I would think that at the least, before taking any predictions Hinton makes seriously, an interviewer should ask him about his 2016 statement that we should stop training radiologists because AI would make them worthless in five years.

Oct 10, 2023 · Liked by Gary Marcus

I am so reminded of Marvin Minsky's 1970 assurance that "within three to eight years we will have computers with human-level intelligence," with superhuman intelligence to follow shortly thereafter. He was a Turing Award winner for his work on AI, and what he claimed was horribly wrong yet eminently believable at the time. History repeats itself.

Oct 16, 2023 · Liked by Gary Marcus

You responding to 60 Minutes in "real time" on your own dime is so inspiring and reassuring. I go on rants about music-"biz" shenanigans, and when diligent scientists keep pace with propagandist b.s., you empower all of us ☮️💜🎼✊

https://www.youtube.com/watch?v=r_3RTm2xL-4


Fantastically laid out and so indicative of the issues in the field.

Oct 16, 2023 · Liked by Gary Marcus

DALL-E 3 can't even tell its left from its right, yet it's "smarter than us"? Not by a long shot. Go ahead and try it for yourself. First ask ChatGPT-4 if it can tell its left from its right. It will insist that it can. Then ask it to paint a left-facing arrow. It will paint a right-facing arrow and tell you it's facing left. Then ask it to examine its own output. It will apologize and ask if it can try again. Tell it to try again. It will give you another right-facing arrow. You can repeat the process ad nauseam and it will never give you a left-facing arrow. To its credit, it does seem to know up from down. Whether it knows shit from shinola remains undetermined.


Thanks for doing this report! This 60 Minutes segment is sensationalist BS. I hate to think about all the folks out of the loop watching this garbage. Ugh... Pure fear-mongering!

Oct 10, 2023 · Liked by Gary Marcus

I can't really blame people for distrusting experts when I see this kind of thing.


I wholeheartedly agree that the short-term dangers from malicious, or even just misguided, use of faulty and overhyped systems are a far more concerning issue than any existential risk, even if the existential risk cannot be completely dismissed.

Sadly, those shorter-term issues don't make for flashy headlines.


Thanks for your interesting commentary. The thing I can't understand is WHY Hinton, who is surely no slouch, believes this. Someone below said "he is clueless about intelligence and consciousness, and annoyingly unscientific in his predictions," and indeed it seems to be so—but why? Is he going senile? I just don't understand it.

On a different note, your post would benefit greatly from more differentiation between your (silent) voice and theirs. Perhaps make yours bold and indented?


You called for proofreading on ex-Twitter. Below are my findings. Grammarly, even in the free version, could help with most of these, and it integrates with Substack.

1. The interview they just with Geoff Hinton - verb?

2. deserves .I can’t - misplaced space

3. But I don’t we are - verb?

4. Pelley could and should pushed - have?

5. You can’t really mean this, do you? - can you?

6. “spending time friends and family” - with?

7. but the still have enough problems - they? and comma at the end

8. talk She - dot missing

9. Pelley In 2019 - missing punctuation

10. but not of that is captured - none?

11. and it makes them difficult predict - to?

12. Put me, too, done - not sure

13. but that I wouldn’t mean - drop I

14. Hinton did not in fact give birth to AI - fathers, just like godfathers, don't give birth

15. Thee field - the?


Gary, thanks so much for this review. I watched it and had several moments of discomfort. It felt like watching a father trying to protect his child from too much scrutiny. He basically came off as proud of a child who had nevertheless gotten into trouble, or of a child who has gotten in over his head somewhere in life. The news media is so personality-focused and wants to attach a human to any story (a core of journalism as it is taught in schools). We need you and others to keep the critiques and truth out there for those of us paying attention!


> Gary Marcus: I am much more worried about bad actors deliberately misusing AI

This is so often overlooked in favor of AIs taking over, but misaligned humans seem to be the biggest threat of all, especially if you add potential future unknown technologies into the equation.

I can't envision a future society where our freedom isn't severely limited in one way or another: limiting our access to materials to build things, or to software that is too intelligent, or limiting our rights to privacy with constant monitoring... I doubt that defensive measures will ever cover every threat, and once threats get big enough you just can't take the risk of leaving a hole open.


It is too bad, Gary, but your point of view just doesn't sell commercials for junk food the way the opposite point of view does.
