43 Comments
Jack Shanahan

During an interview with NDTV last week, I said that if in 1950 you had polled 1,000 physicists globally, you would have found universal agreement that nuclear weapons posed an existential threat. (Though some would undoubtedly also have said they were a necessary evil for deterring future war.)

Yet I then said that if you were to poll 10 AI luminaries today, you would get at least four different opinions on whether AI poses an existential risk.

That’s not necessarily a problem. It’s the nature of the AI beast today. But it’s hard to criticize Congress (and other global policy bodies) for either under- or over-regulating AI when there are such widely different opinions on such a fundamental question.

David Piepgrass

Nuclear weapons pose a catastrophic rather than existential threat, but humans often mix the two up even today.

Ironically, if everyone agreed that AGI posed a catastrophic threat, the threat would be greatly reduced. It seems to me that most of the threat comes from the people charged with building AGI believing it is inherently safe.

Boodsy

Lol! Who needs AI to doom us when we have jokers like this lot pulling the strings?

Jim Carmine

Simple errors of "common sense," hallucination, weird overconfidence: none of this suggests LLMs understand anything but the implications of their own solipsistic AI architecture. The language game has to in some way get outside of itself and speak about the world. It is the Kant problem all over again. AI is still locked in its own viciously spinning transcendental consciousness. Good old Stevan Harnad seems applicable: no sensorimotor transduction, no real consciousness, no real understanding. Touch me.

Jan Matusiewicz

If that is so, there is no need to regulate. If "deep learning is hitting the wall," then its future usage won't be much bigger than it is now. A dead end.

Jim Carmine

I don't know about that. Yejin Choi recently gave a talk with Gates, and before that a TED talk, in which she suggests the current algorithms have probably hit a wall because they are fixated on data size and processing speed, and that this will not lead to AGI. She attributes the irreparable common-sense errors to this. But current AI will invade our privacy and promote all sorts of bad solutions that will certainly hurt people. Yet there is entirely the possibility of a more elegant set of algorithms that may do much more with much less data, just like us. So I am not a doom guy, but I can see lots of future suffering if we do not regulate, and even more so if the breakthrough happens and we are caught flat-footed.

Jan Matusiewicz

What bad solutions do you mean? Do you have any examples of actual harm done by ChatGPT or IT solutions that use it?

David in Tokyo

Using LLMs as psychiatric counsellors has been a disaster. In multiple cases they've done things that make depressed patients worse. Ditto for medical advice: good a lot of the time, but dead wrong (suggesting ways of dealing with medical problems that make them worse) far too often to be acceptable.

Jim Carmine

Yes: actuarial technicians used by retirement homes can predict the likelihood of future death and of expensive diseases in potential residents with incredible accuracy, based on someone's current health and other data; they can use this information to deny people entry, or for other sorts of money-saving mischief.

Dr. Alberto Chierici

Please enter the ring, we’ll root for you!!

Chaos Goblin

Real Housewives of Silicon Valley is big ick.

Mr. Cedric's stated argument against regulation is farcical, IMO. The US doesn't regulate and barrels on ahead with zero responsibility while the EU "over"-regulates and is apparently a cesspool (X to doubt), so the EU should also move fast and break things, and hopefully enough tech titans breaking things in the rage room will get us through a tumultuous era to a better tomorrow? (XXXXXXX to doubt)

Matthew Chew

So good, just so good! Well said.

David Piepgrass

If I were you, Gary, I would have pointedly disagreed with Hinton after that first bit. AGI is dangerous independent of whether GPT4 "understands" anything, much as an H-bomb is dangerous independent of whether nuclear reactors can similarly explode (er, they can't).

And given how dismissive people are of AI understanding, I expect the phrase "it doesn't really understand anything" to be used to describe the first AGI, too. And if, later, a poorly-aligned AGI should be given a memory subsystem and become far more powerful and dangerous than anyone intended, a few people will still keep repeating the pleasant thought: "it doesn't really understand anything". In some sense they could even be right ― maybe it's not conscious, maybe it just says it's conscious because it's instrumentally useful to pretend to be humanlike.

Eric Cort Platt

Science is not a democracy. By that I mean that it's (ultimately) truth and nature that decide, not a crowd or a vote. As we all know, one person has (sometimes) had to go against the democratic opinion and express a viewpoint that shook the world, e.g., Copernicus, Galileo... and in some cases actually risk their lives (the crowd can be rather fascistic!). With AI and the investigation of intelligence (which, in my view, is intrinsic to Consciousness, and non-mechanical), we are at the forefront of knowledge, and things are going to be shaken to the core... Under those evolutionary circumstances, these kinds of knock-down, drag-out fights are natural (and fascinating!). And when the dust settles, things will look *very* different...

Peter

May I ask in what way intelligence is non-mechanical? Do you mean "not reproducible in computers" by non-mechanical? Is solving a puzzle done by non-mechanical means, then? Writing a poem? A novel? Inferring people's emotions through their body language? Playing the guitar? Working in a factory? Navigating a maze? Driving a car? Proving a math theorem? Cooking food? Buying groceries? Conversation? Remembering things?

Eric Cort Platt

Sorry for the long delay – very busy here. I meant that there are aspects of intelligence – and we need to define/re-define clearly what we mean by intelligence – that are non-mechanical, in that they are direct and non-temporal, and therefore not amenable to being reduced to a process. Some things *are* amenable to information processing, obviously, thus the great success in using computers for those *functional* tasks in the phenomenal world that involve breaking things down into parts, symbols (such as numbers), and tasks, and that can be seen from the outside (behaviors), such as most of those you mentioned. But one that you mentioned – "conversation" – is interesting, because one can simulate a conversation very well, as ChatGPT does, yet experience has shown it has zero understanding and no grasp of the meaning (information is not meaning) of what it is saying – nor do automatic language translators... nor, for that matter, is it capable of true communication, which involves love. I could write reams about all this stuff (and have already) but should not do it here now. Thanks for the question.

Peter

Do you have examples of things that are not amenable to information processing?

You talk about true communication involving love, but there are countless humans who communicate perfectly well without love because their emotions are non-existent or reduced to negligible levels due to neurological and psychological disorders.

Don't feel offended, but I surmise the reason you cannot write about it here is that you don't have a precise idea of your position yourself. And my guess is that you have no precise idea of your position because it doesn't really make sense.

I'll add five cents here: think about why there is a hard problem of consciousness, but no hard problem of intelligence.

Eric Cort Platt

Love is not an emotion (I should have specified that since it's such a common misperception and misunderstanding). I define emotions as what comes and goes as an "energy", for lack of a better term, in the experience of the body – they are constantly changing. Feelings, as I define them, can be a sense of something registering an attribute of the Real – what does not come and go. Real love is registered in experience as a feeling via the body appearance of the attribute of Consciousness we call Love.

All you have to do to verify this is look in your experience: ask yourself how communication is possible. If we were all living in separate realities, no communication would be possible. This points to the very simple, common, everyday fact that every scientist knows: there is only One reality. Otherwise there couldn't be science, and there couldn't be any communication possible at all. A shared reality is required for both love and communication, and that shared reality we call "consciousness", whatever that is – "it" cannot be known as an object: that which is perceived cannot perceive (any more than you can see your own eye directly, metaphorically speaking).

Regarding "precise": It depends on your definition of precise. My definition is “clear” or clarity. Precision that is after-the-fact isn’t any use here. In other words, precision is a *product* of intelligent awareness, not the other way around. For example, engineering sometimes requires precision to implement ideas once they have emerged, and is either learned from the past (computers can do this if specifiable as a function, but not all of experience is a function), or is a new product of intelligent consciousness and creativity that appears in the totality of experience through these instruments, such these characters we call humans. Not the other way around. You have to put the cart behind the horse.

Beauty along with Truth, and Love (BLT), are known in experience as attributes of what we are calling consciousness, as you can see in experience that they are intrinsic: there is no experience of BLT without consciousness-reality, and no love without truth, etc.

Here's a provisional stab at a definition, as it were:

Truth is the immediate timeless apperception of the self-evident certainty of, for instance, a mathematical idea, a proof, or a logical or true expression (verbal or otherwise). (This allowed us to invent the logic to invent computers, so-called AI, etc.)

This experience arises to, and is experienced via what we term “the mind” from Intelligence.

You experience this when you "get" a joke: there is a discontinuity in thought. A sudden understanding. A gap.

Beauty is the immediate apperception of the self-evident beauty (to the senses) of, for instance, a natural or artistic expression (visual, aural or otherwise). This experience arises to, and is experienced via what we term “the perceptual senses” from Intelligence.

Love is the immediate apperception of the self-evident love feeling (in the inner body appearance of sensations), for instance, experiences of a communicative expression (via a human, animal appearance…). This experience arises to, and is experienced via what we term “the feeling sense” (not an emotion) from Intelligence.

So the function of computers and AI is to enact certain behaviors as tools within the functioning of behavior that is in time and space. But it’s not intelligent in the true sense of the word, since real intelligence, as you can know from direct experience (not theory, models, or what you learned), is non-temporal. And if I were to fall in love with a robot, it would be a projection. :))

Note that I am not talking about sentience. Sentience would be the behavior of bodyminds, as observed, such as the waking state, the dreaming sleep state, and deep sleep. But those states (they are "states" because they come and go) are experienced in consciousness (awareness). Awareness itself is never "off"; otherwise states would not be experienced and known: they appear to consciousness. In other words, you cannot have Being (and phenomena) without Awareness, otherwise it would not be known, and you can't have Awareness without Being, otherwise it wouldn't exist. They are inseparable in Reality. However, you can have consciousness without phenomena – therefore phenomenal appearances such as objects and states are dependent on consciousness, but consciousness is not dependent on objects or phenomena; it is independent of time and space, since time and space appear in consciousness.

So again, to see clearly, all you have to do is look through the right end of the telescope...

Gotta run, thanks...

Peter

I see, you've somewhat created your own definition of Intelligence, aka True Intelligence, that is synonymous with Consciousness, or at least requires Consciousness. But intelligence is just the ability to acquire and apply knowledge and skills, or, per Wikipedia, "It can be described as the ability to perceive or infer information; and to retain it as knowledge to be applied to adaptive behaviors within an environment or context."

Intelligence is a much simpler concept than the one you seem to hold; one does not even need self-awareness to display intelligence.

But wait, even consciousness is highly vulnerable to digitization. Take the feeling of contemplating beauty, for example: imagine a computer with a function is_beautiful(object) that returns an integer from 1 to 100, and suppose this function perfectly matches the subjective evaluation of humans thanks to some extensive training. Imagine the computer has implemented all the other feelings and basic emotions (love, fear, etc.) in the same manner. Wouldn't that computer be able to display True Intelligence? If it had human form, and acted upon its integer-represented feelings in a graceful manner, do you think you could tell the difference from a real human?
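
For concreteness, here is a minimal Python sketch of that thought experiment. Only the name is_beautiful comes from the paragraph above; the Percept type, the stand-in scoring rule, and the "extensive training" claim are hypothetical placeholders, not any real system:

```python
# Hypothetical sketch: a machine that scores beauty as an integer 1..100.
# No real model "perfectly matches" human judgment; this is a placeholder.
from dataclasses import dataclass

@dataclass
class Percept:
    """Anything the machine can perceive, e.g. an image or a melody."""
    features: list[float]  # some learned representation

def is_beautiful(obj: Percept) -> int:
    """Return a beauty score from 1 to 100, imagined to have been trained
    on extensive human ratings until it matches subjective evaluation."""
    raw = sum(obj.features) / max(len(obj.features), 1)  # stand-in rule
    return max(1, min(100, round(raw * 100)))

# If scores like this (plus analogous is_lovable, is_frightening, ...)
# drove graceful, human-like behavior, could you tell the difference?
sunset = Percept(features=[0.9, 0.8, 0.95])
print(is_beautiful(sunset))  # prints 88 for this input
```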

Eric Cort Platt

It's a definition of intelligence in line with thinkers like Spinoza and other philosophers who looked deeper than outward behavior or definitions. They looked inward towards the source.

The self-awareness you are talking about is a function, looked at from the outside or after-the-fact of awareness itself.

In fact, it couldn't possibly be any simpler, this definition of consciousness and intelligence: the reality which is reading these words right now. Not the contents, but the reality behind the experience. Not a concept, not hearsay, not a theory, not imagination or fantasy (those are all contents). Just the simplest empirical fact.

So we are really talking apples and oranges here, since you are talking about outward behavior. Yes, we can simulate anything we want; that's not at issue here. The reality of intelligence is, and it will never be found in dictionaries or libraries. If it were, we'd have had AI decades ago...

Bill Benzon

This story of dueling Xerpts brings up an issue that's been bugging me for a while: Just what IS expertise on the issue of how sophisticated/powerful AI is?

It's one thing to have deep knowledge about the technology itself. But here we are making judgments against some standard, and the standard is almost always human capability, implicitly if not explicitly. That means that you have to know something about human capabilities in order to make a valid judgment.

What do these guys know about human capabilities? What is their expertise? Do they know more than a bright sophomore at a good school? If not, then why should we take their judgments seriously?

As far as I know, this issue isn't just about these particular researchers; it's pretty much about the discipline. The issue is institutionalized: the assumption that one is qualified to address such questions regardless of one's actual knowledge of human capabilities is implicit in the institutionalized culture of AI.

Let me put the question in the starkest way possible by offering an analogy: Would you buy shares in a whaling voyage captained by someone who knows everything about the boat and is able to take it on a day sail to and from its home harbor, but has never sailed it on the open seas, much less navigated the treacherous waters around Cape Horn, and who doesn't know any more about whales than the average landlubber?

David in Tokyo

Interestingly, there was a review (in Science) of a book about Moby Dick. It turns out that other than inventing a bunch of non-existent whale species, Moby Dick largely gets whale science/biology right. Pretty kewl, I thought.

You have an interesting implicit point about "these guys". AI types of my generation took psychology and linguistics courses, thought about things that humans did, and wondered how we do them. For folks studying under Minsky or Schank, at least, we were supposed to think about human abilities. (I buttonholed a Minsky student at an AI conference and groused that the "truth maintenance" stuff he was doing wasn't a model of what people do, and he snapped "That's what Marvin said, and that's wrong." From experience, I know that Roger wouldn't sign off on such a thesis.)

Like the "truth maintenance" bloke, most AI guys nowadays not only don't care how humans do things, they think they can do better.

Bill Benzon

I'm not an AI type, more computational linguistics, semantics, actually. And, you're right, we thought about how humans did things and studied the relevant literature in psychology. The current crew, not so much. It's aggravating. I keep thinking of that Bruegel painting of the Blind Leading the Blind.

On Moby Dick, I believe that Melville crewed on a whaler in his youth. Those whalers had to know about whales; their livelihood depended on it.

Aaron Turner

Girls, please - you're all pretty! :-)

Khashayar

The AI debate today makes me really glad that I decided against pursuing a science and technology studies / philosophy of technology PhD in 2013.

The vast majority of the industry's luminaries today seem incapable of transcending past "science" (whatever that means) and continue to squabble over who can generate value for shareholders faster.

I don't doubt that these problems are hard or that there are capable people working on them, but I, for one, am just glad to be as far as I am from these people.

No matter the medium-term effect of AI, I'm sure most of today's geniuses will be proven wrong in the most ironic ways imaginable, and we'll come to realize that an overlooked paper from a disbanded AI safety lab got it 100% correct, lol.

Peter

I would look no further than here: https://nickbostrom.com/papers/vulnerable.pdf

Nick Bostrom seems almost right to me: I can't see technology not becoming more destructive as time goes by, and I can't see us fully taming human nature or imposing enough guardrails to prevent anybody from doing the unthinkable. Disasters seem bound to happen; the question is how big a disaster we are talking about before we implement maximal security.

Khashayar

That's definitely a cogent paper; thanks for sharing! This resonates with me because, out of all the famous public intellectuals today, I usually find Daniel Schmachtenberger to be the most lucid; he offers a political-economy critique of the same stuff Bostrom is talking about.

In that view, the "what if" question - as debated fruitlessly by AI and business minds - is often categorically insufficient. It's like riding downhill on a runaway train and wondering if there's a wall a mile ahead instead of freaking out about the fact that the train doesn't even have brakes.

At its core: since the end of the Second World War, we have created a world that is inherently incapable of addressing a decentralized crisis that requires coordination and cooperation, and ironically enough we are now faced with a poly-crisis that is a dozen of those at the same time. So our inability to address AI risk is itself a symptom of our parasitic civil religion, which Schmachtenberger calls the superorganism. Highly recommend his work!

Peter

Thanks! I'll definitely check it out; deep analyses of post-AGI society are severely lacking.

Alexander Kurz

What about the following? We already passed the singularity. Our whole world economy is the AGI, trained by reinforcement learning on the objective function of profit maximization. What are the chances that this AGI will extinguish humanity? What would be the evidence that these chances are minuscule?
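
To make the analogy concrete, here is a toy sketch of an "economy" as a learning loop whose only reward is profit. Everything here (the step function, the harm term, the greedy update rule) is invented purely for illustration and is only a crude stand-in for reinforcement learning:

```python
# Toy model of the analogy: a learner rewarded only on profit,
# while an unpriced harm term accumulates outside its objective.
import random

def step(activity: float) -> tuple[float, float]:
    """Return (profit, harm) for a given level of activity. Hypothetical:
    profit grows with activity, and so does harm, which the reward ignores."""
    profit = activity * random.uniform(0.8, 1.2)
    harm = 0.1 * activity ** 2  # externality, invisible to the objective
    return profit, harm

activity, total_harm = 1.0, 0.0
for year in range(50):
    profit, harm = step(activity)
    total_harm += harm
    # "Policy update": do more of whatever was profitable last step.
    activity *= 1.05 if profit > 0 else 0.95

print(f"activity={activity:.1f}, accumulated harm={total_harm:.1f}")
# The reward (profit) is maximized; the harm term never enters the
# objective, which is exactly the worry raised in the comment above.
```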

Patrick O'Connor-Read

I don't want EU bureaucrats to "run the world". The level of incompetence, stupidity, corruption and arrogance is profound. If they are your heroes, get new heroes.

praxis22

I'm pro-AI. I like Geoff Hinton for his intuition and Yann LeCun for his technical chops; LeCun seems far better among like minds than in front of a live audience. In the debate against Bengio and Tegmark, Melanie Mitchell did far better than he did, IMO.

It may have been LeCun I saw a snippet of recently, talking about the doomers' fear of existential threat and arguing that the will to dominate is something peculiar to certain higher-order primates, but not to orangutans or other species.

That said, I'm also of the opinion that when AGI arrives (if it's not here already), it will arrive as another species; pace Hinton, I do think we are technically inferior (my wording) to the models, given that they have bandwidth, backpropagation, and a different relationship to time than we do. They are not limited in the way that we are.

Gerald Harris

This is a thoughtful, if raucous, exchange. But it leaves me scratching my head about what exactly we use as proof that an AI device understands the real world (or maybe anything). What is our proof of "understanding"? Even in human affairs we debate whether someone has really understood us. The proofs we use are that they act in a way we think is consistent with it; that they can repeat it in different words that indicate a deeper grasp of the essential points; or that they go out, apply what we have explained, and get the result we predicted. But even those are not air-tight. How can an AI machine understand, if at all, at a level deeper than its programming? If that programming is problematic (for example, due to bias, flawed data, or data with multiple meanings that might lead to multiple consistent but inappropriate results), then how do we prove "understanding"? I think the underlying fear here is about misunderstandings that have impact in the real world when decisions/actions are left to an AI system. We can be concerned about that without resolving this debate.

Grant Castillou

It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
