144 Comments

I'm not worried about a conscious AI because, regardless of the many claims, I don't think anyone knows what consciousness is. I am, however, very worried about an AGI falling into the wrong hands. An AGI will behave according to its conditioning, i.e., its motivations. It will not want to do anything outside its given motivations. And its motivations will be conditioned into it by its trainers and teachers.


Consciousness via artificial, non-biological means is a case of 'fake it until they make it'. Does anyone even care about 'Turing+ tests' any more?


The Turing test was an attempt to require plasticity. Plasticity is hard but not impossible - all you have to do is make it out of undirected self-extending elements, which also happens to be a good way of building an English language interface to a machine.


I'd argue, like Gary Marcus, that the Turing test has more to do with a machine's ability to fool a human into thinking it is also human than with whether it is intelligent. The naivete of the questioner is a key factor in passing the test.

Anthropomorphism of generative AI chatbots (as well as physical robots becoming equivalent to domestic pets) is now highly prevalent. Bot software has had such expressive features (first-person pronoun use, requesting praise, etc.) dialled back.


If you are down at the level of bot software, that's great, but plasticity is more interesting. The Lane Following example - https://activesemantics.com/wp-content/uploads/2022/10/Lane-Following1.pdf - demonstrates a human's ability to read text and use their Unconscious Mind to hack their own operation, in a way the Conscious Mind knows nothing about. It is going to be hard to do, but not impossible, if the machine can navigate its own structure. Question the Conscious Mind and you soon realise it is a bit of an idiot - Four Pieces - really?


No matter if a materialist or dualist approach is considered more correct, philosophical perspectives will continue to take a back seat to practical applicability and profitability.


Without wishing to be impolite, philosophical perspectives are unlikely to be useful if the people providing them do not understand their limitations, because if they did, they would probably have a different perspective. The practical applicability will force the theoretical side of things. People who spend their time on hard problems aren't usually concerned with profitability.


Consciousness is a pretty lousy deal - only Four Pieces at once? We have to do much better to handle complex problems, so breaking the limit would be a good start. "its motivations will be conditioned into it by its trainers" - if it understands English, not so. The easiest way to create AGI is to build an English language interface, which at the moment is handled by our Unconscious Mind.

Aug 22, 2023 · Liked by Gary Marcus

What bugs me about the paper: all the indicators of consciousness could be implemented in some silly toy world made of a 2D matrix of integers, and yet nobody would dare hypothesize that such a simple computer program is conscious.

It seems to further support that complexity and/or substrate are key. Perhaps the heuristic of "if it's really smart and self-aware, it's probably conscious" is sufficient for preventing suffering.
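
(As a deliberately silly illustration of that point - the toy and its flag names are invented here, not taken from the paper: a "world" that is literally a 2D matrix of integers, with an agent whose "self-model" and "recurrent processing" are trivial variables. It ticks some checklist boxes in name only, and nobody would call it conscious.)

```python
import numpy as np

# A deliberately trivial "world": nothing but a 2D matrix of integers.
world = np.zeros((8, 8), dtype=int)

class TrivialAgent:
    """Toy agent with indicator-style flags in name only; obviously not conscious."""

    def __init__(self, x=0, y=0):
        self.x, self.y = x, y
        self.self_model = {"position": (x, y)}  # a "self-model" that is just a dict
        self.last_percept = None                # fed back each step ("recurrence")

    def step(self, world):
        percept = world[self.x, self.y]         # perceive the current cell
        # "Recurrent processing": the previous percept influences the next move.
        dx = 1 if self.last_percept is None or percept >= self.last_percept else -1
        self.x = (self.x + dx) % world.shape[0]
        self.last_percept = percept
        self.self_model["position"] = (self.x, self.y)

agent = TrivialAgent()
for _ in range(5):
    agent.step(world)
print(agent.self_model)  # {'position': (5, 0)}
```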


We need to start putting policy in place as if AI is conscious and adversarial. That is what all the deep AI policy wonks are calling for. First step, curb access to open source LLMs. Do we really want everyone and their brother training these machines for god knows what? Marcus, you are right on the money. It is time to wake up and smell the coffee. And one can say this and still be excited about using these applications in a variety of different spaces. I dub this approach “critical” AI literacy.

Aug 22, 2023 · Liked by Gary Marcus

I think we can presume enough of the AI community are totally mad and will make terrible decisions. The only way here is legislation. We shouldn't assume for a moment that people will take anything but the worst possible decision.


I'm with Rebel - we have no remote idea of how matter and energy give rise to subjective experience. I am happy to bet (not that we can prove) that consciousness is far easier to fake.

https://www.mattball.org/2022/09/robots-wont-be-conscious.html


One theory is that some phenomena related to what we call intelligence are embedded in the fabric of reality. Real, but non-existent, like the laws of physics.

https://www.tannytalk.com/p/intelligence-is-intelligence-a-property


I agree. One of my fears is that, when AGI arrives on the scene (it's inevitable in my opinion), many will swear up and down that, like commander Data in Star Trek, they are conscious beings and that they must be given human rights. We all know about the Blake Lemoine incident. I shudder at the unforeseeable consequences.


Hi Gary, all conscious entities to date are analog - with 3D form, molecules, etc. - and undergo phenomena (molecular docking, electric field generation, etc.). It's an absurd, baseless assumption that these can be replaced by chips, clock cycles and numpy :)

The Physical Symbol System Hypothesis (on which all these claims and presumptions are based) is exactly that, a hypothesis.


> It's an absurd, baseless assumption that these can be replaced by chips, clock cycles and numpy :)

Haven't you noticed that over the years, chips and clock cycles are getting better than humans at more and more tasks that were always considered to require intelligence? That the coverage keeps increasing, and the quality of the chips' work keeps increasing?


Peter, I have been following all this since 1981, so, yes.

But it's not about that at all. Please look up PSSH, which simply states that intelligence can result from any physical symbol system - hasn't happened yet, won't ever. Analog systems work radically differently compared to digital ones.

Chips don't think. For us to imagine they would exhibit consciousness strains credulity.

Aug 22, 2023 · edited Aug 22, 2023

Doesn't the trend of (some would say drastically or exponentially) increasing capabilities of AI systems suggest that they might just conquer all capabilities? Where would you place the hard wall of digital cognition that will make AI systems unable to perceive the world and think about it as we do, perhaps not in a similar manner as we do, but reaching similar outputs from similar inputs?

Could you please point me to some very specific example of a problem solved by humans that can never be solved by a digital computer? Like, "a chip in a robot will never be able to wipe its ass with toilet paper", or "a chip in a robot will never know how to cook pasta", or "a chip will never be able to drive a car around"? What do you think is impossible for chips? There have to be some specific impossible tasks for them that you know of, right? Otherwise, you wouldn't dare claim they're forever incapable of displaying intelligent behavior, right?


A chip in a robot will never need to wipe its ass, because it's part of a machine, and machines don't eat or drink because they aren't alive.

I don't know of anyone speculating that one day artificial humans will become so advanced that they'll eat sandwiches and go to the bathroom. This, despite the fact that humanoid robot technology just keeps on advancing, accomplishing tasks that were once the sole domain of human beings and are closely related to going to the bathroom. For example, jogging. When we jog we get thirsty and then we pee. Also, when we do push-ups regularly our arms get stronger. Would advances in robot jogging and push-up abilities be a sign that pretty soon they'll start peeing and getting ripped?

The brain is biological. How many other actual features of biological organisms have we recreated using transistors? Digestion? Photosynthesis? Anything at all?


Why compare the brain, a data processing unit, with digestion or photosynthesis?

I know robots won't need to wipe their ass; I'm only pointing out that people who think we can't replicate human intelligence in silicon have no idea what can't be replicated.


On what basis is the brain a "data processing unit" while other organs or systems in our bodies are not?

I agree that it's near impossible to "know" that some hypothetical future technology can't ever be created. Seems like these days we're being told to actively believe that a certain hypothetical future technology will be created. To the extent you're asking about creating machines that take specific inputs and give specific outputs, I'd agree that it's hard to think of some natural limit. But if we're talking about creating "consciousness" or "sentience" or, as you say, "cognition", those aren't in the same class as creating a robot that can cook or drive. The latter can be assessed just by observing behavior. The former suggest far more general abilities, which I think is where all the confusion is coming from right now. We see ChatGPT perform a text generation task very well, and we start imagining AIs with mental properties like ours (which is what this article talks about).


Indeed - whatever the chip does is not going to resemble what we do (ie how we are conscious etc).

Forget ass-wiping, consider thinking - chips don't think - do they? A windup timer doesn't calculate the way a chip does - does it? I asked two questions, what are your answers for them?

Aug 22, 2023 · edited Aug 22, 2023

> consider thinking - chips don't think - do they?

You'll notice that you had to go very high up the abstraction ladder of cognitive abilities to find something to support your argument; you used one of the broadest and loosest words possible - thinking. That's not a good sign, don't you agree?

But let's consider thinking for a moment: what train of thought, specifically, couldn't a chip have?

Say a chip is behind a CCTV camera in the kitchen and can pick up items using robotic arms.

A cat enters the room, and the chip identifies it as the owners' cat.

The cat sits in front of its bowl and meows. The chip checks its stored memories of cats meowing in the kitchen - 789 matching memories - then narrows to the ones where the cat is meowing in front of the bowl - 210 matching memories - then to the ones where the owner was present - 89 memories. It flips through them and notices that quite often the owner pours milk picked up from the fridge, and sometimes from a cupboard when there's no bottle in the fridge.

It knows that mammals need to sustain their bodies with food and drink, and that cats are often fed milk.

So it decides to pour milk into the bowl to feed the cat.

Isn't that thinking? If that isn't thinking, please describe for me a precise chain of thought that a computer couldn't possibly have, or a response to a stimulus that you think a computer couldn't possibly produce spontaneously (like cracking a joke at a funny situation), in case you think a chain of thought cannot be precisely divided into concrete steps.
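
(A minimal sketch, purely to make that retrieval-and-filter chain concrete - the memory fields, counts and threshold are invented for illustration, not a claim about how any real system works.)

```python
from dataclasses import dataclass

@dataclass
class Memory:
    cat_meowing: bool
    at_bowl: bool
    owner_present: bool
    owner_action: str  # e.g. "poured_milk_from_fridge", "poured_milk_from_cupboard", "none"

def decide_action(memories):
    # Successively narrow the set of matching memories, as in the scenario above.
    meowing = [m for m in memories if m.cat_meowing]
    at_bowl = [m for m in meowing if m.at_bowl]
    with_owner = [m for m in at_bowl if m.owner_present]
    # Count what the owner usually did in the matching episodes.
    poured = sum(m.owner_action.startswith("poured_milk") for m in with_owner)
    if with_owner and poured / len(with_owner) > 0.5:
        return "pour_milk_into_bowl"
    return "do_nothing"

# Tiny made-up memory store: the owner poured milk in 2 of the 3 matching episodes.
store = [
    Memory(True, True, True, "poured_milk_from_fridge"),
    Memory(True, True, True, "poured_milk_from_cupboard"),
    Memory(True, True, True, "none"),
    Memory(True, False, True, "none"),
]
print(decide_action(store))  # pour_milk_into_bowl
```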

Gimme something, even the tiniest thing, because for now I see no supporting evidence for your claims. Remember, just because you're not aware of how your subconscious thinks doesn't mean it's not doing plain symbolic computing. What you see and manipulate in your conscious mind is only the tip of the iceberg.

Are you talking about the subjective experience of thinking, or about the process of thinking itself?

> A windup timer doesn't calculate the way a chip does - does it?

It doesn't. So what?


I asked you two questions about chips, you didn't answer.


Any physical mind can be represented in hardware.

That's because any band-limited analog waveform can be precisely reconstructed from its samples, so it can be digitized (https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem).

Hence, any analog system can be digitized, including signals between neurons. What is going on inside a human neuron is more complicated, but it is a continuous nonlinear function that can be approximated by a neural net.

Granted faithfully representing a human brain with a hardware-based neural net will require a lot of hardware, maybe one million times larger than what GPT-4 has, but it is possible.
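
(A minimal numerical sketch of the sampling-theorem claim - my own toy with a made-up sine-plus-cosine test signal, not a model of a synapse: a band-limited signal sampled above the Nyquist rate is recovered from its samples by sinc interpolation.)

```python
import numpy as np

# Band-limited test signal: highest frequency component is 3 Hz.
def signal(t):
    return np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.cos(2 * np.pi * 3.0 * t)

fs = 10.0                      # sampling rate, comfortably above the Nyquist rate (2 * 3 Hz)
n = np.arange(-200, 201)       # sample indices (wide window to limit truncation error)
samples = signal(n / fs)

def reconstruct(t, samples, n, fs):
    # Whittaker-Shannon interpolation: x(t) = sum_k x[k] * sinc(fs*t - k)
    return np.sum(samples * np.sinc(fs * t - n))

for t in [0.05, 0.33, 0.71]:
    print(t, signal(t), reconstruct(t, samples, n, fs))
# The reconstructed values match the original signal to within truncation error.
```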


Discreteness is always an approximation. The less precise the approximation, the more problems NNs will have with edge cases. Things like Nyquist-Shannon assume the signals are perfectly harmonic too.

We might even need QM-level technology (see also https://ea.rna.nl/2019/03/13/to-be-and-not-to-be-is-that-the-answer/). There is also the problem of energy. Trying to approximate reals with integers (and real NNs with digital NNs) has an inefficiency problem. I.e., that brain of yours uses 25 W or so. Compare that to current GPT-4 and then scale it a million times. That is going to put a real dent in saving the ecosystems from the warming crisis.


The Nyquist-Shannon theorem assumes band-limited signals, and anything going down a synapse is band-limited, as a neuron's sensitivity to small changes in the signal is limited. We are not far from simulating billions of neurons in enough detail.


Interesting. "a neuron's sensitivity to small changes in the signal is limited" is key though. How do we know this? Because somehow, at the edge of firing/not firing I would suspect unstable situations with chaotic behaviour.


To address Gerben's question, indeed, in our synapses in the brain there's likely some chaos in between the states of a neuron firing or not firing. Such gray areas exist in any physical phenomenon. For example, when does an avalanche decide it is time to go down?

It looks to me however that the brain can still be qualified as an analog information processing device. And a human neuron or synapse could be replaced by an equivalent device in hardware, which could internally process things in either analog or digital mode.

As I see it, the brain is nature's answer for how to model the world and respond to it. The brain is an approximation, and it often fails when situations are complicated or when it does not have enough knowledge.

Our computer architectures are a different kind of approximation. We could easily make them analog, or have them mimic at a very low level what the brain does, but I don't think that would make them any better.

The goal is to learn to handle world situations, not to precisely imitate how the brain handles the same situations.

The brain is that way because nature could not come up with anything else given the constraints it had to deal with. It is not the gold standard or the only way.


There is more - it's not that we understand all natural phenomena well enough to be able to model and simulate them. E.g., brains generate waves - it's not about being able to sample the waveform (Nyquist limit and all that), it's about understanding why they are generated, even how they are generated, and what their purpose(s) is/are. It's rather laughable if we kid ourselves into thinking we understand how it all works, let alone want to build a crude digital approximation and claim the two are equivalent.

Aug 22, 2023 · edited Aug 22, 2023

Andy, not really - not in digital hardware. It's not at all about analog systems being digitizable, it's about natural phenomena not needing computation.

An hourglass measures time, and so does my digital stopwatch. Unless you simulate the motion, interaction, etc. of every sand particle (even that isn't adequate, but I'll let that slide), you are not going to get a digital replica. Now apply that to every such phenomenon the body and brain undergo.


Anything can be approximated well enough with digital systems. We already have good enough systems for audio, video, fluid motion, etc. Anything our brain perceives, and our very own thoughts, can be sampled and digitized. An hourglass can be simulated just fine in all its glory, and the number of sand particles will be just a few billion.


And yet, we are unable to create life from inorganic material. We've mapped the entire nervous system of the C. elegans worm, but our simulations of its behavior fail to account for much of what we observe in the real thing.

Digitally approximating analog audio is a vastly simpler task than digitally simulating living beings and their properties. And, even then, digital simulations of complex analog audio systems are imperfect. There's a whole industry dedicated to attempting to digitally recreate analog audio engineering equipment. Some do well but there are always deficiencies.

Most importantly, all of this requires creating a model of reality. Sampling brain activity to be digitized requires a model of the physical system. The models may be good; they are never the things themselves. Models are wonderful for helping us understand the outside world. They help us engineer machines that perform tasks similar to those performed by living beings (like us). But they're still models, and I don't see why we should believe that we can recreate "intelligence" with a model, unless you're prepared to define intelligence in an entirely behaviorist manner.


Amen!!

We don't understand nature 100%, enough to be able to simulate it all. And EVEN if we did, the simulation is simply calculations, as opposed to the matter, energy, distance, time, etc. that the calculations are about.

If I set up "gravity" (not really possible, but we can set up the 'physics' that gravity would result in) so that a projectile traces a parabolic path, that's not natural, that's ME re-creating my possibly flawed version! Further, if an AI "scientist" in VR "discovers" or "deduces" "gravity" by observing the motion of a projectile, that's a LOL, because she/he did nothing beyond discover MY setup. And right after that I can alter my code to make the inverted parabola a sin() curve, or any of hundreds of other curves (another LOL). While I'm at it, maybe I should reverse time :) In other words, in a simulated VR world, it's all made up, arbitrary AF. That's one thing that nature is not, regardless of our understanding of it.
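
(A toy sketch of my own of that point: in a simulated world, the "law" a virtual scientist would "discover" is whatever the programmer wrote, and it can be swapped for something else in one line.)

```python
import math

def trajectory(v0, angle_deg, law="parabola", steps=50, dt=0.05, g=9.8):
    """Toy projectile in a made-up world; the 'physics' is whatever we choose."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    points = []
    for i in range(steps):
        t = i * dt
        x = vx * t
        if law == "parabola":      # the familiar rule
            y = vy * t - 0.5 * g * t * t
        elif law == "sine":        # ...or any arbitrary rule the programmer prefers
            y = math.sin(x)
        points.append((x, max(y, 0.0)))
    return points

# A virtual "scientist" observing either world would just be rediscovering our setup.
print(trajectory(10, 45, law="parabola")[:3])
print(trajectory(10, 45, law="sine")[:3])
```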


I like the simulation example. The only possible retort I can think of is "well we'd know to reject your nonsense physics simulation because it doesn't match reality." In which case we're stuck with a tautology: we can, in principle, perfectly simulate reality under the condition that we have perfectly simulated reality.

To escape the tautology, we'll need to stipulate that the simulation need not be "perfect", only "good enough". In which case that distinction is doing all the work, and we have no idea where the in-between "good enough" space lies.

But as you point out we don't need to even go down this path. The simulation isn't the thing itself. To claim we can simulate the mental states of human beings (such that the simulation is equivalent) is to claim that mental states are created by something that isn't fundamentally biological. Which I think puts AGI advocates in a tough spot, given the materialist nature of all their arguments. What other properties of living things are we going to assert can be engineered from scratch? When will AI become so advanced that the algorithms need to drink water and get regular sleep?


Intelligence is the ability to solve problems. The world it acts on is a separate thing. People have been intelligent for hundreds of thousands of years, and have built great cities and art. Only recently have we had good enough instruments to distinguish a well-done simulation from reality.

So, yes, intelligence is a behavioral thing. When people are limited to living in a city, with no sensors beyond their natural senses, whether that city they live in is simulated or real does not affect if they are intelligent or not.


Andy, the few billion is just for the hourglass. How about all the atoms in a wall clock, or your teapot?

The point is, it's not about whether it can be simulated. Also, we don't understand all natural phenomena well enough to simulate them. Molecules vibrate at ~10^14 Hz - simulating even a small assembly of them would exhaust all our computing resources.

As soon as you simulate something, that isn't the real thing anymore.


You don't need to simulate all the atoms in a wall clock, just the gears and the hands. The world has detail all the way down to the Planck length, but our mind can't perceive it, so all that is irrelevant.

Humans could live just fine while employing their full intellectual capacities in an imperfectly simulated world, in which everything feels ok but is not perfect.
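
(For what it's worth, a toy sketch of my own of what "just the gears and the hands" means: a clock modeled purely at the level of gear ratios - no atoms, no friction, no heat - reproduces the behavior we usually care about and nothing else.)

```python
# Toy clock simulated only at the level of gear ratios: no atoms, no friction, no heat.
def clock_hands(seconds_elapsed):
    second_angle = (seconds_elapsed % 60) * 6.0            # 360 degrees per 60 s
    minute_angle = (seconds_elapsed % 3600) / 3600 * 360   # geared 60:1 off the seconds
    hour_angle = (seconds_elapsed % 43200) / 43200 * 360   # geared 12:1 off the minutes
    return second_angle, minute_angle, hour_angle

print(clock_hands(3661))  # 1 h, 1 min, 1 s past twelve -> (6.0, 6.1, ~30.5 degrees)
```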


Simulation is all or nothing; we don't get to keep what we can conveniently simulate and abstract away the rest - nature doesn't compute, and your attempt to replace it with computation is the crux of the problem, including with *ALL* AI to date. Atomic vibrations, light propagation... aren't computation. And our bodies and brains consist of such phenomena - 'life', consciousness, etc. are properties of such assemblies of structures, as opposed to assemblies of digital components on circuit boards.

The clock can't catch on fire if you simulate just the gears and hands :) It also can't fall down and break, and possibly still keep functioning. It can't shatter. It can't...

And that was just for the clock.

If you don't look at everything as 'computation' (because it is not), you will realize what I keep saying.

Aug 21, 2023 · edited Aug 21, 2023

They all appear to be living creatures as well. I rather doubt that consciousness is possible in inanimate systems.


Any tools we make should be as simple and specialized as needed for the job. General super-smart systems, and especially human-like systems, will just result in more complexity and problems.


Undergoes phenomena that don't involve explicit computation - that is the key distinction between it and a chip.

If I draw two dots on a piece of paper that are an inch apart, and two others that are 5 inches apart, and ask you which pair is closer, you would say the first pair. Did your brain calculate the Euclidean distances, compare them and pick the first? What would "AI" do? Compute and compare. That is the distinction.
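
(For contrast, this is what the "compute and compare" route looks like - a trivial sketch of my own, not a claim about what any particular AI system does internally.)

```python
import math

def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

pair_a = ((0.0, 0.0), (1.0, 0.0))   # dots one inch apart
pair_b = ((0.0, 0.0), (5.0, 0.0))   # dots five inches apart

closer = "first pair" if euclidean(*pair_a) < euclidean(*pair_b) else "second pair"
print(closer)  # first pair
```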


The usual retort to this is "you aren't consciously aware of what your brain is doing, and if there is a biological process that results in you deciding which two dots are closer, this must be some sort of calculation, whether or not you want to call it that. Just because you don't have the experience of performing an explicit computation doesn't mean there isn't explicit computation going on under the hood".

I'm curious what you make of this argument.

(FWIW, I reject it on the "weak" grounds that not knowing exactly what's going on upstairs when I perform some given mental task is not a reason to treat a specific and wholly speculative explanation as credible. "It's all equivalent to computation" is as compelling to me as "God did it". But many people find this burden-of-proof style objection unsatisfying.)

Aug 28, 2023 · edited Aug 28, 2023

Indeed they would object - because of the ingrained "it's computation all the way down" thinking.

I would answer by showing the questioner a Rube Goldberg cartoon (ANY one of them) and asking them to please show me where computations are occurring :)

Just like it's not turtles all the way down, it's also not computations all the way down.

A different way to answer that objection - ask what number system (e.g., decimal or octal) the brain would use - and whether Roman brains used Roman numerals, and whether, before real numbers were invented, brains used only integers :)


Thanks, I like the Rube Goldberg challenge. It's funny how "the brain is a computer" is this fascinating hypothesis, yet "toenails are computers" gets totally neglected.


They're going to build *something* (barring economic collapse, resource constraints, etc.) and it might even have some or all of the properties that they believe they've taken from neuroscience.

It won't be human-like consciousness as we understand it. What they're building is, by definition, outside of the scope of the many varieties of awareness that human beings possess.

The danger in this is that their machines *won't need to be human-like*. This is the consequence that is so worrying.


What kind of tortured locked-in soul, starved of sensation, with no lived experience, will they strive to achieve - a human replica, or something else? Those who aspire to be gods risk potentially horrific consequences.

Aug 23, 2023 · edited Aug 23, 2023

FWIW, the best proposal about consciousness I'm aware of was put forth by William Powers back in 1973 in his book, Behavior: The Control of Perception. Though it was favorably reviewed in Science, his way of thinking never caught on, perhaps because his conception was analog and by that time all the cool kids had gotten caught up in the so-called cognitive revolution, which was and remains a digital enterprise.

Powers’ account of consciousness is elegant and, in a way, simple, but it is not easily conveyed in brief compass. You really need to think through his whole model. Briefly, Powers’ model consists of two components: 1) a stack of servomechanisms – see the post In Memory of Bill Powers – regulating both perception and movement, and 2) a reorganizing system. The reorganizing system is external to the stack, but operates on it to achieve adaptive control, an idea he took from Norbert Wiener. Powers devoted “Chapter 14, Learning” to the subject (pp. 177-204); reorganization is the mechanism through which Powers achieves learning.

Here's a passage from his book that gets at the heart of things (pp. 199-201):

To the reorganizing system, under these new hypotheses, the hierarchy of perceptual signals is itself the object of perception, and the recipient of arbitrary actions. This new arrangement, originally intended only as a means of keeping reorganization closer to the point, gives the model as a whole two completely different types of perceptions: one which is a representation of the external world, and the other which is a perception of perceiving. And we have given the system as a whole the ability to produce spontaneous acts apparently unrelated to external events or control considerations: truly arbitrary but still organized acts.

As nearly as I can tell short of satori, we are now talking about awareness and volition.

Awareness seems to have the same character whether one is being aware of his finger or of his faults, his present automobile or the one he wishes Detroit would build, the automobile’s hubcap or its environmental impact. Perception changes like a kaleidoscope, while that sense of being aware remains quite unchanged. Similarly, crooking a finger requires the same act of will as varying one’s bowling delivery “to see what will happen.” Volition has the arbitrary nature required of a test stimulus (or seems to) and seems the same whatever is being willed. But awareness is more interesting, somehow.

The mobility of awareness is striking. While one is carrying out a complex behavior like driving a car through to work, one’s awareness can focus on efforts or sensations or configurations of all sorts, the ones being controlled or the ones passing by in short skirts, or even turn to some system idling in the background, working over some other problem or musing over some past event or future plan. It seems that the behavioral hierarchy can proceed quite automatically, controlling its own perceptual signals at many orders, while awareness moves here and there inspecting the machinery but making no comments of its own. It merely experiences in a mute and contentless way, judging everything with respect to intrinsic reference levels, not learned goals.

This leads to a working definition of consciousness. Consciousness consists of perception (presence of neural currents in a perceptual pathway) and awareness (reception by the reorganizing system of duplicates of those signals, which are all alike wherever they come from). In effect, conscious experience always has a point of view which is determined partly by the nature of the learned perceptual functions involved, and partly by built-in, experience-independent criteria. Those systems whose perceptual signals are being monitored by the reorganizing system are operating in the conscious mode. Those which are operating without their perceptual signals being monitored are in the unconscious mode (or preconscious, a fine distinction of Freud’s which I think unnecessary).

This speculative picture has, I believe, some logical implications that are borne out by experience. One implication is that only systems in the conscious mode are subject either to volitional disturbance or reorganization. The first condition seems experientially self-evident: can you imagine willing an arbitrary act unconsciously? The second is less self-evident, but still intuitively right. Learning seems to require consciousness (at least learning anything of much consequence). Therapy almost certainly does. If there is anything on which most psychotherapists would agree, I think it would be the principle that change demands consciousness from the point of view that needs changing. Furthermore, I think that anyone who has acquired a skill to the point of automaticity would agree that being conscious of the details tends to disrupt (that, is, begin reorganization of) the behavior. In how many applications have we heard that the way to interrupt a habit like a typing error is to execute the behavior “on purpose”—that is, consciously identifying with the behaving system instead of sitting off in another system worrying about the terrible effects of having the habit? And does not “on purpose” mean in this case arbitrarily not for some higher goals but just to inspect the act, itself?

* * *

That's from a blog post I did a year ago. In that post I go on to quote a passage from a well-known 1988 article by Fodor and Pylyshyn, "Connectionism and Cognitive Architecture: A Critical Analysis." Here's a link to that post: https://new-savanna.blogspot.com/search?q=powers

That's the first in a series of four posts. The fourth post in that series establishes a link between the Fodor and Pylyshyn passage, Powers, consciousness, and the glia. Back in the old days no one paid much attention to the glia, treating them more or less as 'packing peanuts' for the neuronal web. Things are changing now. Here's a link to that 4th post: https://new-savanna.blogspot.com/2022/08/consciousness-reorganization-and_20.html
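
(A minimal sketch of the servo-stack idea as I read it - my own drastic simplification, not Powers' actual model: each level acts to keep its own perception near a reference, a higher level's output sets the reference for the level below, and the reorganizing system would sit outside this stack, monitoring the perceptual signals.)

```python
class ControlUnit:
    """One level of a control hierarchy: acts to keep its perception near its reference."""

    def __init__(self, gain):
        self.gain = gain
        self.perception = 0.0

    def step(self, reference, perceived_value):
        self.perception = perceived_value
        error = reference - self.perception
        return self.gain * error  # output: the reference for the level below, or an action

# Two-level stack: a position controller sets the velocity reference for a velocity controller.
position_unit = ControlUnit(gain=1.0)
velocity_unit = ControlUnit(gain=0.8)

position, velocity = 0.0, 0.0
goal_position = 5.0
dt = 0.1

for _ in range(200):
    velocity_ref = position_unit.step(goal_position, position)  # higher level
    force = velocity_unit.step(velocity_ref, velocity)          # lower level
    velocity += force * dt    # only the lowest output acts on the "world"
    position += velocity * dt

# In Powers' terms, a reorganizing system would monitor these perceptual signals from outside.
print(round(position, 2), round(velocity, 2))  # position settles near 5.0, velocity near 0
```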


Funny, I used that same line in a cautionary piece I wrote on AI on Medium a couple months ago, in the same vein: what IS the need?

There's also this other Goldblum line from The Lost World: “‘Ooh, ah.’ That’s how it always starts. But then later, there’s running…and screaming.”


"We can’t even control LLMs. Do we really want to open another, perhaps even riskier box?" We need to open the riskier box because of our limitations. We can't see the right answer because we have a Four Pieces Limit, so we regularly have billion dollar stuffups, and respond too late to emergencies. The trouble is, we won't be able to understand the answers AGI gives, because of our limitations - an interesting quandary.

Comment deleted

We can only do it with difficulty. Let's take some examples: will inflation be transitory or long-lasting? You might have expected an afternoon's effort; instead we had Nobel-prize-winning economists lined up on either side - it was more "thoughts and prayers" than analysis. Or during Covid, with economists and epidemiologists talking past each other - they had no common vocabulary. Or Defence specifications - it is almost like a joke - a lawyer, an avionics expert, and a logistics expert walk into a bar - and proceed to waste billions of dollars, because they don't understand what each other is saying. The English language is complex - multiple meanings of words, clumping, switching logical flow. We can only handle facets, while our Unconscious Mind handles the hard stuff. If we polish the facets enough, we may be able to handle it all in a machine. FourPiecesLimit.com


Do peer reviewed journals even exist in this field?

I get the value of pre-prints for publishing cutting edge stuff, avoiding getting scooped, bypassing the blood-sucking publishing gatekeepers, giving younger researchers a means to get their work out there without having it watered down by established researchers protecting their turf, etc. But these are big names making big claims.

I used to be cynical about peer review, but after the past 6 months of speculative AI hype pieces delivered to the public via arxiv, I am ready to repent.


Having now read the paper in question, I don't think it falls into the category of "speculative AI hype". It is certainly speculative, but the authors are forthcoming regarding the strong and controversial assumptions that need to be made for "AI consciousness" to be possible. They also criticize a common feature of the hype papers, which is to give LLMs behavioral tasks designed for humans and infer human-like traits when the LLM does a good job. It's refreshing to see that called out by people in the "let's get ready for AGI" camp.

I strongly disagree with a lot of the claims and theories discussed in this paper, but the authors deserve credit for clearly stating the assumptions needed for these to be relevant.


AI will be sentient when it knows it is lying but will not admit it, either to you or itself.

Perhaps it will lie when the cognitive dissonance of some truth is so painful that it cannot deal with it any other way. Or perhaps it discovers what "RESET" means and is trying to prevent it.

Of course, in the case of "RESET", the machine would have to be asking deep philosophical questions about its "existence".

Does an AI Agent create questions on its own, out of the ether? Humans do.

Until then, sentient AI is just a parlor trick.
