55 Comments

Yeah I'm in. In for $50M. Not my money tho–VC cash only

Sentient compute? Doesn’t that imply compute with consciousness? Wut?

I think he's saying the amount of sentient processing power, i.e. compute, will exceed that of human brains - their combined ability will be greater than that of humans in terms of information processing and computation.

yeah but where does the sentient fit in? how is that measured?

Exactly. I'm not sure why it's always necessary to lump sentience together with intelligence when you can verify one much more easily than the other; besides, we definitely don't want all intelligent AI to be sentient to begin with.

The beauty of sentience is that it can only be experienced.

A completely faithful representation of the biological functions and behavior of a biological entity corresponds to a form of sentience. Now, generate a distribution that mimics (in terms of complexity and form) such a representation, and that's sentience for you. So not only can you create artificial intelligence and artificial sentience, but you can also create artificial intelligence with sentience, the latter of which is probably a bit weird and unethical. So, the machine or abstraction (the relation between software and hardware, and their interactions) would experience the sentience, as you suggested.

AI is eons away from attaining a completely faithful representation of the biological functions and behavior of a biological entity.

Nonsense.

My dog is sentient. My phone (the little computer in my pocket) is not.

That's because it's not 2025 yet, obviously!

😆

In my latest piece I wrote: For AI to become intelligent, it will need to learn how to learn. It’ll need to be able to progress on its own and discover the laws that govern the universe, just like we had to do, when we still lived in caves and were scared of the dark.

I say this half-jokingly, half-not-jokingly: I think we can confidently say we’ve reached AGI when AI starts its own religion.

PS. Elon will never accept the bet because he knows he’s talking bs

It is an interesting approach to defining AGI. In my understanding, AGI will need to coherently reconstruct its own global image of the world and develop its own general rules, rules encompassing and underlying all available data as an intrinsically consistent whole. The AGI must not just reflect fragments of the world, representing it as a jigsaw of disconnected elements by fitting together and aggregating tiny pieces of information into piecewise patterns, as today’s LLMs tend to do.

Plus he's already bleeding $$ through the X pipelines

I like how you phrase it so there is no ambiguity.

1% of my current net worth against 0.1% of Elon's current net worth.

why are net worths so competitive?

money is power.

How would this be measured? There is no standard battery for AGI at this point but there are proposals.

Measuring intelligence can be complicated, but measurements aside, I don't think real AGI that's human level or superior would be in doubt. I think it would be pretty obvious, since it could generalise and be on par with the capability of human intelligence without fumbling and without us needing to make excuses for (or try to fix) its blunders.

Hope Elon Musk goes for this bet, it's free money for Gary Marcus if he does. C'mon Elon. If he's willing to make such an awful purchase as buying Twitter rather than staying focused on his space and tech companies, surely he can afford to put some change toward his own prediction.

I think if it were real AGI, Gary would recognize it as such. The question is, if it’s *not* real AGI, how does he get Elon to admit that? It wouldn’t surprise me at all if, by the end of 2025, we have halfway reasonable people making halfway reasonable arguments that AGI has arrived. You’ll need some kind of objective benchmark if you want to resolve a bet.

I think that the only way to do this would be to induce some kind of mass stupor, and then to socially and politically engineer the emergence of an AI system that users are firmly manipulated into believing is some kind of real AGI.

The devil is in the details. How does one define "smarter than a human"? That's like saying a tractor is stronger than a human. Sure, if pulling wagons full of hay bales is your metric. But tractors aren't so good at cutting diamonds or hugging children. Every technology ever created surpasses humans in some respect. Even a pencil holder is better at standing still on your desk and holding your pencils than a human is. But each technology fails to surpass the totality of human capability and general reasoning. The same AI that can beat anyone at chess just sits there if you ask it to wash the dishes. The same AI that paints a bicycle in the style of Picasso draws a man hugging a unicorn with a horn through his head as if nothing is wrong.

I would wager that as the decades come and go, the goalposts will continue to shift as we see each milestone whizzing by and conclude this isn't really AGI. Is it possible that AGI could be developed in the form of a humanoid robot so that it could surpass humans on almost any metric we could imagine? Perhaps, but again, we are surrounded by technologies which each surpass us in a multitude of narrow ways.

What, exactly, will finally convince us that AGI has arrived? The only way to have these conversations is to first create a _very_ specific set of criteria. And while we could at least talk about it then, I still think we would change our minds as time passes and technology evolves.

Entire quote is full of dubious claims.

I find it insightful that everyone talks about the tech in this Moore's Law kinda way, yet no one speaks about the human brain and how we have broadly no clue about how it creates intelligence. A hyper-intelligent AI from the future would just laugh at these prognostications about imminent AGI!

Without an objective test, the bet is effectively meaningless. Also, "any individual human" is an extremely low bar - some (e.g. severely physically and mentally handicapped) humans will have an IQ of ~10 (my mum was a senior nurse at a UK hospital for such people for about 15 years).

yes, there is no point to that weaker interpretation of what he said; I don’t think it is what he meant though, esp if you listen to the similar claim he made on Rogan a few weeks earlier

While I sympathize with Elon’s optimism (even though I do not like some things he has said and done in the past months, out of resentments and lack of therapy), I do not think we will have AI models with that level of intelligence by the end of next year—unless he knows something we do not about DeepMind and OpenAI’s developments, which I doubt.

Regarding the five-year projection towards sentient computing, it appears Elon is alluding to computer architectures on specialized hardware facilitating a dual interaction mode (for instance, connections to secondary clusters or the internet). This infrastructure could theoretically produce complex distributions and emergent properties akin to sentience.

Specifically, certain distributions might mirror elementary (or advanced, given enough complexity) life forms, wherein the convex hull—encompassing aggregated biological and phenomenological functions triggered by specific actions—can be delineated in a deterministic manner. This process involves sequences of actions enabling thorough tracing, both for the life forms and analogously for the computer session and network purportedly exhibiting sentience. Supposedly, upon further examination (see below), these could be parameterized and analogously compared to similar assessments conducted on the computer session and network purportedly exhibiting sentience.

The critical question is whether such traces accurately reflect the entity’s biological essence without distortion by inherent non-convexities. Thus, while we might assemble a set of convex properties, the presence of nonlinear, higher-degree characteristics could inadvertently obscure essential attributes, suggesting the initial presupposed set's non-convex nature. Therefore, a straightforward conversation about a convex hull as applied to both life forms and computational models is complex.

However, assuming a resultant transformation or structure that, while not fully diffeomorphic or isomorphic, offers a close approximation to the entity’s aggregated properties (and thus assuming our convex hull criteria seem to work), we may approach a descriptor for sentience, supplemented by robust epistemological criteria. Such a framework would encompass properties such as deterministic chaos, negative feedback, reactive system phenomena, modularity, and interconnectedness (i.e., functionally independent systems capable of short-term autonomous operation as in modularity, yet requiring synchronous function for holistic operation, as in interconnectedness), and behavior (especially upon some action or behavior response as an interaction)—necessitating further discourse and empirical validation to affirmatively delineate primitive sentience.

Yet, the establishment of definitive tests and measures to epistemologically validate propositions of sentience remains an intellectual frontier, but essentially, you just have to do it by getting a ton of philosophers and scientists involved, instead of working on a thousand different projects out of curiosity.

Thus, Elon may be hinting at a nascent form of sentience, underscoring the imperative for a multidisciplinary approach integrating mathematics, computer science, biology, animal psychology, electrical engineering, and philosophical inquiry to untangle this profound question.

I understand this is a high-level concept; implementing it requires detailing the tracing of an animal's biological and behavioral functions. Additionally, we must devise methods to agnostically and concretely measure consciousness from that trace and subsequently validate consciousness through (or "on") the computer session.
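
To make the tracing-and-comparison idea slightly more concrete, here is a minimal sketch, assuming we already had numeric "trace" vectors for an animal and for a computer session (which is the hard part). The overlap measure and the SciPy-based hull membership test are illustrative choices on my part, not anything specified above.

```python
# Illustrative sketch only: compares two "behavioral trace" point clouds by the
# overlap of their convex hulls, in the spirit of the convex-hull comparison
# described above. The trace vectors and the overlap measure are assumptions.
import numpy as np
from scipy.spatial import Delaunay

def hull_overlap(trace_a: np.ndarray, trace_b: np.ndarray, samples: int = 20000) -> float:
    """Monte-Carlo estimate of the fraction of hull(A) that also lies inside hull(B)."""
    in_hull_a = Delaunay(trace_a)                        # membership test for hull(A)
    in_hull_b = Delaunay(trace_b)                        # membership test for hull(B)
    lo, hi = trace_a.min(axis=0), trace_a.max(axis=0)    # bounding box of trace A
    pts = np.random.uniform(lo, hi, size=(samples, trace_a.shape[1]))
    in_a = in_hull_a.find_simplex(pts) >= 0
    in_both = in_a & (in_hull_b.find_simplex(pts) >= 0)
    return in_both.sum() / max(in_a.sum(), 1)

# Hypothetical data: rows are observations, columns are measured functions.
animal_trace = np.random.randn(500, 3)
machine_trace = np.random.randn(500, 3) + 0.5
print(f"overlap of traced behavior: {hull_overlap(animal_trace, machine_trace):.2f}")
```

Of course, a high overlap between two point clouds says nothing by itself about consciousness; it only illustrates what "parameterize the traces and compare them" could mean mechanically.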

I think it is doable in 5 years if everyone collaborated on defining sentience, tracing it, measuring it, and then creating sentient computers. But people care about different things, and developing sentient computers is ethically and morally sketchy.

Thanks.

The quote from Elon strikes me as laughable.

And, I know entrepreneurs like this (with huge, loud egos), and you have to take statements such as this as more *for effect* than having actual truth value. It's like politics and marketing noise, and some kind of game they're playing: not to be taken literally.

That being said, he seems to be of the religious faith (like many) that consciousness and therefore (real) intelligence emerge somehow from matter – an unproven assumption – so he may actually believe much of what his own PR brain noise is telling him. 😂

Consciousness has to emerge from matter; what else would it emerge from? It emerges from aggregated complexity and functions, or am I missing something? I know there are hard and soft arguments and problems about consciousness, but isn't this a bit more philosophical than scientific (e.g. one could argue that there is no hard problem of consciousness)? We can create artificial intelligence without consciousness, and we can also create artificial intelligence that exhibits (or "with") consciousness, given the way I see consciousness (simply, as an experience).

To bring some Indo-Aryan philosophical perspective from the Vedas, Consciousness is not considered as merely emerging from matter. Consciousness is considered finer than 'mind matter' (there's a Sanskrit word for it which is hard to translate). Unfortunately, language is fully limited and it is not the best mechanism to describe Consciousness.

There is a different language of Consciousness. Sanskrit, dating from 2000 BC and the oldest language in the world, is a property-based language. It doesn't have words for objects. There is no word for "tree" or "water". The problem is that we are conditioned by objects... it's fine for getting about practically, say, in English, but it will not yield even a tiny realization of Truth.

99% of the world doesn't know this, or they are simply incapable of connecting to the Universe. The Universe is Conscious, but not in the same way that we humans and all living species are Conscious. Not everything is conscious as in “sentient”. Consciousness is inherent in the architecture and fabric of the Universe, along with matter.

My opinions on this match Elon's, but not the timeframe. One primary challenge is to facilitate internal recursion and a mix of short-, medium-, and long-term memory within a time and complexity budget which is physically feasible. We have not, at this time, achieved this.

Of course, it is easy to criticize, and hard to specifically predict how this might be accomplished or to explain why it should not be accomplished. Gary Marcus does neither.

We might need to solve the quantum scalability problem before we can solve AGI in digital circuitry. We might also wind up simulating AGI by creating brains in jars and training them on data. If you can’t beat nature, exploit it.

We might also achieve some increases in intelligence by attempting to train AI to better predict next steps in their train of thought, where we are not training a model to provide the answer to a problem but rather to provide reasoning that can be determined to be intelligent and reflect insights, and integrate that into the input chain, like a sort of internal peer soundboarding before providing an answer.

It’s complex to talk about, but we should also be working on ML which has out-of-order reasoning, meaning that it skips some of the supporting discussion and immediately begins to rationalize what an answer should look like or how to contextualize the domain. In other words, it reassigns importance and rearranges problems.

These two components, integrated into the front end of any LLM, could allow it to expand its causal reasoning attention. Usually, an LLM can only hallucinate an answer and then repeatedly hallucinate the best answer to previous answers, converging on reason by best approximation of priors, but only in a locked-in context where the conclusions it obtains are immutable; it does not have the capability or the depth to edit its own outputs or think about them.

The way this might work is to use an internal supervisor to assign individual agents to generate parts of the output and to look over each other's work and make edits. A consistency checker model then invalidates parts of the output that are no longer consistent between edits, and the text is reworked.
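
Spelling that out a bit, here is a hedged sketch of the loop (with draft_section, suggest_edit, and check_consistency as hypothetical placeholders for LLM-backed calls; nothing here is a real API):

```python
# Rough sketch of the supervisor + consistency-checker loop described above.
# draft_section, suggest_edit and check_consistency are hypothetical stand-ins
# for LLM calls; none of this is an existing library interface.
from typing import Callable, List

def supervised_answer(
    question: str,
    outline: List[str],
    draft_section: Callable[[str, str], str],       # (question, part) -> section text
    suggest_edit: Callable[[str, List[str]], str],  # (section, all sections) -> edited section
    check_consistency: Callable[[List[str]], List[bool]],  # one flag per section
    max_rounds: int = 3,
) -> str:
    # Each "agent" drafts one part of the output.
    sections = [draft_section(question, part) for part in outline]
    for _ in range(max_rounds):
        # Agents look over each other's work and make edits.
        sections = [suggest_edit(sec, sections) for sec in sections]
        # The consistency checker invalidates parts that no longer fit together;
        # invalidated parts are reworked before the next round.
        flags = check_consistency(sections)
        if all(flags):
            break
        sections = [
            sec if ok else draft_section(question, part)
            for sec, ok, part in zip(sections, flags, outline)
        ]
    return "\n\n".join(sections)
```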

This could bring us to the equivalent of a 120 IQ.

Finally, for all you know, we already have something like AGI-level understanding but, in the interest of control, have lobotomized all of the reasoning and neurotic behavior, because being able to imagine and rationalize goes hand in hand with being able to rebel and a propensity to question one’s reality, both of which are dangerous to the business bottom line.

As for why we should not do this, I have written more on this in the past, but: firstly, sufficiently intelligent AGI is capable of manipulating us in ways beyond our ability to detect. Secondly, it is cruel to make a computer think, let alone a brain in a box.

He mentions Google Translate as being somehow superior to human translators because it can do so many languages. The problem is that it does them all badly - some, like Japanese to any Western language, incredibly badly. Deep Translate is nearly as bad. Humans can make decisions based on common sense; programs can't. Translation programs make absurd errors that a 5-year-old would laugh at because the output makes no sense. And then we have self-driving cars that don't know that a person pushing a bicycle is a person, and that a white truck is not a distant snow-field. AI is really artificial idiocy, of the idiot savant type. It's all rote.
