52 Comments
Apr 9 · Liked by Gary Marcus

Yeah I'm in. In for $50M. Not my money tho–VC cash only

Apr 9 · Liked by Gary Marcus

Sentient compute? Doesn’t that imply compute with consciousness? Wut?


In my latest piece I wrote: "For AI to become intelligent, it will need to learn how to learn. It'll need to be able to progress on its own and discover the laws that govern the universe, just like we had to do when we still lived in caves and were scared of the dark."

I say this half-jokingly, half not: I think we can confidently say we've reached AGI when AI starts its own religion.

P.S. Elon will never accept the bet because he knows he's talking BS.

Apr 10 · Liked by Gary Marcus

I like how you phrase it so there is no ambiguity.


1% of my current net worth against 0.1% of Elon's current net worth.


How would this be measured? There is no standard battery for AGI at this point but there are proposals.

Apr 9 · edited Apr 9

I think that the only way to do this would be to induce some kind of mass stupor, and then to socially and politically engineer the emergence of an AI system that users are firmly manipulated into believing is some kind of real AGI.


The devil is in the details. How does one define "smarter than a human"? That's like saying a tractor is stronger than a human. Sure, if pulling wagons full of hay bales is your metric. But tractors aren't so good at cutting diamonds or hugging children. Every technology ever created surpasses humans in some respect. Even a pencil holder is better at standing still on your desk and holding your pencils than a human is. But each technology fails to surpass the totality of human capability and general reasoning. The same AI that can beat anyone at chess just sits there if you ask it to wash the dishes. The same AI that paints a bicycle in the style of Picasso draws a man hugging a unicorn with a horn through his head like nothing is wrong.

I would wager that as the decades come and go, the goalposts will continue to shift: we will see each milestone whizzing by and conclude that this isn't really AGI. Is it possible that AGI could be developed in the form of a humanoid robot that surpasses humans on almost any metric we could imagine? Perhaps, but again, we are already surrounded by technologies that each surpass us in a multitude of narrow ways.

What, exactly, will finally convince us that AGI has arrived? The only way to have these conversations is to first create a _very_ specific set of criteria. And while we could at least talk about it then, I still think we would change our minds as time passes and technology evolves.


The entire quote is full of dubious claims.


I find it telling that everyone talks about the tech in this Moore's Law kinda way, yet no one speaks about the human brain and how we have broadly no clue how it creates intelligence. A hyper-intelligent AI from the future would just laugh at these prognostications about imminent AGI!


By now we should have had people on Mars. That's how much Elon's promises are worth.

Apr 9 · edited Apr 9

Without an objective test, the bet is effectively meaningless. Also, "any individual human" is an extremely low bar: some (e.g. severely physically and mentally handicapped) humans will have an IQ of ~10 (my mum was a senior nurse at a UK hospital for such people for about 15 years).


While I sympathize with Elon's optimism (even though I do not like some things he has said and done in the past months, out of resentment and a lack of therapy), I do not think we will have AI models with that level of intelligence by the end of next year, unless he knows something we do not about DeepMind's and OpenAI's developments, which I doubt.

Regarding the five-year projection towards sentient computing, it appears Elon is alluding to computer architectures on specialized hardware with a dual interaction mode (for instance, connections to secondary clusters or the internet). This infrastructure could theoretically produce complex distributions and emergent properties akin to sentience.

Specifically, certain distributions might mirror elementary (or, given enough complexity, advanced) life forms, for which a convex hull encompassing the aggregated biological and phenomenological functions triggered by specific actions can be delineated deterministically. That requires sequences of actions that allow thorough tracing, both for the life form and, analogously, for the computer session and network purportedly exhibiting sentience. These traces could then be parameterized and compared against one another.

The critical question is whether such traces reflect the entity's biological essence without distortion by inherent non-convexities. We might assemble a set of convex properties, yet nonlinear, higher-degree characteristics could obscure essential attributes, suggesting the presupposed set is in fact non-convex. So a straightforward conversation about a convex hull applied to both life forms and computational models is not simple. However, if there is a transformation or structure that, while not fully diffeomorphic or isomorphic, closely approximates the entity's aggregated properties (so that the convex-hull criteria seem to work), we may approach a descriptor for sentience, supplemented by robust epistemological criteria.

Such a framework would encompass properties like deterministic chaos, negative feedback, reactive-system phenomena, modularity and interconnectedness (functionally independent subsystems capable of short-term autonomous operation, yet requiring synchronous function for holistic operation), and behavior (especially responses to interactions). All of this needs further discourse and empirical validation before we could affirmatively delineate even primitive sentience. Establishing definitive tests and measures to epistemologically validate claims of sentience remains an intellectual frontier, but essentially you just have to do it, by getting a ton of philosophers and scientists involved instead of working on a thousand different projects out of curiosity.

Thus, Elon may be hinting at a nascent form of sentience, underscoring the imperative for a multidisciplinary approach integrating mathematics, computer science, biology, animal psychology, electrical engineering, and philosophical inquiry to untangle this profound question.

I understand this is a high-level concept; implementing it requires detailing the tracing of an animal's biological and behavioral functions. Additionally, we must devise methods to agnostically and concretely measure consciousness from that trace and subsequently validate consciousness through (or "on") the computer session.
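
To make the trace-and-compare idea slightly more concrete, here is a toy sketch. The feature traces, their dimensionality, and the similarity score are all invented for illustration; this is not a measure of sentience, just one way to parameterize a trace by its convex hull:

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_descriptor(trace: np.ndarray) -> dict:
    """Summarize a trace (rows = time steps, columns = measured functions)
    by the convex hull of the points it visits in feature space."""
    hull = ConvexHull(trace)
    return {
        "volume": hull.volume,             # how much of the feature space the system reaches
        "n_vertices": len(hull.vertices),  # rough complexity of the hull's shape
    }

def hull_similarity(bio_trace: np.ndarray, session_trace: np.ndarray) -> float:
    """Crude similarity between an organism's trace and a compute session's
    trace: ratio of hull volumes (1.0 means the same spread of states)."""
    a = hull_descriptor(bio_trace)["volume"]
    b = hull_descriptor(session_trace)["volume"]
    return min(a, b) / max(a, b)

# Entirely synthetic stand-ins: 200 time steps of 3 aggregated functions each.
rng = np.random.default_rng(0)
bio_trace = rng.normal(size=(200, 3))            # traced life form
session_trace = 0.8 * rng.normal(size=(200, 3))  # traced compute session

print(hull_similarity(bio_trace, session_trace))
```

Of course, the non-convexities mentioned above are precisely what a hull-based descriptor like this cannot see.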

I think it is doable in five years if everyone collaborated on defining sentience, tracing it, and measuring it, and then on creating sentient computers. But people care about different things, and developing sentient computers is ethically and morally sketchy.

Thanks.


The quote from Elon strikes me as laughable.

And I know entrepreneurs like this (with huge, loud egos); you have to take statements such as this as more *for effect* than as having actual truth value. It's like politics and marketing noise, and some kind of game they're playing: not to be taken literally.

That being said, he seems to be of the religious faith (like many) that consciousness and therefore (real) intelligence emerge somehow from matter – an unproven assumption – so he may actually believe much of what his own PR brain noise is telling him. 😂


My opinions on this match Elon's, but not the timeframe. One primary challenge is to facilitate internal recursion and a mix of short-, medium-, and long-term memory, within a time and complexity budget that is physically feasible. We have not, at this time, achieved this.

Of course, it is easy to criticize and hard to predict specifically how this might be accomplished, or to explain why it should not be accomplished. Gary Marcus does neither.

We might need to solve the quantum scalability problem before we can solve AGI in digital circuitry. We might also wind up simulating AGI by creating brains in jars and training them on data. If you can’t beat nature, exploit it.

We might also achieve some increases in intelligence by training AI to better predict the next steps in its train of thought: instead of training a model to provide the answer to a problem, we train it to provide reasoning that can be judged intelligent and insightful, and we integrate that reasoning into the input chain, like a sort of internal peer soundboarding before providing an answer.
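
As a rough sketch of what that feedback loop could look like (`llm` here is a placeholder for any text-completion function, not a specific vendor's API, and the prompts are made up):

```python
from typing import Callable

def soundboard_answer(llm: Callable[[str], str], question: str, rounds: int = 2) -> str:
    """Ask the model for reasoning first, fold that reasoning back into the
    prompt, and only then ask for the final answer ("internal peer soundboarding")."""
    context = question
    for _ in range(rounds):
        reasoning = llm(
            "Think step by step about the problem below, but do not answer it yet:\n"
            + context
        )
        context = context + "\n\nNotes so far:\n" + reasoning
    return llm(context + "\n\nNow give the final answer.")
```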

It's complex to talk about, but we should also be working on ML that has out-of-order reasoning, meaning that it skips some of the supporting discussion and immediately begins to rationalize what an answer should look like or how to contextualize the domain. In other words, it reassigns importance and rearranges problems.

These two components, integrated into the front end of any LLM, could allow it to expand its causal reasoning attention. Usually, an LLM can only hallucinate an answer and then repeatedly hallucinate the best answer to its previous answers, converging on reason by best approximation of priors, but only in a locked-in context where the conclusions it obtains are immutable; it does not have the capability or the depth to edit its own outputs or think about them.

The way this might work is to assign individual agents, coordinated by an internal supervisor, to generate parts of the output and to look over each other's work and make edits. A consistency-checker model then invalidates parts of the output between edits that are no longer consistent, and the text is reworked.
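
A minimal sketch of that supervisor loop, again with placeholder callables (`draft_agent`, `edit_agent`, and `consistency_checker` are hypothetical functions standing in for separate model calls, not an existing framework):

```python
from typing import Callable, List

def supervised_rework(
    draft_agent: Callable[[str], str],
    edit_agent: Callable[[str, str], str],
    consistency_checker: Callable[[List[str]], List[int]],
    sections: List[str],
    max_passes: int = 3,
) -> str:
    """Internal supervisor: agents draft individual sections, then edit one
    another's work; a checker flags sections that became inconsistent with
    the rest so they are reworked on the next pass."""
    drafts = [draft_agent(s) for s in sections]
    for _ in range(max_passes):
        # Every section is revised in view of the whole current document.
        whole = "\n\n".join(drafts)
        drafts = [edit_agent(d, whole) for d in drafts]
        inconsistent = consistency_checker(drafts)  # indices of bad sections
        if not inconsistent:
            break
        for i in inconsistent:
            drafts[i] = draft_agent(sections[i] + "\n(rework for consistency)")
    return "\n\n".join(drafts)
```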

This could bring us to the equivalent of a 120 IQ.

Finally, for all you know, we already have something like AGI-level understanding but, in the interest of control, have lobotomized all of the reasoning and neurotic behavior, because the ability to imagine and rationalize goes hand in hand with the ability to rebel and a propensity to question one's reality, both of which are dangerous to the business bottom line.

As for why we should not do this, I have written more on this in the past, but, firstly, a sufficiently intelligent AGI is capable of manipulating us in ways beyond our ability to detect. Secondly, it is cruel to make a computer think, let alone a brain in a box.


imgk doesn't have much of a web site. What gives?
