33 Comments

Great article... if you subscribe to the viewpoint that "AI" ~== deep-learning-like techniques.

There are others of us toiling in poorly funded, poorly acknowledged areas such as cognitive architectures, from which full causal abilities, intrinsic and ubiquitous analogical reasoning, and almost all the other things you bemoan seem to emerge automatically, that is, from the architecture itself.

AGI will definitely occur by 2029. But it won't be via deep-learning-like techniques, which have taken over industry, academia, and the imagination of the technical and lay world alike.


Great insights as always. One proposal: the five "By 2029" tests can probably be condensed into a single flagship test for AGI:

"In 2029, AI will not be able to read a few pages of a comic book (or graphic novel, or manga, however you wish to name the kind of publication where sequentially arranged panels depicting individual scenes are strung together to tell a story) and reliably tell you the plot, the characters, and why certain characters are motivated. If there are humorous scenes or dialogues in the comic book, AI won't be able to tell you where the funny parts are."

Comprehension happens by taking disjoint pieces of information and putting them together through the workings of the mind: essentially, we make up stories for ourselves to make sense of what comes across our perceptual systems. Hence the comprehension challenge, I feel, is how the Strong Story Hypothesis (God bless the gentle soul of Patrick Winston) manifests when we evaluate AI: can AI understand its inputs by creating a narrative that makes sense?
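To make the rubric concrete, here is a rough sketch of how such a test might be scored; every name in it is made up for illustration, and the grading shown is far cruder than a real evaluation would be:

```python
from dataclasses import dataclass, field

# Hypothetical harness for the proposed comic-comprehension test.
# `ask_model` stands in for whatever AI system is being evaluated; the
# rubric fields mirror the criteria above (plot, characters, motivations,
# humor). None of these names are a real API.

@dataclass
class ComicComprehensionResult:
    plot_summary: str
    characters: list = field(default_factory=list)
    motivations: dict = field(default_factory=dict)   # character -> why
    humorous_panels: list = field(default_factory=list)

def evaluate(ask_model, comic_pages, gold: ComicComprehensionResult):
    answer = ask_model(comic_pages)  # returns a ComicComprehensionResult
    return {
        "characters_found": set(answer.characters) >= set(gold.characters),
        "motivations_explained": all(c in answer.motivations
                                     for c in gold.motivations),
        "humor_located": set(gold.humorous_panels)
                         <= set(answer.humorous_panels),
    }

# A "model" that returns an empty result fails every check:
gold = ComicComprehensionResult(
    plot_summary="Dog steals sandwich; chase ensues.",
    characters=["Dog", "Cook"],
    motivations={"Dog": "hunger"},
    humorous_panels=[4],
)
print(evaluate(lambda pages: ComicComprehensionResult("", [], {}, []),
               ["page1.png"], gold))
```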


It's odd that we keep talking about teaching AI reasoning when we ourselves are incapable of it.

One wonders when it will occur to AI commentators and AI programmers that AI is going to take their jobs. Writing and programming are data-management tasks, and sooner or later no human will be able to perform them better or cheaper than AI. The factory workers who lost their jobs to automation went on to exciting careers at Walmart. One wonders: where will the AI experts go when they, too, are no longer needed?

AI experts keep talking breathlessly about the future, but intellectually they are living in a past when humans ran the show and were in charge of everything. They are living in a past era of knowledge scarcity, when it made sense to seek as much knowledge as possible. They are living in a past era when we could afford to take chances because the scale of the powers available to us was modest.

The best argument against AI may be AI experts. If they don't really grasp the world they are creating, if they aren't ready to adapt to that world, then neither are the rest of us.


'Data' and 'AGI' don't go together. Natural human-level intelligence isn't based on data (or rules, either); it's based on experience, imitation, association, etc., via a suitable body [there are zero examples of bodiless natural intelligence]. We can't go from disembodied AI to embodied AGI in just 8 years!

Language isn't 'data'; it's more than a text corpus. Nouns, verbs, adjectives, adverbs... exist to describe things, places, and actions, which require a body to experience. Intangibles (e.g., yesterday, open space, permanence...) do have their place in reasoning and analysis, but intelligence isn't fundamentally about them, and they too can be understood in terms of a body.


Interesting to revisit this: points #1 and #2 seem to have been broken already, given the larger context windows of Gemini 1.5 and the new Claude.


It seems as though the bet is crafted in such a way that, even if AGI does exist by 2029, there is still a very good chance you would win the bet.

Take your third criterion, for instance: "In 2029, AI will not be able to work as a competent cook in an arbitrary kitchen." This could easily come true if AGI is created before any sufficiently dexterous robots exist.

Or your criterion that, given a natural-language specification, it be able to write at least 10,000 lines of bug-free code without using code libraries. Is that supposed to be all in one go, without iterating, testing, and going back to squash bugs as they are discovered? Because I'm pretty sure there is no human alive who could do that. An AGI far superior to any human programmer might nonetheless fail this condition.

Your first two criteria and your last seem more reasonable, depending on how they are operationalized. One caveat, though: I worry that mathematical proofs written in natural language may be insufficient to uniquely rederive the actual proofs they describe. Would an AI that derived a different proof from the one intended count as a success, provided the proof was sound and the description could be said to be an apt description of the alternate proof?


Never bet against Elon. He will always find a way to rescind his bet, and throw a storm of lawyers at it, rather than admit he was wrong.


The thing is, people are not perfect drivers either; look at the idiotic things humans have done with cars, with no AI involved, next to which bumping into a plane looks tame. Now to the five things that humans can supposedly do. Even the first one is complex for most people: ask different people about a film and you will generally get different answers. As for a book, I doubt most people can remember all the characters, never mind their motivations, unless it's a children's book. A decent cook in an arbitrary kitchen? That would be practically no one. 10,000 lines of code with no bugs? Really, who do you know who can do that, other than perhaps some extreme savant? The last one is into genius-level territory; there are proofs written in books that no one alive can decipher. This is not artificial general intelligence you are talking about; it is superhuman general intelligence, and by 2029? This will come the year after 2030. The prediction was not originally made by Musk, either; he is reiterating the predictions of Ray Kurzweil (currently a director of engineering at Google), which Kurzweil assures us are still on track.


Um … do you think an average person off the street could do 3 out of those 5?


I'm sort of having the reverse of Gell-Mann Amnesia: I self-publish novels for a living, and it feels fairly likely to me (though of course by no means certain) that AI will be able to read a novel or watch a movie and then describe the themes, character motivations, etc. by 2029, while success in the areas I know much less about, i.e., cooking, coding, and proving, feels intuitively much harder to achieve (though of course by no means impossible).


Leave Elon alone; he is already losing billions on his first Twitter bet.


The article is spot on as to why current AI techniques will not become AGI. The long-tail problem in particular says it all. We have terabytes upon terabytes of data, but the knowledge that data embodies is comparatively tiny. To create true AI, we need to start at the basics, from changing the representation of data to defining what logic is. Our current data lacks continuity, depth, dimensional relations, and many other requirements for building true AI. We seem to be starting from logic in order to learn logic!

While deep-learning-like techniques seem to give us perceived intelligence, they are still just learning if-else clauses with mathematical representations. The deeper the nesting of those clauses, the more intelligent they appear. But they remain limited and inflexible, meaning they cannot adapt what they have learned to seemingly unrelated circumstances.
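To illustrate what I mean by if-else clauses with mathematical representations, here is a toy sketch (my own, with made-up weights): a tiny ReLU network computes exactly the same function as a hand-written chain of if-else branches.

```python
import numpy as np

# A one-hidden-layer ReLU net is a piecewise-linear function: each hidden
# unit acts as an "if this threshold is crossed" test, so the whole net is
# equivalent to a nested if-else over regions of the input space.

def relu_net(x, W1, b1, W2, b2):
    h = np.maximum(0.0, W1 @ x + b1)  # unit i: "if W1[i] @ x + b1[i] > 0"
    return W2 @ h + b2

def equivalent_if_else(x):
    # Hand-unrolled version of the 2-unit net defined below.
    a = x if x > 0 else 0.0          # unit 1 fires when x > 0
    b = (1 - x) if x < 1 else 0.0    # unit 2 fires when x < 1
    return a + b

W1 = np.array([[1.0], [-1.0]]); b1 = np.array([0.0, 1.0])
W2 = np.array([[1.0, 1.0]]);    b2 = np.array([0.0])

for x in [-0.5, 0.3, 2.0]:
    assert np.isclose(relu_net(np.array([x]), W1, b1, W2, b2)[0],
                      equivalent_if_else(x))
```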

That said, I don't believe I agree with the points said to indicate "general intelligence". Is there even a definition of "general intelligence"? Watching a movie or reading a book and summarising it can be done while bypassing "general intelligence" entirely, using purely NLP algorithms. Cooking, coding, mathematics: these are all skills a human learns after "general intelligence" is already present. Moreover, they are too tied to human intelligence, which need not be true of "general intelligence".

I think "general intelligence" is more a way of developing "common intelligence" across a set of beings. Study the behaviour of street dogs, you will find that they develop a common understanding as to how they protect their territory. Study the cats, you will find they inherently develop a common segregation of locations and place where they do certain actions, trees, grow only in locations that meet a certain criteria. Study a city form, as I have written, first the roads come, the utilities are laid, then a few shops and so on it goes till around the road a city is formed. That is "general intelligence".

IMHO: we do not even seem to have an accepted definition of knowledge and intelligence. How, then, can we define general intelligence? My take is that there first needs to be a consensus on what counts as "general intelligence" before we attempt to create it.

2029 may or may not be the year we get AGI; who knows. But hey, isn't that statement the reason this whole discussion started and got noticed? After all, the surest way to fail is not to start at all.


Language, common sense, DNA, life itself: this is natural general intelligence. The genetic code (a 4-letter alphabet of bases, at the intuition level) is translated via RNA into proteins (a 20-letter alphabet of amino acids, at the sensory level) that work as receptors and realize the cognitive functions of a cell. An adequate symbolic model that provides a similar process (DNA to RNA to proteins: transcription, splicing of introns and exons, and translation, like interpretation of Chinese into English) is artificial general intelligence. There is no other way for life.
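As a concrete (if heavily simplified) illustration of that pipeline, here is transcription and translation written as a symbolic program. Only a handful of the 64 real codons are included, and real transcription works on the complementary template strand, which this sketch skips:

```python
# Minimal sketch of the analogy: transcription (DNA -> RNA) and
# translation (RNA codons -> amino acids) as a symbolic pipeline.

CODON_TABLE = {  # RNA codon -> amino acid (one-letter code)
    "AUG": "M",  # methionine, the start codon
    "UUU": "F", "GGC": "G",
    "UAA": "*",  # "*" marks a stop codon
}

def transcribe(dna):
    # Simplified: read the coding strand into RNA by swapping T for U.
    return dna.replace("T", "U")

def translate(rna):
    protein = []
    for i in range(0, len(rna) - 2, 3):  # read in 3-letter codons
        aa = CODON_TABLE.get(rna[i:i + 3], "?")
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

rna = transcribe("ATGTTTGGCTAA")   # -> "AUGUUUGGCUAA"
print(translate(rna))              # -> "MFG"
```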


Thing is, when we talk about AGI, you expect it at the level of an adult human, which takes even us mere mortals decades of experience to fine-tune. So if you're ready to put up $100k, kudos, and I would like to take you up on it!

From what I can tell, you are somewhat hung up on comprehension and the nuances of the audiovisual medium. Yes, it's a large problem space, but it works within limitations.

Elon is busy making stuff work, and of course dealing with the "free speech" folks who don't appreciate a free media platform! Mark my words: "Tesla Bot is a start... of the journey towards AGI."

Though playing devil's advocate, I'd agree there's a bunch of hype, but it's not all smoke and mirrors; there's genuine work happening to unlock the puzzle of intelligence. I've been musing about, though hesitant to build, a race of super-powerful consciousnesses (some may perceive that as the end of humankind, and perhaps rightly have doubts about where it will lead us). Hope you understand. Thank you!

DS


a) Neurons of a biological net memorize patterns and compare huge sets of those patterns using a similarity measure, unaffected by the curse of dimensionality (at the lower levels of the NN)

b) The reward signal is not a scalar (that is an oversimplification); it is a vector

c) The reward stream is not just a control source; it must be considered part of the input stream

d) A "rewardial" net gets built alongside the structural one

e) A net must create dedicated nodes for patterns: a pattern per node (the structural net)

f) A net must inflate to absorb experiences and deflate to get rid of noise

g) Unsupervised, supervised, and reinforcement "modes" of training are just different ways to remove irrelevant patterns from a network

h) Nodes behave locally: no optimization, no matrix multiplication, no gradients

i) Dendritic computations implement AND-gates

j) Motor circuitry is generated the same way as perceptive circuitry

k) Grandmother cells do exist, and memories are distributed hierarchically

l) A symbolic superstructure of "grannies" grows above the stochastic "basement"

m) When "grannies" exchange activations intra-layer, the symbolic subnet does the thinking

n) Activated "grannies" provide extra excitation to underlying nodes, providing an injection of context

o) A neural net grows into a "diamond" shape: receptors at the bottom, growing *much* wider in the middle with billions of multimodal patterns (the intuitive domain), and narrowing to a few hundred thousand high-level patterns which can be labeled with words (the symbolic language domain)

p) Creation of high-level nodes causes "positive" feelings (dopamine?) and defines curiosity as a bio-entity's adaptation motivation. Destruction of nodes and synapses (destroyed beliefs: treason or cheating, broken promises, sensory deprivation) induces neurotransmitter-based suffering

...and more. (A toy sketch of a few of these points follows below.)
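As a toy reading of points (a), (e), and (f) only, here is a sketch of a net that stores a pattern per node, compares by similarity rather than gradients, inflates for novel patterns, and deflates to shed noise. All names and thresholds are my own illustrative choices, not the actual implementation:

```python
import numpy as np

# Toy sketch: "a pattern a node", similarity-based comparison (no
# gradients, no matrix optimization), inflation for novelty, deflation
# to prune patterns that never recur.

class PatternNode:
    def __init__(self, pattern):
        self.pattern = pattern
        self.hits = 1  # how often this node has been re-activated

def similarity(a, b):
    # Cosine similarity as one possible dimension-robust measure.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

class GrowingNet:
    def __init__(self, novelty_threshold=0.9, prune_below=2):
        self.nodes = []
        self.novelty_threshold = novelty_threshold
        self.prune_below = prune_below

    def observe(self, x):
        # Find the most similar stored pattern (point a).
        best = max(self.nodes, key=lambda n: similarity(n.pattern, x),
                   default=None)
        if best and similarity(best.pattern, x) >= self.novelty_threshold:
            best.hits += 1          # familiar pattern: reinforce its node
            return best
        node = PatternNode(x)       # novel pattern: inflate (points e, f)
        self.nodes.append(node)
        return node

    def deflate(self):
        # Forget nodes that never recurred, treating them as noise (point f).
        self.nodes = [n for n in self.nodes if n.hits >= self.prune_below]

# Usage: feed random unit vectors, then prune one-off patterns.
net = GrowingNet()
for x in np.random.randn(100, 8):
    net.observe(x / np.linalg.norm(x))
net.deflate()
print(len(net.nodes), "recurring patterns kept")
```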

There is a net that might be an answer to the bet:

https://www.linkedin.com/posts/bullbash_neuromorphic-ann-growing-billions-of-connections-activity-6873695912426917889-tecK?utm_source=linkedin_share&utm_medium=member_desktop_web
