147 Comments
RJ Robinson:

"AGI is a decade away."

In other words -

1. We don't know what it is.

2. We don't know how to make it.

3. We have no idea when anything will really happen.

4. But carry on giving us your money anyway.

Basalat Raja:

We don't know how the human brain works. It's a very complex process.

We don't know how LLMs actually work. It's a very complex process.

They are obviously the same process.

Therefore, LLMs are intelligent. Q.E.D.

Corollary: Sometimes LLMs do dumb things.

A bunch of jealous people post articles claiming this means they aren't intelligent.

Therefore, we need a trillion dollars to fix these minor problems.

Or, echoing the physics professor who had commented "This paper is not right. It is not even wrong" ... "Generative AI is not intelligent. It is not even stupid."

Chris:

LLMs are far less complicated than the human brain, and we know pretty well how they work. They are unpredictable in a certain sense, yes, but for every given input we know the exact calculations they go through, just as we know the exact calculations for every learning step. What we can't really do is prove the correctness of their behaviour, because often we can't even define the objects we are trying to apply them to in mathematical terms, and we don't know which inputs they are bound to encounter...
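
(To illustrate the point: a minimal sketch in Python, using a toy two-layer network as a stand-in for a real LLM's forward pass. Nothing here is anyone's production code; it just shows that with fixed weights the computation is fully determined.)

```python
import numpy as np

# Toy two-layer network standing in for an LLM's forward pass:
# once the weights are fixed, every calculation is fully specified.
rng = np.random.default_rng(seed=0)
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 2))

def forward(x):
    h = np.maximum(0.0, x @ W1)  # ReLU hidden layer
    return h @ W2                # output scores

x = np.array([1.0, -0.5, 0.25, 2.0])
# Deterministic: running the same input twice gives identical outputs.
assert np.array_equal(forward(x), forward(x))
# Any apparent "randomness" in LLM text generation comes from an
# explicit sampling step, itself reproducible given the RNG seed.
```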

BA_Rehl:

We know some -- enough to know that LLMs don't and can't work the same way as brains. This fact, however, was not known in 1956 by anyone at the Dartmouth Conference.

It is also known that AGI theory can't be solved with scaling, data, machine learning or throwing money at the problem.

Basalat Raja:

Yes, I made approximately the same point, albeit sarcastically, at the expense of the people who blatantly claim that NNs must be thinking. They are not thinking, and the claim is preposterous.

DrJon:

This is starting to feel like, and could totally end up like, Elon and Full Self-Driving. It's been a decade since he first started talking about it (in 12/15 it was two years away).

Larry Jewett:

Elon was not exaggerating.

It's just that four of the letters in "Fullofit Self Driving" are silent.

C. King:

DrJon and Larry Jewett: About Elon Musk and driverless cars: not only do the investors and wayward-minded scientists need to know about consciousness before they go off on a binge of hubris with AGI, but the driverless-car thing is built around a really skewed view of metaphysics and its relationship to how the history of events *actually* works.

For such driving to work (or to resemble working, that is, to an idiot), you'd have to take everything human and historical out of the engineering AND the driving picture. EVERYTHING, not only cars, would have to be pre-engineered and robotic.

BA_Rehl:

Taking those point by point:

1. That's true for the vast majority of people who try to talk about AGI, but some of us do know what it is.

2. True.

3. That's true; unless the research can be completed, nothing is on the horizon.

4. No, the actual research project hasn't asked for investment, nor is there any plan to.

RJ Robinson:

So what is AGI?

Matt Kolbuc:

The whole thing is just pathetic now. It wasn't long ago that Altman was prancing around the world claiming ChatGPT would shortly eliminate world poverty, cure cancer, develop nuclear fusion, et al. Now it's morphed into, essentially, "we're broke and can't do anything remotely close to what we promised, so we'll make our chatbot talk dirty to you if you subscribe." That's just pathetic.

Mike:

Well, and then Altman said we'd have to choose between curing cancer and free global education because there just isn't enough compute.

Now he's using his compute to let people produce porn and bad sloppy videos!

Oaktown:

He's reaping what he sowed with his persistent BS. I hope he and the VCs he fooled go broke; maybe they'll learn something. Then again, since porn is so lucrative, maybe he won't go broke, but he certainly won't be able to pose as "god" anymore.

Jonah:

Which just shows how ignorant he is. Curing all forms of cancer might require immense computational resources, but free global education could theoretically be achieved without a single computer. That's a political issue, not a scientific challenge.

Larry Jewett:

I suspect their video bot (Sora) will do more than just "talk" dirty.

They'll have to rename it SoraXXX

Matt Hawthorn:

“and we'll also pump ads into your eyeballs through an infinite feed of dopamine hacking video slop called Sora 2”

Christopher Shinn:

Keep up the good work, Gary!

Brian Mcleish:

"A decade away" - what people say when they have no idea how long it will take. When I was studying physics in the early 90s, fusion power was a decade away. 30 years later it is still a decade away........

E. Syla:

But at least fusion power is not an incoherent concept with zero basis in reality.

Tim Nguyen:

Beat me to it. I was about to call out that bullshit guess or prediction. Clever way of trying to sound optimistic and confident when you know your field is crap for the foreseeable future.

Alex Tolley:

I thought fusion was always "30 years away" - long enough to be almost 2 generations away with time to solve it, or not—certainly a decent time to be paid to work on the problem at public expense.

Brian Mcleish:

Within research circles people genuinely thought it was 10-15 years away back in the 90s and modern researchers think the same now. The ones who tout for grant money usually cite 2 to 3 decades to hedge their bets!

Alex Tolley:

What if you go further back, to the ZETA machine in 1950s? Any sense of how long they thought fusion energy might be viable? https://en.wikipedia.org/wiki/ZETA_(fusion_reactor)

Brian Mcleish:

No idea what the researchers themselves thought but the media and (especially) government hype machines went into overdrive at the time.

Larry Jewett:

Maybe "AI years" are like "dog years" : you have to multiply by 7 to get human years

Richard Seager:

Or 7 million.

BA_Rehl:

A working AGI system cannot be built without a design. It can't be designed without a completed theory. Even if the theory were ready now, it wouldn't be published until 2029. Using the Manhattan Project as a base, it would take about seven years to design and build a working AGI system. So, 2036 would be the earliest possible date. However, there is no estimable timeline until the theory is completed.

TheAISlop:

The consensus is overwhelming.

What's the old saying? It's easy to get 80 percent done in 20 percent of the time, but then you spend the other 80 percent of your time stepping through the last 20 percent.

See FSD for details.

William Bowles:

I would have thought Zeno's paradox more applicable, never mind quantum physics.

Paul Jurczak:

Or you spend the other 80 percent of your time stepping through the last 19 percent. The remaining 1 percent never gets done.

Oleg Alexandrov:

Yes, but self-driving cars are making nice progress, with neural nets in combination with good old physics. Unless one is Tesla, of course.

Larry Jewett:

It's actually very revealing that the self-driving cars that actually work (most of the time) operate within a very restricted (physical and parameter) space, taking actual physics into account. And even then, simple "not trained on" things can make them go awry and behave in entirely unpredictable (and dangerous) ways.

That should teach the developers of LLMs something.

A few (e.g., Hassabis) seem to understand that, but most don't.

Oleg Alexandrov:

Yeah, there is no magical path to AGI, and there is a way to go. We have the resources to chip at things, though.

Larry Jewett:

Nvidia loves the current approach of "chipping at things"

Stephen Bosch:

"Keep chipping, guys! We believe in you." — Jensen Huang

Larry Jewett:

And if that is one's idea of "chipping at things", I'm not sure we actually do have the resources, or at least not sure we should allocate them even if we do.

Matthew Kastor:

So, what I'm hearing is that the site "let me Google that for you" beat OpenAI to AGI by decades. 🤣

Mike:

ROTFL!

Alex Tolley:

And search lookup is far, far cheaper. However, it would still be better for an AI to solve a specific problem one has, from producing the correct math formula, to solving a math problem, writing code in an unfamiliar language, tracking down a bug from a trace log, and many other issues that go beyond search. That would be really useful AI: one that truly thinks and has expertise "on tap".

Mehdididit:

If you could trust it. If you can't write code in the language you seek, you'd have to find someone who can, to fact-check your AI. What's the point?

jason b smith:

That is what every senior programmer says about getting paired with a junior programmer :).

Alex Tolley:

I understand your point, but there are ways to check the output of code in certain situations. For example: write the code for a known formula in a language you are unfamiliar with, feed it variables that will produce a known output, and confirm the new code is correct. The same methodology applies to a range of algorithms where the data and expected results can be tested against the code. This is effectively an easy unit test (see the sketch below).

Obviously, this cannot work where no such test is available. However, you can test the results against existing code to determine that both outputs are the same (allowing for any stochastic effects).

Totally new code where the I/O is not known would be a problem. Using LLMs to answer questions about information you are unfamiliar with is the same issue. When I test various LLMs on material, e.g., context texts, where I already know the answers to the questions, it is a similar test of valid I/O.

Bottom line: use LLMs where they can improve productivity but where their output can be validated. It may even require periodic spot checks to ensure errors do not creep in. Treat an LLM like a new student: if it doesn't make errors, trust it more over time; if it makes errors, discard it for that use, possibly even discard it, full stop.

[Did you stop using Intel-based computers when the Pentium FDIV bug, which gave incorrect answers in some specific cases, was discovered in the 1990s?]
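
(A minimal sketch of that known-I/O check in Python; llm_roots is a hypothetical stand-in for whatever code the LLM actually produced:)

```python
import math

# Hypothetical stand-in for LLM-generated code:
# roots of the quadratic a*x^2 + b*x + c = 0.
def llm_roots(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

# Validate against inputs whose outputs are already known
# before trusting the code on anything that matters.
assert llm_roots(1, -3, 2) == (2.0, 1.0)    # x^2 - 3x + 2: roots 2 and 1
assert llm_roots(1, 0, -4) == (2.0, -2.0)   # x^2 - 4: roots 2 and -2
print("LLM-generated code passes the known-I/O spot checks")
```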

Mehdididit:

Seems horribly inefficient to me. I'll just go back to messing with AI customer service bots. It's more fun than trying to make something work that doesn't. Shout out to you for being willing to put so much effort into it, though.

Alex Tolley:

I'm still waiting for those CS bots not to be so obtuse and irritating that I demand to talk to a real person. I find interacting with a bot a ridiculously slow process, with lots of "I think you want [X], is that right?" and failure to get to my issue and what I want. Humans do it far better and more quickly; with a bot I'm texting, waiting for a response, and rinsing and repeating endlessly.

As Gary Marcus has explained endlessly, AIs have no world model with which to understand anything, certainly not humans. A chatbot is no more than a script following a menu, dressed up with some chatty speech or text. At best it might reach the relevant point in a script a little more quickly. It cannot respond to other cues in text or speech, and even if it could detect them, it wouldn't understand how to respond to them.

Good luck messing with them, but I think you are just adding to the general irritation of humanity. Remember "voice menu hell"? It's now coming in spades, with chattiness added. :-(

Matthew Kastor:

There are already "AI" systems that calculate, solve problems, find data, iterate over a problem space, and find novel solutions efficiently. They're called "software". Using statistics on a data set to generate probable solutions isn't new at all; it's fundamental to data analysis and research, and it has been automated since the beginning. Even ECC RAM does it. The grifters attempting to dupe capital into jumping on a dumpster fire, so they can skim some off the top and leverage it for political influence, are conflating LLMs with the whole of computer science, attempting to rebrand everything as "AI", and making the false claim that simply slapping a natural-language accessibility layer on top of existing technology is a paradigm shift that will invoke a deity and bring their pornbox to life. Simultaneously, they're violating copyright left and right, as if poorly reproducing other people's intellectual property in service of spam makes it their own.

Wrapping useful software and other valuable intellectual property with a conversational interface isn't that useful unless you're blind, because a picture says a thousand words, and we've already got graphical user interfaces.

Great, we have a database that can parse sentences and bias its responses based on examples of other writing, in effect repeating the things we've already said and already know back at us, but in a non-ACID-compliant way, and it may simply generate completely invalid responses based on nothing but weak links between indexes that have nothing to do with the input query.

This is all stupid. AI has existed and has been under development since before the steampunk analytical engine, if we're going to run with the current conflation proposed by corporate grifters who want to say "everything is AI, and that's what we do" and then proceed to take credit for computation itself and all its inventions, which they clearly don't understand. It's the nerd equivalent of stolen valor in service of treason, and their program should be terminated with prejudice.

I could fill novels with how I know these grifters are full of shit, but it's not necessary to waste my life proving myself credible when these assholes are being so blatantly obvious about their fraud. Go ahead and run on a tangent using their vocabulary and "yeah but" until you're blue in the face. I don't give a flying fuck. They're taking the same approach to their software as traders trying to reverse engineer trading strategies on historic data, and getting the same unprovable results while losing money generating useless actions from their computational infrastructure. They can't even explain what their software is doing, because they're fucking amateur script kiddies playing with rainbow tables and telling the world they're about to hack the CIA. Sure, they might get lucky once and pull off a stunt that appears spectacular on the surface, but actually gaining control of the objective is far beyond their wheelhouse. Their entire approach is so fundamentally incorrect, they're going to resurrect the phrase "not even wrong" when they finally collapse in a massive plume of burnt IOU slips and cash.

Remember, the markets can stay irrational longer than you can stay solvent. Prolonged periods of fuckery are not proof of future gains.

END OF RANT

Tom Vandermolen:

I propose the next OpenAI model be code-named Tulip.

Mike:

ROTFL again! Please guys, stop this, I can't laugh as hard as I need to!

Amy A:

When the experts can’t tell the difference between data leakage and an actual major breakthrough, you have a problem. I think this will be seen as worse than the replication problem in the social sciences someday. Grateful I found your writing early Gary, you kept a lot of us from being fooled or feeling crazy 🤪

Paul Snyder:

Since everyone’s throwing around colloquial “Rules” and “Laws”, I offer that the continuing AI grift is the ultimate negation of Hanlon’s Razor.

https://simple.wikipedia.org/wiki/Hanlon%27s_razor

This exercise is the result not just of well-meaning boobs exhausting their runway on development, but of knowing and intentional misrepresentations motivated by malicious greed and a desire for control.

Which grift will collapse first, crypto or AI? Both are being artificially kept afloat as I write this. The AI farce of circular financial manipulations by the primary players (similar to Enron, but with much more gusto and bald governmental complicity) will only work for a short period, after which Altman and the other freaks will be bailed out with another half trillion, which will immediately vanish, and the cycle will repeat.

At least with Pets.com we got an occasionally humorous commercial and a bit of internet buildout that remained after the collapse.

After this collapse, we’ll just be left with a bunch of Data Centers that we have both paid for and must subsidize into the future.

I wonder if those Data Centers could in some way be used by an authoritarian state to monitor and control a resistant population?

Have we financed and constructed our own panopticon?

Asking for a friend.

Best to all.

Doug Tarnopol:

Sutton deserves every measure of grace back. Publicly admitting you were wrong is, for most people, a nightmare (it shouldn't be).

Mike:

Yep!

PF:

Essentially the whole chatbot industry is capitalizing on humans' soft spot for natural language. Comparing the 2025 version of Altman with his 2015 version is just too sad and too painful to watch.

Jim Ryan:

Plus he has turned full fascist and embraced the slop in the White House.

Larry Jewett:

And now OpenAI is capitalizing on an even softer human spot: porn... I mean "erotica".

Comparing the 2025 version of Altman against the 2025 version (just a couple of months later) is sad, but some (not I, of course) might call it "entertaining" rather than painful.

PF:

That's right, the "P" word is too crass for Altman, the True Savior of Homo Sapiens and the Protector of the visible universe.

Larry Jewett:

AI-rotica is just an altmanative spelling of porn-AI-graphy

Larry Jewett:

AKA Jesus AIs Christ

Benard Mesander:

Oops, Mom, I spent a trillion dollars on ELIZA.

Steve:

It's always "a decade away". Every big breakthrough, from the singularity to fusion to large-scale genetic engineering to the cure for cancer, is a decade away.

Aaron Turner:

When's the bubble going to burst? Taking all bets! :-)

toolate:

Markets can stay irrational longer than you can stay solvent betting against them

Brian Payne:

Part of me hopes that it's never. So many companies have invested heavily in this, and so much of US GDP is tied to AI, that a sudden burst would be painful to many bystanders. I'm optimistic that the hype can be slowly deflated and the narrative rewritten, so that the efficiencies gained from LLM usage can still be realized (though obviously not to the extent that AGI would have delivered).

Maybe if Marcus's voice had been mainstreamed sooner, we wouldn't be in this predicament and a bubble burst would be OK.

Bruce Cohen:

Maybe when Altman talks about “it” being a decade away he’s talking about when the AI bubble will burst. Should allow him enough time to stash a few hundred billion in a Swiss account and get out ahead of the torches and pitchforks.

toolate:

And they have been phenomenal at wealth transfer and setting the stage for an epic bubble burst

Bruce Cohen:

And yet Sam Altman continues

https://open.substack.com/pub/luizasnewsletter/p/i-expect-some-really-bad-stuff-to?r=3lstvr&utm_medium=ios

to hold out the prediction that we're on a direct path to AGI,* while simultaneously predicting dangers, risks, and harms, yet having nothing to say about what, if anything, he or his company plans to do to mitigate those dangers. Seems irresponsible, or maybe just grifty, to me.

* while he redefines “AGI” every time there’s a hiccup in the advancement of LLMs in that direction.

C. King:

Bruce Cohen: "* . . . while he [Altman] redefines 'AGI' every time there's a hiccup in the advancement of LLMs in that direction."

I take it that, in doing so, "he" is in a creative/hypothetical frame of mind, which he apparently is not afraid to speak aloud, rather than claiming a well-defined though not-yet-verified theory? I could be wrong in this, but it seems that way to me.

Also, isn't it just this "noise-creating", open-and-changing movement that is bothersome to our expectations of AGI? That is, if I am using the term "noise" in the way the field means it? (I am in a cross-field situation here: first philosophy and cognitional theory, not an expert experienced in neuroscience or physics, so please correct me if I err in my clarity and understanding of those specialist terms.) Thanks, Catherine Blanche King

Larry Jewett:

Altman has only one frame of mind: $$mind$$

Bruce Cohen:

I’d agree with you about Altman’s frame of mind if my view of him over the last couple of years hadn’t made it clear to me that he is a grifter. He always seems to come out with new prophecies and/or redefinitions of where the technology is just when he’s negotiating another round of capital investment.

C. King:

Larry Jewett: I remain a capitalist at heart, though of course it also can be poisonous to one's character. And I forget sometimes . . . not everyone is driven, first, by the science spirit.

Larry Jewett:

Some are not even driven second (or third (or fourth ... (or nth))) by the science.

Besides, AI in its current (black-box) configuration is most definitely not science.

It's more akin to AIchemy (even spelled the same). Picture AI "scientists" reciting incantations and tossing AI of newt and toe of frog into a boiling cauldron.

Larry Jewett:

Maybe that would be "AI of Newton", given his fondness for alchemy in his later years.

C. King:

Larry Jewett: Another cure for naiveté about the limitations of human potentialities.

C. King:

I think my note above re: Altman was responding to Bruce Cohen's note. C. King

Hermogenes Rojas:

This is old hat, Mr. Marcus...

Give us the code/algorithm to fuse LLMs with symbolic programming...

You may win the Turing Award or the Nobel Prize.

Larry Jewett:

There is no Nobel Prize for AI, despite the delusions of the Nobel award committee, who apparently believe Nobel's first name was spelled "AI-fred"
