96 Comments
Sir MeowFace's avatar

Hey everyone, last night I used a single Google Search query and discovered quantum mechanics. I find this very accelerating.

Notorious P.A.T.'s avatar

It's always in the last place you look!

Paul Jurczak's avatar

I discovered the wheel with Google AI Overview. I think it will be accelerating my trips to the grocery store.

Coalabi's avatar

or "exhilarating"? ^^

Jonah's avatar

I think their use of "accelerating" was likely quite intentional. Spurred on by the transition of computer executives from anti-establishment "disruptors" to being the establishment themselves, the previous ideological cult of Silicon Valley, centered around freedom of information, open-source software, and a generally democratic instinct, has become dominated by a twisted modern update of effective altruism, combined with authoritarianism that occasionally shades into neo-feudalism or fascism.

The upshot of this is that the true devotees of this cult envision a future of plenty and happiness, generally brought about Deus Est Machina solving all the problems that they can imagine, a future that (of course!) can only come about under the benevolent guidance of philosopher-kings, who happen to be those very same technology executives and venture capitalists. As such, taking a leaf from those other inevitabilists, accelerationist Marxists, they believe that their ideal future is unavoidable, and they must bring it about as quickly as possible for the sake of maximizing global well-being. They define themselves as accelerationists, and those who urge any degree of caution as decelerationists, and thus the enemy.

So when they choose a word like "accelerating" in an unusual context like this, it very likely comes from that ideological context, even if they are not aware of that being the reason a certain kind of language is widespread in the social circles in which they move.

Christian GP's avatar

So "found" means literally "searched and found," so AGI is really AGS: Advanced General Search.

Disappointed, again: ho hum.

6-5-4 months to go….

Amy A's avatar

My fellow librarians and I find things daily. Where are our prizes and accolades 😅

Danielle Church's avatar

Speaking as a library enjoyer, I wholeheartedly agree on "where are the prizes and accolades for librarians?"!!!

merlinder's avatar

as a fellow librarian, "I find this very accelerating because I know how hard it is to search the literature" absolutely sends me because this is one of our areas of expertise and this is yet another example of the industry erasing people, expertise, and human interaction in an effort to further hype technopoly - and this time it's personal lol

Larry Jewett's avatar

But you don't find the truly important things: billions of dollars

Amy A's avatar

We should work on that!

Guidothekp's avatar

The words credibility and brand value don't mean much for OpenAI folks, it seems.

Kathleen Weber's avatar

Actually, it just reflects the underlying premise of 98% of American enterprise: "There's a sucker born every minute" AKA "Caveat emptor." 

Jim Skelton's avatar

Bubeck's backtrack sounds a lot like an LLM backtracking when confronted with a hallucination...

Gerben Wierda's avatar

Or when it's confronted with a doubt/claim of hallucination after producing a correct answer... (it does this less these days, I suspect because massive parallelism and confidence values in upscaled LLMs enable it to withstand this a bit).

Saty Chary's avatar

Hi Gary, lol. Slimy at best - make wild claims to get people talking, then back off.

A 'little bit of AGI' is being a little bit pregnant.

Btw, same with the BS about Gemini having "found" a new cellular pathway for cancer therapy, or having "discovered" a million new materials. Or DeepMind having "found" matrix-multiplication shortcuts, or their agents having "learned" to play hide and seek. These are all wild and absurd claims.

Larry Jewett's avatar

If a person or LLM makes enough wild and (seemingly) absurd claims, some are bound by probability to turn out to be true.

And most people naturally focus on the latter and simply forget the ones that turned out to be false.

I believe it was Carl Sagan who wrote about this phenomenon in the context of "psychics": those who "predicted" the future are held up as soothsayers, and those who did not are simply forgotten. Even those who got just one thing right out of many, many "predictions" (most of which were wrong) are still held to be soothsayers.

Nathan Brouwer's avatar

Is there any debunking of the Gemini cancer-cell-pathway claim? I follow AI hype on and off and want to start tracking this.

Kwesi Afful's avatar

Surely they knew they would get found out? Or was the purpose just to generate attention and ride the hype of the initial buzz? "No press is bad press."

Gary Marcus's avatar

maybe bubeck didn’t vet the employee carefully because he was so ready to believe?

Jim Amos's avatar

The whole company runs on a magical thinking engine.

Naomi Alderman's avatar

I think this is the thing, yes. They want it to be true so hard that it has shut down their own normal cognitive functioning. (And also they love the churn because discourse keeps the stock price high….)

Larry Jewett's avatar

This presumes they possess a "normal cognitive functioning" to be shut down.

On a humorous side note : I first read "churn" as "chum" -- fish bait

Joy in HK fiFP's avatar

Whenever I read articles on AI, I picture the great minds of yesteryear, who were convinced of the truth of alchemy, and I firmly believe they would instantly recognize, and join in, with what we are seeing in today's AI hype.

Larry Jewett's avatar

The spelling is the same: "AIchemy".

That can't be an accident.

Gerben Wierda's avatar

Bubeck probably not vetting the Erdös claim because he was ready to believe really is a key aspect of what is happening here.

All humans have this. We're wired for efficiency and speed, which means that our convictions steer our observations and reasoning, probably even more than the other way around. So when such a claim surfaces, my (and your) convictions conclude "this cannot be true" first, and we start looking for evidence that it isn't true (which is then easily found) afterward. Because humans are wired for confirmation too ("Yes! I'm right!" gives a nice feeling). Basically, with a brain operating partly on the edge of chaos, the human core behavioural architecture must have these stability-enforcers, or that brain wouldn't work.

Bubeck and colleagues start from the conviction that this must be true. It's about who has the most trustworthy convictions, and in a world full of hype, dis- and misinformation, those are somewhat hard to come by.

The Digital Revolution, especially the GenAI-revolution is (hopefully) going to teach us some inconvenient truths about human intelligence.

Graham Lovelace's avatar

That's probably the case ... over-exuberance looking for proof-points of AGI that aren't there in LLMs, and never will be.

Patrick's avatar

You are reminding me of Kahneman's discussion of 'cognitive ease' — so many of these examples are so cognitively easy at lower latency than we've ever seen. Put that together with the readiness to believe, and the lie is around the world before critical thinking has even put its shoes on.

Jim Amos's avatar

Lol such a pitifully poor backtrack. Literature is hard to research? Give me a break!

Larry Jewett's avatar

Ignorance is hard when one is ignorant

Zac's avatar

Thoroughly pleased by the "searched on Google" bit. Back during the pandemic I suggested that people should start swapping out the phrase "researched" for "googled on the toilet," and if the statement still sounded smart they were allowed to finish it.

C. King's avatar

A repost: FYI, (last week) I had a conversation with a friend in Denmark who has been working with AI in his work for a while. (I have not.) He sent me a brief text of a conversation he had with his AI and was "amazed" at the feedback he was getting.

After a bit of discussion between me and my friend, I realized that the AI part of the conversation was merely a tautological "take" on my friend's earlier question and input, albeit with some "amazing" conceptualizations and relatively unique word associations. Though these preserved the tautological meaning of his original work and questions, my friend easily confused them with original insights and their expressions.

My initial thought is that the experience was more dictionary and thesaurus than anything much like human thinking. I am reposting this from the comments section of Gary's last posting.

FWIW Catherine Blanche King

gregvp's avatar

Yeah, so many are doing it wrong. The best, maybe only, way to use AI is to describe what you think and ask it "tell me why I'm wrong here".

The results are often as much fun as a cold shower, but useful nonetheless.

Comment deleted
Oct 21
C. King's avatar

Ted Bunny: My friend did not resist. In the meantime, however, I sent him a four-minute read narrated by a scientist and linked in Nature's online magazine (see link below), and he replied that, though the scientist's experience and data were much more complex, he still had "oh wow" and "wait a minute..." experiences. Here's what my friend said further: "AI algorithms are as prone to inauthenticity errors as much as any of us humans. i wonder why....lol." I didn't think I needed to press the point further.

https://www.nature.com/articles/d41586-025-03135-z?utm_source=Live+Audience&utm_campaign=097eb88fe6-nature-briefing-daily-20251016&utm_medium=email&utm_term=0_-33f35e09ea-49343912

True to Type with Pollyanna's avatar

Your debunkings make me indescribably happy.

Jack's avatar

There's a LOT of money that wants to keep the hype train moving.

The more philosophical point is that an "extraordinary claim" is very much in the eye of the beholder. To a lay person all math problems are hard, so they have no frame of reference for what constitutes a genuinely extraordinary claim, versus a claim that is merely impressive.

Jim Amos's avatar

Math isn't one of my strengths so I just check to see if Gary has debunked it yet 😁

Diamantino Almeida's avatar

I think now I understand why they keep saying we don’t need universities, education, or even knowledge since their "digital parrots" supposedly know more than we do. But if you’re not educated, you’ll fall prey to these claims, which are not only disrespectful but also reveal how hype is prioritized over critical understanding.

They could have consulted experts to verify their claims before pushing misleading narratives into the public.

Can they please stop the hype? ChatGPT is not a being. It's an inanimate tool: an advanced SQL-like query engine that uses natural language (no disrespect intended to the researchers who built LLMs). Throwing around the term "AI," when AI is a field, only adds to the confusion.

But I’ve never seen so many lies in tech as I have in the past three to five years.

Sam Oldman's avatar

I'm no mathematician, but I find this so funny.

Fake news?

Clickbait?

What else is new.

But on math?

No! You don’t mess with science.

Shame on you.

The comment here about AGI being AGS (artificial general search) was spot on.

This is where we’re at. Flooded with focus-draining apps wrapped in promises of scientific salvation. Instead of curing cancer, we get an update on Sora. Instead of progress, we get a shitshow from the upper balcony.

In all seriousness, maybe this helps put things in perspective? I don't know.

Not all growth is growth.

Darren D'Addario's avatar

It's beside the point, but it's fascinating how Paul Erdos's singular life never really ends.

Jim Brander's avatar

Why the ire?

An LLM searches for patterns between words. It doesn't turn those words into objects, give them attributes, and solve something.

It is hard to sell snake oil when everyone knows what it is - give him a break. At least he is identifying the gullible among us - a disturbingly large percentage.

Jim Amos's avatar

Why the benefit of the doubt? Has OpenAI earned it? Or do they have a track record of making ridiculous claims that are tantamount to fraud?

Jim Brander's avatar

He doesn't get the benefit of the doubt; there is no doubt he is a fraud. But it is also a stinging indictment of computer science professors that so many of their graduates are totally lacking in critical thinking skills. These are not people you want working on life-critical applications.

Art Keller's avatar

Why anyone believes anything claimed by the 95% of people in the AI industry with a vested stake in maintaining momentum (and therefore hugely overblown valuations) is beyond me. How many times do we have to see initial "this changes everything" claims fall to pieces? How many times do they have to be caught hacking metrics to justify stupendous claims, while LLMs immediately fail when asked to solve anything outside the training data distribution? When OpenAI brags about the number of tokens it processed but NOT the number of paying subscribers, it should be a giant red flag that every unpaid query is actually lighting money on fire. How many of the 800 million weekly users would still be using these models if they were not a free commodity for 95% of users? They've yet to come up with a coherent explanation of their path to profitability because they don't have one.

Larry Jewett's avatar

The economics will eventually kill it because there is no such thing as a free (AI) launch.