94 Comments
Mikael Hanna's avatar

My own micro-trend is in sync. I started out using AI to assist me with programming tasks, but my usage has fallen dramatically due to poor performance.

Sean's avatar

I had a number of open questions about some algorithm topics and math, which ChatGPT-5 successfully closed for me.

After that I don't have too much to ask it.

If I am tired and write sloppy code that has a bug in it, it can be quicker to hand it to ChatGPT-5 than to spend time looking for the error myself.

And I have maybe one or two questions a week for it.

So: initial high usage, down to much more sparse usage.

Robert Keith's avatar

Meaning, like many other things, it's a tool, not a panacea.

Ian Keane's avatar

I stopped using Gen AI to help me with work after I noticed the cognitive offloading was affecting my ability to do tasks, and also my overall happiness...

So far, I've noticed I'm more productive and happier than I was when using it. I was also using ChatGPT-4o, so it was sycophantic and unreliable. It concerns me that we are just unleashing this powerful and extremely flawed technology into the world, not understanding exactly how it works (unprecedented), saying it will make you "X" more productive, and not asking ourselves about the costs and what we lose in the process. We need to learn from social media and what we all lost there. Gen AI has its place, but I hope the backlash against Gen AI, its "masters of the universe", and how these companies are deploying it only grows, for our society's and environment's sake.

rakkuroba's avatar

I find it absolutely hilarious that the scaling “laws” that were supposedly inviolable are no longer being talked about at all. I haven’t heard Sam talk about exponential growth in like a year and a half!

Paul Topping's avatar

It's gone from being a "law" to being an expectation. "Of course 6.0 will be better than 5.0. It always is."

Ovid Jacob's avatar

Maybe they aren't 'laws' after all.... :)

Jeff Irvin's avatar

The present bubble forming around AI (LLMs) is likely a by-product of late-stage capitalism combined with several hundred years of Enlightenment optimism about progress and economic growth. In short, it's a desperate attempt to revive the moribund notions embedded in neoliberalism and the Enlightenment.

The neoliberal belief that everything can be commoditized, making society more efficient and rational--even if this belief rests on an alchemical notion of social "emergence"--is coming to an end. It will survive only a few more years, and only if governments allow it to feed on the soon-to-be carcass of the welfare state.

As for the idea of perpetual growth and the bettering of mankind, so intimately associated with The Enlightenment, it seems clear we have not progressed morally as much as we have materially.

However, any lack of future material progress, or better put, material distribution, may be the product of something else: the mistaken belief that 1948 to 1968 was normal growth for the United States rather than an aberration. This belle époque of material progress, particularly for the United States, should be seen as happenstance, not part of the natural order.

No recent book better illustrates this than Robert J. Gordon's "The Rise and Fall of American Growth" (2016), which argues that economic growth in the U.S. was generated by a variety of tailwinds that simply do not exist today. In fact, we face four headwinds: social inequality, lack of educational access, a graying population, and government debt.

A.I. is not just an attempt to lever up the economy; it's the last gasp in an attempt to rescue neoliberalism and the utopian notions of The Enlightenment.

I think we need to ask ourselves some simple questions. Do we really need the digital intimacy of Sora? Do we really need to wage a war against the spirit of a technological antichrist? Poor Greta Thunberg looks so innocent to me. Or, do we need to sit down and have a discussion of how we are going to respond to some of the greatest moral questions of our age: climate change, ecological collapse, hunger, disease, war, famine, etc.?

The biggest con now being perpetrated on the world may no longer be religion, it might be the salvation being offered to us by billionaire techbros through A.I.

William Bowles's avatar

But AI has always been a scam; it was obvious from day one. There's no such thing as artificial intelligence. There's machine learning backed by massive computing power, where speed is the operative word, so that it LOOKS like thinking, but it's all smoke and mirrors.

Simple John's avatar

Intelligence requires goals. There is no machine with goals. Sorry dreamers.

Words other than pointing words will never be precise and thus will never reference objects of classical logic.

Plato told us the truth - the symbolic mind deals in shadows. Now we cast the shadows with compute.

Sean's avatar

One neuron in the current layer connects through n weights to the next layer and casts a shadow picture into that layer with intensity x (the output of the neuron) through those weights. Plato's shade.
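A minimal NumPy sketch of that picture, with illustrative numbers (the sizes and values here are arbitrary, purely to make the idea concrete):

```python
import numpy as np

# One neuron's output x is projected into the next layer of n units
# through its outgoing weight vector -- the "shadow" described above.
rng = np.random.default_rng(0)

x = 0.8                  # activation (intensity) of a single neuron
w = rng.normal(size=5)   # its outgoing weights to 5 next-layer units

shadow = x * w           # the contribution this one neuron casts
print(shadow)

# The next layer's pre-activations are the sum of all such shadows:
# z_j = sum over i of x_i * w_ij, for every neuron i in the current layer.
```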

Martin Machacek's avatar

… and the two goals that all others are derived from are survival and reproduction (they can also be reduced to just one: survival of the species). All living organisms share those goals. That is what drives problem solving and, in the end, all autonomous creative activity. There is no AI capable of that, and I don't think we should create one. I also believe that it is not possible.

Simple John's avatar

I'm glad I inspired(?) a much better phrasing. Intelligence is manifest, and possibly recognized, when a problem is solved. Correct inferences and deductions that are not driven by anything just exist in a universe that is not interested in them. Goals and interest strike me as the same thing except for intensity. Both are on the road to species survival.

Amy A's avatar

Yann Lecun apparently thinks he can give AI emotions and then it can have goals 😵‍💫

Larry Jewett's avatar

LeCun will probably give AI hostility toward Gary Marcus or anyone else who criticizes him.

Oleg  Alexandrov's avatar

Intelligence requires no goals, no more than a physics simulator requires goals. What it requires is accuracy and good models. So I agree that "words other than pointing words will never be precise", but that has nothing to do with goals.

Martin Machacek's avatar

Intelligence is a poorly defined concept, but in my view of the world it is very different from physics simulator. I believe that human-like intelligence requires embodiment and self-reproduction which creates survival as the goal of existence.

Oleg  Alexandrov's avatar

Intelligence surely requires embodiment. In order to understand the world, one has to interact with it very closely, and learn lessons from that.

Self-reproduction and survival are traits of biological things. For living creatures, intelligence is a means to an end, at least for certain species, as it helps with survival. Intelligence is not always necessary for survival, nor does intelligence imply a desire to survive.

Intelligence is the ability to understand the world well enough to function in it. Likely anything in the world can be modeled in silicon, though it likely won't be as easy as modeling physics. In that case, a machine can do what a person can.

Martin Machacek's avatar

My point is that the desire to survive eventually leads to human-like intelligence.

Oleg  Alexandrov's avatar

I don't think we necessarily need perfectly human intelligence. We need a machine that can understand the world well enough to do work.

Then, the claim that one needs a desire to survive in order to have human-like intelligence is highly tenuous. It is likely simply a correlation based on just one example.

Intelligence has nothing to do with desire to survive. Different aspects.

Scott's avatar

Set up the initial conditions and run hundreds of thousands of iterations with minor tweaks to the initial conditions so that you can build a model of how the process works. But don’t expect it to think for you.

I do wish it had been around in the days of running dozens of iterations per week of your model in the computer lab, with a piece of dot-matrix printer paper taped over the monitor, reading: "Please leave running, I'm estimating models for my dissertation!"

William Bowles's avatar

I might add that if anyone here needs to get a deeper understanding of how this madness all came to pass, they need to read David F. Noble's "Forces of Production: A Social History of Industrial Automation".

Kenneth Burchfiel's avatar

If investors had just lit all these US dollars on fire, at least they would have provided some heat . . . cotton is a renewable resource, after all.

Well, come to think of it, the data centers are producing plenty of heat as well. Either way, not a good allocation of capital!

George Shay's avatar

The old aphorism “No tree grows to the sky” comes to mind.

Matthew Sheffield's avatar

LLMs are normal technology. They definitely have a lot of use cases, but because they are shuffling ungrounded symbols, it means they are not as revolutionary as the companies are suggesting.

Oleg  Alexandrov's avatar

Grounding happens when an AI agent can simulate the things it is dealing with and inspect the outcomes. An LLM can't do this by itself, but it can be hooked up to tools that do.

James Jameson's avatar

michaeljacksoneatingpopcorn.gif

Jack's avatar
Oct 22 (edited)

If you compare the population growth rates of Utah and the US, the data strongly suggest that Utah will be bigger than the US before long.
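A toy sketch of the naive extrapolation at work, using assumed illustrative figures (roughly 3.4M Utahns growing ~1.6%/yr versus ~335M Americans growing ~0.5%/yr; the exact rates don't matter, only that compounding forever guarantees a crossover):

```python
# Naive trend extrapolation: hold both growth rates fixed forever
# and see when the smaller, faster-growing population "overtakes".
utah, us = 3.4e6, 335e6      # assumed starting populations
g_utah, g_us = 0.016, 0.005  # assumed annual growth rates

years = 0
while utah <= us:
    utah *= 1 + g_utah
    us *= 1 + g_us
    years += 1

print(f"naive extrapolation: Utah 'overtakes' the US in ~{years} years")
```

The absurd conclusion is the point: any exponential fit to a finite window will eventually predict anything, which is the same trick being played with scaling curves.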

Notorious P.A.T.'s avatar

Whoa, we had better learn to speak Utahese!

Scott's avatar

Just learn the prehistoric language of the Utahraptor. It’s tens of millions of years old!!

Riaan Visser's avatar

This is expected.... Novelty wears off.

But here's the metric the data doesn't show.

Your AI prompting ability determines a lot of your success with AI....

I use various frameworks for physics and maths, and it's an immensely powerful tool if your prompting is powerful. But often the tool doesn't naturally live up to the hype....

Here's the truth about AI and the future....

AI breaks down walls.... But breaking down walls often exposes those who sheltered behind their comfort.

Imagine the illiterate storyteller: he might hold the most magnificent ideas, but could never reach further than his home village and his local language. Now his stories can reach the ends of the earth.

Imagine a girl who hums tunes no one has ever heard but doesn't play instruments; her voice can now soothe pain on the other side of the globe.

Imagine a blind guy who imagines a different world, he can now, with some careful prompting, reveal the things he could never show anyone.

Will AI disrupt the status quo? Sure will. But it's not the death of creatives, it's simply the dawning of a new era.

Let's be real.... If I ask AI to write me a story, it's going to give me a generic rehashing of its training data.... The same goes if I ask for an original song, or picture....

But what if McCarthy had been using AI to help him write his novels? That creativity, that brilliance cannot be reproduced by a data-trained AI.

If you're a creative crying about AI taking your future, maybe make the conscious decision to take your future back.

A writer could be opposed to the Gutenberg press, and fall away into obscurity, or he can use the technology of the day.

AI isn't the death of creatives; it's the death of those who try to fight the current instead of paddling with the flow.

Doug Smith's avatar

Sam's willingness to turn ChatGPT into a porn shop (following Grok there) shows the beginning of the end. https://endsexualexploitation.org/articles/ncoses-blasts-openai-for-plans-to-introduce-erotica-to-chatgpt/

Justin's avatar

My workplace is blocking ChatGPT now because of this.

Bombaclaat's avatar

The internet's top use case in the early years was not only sharing academic information but, to a huge extent, sharing pictures of that nature (videos, as I remember, were too low-res or too large in file size). A friend of mine at the time, with more advanced early internet access and skills, had a bot scouring the Usenet newsgroups every night for fresh content he could enjoy in the morning; download times mattered. The groups had funny names, alt.sex or something. Now there are all these AI girlfriend apps, so maybe Sam's on to something; that's indeed a big, still mostly untapped market.

name12345's avatar

I guess there could be an argument that -- to use a 90s analogy -- to get the fiber optics laid more quickly and build out all the other internet technology, the irrational exuberance of Pets.com was needed. In other words, the Gartner Hype Cycle is a feature, not a bug. And the folks that lost their investments should have known they were at a casino.

This is something I've struggled with for years because I'd rather do things "well" and "correctly" (and if I were a technologist in AI, "safely"), but the brute fact seems to be that most humans are more interested in faster, "good enough" results than truthfulness/correctness/safety.

This helps explain why people like Altman rise to the top and why companies with good marketing beat companies with better technology.

GandalfTheGray's avatar

This is something of a platitude, but just want to offer sincere thanks for being a voice of sanity and reason during this mania. It's ironic because lots of these tech titans make claims to being "hyper-rational" in their analysis of stuff, yet they willfully close their eyes to well-substantiated facts when the facts don't fit the narrative of ai magically unlocking infinite GDP growth...

Quality Control's avatar

Instead of too big to fail, AI seems too fail to big. None of these liars and thieves deserve one cent of government bailout when this whole contraption comes crashing down. Or maybe they will make enough money renting porn bots to losers to repay those multi-billion dollar loans.

RK's avatar

This anecdotal piece by a journalist who uses ChatGPT Pro brings up another key issue.

The app triages the computing resources it will dedicate to any given request, and basically refuses if it decides it'll take too much effort.

And the threshold is not high. This guy asked for a simple alphabetical list of stuff that could be found in a few internet searches, and it said no, offering a sample list of, say, 100 items instead …

And this is a paying user.

https://open.substack.com/pub/jessesingal/p/the-complex-calculations-underpinning?r=miuc&utm_medium=ios

Tim Koors's avatar

Scaling laws, plus a helping hand from the increase in AI slop in the training data. Using slop to generate more slop does not result in improved slop. It is a vicious feedback loop: each generation degrades the next.

See for more info:

https://www.arxiv.org/abs/2510.13928

It proves the old adage that 'you are what you eat' is also true for LLMs.

Could it also explain Grok's MechaHitler episode, when Musk tried to eliminate the liberal bias of his LLM by feeding it conservative content?
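A toy analogue of that loop (my own sketch, not the linked paper's experiment): fit a Gaussian to some data, sample "new training data" from the fit while favoring high-probability outputs (like a model decoding at low temperature), refit, and repeat. The distribution collapses within a few generations:

```python
import random
import statistics

random.seed(0)
data = [random.gauss(0, 1) for _ in range(1000)]
start = statistics.stdev(data)

for generation in range(10):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # Next generation: sample from the fitted model, but keep only the
    # central (most probable) half -- a crude stand-in for a generator
    # that prefers high-likelihood outputs.
    samples = sorted(random.gauss(mu, sigma) for _ in range(2000))
    data = samples[500:1500]

end = statistics.stdev(data)
print(f"stdev over 10 self-training generations: {start:.2f} -> {end:.4f}")
```

Each refit-and-truncate cycle shrinks the spread multiplicatively, so the diversity of the "training data" vanishes: you are, indeed, what you eat.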

Larry Jewett's avatar

Yes.

Not much talk these days about Mad LLaMa Disease.

But people using the chatbots are not the only ones becoming psychotic. So are the chatbots, which train on psychotic stuff (of which there is no shortage online).