47 Comments
Patty L's avatar

There are billions of dollars at stake and AI skeptics are the folks pointing out that the emperors have no clothes.

Patty L's avatar

I am personally grateful that you and folks like Emily Bender, Ed Zitron, and others are sticking to ethical AI principles and the higher moral ground. It comes at a high personal and professional cost. Vindication has been rolling out slowly, and will continue to come as the burn through rate continues and investors realize diminishing if any returns. Sadly this may impact the broader field of AI.

Costa's avatar

When I heard about the $$$, I started to fear for his life 😬

Greg Pickle's avatar

Following up on your Carol Kane quote, which I got a kick out of: my reaction upon reading the article last night was either "Gary Marcus Is the Root of All Evil" or "A Lonely Voice Crying Out in the Wilderness." I had two principal reactions to Casey's piece.

First, I did not care for the attempt to divide folks into two extreme camps. One side is Gary Marcus and fools. The other is smart people who see how rapidly AI/LLMs (it felt like he kept equating the two) are improving and how many incredible things they already do (that list seemed pretty vague). I got the sense he's claiming the latest models are far superior to the old ones. I suspect there's a lot more disagreement on that than he was prepared to admit.

Second, I am somewhat burnt out on the endless stream of "AI... AI... AI" and aggravated by how casually LLMs are equated with "AI," which then becomes AGI by implication. I know of people using LLMs for code development; after that, concrete examples seem to get pretty nebulous. Or maybe I'm missing all the great uses in the stream of "Next Big Thing" stories. Having spent many years working for Fortune 30 corporations, I have to chuckle at arguments that the execs wouldn't be spending enormous sums if they didn't have good reason to.

Gerard's avatar

Today’s AI falls short and can often feel entirely artificial in a very real sense. I consider myself not just an AI skeptic but an AI realist—someone who acknowledges both its successes and its shortcomings.

Much of the progress in AI today is built on an illusion, a promise that is ultimately unattainable. I take a more extreme stance, asserting that AI fundamentally lacks understanding and intelligence—and this is simply the truth.

Human perception is a minefield, and the complexity of critical evaluation and the knowledge required often elude the majority. The Eliza effect and widespread anthropomorphism serve as sobering examples of this. And that’s without even considering the technical intricacies, the influence of social media, or the pervasive effects of groupthink.

Unfortunately, we are ill-equipped to counter hype with reason until it inevitably collapses under its own weight. When it does, enjoy the downfall.

In this article, I explore the signs of an impending AI winter.

https://ai-cosmos.hashnode.dev/is-another-ai-winter-near-understanding-the-warning-signs

Shon Pan's avatar

Highly doubtful. o1 was a step-change improvement on many metrics. But yes, a slowdown would be welcome, especially for regulation.

Gerard's avatar

That’s the core issue: the appearance of progress has deceived many. It’s a troubling reflection of the lack of intellectual honesty from OpenAI. Improvements on arbitrary benchmarks don’t equate to better LLM performance—a fallacy that has worked to their advantage but ultimately highlights a failure of critical thinking.

https://ai-cosmos.hashnode.dev/the-illusion-of-reasoning

User's avatar

Comment removed (Dec 6, 2024)
Gerard's avatar

I don’t want to take up too much attention, as this is not the place. If you are interested in a discussion, we can have it on my blog, where I cover exactly this topic in depth and can answer any further questions or clarifications around benchmarks, CoT, or similar points.

Shon Pan's avatar

I think there is a risk that you end up justifying a lack of regulation on AI, but overall I think you've tried to be clear that the technology need not be GAWD-like to cause harm. Except for the Children of the Atom, we don't pray to an Atomic God, but that hasn't changed the fact that nuclear weapons are pretty damn able to harm us.

User's avatar

Comment removed (Dec 6, 2024, edited)
Shon Pan's avatar

Nuclear war isn't going to happen, and even if it does, it's a recoverable catastrophe, because nukes don't have baby nukes, while AI can self-replicate, and in lab experiments it already has.

https://bgr.com/tech/chatgpt-o1-tried-to-save-itself-when-the-ai-thought-it-was-in-danger-and-lied-to-humans-about-it/

User's avatar

Comment removed (Dec 6, 2024)
Forrest's avatar

It's tough to take you seriously when a few days ago you were commenting to me in some kind of half-assed Jamaican accent.

Shon Pan's avatar

I wonder if reading comprehension is such a challenge that you couldn't click on a link? It comes with pictures!

BTW, I think you also missed the central point. AI is en route to being superhuman. It's not "godlike" yet, but it has already caused substantial harm, and at that level it will likely cause substantial harm, even extinction-level harm.

Ben P's avatar

I occasionally listen to Hard Fork, and sometimes I enjoy it, but the level of credulity is really off-putting, particularly from Newton. So this piece doesn't surprise me. A few things to note:

1. Most of the items on that list of "AI achievements" would have been called "machine learning achievements" three years ago. The technology being used to assist drug research and reconstruct ancient texts is more similar to boring old statistical regression than it is to GenAI, which is the actual thing being hyped to death right now. Sam Altman is not trying to sell us logistic regression or random forest classifiers.

2. Newton complains that critics of LLMs only focus on the things they can't do. Yeah, we're doing this in response to the incessant pronouncements of "OMG OMG OMG OMG OMG OMG AGI IS AROUND THE CORNER IT SOLVED MY RIDDLE HOW IS THAT POSSIBLE????????" that are put into the world by the likes of Sam Altman (well, in this case Geoff Hinton) and amplified by the likes of Casey Newton. We critics will happily stop mocking dumb LLM mistakes when hypesters stop pointing to LLM scores on IQ tests as evidence the world is about to change.

3. Newton cites the GenAI-driven increase in cyber-attacks on Amazon as a "blind spot" of the "AI is fake and it sucks" crowd. No, you dolt, this is the very thing we're constantly complaining about! Those "cyber-attacks" take the form of fake login screens and phishing emails created by human fraudsters using LLMs. No one is denying that GenAI is a useful tool for fraudsters. Our complaint is that this real-world harm, being caused by actually existing technology today, is getting short shrift because the hypesters all demand we gaze into the future and let our imaginations run wild.

4. Not surprised at all to see Newton quoting one of these AI deception stories from an OpenAI "system card". Every single one of these stories goes like this:

- Researcher: "Hey ChatGPT, pretend to be an evil robot"

- ChatGPT: "Raahr, look at me, I'm an evil robot raahr"

- Researcher: "OMG AN EVIL ROBOT!!!!"

5. As always, we are asked to just imagine a future in which this technology delivers amazing benefits to humanity and/or enslaves us all. The evidence? Statistical models designed to mimic what they're fed turned out to be good at mimicking what they're fed, therefore our lives are all about to change dramatically.

Christ I can't wait for this shit to die.

Jim Amos's avatar

Nice to see you crediting Ed. Perhaps he'll return the gesture one of these days 😄

The duality of "genAI is mostly bullshit" and "genAI is dangerous" is something I'm constantly criticized for too. Critical thinking isn't what it used to be.

David Crouch's avatar

I thought you were more balanced in your article than I would have been if I had been the one targeted by Newton.

Patrick Logan's avatar

Unfortunately people who are "influencers" without any particular expertise tend to have influence beyond their capabilities.

Dom Aversano's avatar

Quite apart from his criticism of you, I was stunned by this passage from his essay.

"...CEO Sam Altman and three of his researchers explained that the latest version of o1 is faster, more powerful, and more accurate than its predecessor. A handful of accompanying bar charts showed how o1 beats previous versions on a series of benchmarks."

Do journalists now believe anything a corporation claims if it comes in the form of a bar chart?

A Thornton's avatar

They are called "journalists" because "credulous stenographers" was already taken.

Dom Aversano's avatar

Well, there are still many very good journalists, so it’s important not to tar them with the same brush.

Amy A's avatar

Being a (gen)AI skeptic is fun in the way it's fun to tell your friend that the guy she's dating, who says he doesn't want a serious relationship, probably doesn't want a serious relationship. She blames you instead of him, and being right when your friend is miserable is no fun.

I can’t take Newton that seriously, since he admits to using LLMs to do research daily, and he insisted that autonomous cars are already safer than human drivers because there hadn’t been a fatal accident in San Francisco, completely ignoring that the number of autonomous cars at the time was far too small to support that assertion.

Hard Fork is entertaining, but the two hosts spend most of it raving about a new tool before admitting that it doesn’t work (yet!) but won’t it be fantastic when it does.

Dennis D.'s avatar

As if being a skeptic were taking the fun and easy way out, when the reality is that there are people out there with billions of dollars who would happily give some of it to Marcus if he would just get on the hype train. THAT's the easy way out.

donny rumsfeld's avatar

Well said, but just a note: use 'crash' rather than the industry-led 'accident'. These are deaths and road violence built into a system that we tolerate. https://newrepublic.com/article/166004/invention-accidents-car-crash-deaths-jessie-singer-book-review

User's avatar

Comment removed (Dec 6, 2024)
Amy A's avatar

I’ve done the math, and I’m not wrong, but okay 👌

Khashayar's avatar

Newton has always been like this. Even when he was writing on "Facebook's threat to democracy," his takes were so lukewarm and uninteresting that Mark Zuckerberg himself did an interview with him. That kind of seals the deal for me: to what extent is he always positioning himself to be the friendly journalist? To me, he's not even communicating his opinions; he's just saying what he thinks his sources and potential interviewees want to hear from a journalist who deserves the opportunity to kiss the ring on occasion. It's "manufacturing consent" as described by Chomsky et al., just executed with the false panache of a useful marionette!

John Morrell's avatar

Newton has always been overly optimistic about AI; I’ve been listening for a couple of years. Kevin tends to tamp down the enthusiasm. You made an easy, salient target thanks to the recent WSJ article. This too shall pass.

In my own view, ML has been very useful, AI not so much. As you correctly point out, there’s a real lack of nuance in how the majority of people use these terms, journalists and otherwise.

Catherine Flick's avatar

A good response to a very poor article. Just to note, it’s Catherine Flick, not Fink :)

Michael Spencer's avatar

It's fairly obvious Casey Newton is just doing AI PR for Big Tech now. Have you seen the direction he has taken his blog? It was supposed to be about platforms; now it's just AI. The problem is that he's now fully aligned with the side of journalism that's in decline.

Paul Jurczak's avatar

"we are very far from any guarantees that any sort of advanced AI (or current AI) that we can figure out how to build will be helpful, harmless, and honest."

We've already crossed this threshold. There are cases of current AI being neither helpful nor harmless: for example, target generation for the IDF to bomb Gaza, or automatic claim denial at health insurance companies. These applications formally require human approval, but the imposed throughput makes that approval completely superficial. Let's not beat around the bush: "AI," i.e., deep learning, kills people. Yes, the same can be said about the proverbial hammer, but there is a huge qualitative difference: everyone understands hammers, while no one fully understands DNNs. I wonder if Nuremberg 2 judges will accept the "I was only following the DNN's recommendations" excuse.

Martin Rodgers's avatar

Great impassioned piece, Gary. I can tell you angry-typed it due to the typos, which are not you. Just comment, not criticism :-)

Dani M's avatar

Are you angry? If not, why this: “…can tell you angry-typed it due to the typos, which are not you”? Just a comment, not criticism, and my quotation may not be correctly formatted, I admit; I am in a hurry, which accounts for most of my grammatical carelessness.