29 Comments
Jun 1 · edited Jun 1 · Liked by Gary Marcus

I will never understand how (outside of outstanding hubris) anyone ever thought the crap on the internet would produce AGI. It's a never-ending game of whack-a-mole, where it seems like the leaders are fooling themselves and (trying) to fool everyone else. And damaging the environment.

The Information’s most recent podcast blames the internet as entertainment, which is an interesting addition to the conversation.


I think you could add "deluded by the insatiable allure of limitless profits" to that.


Crap from the internet will not produce AGI.

But custom-designed datasets that show in painstaking detail how problems are solved step by step, and when and how it is appropriate to use tools and check one's work, will go a long way toward fixing the problems of LLMs. What LLMs lack is depth.


Even if they'd used the finest and highest literature and philosophy in the world it could not produce AGI.


Folks have to realize that A.I. is human-sourced, and anything human-sourced is not infallible! Failure is human... and so is making it right. However, A.I. is selling a utopian vision of perfection. All of these A.I.s are based on human output! So this illusion that has become a delusion needs to be told to the masses, but that won't happen because it is bathed in "innovation," a word that has now been bastardized.


> He of all people should know better.

A ketamine addiction will do that to the brain.


What amazes me is that none of these guys - probably because they’ve never taken a humanities class - has any response addressing the basic problem of inference faced by deep learning that Hume outlined for us 300 years ago. These algorithms try to learn from data by brute statistical force, finding patterns without starting with strong postulates. They have not somehow escaped the basic problem of inference that way, especially as they’ve exhausted all the available training data.


I get the same feeling. I'm anxious to explore how a model could be trained on less data but with more behavioral and environmental feedback. From what I see, RLHF uses canned feedback rather than anything closer to a group of human parents chiding the model. I suppose it comes partly down to cost, but also to the fact that there's no guarantee that humans brought in to correct the model won't sabotage it in various ways.
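To make the distinction I mean a bit more concrete, here's a toy sketch, purely illustrative and not any real RLHF pipeline; every name and number in it is made up. It just contrasts scoring from a frozen, "canned" reward function with scoring from a simulated live human rater:

```python
# Toy illustration of "canned" feedback (a frozen reward model trained once
# on a static preference dataset) vs. live human feedback gathered during
# training. All names, rewards, and update rules here are hypothetical.

import random

def canned_reward(response: str) -> float:
    """Stand-in for a frozen reward model: the same rule on every query."""
    return 1.0 if "step-by-step" in response else 0.0

def live_human_reward(response: str) -> float:
    """Stand-in for a human rater giving fresh feedback each round.
    Simulated here; in reality it would be slow, costly, and possibly
    inconsistent or adversarial, as noted above."""
    noise = random.uniform(-0.1, 0.1)  # raters disagree with each other
    return (1.0 if "step-by-step" in response else 0.0) + noise

def train(feedback, rounds: int = 20) -> float:
    """Nudge a single toy 'policy' parameter toward well-scored answers."""
    preference_for_steps = 0.5  # chance the toy policy answers step-by-step
    for _ in range(rounds):
        answer = "step-by-step" if random.random() < preference_for_steps else "one-shot"
        reward = feedback(answer)
        # crude update: reinforce whichever behavior just got rewarded
        delta = 0.05 * reward
        preference_for_steps += delta if answer == "step-by-step" else -delta
        preference_for_steps = min(max(preference_for_steps, 0.0), 1.0)
    return preference_for_steps

print("with canned reward model: ", train(canned_reward))
print("with simulated live human:", train(live_human_reward))
```

The point of the sketch is only that the feedback source is fixed in the first case and variable (and human-shaped, for better or worse) in the second.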


Musk might be trying to bring down OpenAI by stealing their hype. He already stole $6 billion worth of investment money that otherwise could have gone to OpenAI. I don't know what happened between him and Altman but it's very likely personal now :)


“should” is irrelevant. Musk doesn’t reliably know anything because he forms his judgements purely on vibes, social positioning, and whether or not he thinks something (e.g. slowing down AI development) is “woke.”


I don't think Musk flipped on his views so much as his strategy. He still believes, as he stated recently, that AGI has a 10-20% chance of existential risk. But since he apparently believes now that the pause has no chance (and indeed it got promptly forgotten even as a movement almost as soon as the ink dried), he is now placing his efforts into developing truth-seeking AGI as the antidote to woke/dangerous AGI. I'm not endorsing his methods, just trying to shed light on his evolution.


"[Musk] of all people should know."

Eh. A person who thinks it's viable to build a 300-mile vacuum tube across California, whose cars cannot be exited when they break down, whose space rockets explode after launch, who thinks a rocket to Mars will have room for concert halls, and who built a truck that falls apart after delivery probably does not have great judgment in technological matters.


Who watches the watchmen?

The insured damage to the city caused by that robot must amount to billions.

Jun 1 · edited Jun 1

You are setting this up as Gary vs the dishonest hype-promoters and unscrupulous businessman. So Gary ends up looking good.

This is loud and not really serious.

AI is a multi-decade project. The advances in the last 5 years have been good. Nothing will "get solved" in a year or two.

But the recent emphasis on large systems that have a lot of world knowledge is the right approach. There's a lot more work to do.


Andy. We’ve had language models in development for over four decades. How much more time is needed? 😊


We've had computers for 70 years, and symbolic logic for 200. So?

The world is complex, the mind has many components, the progress is incremental, and in many directions at once.


It’s not like Musk and Altman ever make things about themselves, eh 🤫


To add, neural networks are not hitting a wall. Talking that way will just expose you to ridicule. Neural nets are good at many things, but high-level reasoning is not one of them. There are many aspects to intelligence, and neural nets are the single most powerful tool we have. We need a lot more.

author

QED. “high-level reasoning is not one of them ... we need a lot more [tools]” sounds like a restatement of my 2018 article and my characterization of the wall


I agree with you a lot when there are specific nuanced arguments.

Jun 3 · edited Jun 3

The meme is loud and not really serious.

It isn't like Gary is responding to a strong, well-reasoned counterargument. He doesn't have a whole lot to work with.

Having said that, I agree with you: science takes time (a sentiment that I'm sure Gary agrees with). But there is an awful lot of hype to the contrary, with Hinton making claims like a roughly even-odds probability that we will be dealing with an AI attempt to take control within the next 5-20 years.

Jun 3 · edited Jun 3

There is solid agreement that the current systems are far from AGI and far from being a threat. Deep learning isn't hitting any wall. Statistical prediction can do great things if paired up with methods that do modeling and verification.


I think you may have misunderstood the main points of my comment (or I have misunderstood your points and need you to clarify them). I wasn't arguing that deep learning is hitting a wall. I know that Gary is, but that wasn't what I was saying.

In his blog post, Gary was responding to a meme that exemplifies the glib dismissal and hype often dominating these debates. My first point was that this meme, and Musk's amplification of it, was 'loud and not really serious,' highlighting the double standard in your original comment.

The prevalence of exaggerated claims and overheated rhetoric in the AI community is a problem. My second point was that even leading deep learning scientists, like Geoffrey Hinton, have made hyperbolic claims about the potential threats of AI. While I agree that current systems do not pose some kind of immediate existential threat, my concern is that such claims from prominent figures in the field contribute to a climate of hype and unrealistic expectations. Science takes time.

This hype makes it difficult to have grounded, evidence-based discussions about the real-world impacts and challenges of AI with the general public. The general public needs to be better informed and educated about AI, and these claims and memes do not help.


Elon does know better. But his personal animus toward people who wrong him is severely overdeveloped. He will pound away on big bad LLM AI until he kills whatever OpenAI and Altman are doing. Then he will decide to be reasonable. Maybe.


There's a bit of a fundamental question here. Nothing is 100% reliable. It's become common in software engineering to lean into that. The goal is not to do as well as possible, just to do well enough that people will buy the product. Customers don't expect reliability; after long years of experience they just reboot the computer when it crashes and get on with whatever they were trying to do.

From where I sit, the proponents of LLMs figure that they can get their products reliable enough that customers will accept them, and that's all they need to do. Their bet, on the one hand, is that this is consistent with, say, one egregious mistake per 100 or maybe 1,000 queries - maybe more. And on the other, that they can band-aid their systems well enough that people who aren't trying to demonstrate breakage can get that level of reliability routinely, once they've trained themselves to phrase their prompts appropriately.

At that point, they are done. Their investment pays off; maybe someone gives selected lead researchers the Nobel prize.

Meanwhile, their product gets used in areas where, morally speaking, 99% or even 99.9% reliability is not enough. That's unfortunate, but Not Their Problem (TM). And in any case, future research (read "even more band-aids") can be applied to hopefully raise that reliability still further.

I'm not a fan of this approach, but I can see how it would seem utterly reasonable to many in software engineering, and your criticisms ridiculous and petty. Why "make the perfect the enemy of the good", when you can ship new products faster if you use a chatbot to produce them? Surely QA will catch any mistakes that are really important (sic). And as we all know, humans also make errors. Why single out chatbot errors as somehow worse than the errors that would have been made by the humans they replace?
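To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch. The reliability figures and query volumes are illustrative assumptions only, and it assumes errors are independent across queries:

```python
# For a per-query reliability p, the chance of at least one egregious error
# in N independent queries is 1 - p**N. Figures below are illustrative only.

for per_query_reliability in (0.99, 0.999, 0.9999):
    for queries in (100, 1_000, 100_000):
        p_at_least_one_error = 1 - per_query_reliability ** queries
        print(f"reliability {per_query_reliability}: "
              f"P(>=1 egregious error in {queries} queries) = "
              f"{p_at_least_one_error:.3f}")
```

At 99% per-query reliability, at least one egregious error in 1,000 queries is essentially guaranteed; even at 99.99%, heavy use makes errors routine.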


Mockery from a bunch of delusional tech bros is something to be proud of. Congratulations.


Having read your various posts and your book, all I see you do is complain about the state of AI without offering any actual suggestions (other than to stop working on NNs). As far as I can tell, LLMs do a great job of replicating human behavior complete with hallucinations (see Fox News) and represent a huge step forward. Maybe AGI is not some Spock-like entity but is more like Dr. McCoy.


And we need this, why?


Last I heard, Musk still said he favors a Pause but has changed teams because the world is refusing to Pause.

#PauseAI here, of course, and working to at least get oversight.
