29 Comments
Mar 12 · edited Mar 12

The average AI CEO has the greatest motivation to keep the hype train going for as long as possible. “So powerful it could destroy the world” is a bit more of a head turner than “pretty useful for summarising meeting notes”.

Not only is ROI overstated, but liability is understated. How many companies are using OpenAI without even knowing their risk? They're paying for a third-party service that uses OpenAI on the backend and could be potentially liable for all kinds of things: copyright issues, publishing false or defamatory information. Guaranteed, CEO dude is not asking questions about API keys and training data.

Is the big dose of hopium the AI orgs huffed, when they realized how much improvement could be had by pouring training data into LLMs, starting to wear off? Now they've got very expensive models to run and operate, a weak business case, and no way to fix hallucinations. Absent another major breakthrough (something as significant as the advancement they got from scaling up LLMs) or a major use case for LLMs, AI investors are going to feel burned. AI companies probably have until late 2024 or 2025 to prove utility in a serious way. If not, investment dries up very significantly. It probably won't go back to the post-Minsky "AI winter," but it may get rather frosty.

Mar 13 · Liked by Gary Marcus

I agree. The LLM approach is exciting, but GPTs are not the way to go. OpenAI was unlucky that the world went crazy for GPT-3.5, because this technology needed another couple of iterations beyond transformers before it was ready for prime time.

Mar 27 · edited Mar 27

We are nowhere near prime time.

LLMs will be at this model weights shell game for years to come. Even with Jensen Huang’s Jobsian demos, the hardware still isn’t able to deliver, say, FP16 quantization in near real time (a rather low bar for all the spin).
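For the curious, the lossiness of half precision is easy to see directly with NumPy. This is a minimal sketch of an FP16 round-trip on synthetic weights, not a claim about any particular vendor's inference stack:

```python
import numpy as np

# Synthetic "model weights": a million standard-normal values in FP32.
rng = np.random.default_rng(0)
weights = rng.standard_normal(1_000_000).astype(np.float32)

# Cast down to half precision, the lossy step the comment alludes to.
w16 = weights.astype(np.float16)

# Maximum absolute error introduced by the FP16 round-trip.
err = np.abs(weights - w16.astype(np.float32)).max()
print(f"max abs error after FP16 round-trip: {err:.2e}")
```

With FP16's roughly three significant decimal digits, the round-trip error on values of this magnitude lands in the low thousandths, small per weight but compounded across billions of parameters and layers.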

Decisions involving life and death increasingly rest precariously on a creaking GenAI tightrope* of overhyped, lossy nondeterminism, above the yawning chasm of disaster, with correct output on one end and wildly nutty hallucinations on the other.

* The most dangerous part of the tightrope is not in the middle but just inches away from the edge of accuracy, because the swift serpents of subtle spuriousness strike silently there.

I am using GitHub Copilot very heavily, and it is a limited but really lovely tool. There's no going back on that.

Google Bard is very capable when it comes to generating plots, explaining a topic, writing a draft proposal, and making illustrations.

There is solid value in such tools if the price is right; they can bring costs down, and capabilities continue to improve.

Are things way too hyped up? For sure.

Comment deleted · Mar 12

Copilot cannot help you with the hard work, indeed. But it can help with the little annoying things, and there are many: creating for loops, debugging lines, making a function interface for a given block of code, calling a given function at a given location, adding comments, bash syntax (so many dollar signs!), Python syntax. The neat thing is that it is better than a person at avoiding bugs where you mix up a variable or an index.

You may say that this is not much time saved, but it makes the job more pleasant and lets you save attention for the more fun parts.
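As a hypothetical illustration of the index mix-up bug class the comment describes, here is the kind of boilerplate where a completion tool shines, because the i/j pattern is easy to get backwards by hand:

```python
def transpose(matrix):
    """Transpose a rectangular list-of-lists matrix."""
    rows, cols = len(matrix), len(matrix[0])
    # The classic slip is writing matrix[i][j] here instead of
    # matrix[j][i]; a completion tool tends to get the pattern right
    # because it sees the whole idiom at once.
    return [[matrix[j][i] for j in range(rows)] for i in range(cols)]

print(transpose([[1, 2, 3], [4, 5, 6]]))  # [[1, 4], [2, 5], [3, 6]]
```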

That’s like asking what Excel is good for, since the hardest part in, for example, creating a financial statement is not adding up numbers.

Comment deleted · Mar 13

"adding up numbers is tedious, time consuming, and error prone, even for people who have a lot of practice at doing it."

It might just be me since I'm a mediocre programmer at best, but that exact quote applies to most of my programming tasks 😂

It’s going to have very useful but more limited applications than those promoted by snake oil salesmen like Mr. Altman.

Maybe apply Hanlon's Razor (or Bonhoeffer) regarding evil versus stupid.

Relevant post from a fellow Substacker and ChatGPT early adopter who gave it a solid try: https://open.substack.com/pub/vinayprasadmdmph/p/my-enthusiasm-for-chat-gpt-in-medicine

Interesting comments, but not much on the clear, specific, significant benefits of AI, and nothing on the potential harm to individuals and society, other than "It's going to kill all of us."

Perhaps, more than anything, we need a rational cost-benefit analysis.

author

NB GenAI is a subset of AI, not all AI.

And I certainly didn’t say it is going to kill us all, so you haven’t been a careful reader.

Sorry, was talking about the comments, ie. “destroy the world”, not you.

But what if GenAI is more like electricity? It was difficult to see a clear ROI on electricity initially. The upfront costs were high, and the benefits weren't immediately apparent to everyone. Its adoption was a gradual process that took decades.

Wouldn’t want to live without it today, tho :)

author

Or like dirigibles?

Exactly - no one knows. So let's build with it to find out instead of prematurely condemning it :)

I don't know, Gary. One year is not enough time. Maybe speed to ROI is the issue. Imagine if people had said the same thing about blogging in the late '90s, when weblogs were the future. Of course, then none of us would be on Substack, even in blogging's post-peak glory. It took time.

As far as the Altman quote goes, I don't take anything he says seriously.

There is hype, but dismissing the recent progress and demanding instant results is indeed short-sighted.

If we can do for reasoning and language what we were able to do for image and voice recognition, in a few years we could achieve miraculous things. Not guaranteed, of course, but it looks promising to me.

I sure wouldn't judge ROI by Copilot ... one of the least useful AI tools out there.

But also, ROI doesn't come from simply using AI, but by integrating AI into systems. If there is no strategy, then ROI will be low.

It's always been like this in the world of content development. You need content strategy, not just content.

This whole post is incredibly myopic and ignores the exponential growth in intelligence, the advent of AI agents, and the ongoing incorporation of AI/agents into every aspect of human life including national defense and offense. We'll look back a year from now and laugh at the incredible shortsightedness of Prof. Marcus's suggestions here.

author

just like y’all did with my warnings about oversold driverless cars in 2016, 2017, 2018, 2019, 2020….

I'm not familiar with your comments on driverless cars, but we are of course now on the exponential growth curve for FSD as well, after many years of "imperceptible" improvements. The lily pond is about to be covered. Tesla's FSD v12 is finally worthy of being called FSD, and its rollout is now limited only by legal and regulatory issues. Ditto with AI: after years of imperceptible improvements, ChatGPT showed the dogleg, and we're now in that vertical growth curve. Why would there be any natural logistic growth curve for AI improvements or the accompanying energy demand? https://tamhunt.medium.com/the-ai-explosion-environmental-and-existential-disaster-f616f3a6347d

"Unreliable mediocre intellect required" said no job ad ever.

If that's true, then short-selling chip producers and other companies whose shares went up due to the recent AI hype is a great way to earn a lot of money with little risk (surely the expected value is very positive). I wonder if you would do that.

Mar 12 · Liked by Gary Marcus

The famous quip that the market can remain irrational much longer than you can remain solvent would certainly apply here, regardless of the ultimate economic value of genAI.

You don't need to invest all your savings...

Not everything is about money. Not everyone focuses on money solely. Belief in something does not necessarily imply desire to cash in on the same.
