The average AI CEO has the greatest motivation to keep the hype train going for as long as possible. “So powerful it could destroy the world” is a bit more of a head turner than “pretty useful for summarising meeting notes”.
Not only is ROI overstated, but liability is understated. How many companies are using OpenAI without even knowing their risk? They're paying for a third-party service with OpenAI on the backend and could be liable for all kinds of stuff: copyright issues, potentially publishing false or defamatory information. I guarantee the CEO isn't asking questions about API keys and training data.
Is the big dose of hopium the AI orgs huffed, when they realized how much improvement could be had by pouring training data into LLMs, starting to wear off? Now they've got very expensive models to run and operate, a weak business case, and no way to fix hallucinations. Absent another major breakthrough (something as significant as the advancement they got from scaling up LLMs) or a major use case for LLMs, AI investors are going to feel burned. AI companies probably have until late 2024 or 2025 to prove utility in a serious way. If not, investment dries up very significantly. It probably won't go back to the post-Minsky "AI winter," but it may get rather frosty.
I agree. The LLM approach is exciting, but GPTs are not the way to go. OpenAI was unlucky that the world went crazy for GPT-3.5, as this technology needed another couple of iterations beyond transformers before it was ready for prime time.
We are nowhere near prime time.
LLM makers will be playing this model-weights shell game for years to come. Even with Jensen Huang's Jobsian demos, the hardware still isn't able to deliver, say, FP16 quantization in near real time (a rather low bar for all the spin).
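(For anyone unfamiliar with the jargon: FP16 quantization just means storing model weights as 16-bit floats instead of 32-bit ones, halving memory at a small precision cost. A minimal numpy sketch of the idea, illustrative only and not any vendor's actual pipeline:)

```python
# Illustrative sketch of "FP16 quantization" of model weights.
# Real inference stacks do this per layer, often down to FP8/INT8 too;
# the array here stands in for a model's weights.
import numpy as np

rng = np.random.default_rng(0)
weights_fp32 = rng.standard_normal(1_000_000).astype(np.float32)  # ~4 MB

weights_fp16 = weights_fp32.astype(np.float16)  # ~2 MB: half the memory...
roundtrip = weights_fp16.astype(np.float32)     # ...at a small precision cost

max_abs_err = np.max(np.abs(weights_fp32 - roundtrip))
print(f"max abs error after FP16 round trip: {max_abs_err:.2e}")
```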
Decisions involving life and death increasingly rest precariously on a creaking GenAI tightrope* of overhyped, lossy nondeterminism, above the yawning chasm of disaster, with correct output on one end and wildly nutty hallucinations on the other.
* The most dangerous part of the tightrope is not in the middle but just inches away from the edge of accuracy, because the swift serpents of subtle spuriousness strike silently there.
It’s going to have very useful applications, but more limited ones than promoted by snake-oil salesmen like Mr Altman.
Maybe apply Hanlon's Razor (or Bonhoeffer) regarding evil versus stupid.
Relevant post from a fellow Substacker and ChatGPT early adopter who gave it a solid try: https://open.substack.com/pub/vinayprasadmdmph/p/my-enthusiasm-for-chat-gpt-in-medicine
Interesting comments, but not much re the clear, specific, significant benefits of AI, and nothing re the potential harm to individuals and society, other than “It’s going to kill all of us”.
Perhaps, more than anything, we need a rational cost-benefit analysis.
NB: GenAI is a subset of AI, not all of it.
and i certainly didn’t say it is going to kill us all. so you haven’t been a careful reader.
Sorry, I was talking about the comments, i.e. “destroy the world”, not you.
But what if GenAI is more like electricity? It was difficult to see a clear ROI on electricity initially. The upfront costs were high, and the benefits weren't immediately apparent to everyone. Its adoption was a gradual process that took decades.
Wouldn’t want to live without it today, tho :)
Or like dirigibles?
Exactly - no one knows. So let's build with it to find out instead of prematurely condemning it :)
I don't know, Gary. One year is not enough time. Maybe speed to ROI is the issue. Imagine if people had said the same thing about blogging in the late '90s, when weblogs were the future. Of course, then none of us would be on Substack, even in blogging's post-peak glory. It took time.
As far as the Altman quote goes, I don't take anything he says seriously.
I sure wouldn't judge ROI by Copilot ... one of the least useful AI tools out there.
But also, ROI doesn't come from simply using AI, but by integrating AI into systems. If there is no strategy, then ROI will be low.
It's always been like this in the world of content development. You need content strategy, not just content.
This whole post is incredibly myopic and ignores the exponential growth in intelligence, the advent of AI agents, and the ongoing incorporation of AI/agents into every aspect of human life including national defense and offense. We'll look back a year from now and laugh at the incredible shortsightedness of Prof. Marcus's suggestions here.
just like y’all did with my warnings about oversold driverless cars in 2016, 2017, 2018, 2019, 2020….
I'm not familiar with your comments on driverless cars, but we are of course now in the exponential growth curve for FSD also, after many years of "imperceptible" improvements. The lily pond is about to be covered. Tesla's FSD v12 is finally worthy of being called FSD, and its rollout is now limited only by legal/regulatory issues. Ditto with AI: after years of imperceptible improvements, ChatGPT showed the dogleg, and we're now in that vertical growth curve. Why would there be any natural logistic growth curve for AI improvements or accompanying energy demand? https://tamhunt.medium.com/the-ai-explosion-environmental-and-existential-disaster-f616f3a6347d
"Unreliable mediocre intellect required" said no job ad ever.
If that's true, then short-selling chip producers and other companies whose shares went up due to the recent AI hype is a great way to earn a lot of money with little risk (surely the expected value is very positive). I wonder if you would do that.
The famous quip that the market can remain irrational much longer than you can remain solvent would certainly apply here, regardless of the ultimate economic value of genAI.
You don't need to invest all your savings...
Not everything is about money. Not everyone focuses on money solely. Belief in something does not necessarily imply desire to cash in on the same.
That’s like asking what Excel is good for, since the hardest part in, for example, creating a financial statement is not adding up numbers.
"adding up numbers is tedious, time consuming, and error prone, even for people who have a lot of practice at doing it."
It might just be me since I'm a mediocre programmer at best, but that exact quote applies to most of my programming tasks 😂