Anyone remember this piece? We are starting to see some signs in that direction. The WSJ recently reported that Microsoft Copilot was perhaps underwhelming some customers. Today Stephanie Palazzolo of The Information asked the same question, and pointed to a longer story there.
The average AI CEO has the greatest motivation to keep the hype train going for as long as possible. “So powerful it could destroy the world” is a bit more of a head turner than “pretty useful for summarising meeting notes”.
Not only is ROI overstated, but liability is understated. How many companies are using OpenAI without even knowing their risk? They're paying for a third-party service that uses OpenAI on the backend and could potentially be liable for all kinds of things: copyright issues, publishing false or defamatory information. Guarantee the CEO dude does not ask questions about API keys and training data.
Is the big dose of hopium the AI orgs huffed, when they realized how much improvement could be had by pouring training data into LLMs, starting to wear off? Now they've got very expensive models to run and operate, a weak business case, and no way to fix hallucinations. Absent another major breakthrough (something as significant as the advance they got from scaling up LLMs) or a major use case for LLMs, AI investors are going to feel burned. AI companies probably have until late 2024 or 2025 to prove utility in a serious way. If not, investment dries up very significantly. It probably won't go back to the post-Minsky "AI winter," but it may get rather frosty.
I am using GitHub Copilot very intensely, and it is a limited but really lovely tool. There's no going back on that.
Google Bard is very capable when it comes to generating plots, explaining a topic, writing a draft proposal, and making illustrations.
There is solid value in such tools if the price is right, they bring costs down, and capabilities continue to improve.
Are things way too hyped up? For sure.
It’s going to have very useful but more limited applications than those promoted by snake-oil salesmen like Mr. Altman.
Relevant post from a fellow Substacker and ChatGPT early adopter who gave it a solid try: https://open.substack.com/pub/vinayprasadmdmph/p/my-enthusiasm-for-chat-gpt-in-medicine
Interesting comments, but not much about the clear, specific, significant benefits of AI, and nothing about the potential harm to individuals and society, other than “It’s going to kill all of us.”
Perhaps, more than anything, we need a rational cost-benefit analysis.
But what if GenAI is more like electricity? It was difficult to see a clear ROI on electricity initially. The upfront costs were high, and the benefits weren't immediately apparent to everyone. Its adoption was a gradual process that took decades.
Wouldn’t want to live without it today, tho :)
I don't know, Gary. One year is not enough time. Maybe speed to ROI is the issue. Imagine if people said the same thing about blogging in the late 90s, when weblogs were the future. Of course, then none of us would be on Substack, even in blogging's post-peak glory. It took time.
As far as the Altman quote goes, I don't take anything he says seriously.
I sure wouldn't judge ROI by Copilot ... one of the least useful AI tools out there.
But also, ROI doesn't come from simply using AI, but by integrating AI into systems. If there is no strategy, then ROI will be low.
It's always been like this in the world of content development. You need content strategy, not just content.
This whole post is incredibly myopic and ignores the exponential growth in intelligence, the advent of AI agents, and the ongoing incorporation of AI/agents into every aspect of human life including national defense and offense. We'll look back a year from now and laugh at the incredible shortsightedness of Prof. Marcus's suggestions here.
"Unreliable mediocre intellect required" said no job ad ever.
If that's true, then short-selling chip producers and other companies whose shares went up due to the recent AI hype is a great way to earn a lot of money with little risk (surely the expected value is very positive). I wonder if you would do that.