Great, thought-provoking piece.
And all the more so for drawing out the point that the costs arising from current and future litigation (around lack of explainability, responsibility, injustice, negligence, copyright, and others yet to emerge) remain far from fully understood.
That's a good angle on the AI industry. It is far from even being profitable, let alone returning the money already burned.
By the way, has anyone seen an analysis of the economics of each layer of the AI value chain? I am quite sure that some of them, like foundation model (FM) development, will remain unprofitable.
I find content licenses, fines, and damages to be rather glaring omissions on the cost side. This is likely a technology with no path to profitability at all.
There were 120+ lawsuits against generative AI companies as of July. The "programmers will never go back" crowd should note the damage claims of USD 9 billion over breaches of 11 separate software licenses in GitHub Copilot.
That's before the publishers and the remaining authors have gone to court, and before the FTC probe has even gotten started. Add in the 197,000 pirated e-books from Bibliotik and the 200 million copyrighted texts from C4 that went into ChatGPT, all subject to legally mandatory opt-out and licensing, and you're looking at algorithmic disgorgement and/or fines of up to 4% of turnover.
Same with privacy law. Non-compliance comes with the threat of disgorgement and hefty fines, and it's technically impossible for them to comply with the GDPR right to be forgotten.
This is a dead-end technology.
I think you're right, but I also think regulators are waiting to see how transformative and/or important generative AI may be for society's future benefit. If we're still in a learning phase and generative AI seems like it might be as big as the hype, regulators may try to work with companies on these (currently massive and prohibitive) regulatory problems. If and when regulators see that generative AI isn't likely to live up to the hype, they'll clamp down under existing law and likely crush all current models.
Thank you, Johan, for your thoughts on the legal side of things. I have wondered about this, so your comments are helpful. It seems similar to crypto and blockchain in terms of promise, but that space is heavily burdened right now by the many different laws and regulations proposed all over the world. In this country, state and federal laws are under discussion but slow to finalize, which is holding the space back. What also seems similar is that it is hard for regulators even to understand how to regulate these rapidly changing, complex technologies. Reading sci-fi dystopian novels is not terribly helpful on a practical level. Perhaps someone should write a benevolent-AI novel, just for a change of pace. 😃
Well, this just happened: https://www.ftc.gov/news-events/news/press-releases/2023/11/ftc-authorizes-compulsory-process-ai-related-products-services
And the lawsuits are now beyond 150 and counting ...
source on the 150?
Rallying a group to map out the whole set. I only got as far as the most recent 40: mostly copyright, but also integrity, privacy, false arrests, care denial … she has some link collections on her blog, too.
Josourcing on Twitter. 150 legal processes, I should clarify. She also tracks regulatory probes, national bans, etc.
Yes very interesting. All this drama and hype just feeds this level of legal scrutiny.
I believe there are factual errors. Google's original PageRank algorithm was not powered by "AI" in any meaningful sense.
Sure it was. It assigned probability distributions over the link graph to do the cognitive work of ranking pages, and it was designed by two students of one of AI's leading figures, Terry Winograd.
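For readers unfamiliar with what's being debated here: the core of PageRank is a power iteration that converges on the stationary distribution of a "random surfer" who follows links with probability d and otherwise jumps to a random page. This is a minimal sketch over a small hypothetical link graph (the graph, function name, and parameters are illustrative, not Google's implementation):

```python
def pagerank(links, d=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from a uniform distribution
    for _ in range(iters):
        # every page gets the (1 - d) "random jump" share
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                # a page passes d * its rank, split evenly among its out-links
                share = d * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # dangling page: spread its rank evenly over all pages
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

# Hypothetical 4-page web: C is the most linked-to page
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(graph)
```

The ranks always sum to 1 (they form a probability distribution), and the page with the most incoming link mass ends up ranked highest.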
Generative AI will realize its potential when applied to existing Web2 foundations with sound fundamentals, for example within tools at the application layer. As a standalone product in a new category, beware: the big incumbents rule this domain. There must be business and revenue models first.
Thanks for this interesting essay.
Personally, I am quite optimistic about the future of OpenAI, given its alliance with Microsoft. This revenue stream is unlikely to end while Microsoft is integrating OpenAI's models into its products and services.
P.s. I wrote an article about this topic before: https://marknuyens.substack.com/p/challenging-openai
Thanks so much for this essay. Very insightful!
AI is an innovation burst. Good analogies would be the canal craze of the early 1800s (e.g., https://en.wikipedia.org/wiki/Indiana_Central_Canal) or the automobile companies of the early 1900s (from hundreds of companies to a handful in 20 years). And, as noted below, the dot-com era.
Many startups in all these cases were searching for a product/market fit that could prove profitable in the long run. The vast majority failed. The survivors (the Erie Canal, Ford, Amazon) were wildly successful.
This is the nature of such things: some will make money, most will fail.
I'm very disappointed with this article. I came here semi-expecting some form of rational, economics-based analysis of AI, and it was entirely one-sided, viewed through rose-tinted glasses. It completely ignores the fundamentals of basic economics when factor markets break down, or the sieve effect you get with companies that eventually starve out their consumer base before going belly-up. There's almost no mention of the cost side, which inevitably goes up over time under any monopoly/oligopoly over dependencies.
The only thing these products are good for is replacing humans and eliminating jobs, jobs which won't be replaced by new ventures anytime soon for the bottom and middle population demographics. If you can't earn bare subsistence, unrest is not far off; it's a lesson from basic history coming to life.
Not in the very short term. Some form of future AI eventually will.
My point was that no rational person will invest time in reading biased writing, and rosy benefits with no downsides is a common characteristic of biased writing.
If you don't cover the more realistic potential downsides (and notably, that last angle isn't realistic, since it doesn't even need to happen for majorly poor outcomes to occur), then there's no reason people should spend the time reading.
Few seem to realize that to stall a motor, all you need to do is create enough friction or interference, and economics is a motor of human action. People are half the equation, and history is full of examples of what happens when people are cut out of that process by interference, in cases arguably far less impactful than what we are seeing today.
If you look at the demographics of jobs, what happens when the bottom two-thirds of all jobs are replaced by a computer? We are just a little below that with the public GPT derivatives: robots for blue-collar factory jobs, software bots for the lower white-collar ones (i.e., the roles where people usually get the experience to start moving up the ladder).
Historically, what do people do when they cannot get food, have no means, and the government can't provide for them? The Malthusian law of population pretty much sums this up, and it conveys the severity of the issue given our current population levels.