33 Comments

A real-life anecdote to back up what Gary has outlined: a few months ago, I was interviewed by a global organization that needed help with their technical documentation (communicating the importance of a specific set of green technologies to a wider, non-technical audience). This organization has over 50,000 members all over the world, and the work they do directly impacts the built environment.

The first question they asked me was, "How do you feel about using ChatGPT in your work (as a writer/editor)?"

Note the open-ended nature of the question. They weren't making a value judgment, and the question didn't lead: they didn't ask, "Do you use ChatGPT in your work?"

I said, simply and immediately, "I refuse," and then qualified it with many of the points that Gary, I, and many others have been making about the reliability of generative AI.

There was a beat of silence as the three people on the call looked at each other. Then they broke into applause. (And they hired me a few weeks later.)

I have never received actual applause in an interview.

Damn. What a moment.

I knew there was a reason I liked you ;-)

This says it all: “OpenAI painted a false dichotomy. The choice is not between them building AI or not, it is between them building AI for free.” With all the hype and fascination, OpenAI was never properly evaluated on basic business fundamentals.

Gary, the more you write, the more this looks familiar from the last 1,000 years of business and technology: OpenAI is strikingly comparable to the dot-com era crash and burn. Technology and times may change, but human greed and stupidity don't.

Gary, this is a very nice synopsis. I've forwarded it to colleagues. Keep up the good work.

BTW, I'm also a comrade in arms of Kathryn Conrad, at KU's nascent Center for Cyber-Social Dynamics.

Great analysis! I might add a 10th: energy costs. All those data centres are a huge expense, even if much of the workload is offloaded to devices. And that cost will only get higher.

Great overview for anyone new to the controversy. I like your phrasing about training materials. Why shouldn't OpenAI have to pay for its raw materials like any other business? What if Spotify were to say that paying royalties to artists makes its business model untenable? It would be an absurd argument. Why does the same logic get thrown around with AI tech?

Even Spotify, which has itself decimated the payment model for, and the livelihoods of, many musicians, has not dared to take such a brazen stance. Goes to show you the unprecedented level of hubris here.

Good summary. My five cents: some of these problems are worse than others. Legal troubles may go away with enough money and claims of strategic importance, but plateauing performance and the problem of long-term economic sustainability will probably be the big ones.

The grifters and cultists in the space claim that progress can only be exponential, and in two years generative AI will create all the blockbuster movies, making actors unnecessary, and in ten years it will "solve all of science". Reality is unlikely to oblige, because it is indeed full of diminishing returns.

As for the economics, the old observation from social media that if you aren't paying for it, you are not the customer but the product can be flipped around to observe that whoever is paying for it is the customer. So, in the case of OpenAI, I guess the customer is Microsoft, and it also follows that the core product is hype.

The intended small-scale customers of the service seem to fall into a limited number of categories: (1) those playing around with it for amusement and novelty; (2) coders who use it instead of a keyword search on Stack Overflow; (3) those who want to save on salaries and fees for writers and artists; and (4) spammers and fraudsters who generate low-quality web content.

The first group will not pay anything substantial, and at any rate the novelty will wear off soon. The second to fourth groups will be willing to pay something, but it is a big, open question whether they are willing to pay what it actually costs to keep the models running long-term. It may just be easier to go back to Stack Overflow, or a writer may turn out to be cheaper than a "prompt engineer" whose output still has to be laboriously revised and checked anyway.

It is surely already a bad sign that there appear to be some companies that claim to use AI but are really Mechanical Turks, where human artists produce the output or human engineers control the "self-driving cars". I also doubt that the quality produced by the current generation of generative AI will ever be good enough for anybody except the fourth group, who don't have to care about their company's reputation when things go wrong, but that is the question of diminishing returns again.

It also leads to a follow-up question: if the hallucinations and glitches we get right now cost hundreds of millions of dollars to produce, what will a model cost that consistently produces outputs of acceptable quality? What would an AI cost to run that performs at the cognitive level of the average human, who, after all, costs only tens of thousands of dollars per year?

Accurate assessment of problems facing OpenAI.

Hallucinations are fundamental to how LLMs operate, so they are a very hard problem to solve.

LLMs work best with additional tools that keep their hallucinatory tendencies in check.

Hi Tomasz,

What tools? AI validating other AI to lower the chance of bad/wrong responses? Perhaps users need to flag any responses they suspect to be hallucinatory, so that professional teams at companies can review them, look for trends, and optimize an overseer AI that filters and iterates on the underlying primary LLM's output to improve the integrity/bounding of the final output during inference. Is this perhaps already happening?
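
For what it's worth, here is a minimal sketch of what such an overseer loop might look like. It is purely illustrative: answer_with_overseer and the ask_model callable are hypothetical stand-ins, not any vendor's actual API.

```python
# Hypothetical sketch of an overseer model reviewing a primary model's output.
# `ask_model(role, prompt)` is a stand-in for whatever completion API is used;
# nothing here is a real vendor API.

def answer_with_overseer(ask_model, question: str, max_rounds: int = 3) -> str:
    answer = ask_model("primary", question)
    for _ in range(max_rounds):
        # The overseer only flags suspect claims; it does not rewrite the answer.
        verdict = ask_model(
            "overseer",
            f"Question: {question}\nAnswer: {answer}\n"
            "List any claims that look fabricated or unsupported, or reply OK.",
        )
        if verdict.strip() == "OK":
            return answer
        # Feed the objections back to the primary model for revision.
        answer = ask_model(
            "primary",
            f"Question: {question}\nYour previous answer: {answer}\n"
            f"Revise it to address these objections:\n{verdict}",
        )
    return answer  # best effort after max_rounds; still no guarantee
```

Whether a loop like this actually lowers the hallucination rate, or just moves the problem into the overseer, is exactly the open question.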

If you use AI for code generation, then it's pretty simple to verify whether the program executes correctly.

Another point is that interaction with AI is iterative, so you work toward the required solution via chat.

I have used this technique successfully to write code.

When you get an error, you can simply provide the output back to the AI and ask it to look for a correction.

AI is a time saver in this context.
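
As a rough sketch, that generate-run-fix loop can even be automated. Everything here is hypothetical: ask_llm stands in for some LLM client, and a clean exit code is only a weak proxy for correctness.

```python
# Sketch of the generate-run-fix loop described above, automated.
# `ask_llm(prompt)` is a hypothetical stand-in for an LLM client.
import subprocess
import sys
import tempfile

def run_snippet(code: str) -> subprocess.CompletedProcess:
    """Write generated code to a temp file and execute it in a subprocess."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    return subprocess.run(
        [sys.executable, path], capture_output=True, text=True, timeout=30
    )

def generate_until_it_runs(ask_llm, task: str, max_attempts: int = 5) -> str:
    code = ask_llm(f"Write a Python script that {task}. Reply with code only.")
    for _ in range(max_attempts):
        result = run_snippet(code)
        if result.returncode == 0:
            return code  # it ran; whether it is correct still needs human review
        # Paste the error back, exactly as one would do by hand in the chat.
        code = ask_llm(
            f"This script:\n{code}\nfailed with:\n{result.stderr}\n"
            "Fix it. Reply with code only."
        )
    raise RuntimeError("model never produced runnable code")
```

Note the caveat in the loop: a clean exit code catches crashes, not wrong answers, which is where the reliability concerns raised elsewhere in this thread come back in.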

I'm surprised the phrase "new AI winter" didn't occur in this post. There's lots of shivering already.

It may at best be an "OpenAI winter"! Why an AI winter? While they did contribute to making GenAI popular, and they are a significant player for sure, the AI field goes beyond GenAI, and certainly far beyond OpenAI, the company.

The problems with the current wave of AI technology go far beyond just OpenAI. I mention a few in my comments above. Refer to OP Gary Marcus's recent work for more details.

Sure, the AI winter we may be heading into now is different than the previous ones. That said, it is not accurate to suggest expert systems were only marketed by "some small companies". All the major computer companies spent substantial sums on them.

These AI winters are mostly a matter of investment drying up due to unfulfilled profits and dreams. They don't actually kill the technology behind them. Expert systems and their technologies are still with us and I would guess a few companies continue to profit from them. I'm sure that will be the case with LLMs as well. My guess is the profit that comes from this technology will disappoint investors and they will invest much less in AI for a while. Self-driving cars aren't really living up to expectations and I doubt that will change anytime soon. The copyright fiasco is only just getting started. People are still starry-eyed about the idea of LLMs doing programming jobs. I've used it and it is helpful but it isn't going to replace many programmers any time soon. And we don't even need to get into the mess that deep fakes will make.

OpenAI will certainly survive, supported by MS’s money, but OpenAI’s self-appointed status as a leader of ethical and fair AI development must be questioned. So I wonder whether another major AI company, a genuinely nonprofit one, would not be needed: a new company that would operate legally and candidly from the start. For instance, licensing all copyrighted material used for feeding the models, inviting creators to contribute to the training data, assessing the hallucination risk for users, etc., and generally establishing rules of good conduct for the whole AI industry.

The goal of this company would not be to make an instant technological breakthrough but rather to make an ethical one: to draw a safe path to sustainable AI for all. It could be created by AI professionals of good will and funded by donors of good will, with a vision and an ambition, who are aware of the risks and of the stakes. The AI-driven tools proposed by this company would at first be less powerful than others on the market, but a guarantee of fair contribution and fair access could be valuable compensation for users. Ethically guided AI companies proposing safe and reliable tools, in full compliance with the law, are urgently needed, as administrations all over the world are not eager to set a real safety net for AI applications.

Great summary; definitely need to share it. It often feels like, as Gen Z would say, OpenAI is gaslighting us. They promise amazing technology that is easy for every user, while encouraging us to blame ourselves for bad prompts when we get unreliable answers, even though they admit that their unreliable systems will give us different answers every time. They claim new discoveries, but genAI simply regurgitates what is already known or was created through others’ hard work. It’s all a very expensive and environmentally costly parlor trick. If the emperor has any clothes, it’s just underwear.

I would argue that ChatGPT-4 is about the second most useful thing for coding to me, after integrated development environments. I still get far more of a productivity boost from things like the debugger. Perhaps OpenAI should have a slightly better valuation than companies like JetBrains (makers of IntelliJ)?

Good article - I trust your realistic view of the present state of affairs.

The appalling rate of hallucinated legal errors puts me in mind of a tweet from M. Andreessen trying to spin a recent study showing that even LLMs trained not to be deceptive continue to be deceptive. He touted that as "gloriously uncontrollable." WTF does THAT mean? Is anybody in a C-suite anywhere looking to inject uncontrollable AI into their products/services? How about frequently incorrect legal advice? I'm starting to think there are fewer legit use cases for current AI than the hype suggests, unless your use case is churning out quantities of text where errors aren't important (e.g., propaganda or fiction).

When they spin **that** hard, you know they're running on fumes. Even in fiction you need to get things right :P

(Sorry, li'l chatbot, didn't mean to kill your Pulitzer Prize dreams)

Gary - Nice summary of the blisters forming in OpenAI’s shoes. As I’m sure you know, I’ve been beating my own drum about very similar issues on my own Substack. I even credited you and one of your recent posts in my submission yesterday (“The Next AI Winter Has Already Begun”).

But one thing I think you could have put more emphasis on is that OpenAI’s problems actually pose a far greater threat to Microsoft. After Satya Nadella pumped over $13 billion into OpenAI, MS is at far greater risk of (a) losing market cap (like, maybe, a trillion?) and, just as costly, (b) very substantial reputational damage.

You agree?

Again, nice job!

Bill Lambos

Good point. Microsoft certainly won’t go out of business, but you are right that the market cap might sink, and its reputation could take a hit, if things don’t turn out well.

Also, they may look a little squirrelly after the FTC digs in.

These days, I see GPT, machine learning, and AI being used interchangeably, almost as if they were synonymous, but these are three very distinct terms, with AI being the overall name for anything that exceeds human stupidity, I mean, machine-enabled thinking. Twenty years ago I wrote a paper on DSS (decision support systems), which support so many AI applications but are a research area in their own right.

I think it should be noted that many of these challenges are very speculative. Educated guesses can be valuable, but it's important to distinguish between speculation and certainty.

I'd agree with you that copyright and plagiarism will be an Achilles heel for OpenAI. Sadly, I believe lawmakers will legislate this sooner than they will ever regulate Facebook. Somehow individuals' privacy isn't as high in the books of our elected representatives as the money-making enterprises that lobby those representatives.

I don't quite agree with your third argument, "OpenAI lacks both profits and a moat": while the general principles for building generative AIs might be fairly well known, refining them, making them better at the "truthiness problem" (your fourth argument), and making them more suitable for specific applications is where a lot of the IP will sit. Google had been working on improvements for years and only came out with Bard and Gemini when pressured by OpenAI. Sure, there are sensitive areas, like the banking and military uses mentioned, but even they will have AI support in the not-too-distant future, no doubt. The military might start with focus areas such as logistics and supply management, to reduce waste, ensure timely resupply, and improve overall efficiency with minimal risk to human life. Surveillance, reconnaissance, and cyber warfare (another one of my papers) will also be AI-supported.

That "most of the high quality data sources have already been tapped" is highly speculative in the fifth argument. We all know that GPT suffered from low quality data initially and that it's gotten better but whether most high quality data sources have already been used is too much out of thin air IMHO (unless you have prove).

The seventh argument should be a given. Ever since the dot-com crash, we should know better than to put all our eggs in one basket. At times the article gave me the impression that AI (or GPT) is doomed because of the hype that OpenAI started. I don't think it is. GPT (and, from there, more of AI) is here to stay. OpenAI isn't all of AI or GPT. It would be a fallacy to think that it's up to OpenAI to do the heavy lifting.

(To clarify: I'm not affiliated with OpenAI, nor am I defending ChatGPT. To that end, I support your question about candour.)

PS: I don't quite understand the link to apple.com over “OpenAI Quietly Deletes Ban on Using ChatGPT for ‘Military and Warfare’”, but it might just be a mistake.

It occurs to me that GPT-4 is probably trained on some immense corpus of scientific literature... but the last time someone tried to download all the scientific papers, he (Aaron Swartz) was arrested by the feds, charged under the CFAA, and faced up to 35 years in prison, so... he committed suicide, and then a movie was made about him (https://www.imdb.com/title/tt3268458/).

So, some people take copyright & TOS very seriously, and certainly US laws themselves do.
