31 Comments

Lucid and sound. And again I’ll point to the idea/expression fallacy as a strong point for the unwitting and unwilling suppliers of professional expressive content to the models. Author class action lawsuit #5 today (unless I’ve missed any) adds the non-fiction writers to the claimants of those proposed billions.

Not even Big 4 consultancy fever projections of gen-AI's total market value come anywhere close to the value of the total misappropriated intellectual property, meaning this is a value *transfer* scheme, not value creation.


Thank you for a nice summary, Gary!

I feel like this is a sign the DL field is finally feeling the forcing function of hard financial requirements on compute, and learning that it will indeed have to consider other approaches in addition to DL, if it wants to stay in business at all.

Nov 22, 2023 · Liked by Gary Marcus

The most likely IP is Berners-Lee's Web 3.0 semantic AI model (SAM). No pun on the two Sams. The moat would be the depth of general knowledge, which would have to be as broad as whatever the LLM writes about, falsely or not. How could it fact-check otherwise? The failures of Watson, Cyc, and the Allen Institute have been noted in your book. What a win might look like is combining SAM as the prompt writer and the alignment layer guiding the LLM. SAM reads. LLM writes.
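The "SAM reads, LLM writes" split can be sketched as a toy grounding layer: a knowledge store supplies facts, and an alignment check accepts only generated claims it can back. Everything here (the triple store, the `retrieve` and `check_claim` helpers) is a hypothetical illustration, not Berners-Lee's actual Semantic Web stack or any shipping product.

```python
# Toy "SAM reads, LLM writes" pipeline: a semantic layer supplies
# facts and vets the LLM's claims. All names/data are hypothetical.

KNOWLEDGE = {
    ("Paris", "capital_of", "France"),
    ("Watson", "built_by", "IBM"),
}

def retrieve(entity: str) -> set[tuple[str, str, str]]:
    """SAM reads: pull every stored triple mentioning the entity."""
    return {t for t in KNOWLEDGE if entity in (t[0], t[2])}

def check_claim(claim: tuple[str, str, str]) -> bool:
    """Alignment layer: accept a generated claim only if grounded."""
    return claim in KNOWLEDGE

# An LLM "writes" two claims; only the grounded one survives.
claims = [("Paris", "capital_of", "France"),
          ("Paris", "capital_of", "Italy")]
grounded = [c for c in claims if check_claim(c)]
```

The point of the sketch is the division of labor: generation is unconstrained, but nothing reaches the user without passing the reading side's check.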


ChatGPT is the best thing since sliced bread. Every member of our household uses it daily for problem solving, search and as a source of knowledge.

And GPT-5 with internet-based RAG will be even better.
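Retrieval-augmented generation, in the sense the comment invokes, can be sketched in a few lines: score documents against the query, then prepend the best matches to the model prompt. The corpus, the keyword-overlap scoring, and the `llm` stub below are hypothetical placeholders, not any real OpenAI API.

```python
# Minimal RAG sketch: crude keyword retrieval + prompt assembly.
# The corpus contents and llm() stub are hypothetical placeholders.

CORPUS = [
    "OpenAI released GPT-4 in March 2023.",
    "RAG grounds model output in retrieved documents.",
    "Sliced bread was first sold in 1928.",
]

def score(query: str, doc: str) -> int:
    """Crude relevance: count shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Top-k documents by overlap score (sorted() is stable on ties)."""
    return sorted(CORPUS, key=lambda d: score(query, d), reverse=True)[:k]

def llm(prompt: str) -> str:
    """Stand-in for a real model call; echoes the top context line."""
    return "ANSWER BASED ON: " + prompt.splitlines()[0]

def rag_answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    return llm(f"{context}\nQuestion: {query}")

answer = rag_answer("What grounds RAG output?")
```

A production system would swap the overlap score for embedding similarity and the stub for an actual model call, but the retrieve-then-generate shape is the same.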


What's the Melanie Mitchell paper you're referring to?


As far as I can tell, all OpenAI has really done (wrt ChatGPT) is to push someone else's data through someone else's model using someone else's money, after which the best and most imaginative thing they could come up with was to do more or less exactly the same thing, only bigger (several times). Each iteration was then accompanied by exponentially increasing PR, which generated an exponentially increasing volume of news stories about OpenAI. Accordingly, OpenAI is now the most *famous* name in AI, as opposed to the most innovative (which for my money would be Google DeepMind). Nevertheless, this strategy ultimately secured OpenAI a tentative $10 billion investment from Microsoft, as well as the Hawking Fellowship. Or am I being too harsh...?


"There’s still no clear business model, and systems like GPT-4 still cost a lot to run."


Lol we get it, you hate AI


I don't know if OpenAI is worth $86 billion. However, I do want to push back on the idea that there is no business model for GPT-4, or that other LLMs (such as Grok) can easily replace it. A few angles to consider:

a) For text production and revision, or idea generation, GPT-4 is miles ahead of 3.5 or any other offering. Ethan Mollick has written a number of posts making this point quite convincingly, and it fits both my own experience and performance on standardized tests.

b) At least 6-7 RCTs have compared people (e.g. case analysts, creative writers, coders, etc.) who had access to GPT-4 with people who didn't, and they all show substantial performance improvements. There are limitations to these studies (and the one we are currently running might deviate a bit), but such consistent results are still remarkable in social science.

c) We do have examples of industries that have clearly been severely impacted by generative AI: https://www.ft.com/content/b2928076-5c52-43e9-8872-08fda2aa2fcf.

d) More generally, as someone working at a business school, I care less about an LLM's ability to do well on standardized tests, solve math problems, or advance science. GPT-4 is useful and is used in producing text segments (be it emails, grant applications, or reports), can provide feedback on written text (in particular for non-native English speakers), can generate ideas (as the RCTs show), etc. So while companies might not be able to use it to control robots, manage production, replace CEOs, or solve science problems, it is still easily worth a three-digit figure every month to many highly educated employees (I'd pay that for access to GPT-4, in particular a fine-tuned one). I find that a great business case.

Whether OpenAI or another (open-source) approach will win, I have no idea. But there are billions of dollars to be made, yearly, already now. How much the current kind of generative AI can improve is another question.
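The kind of comparison those RCTs make reduces to a standardized mean difference between a treatment and a control group. The scores below are made-up illustration (not data from any actual study); the sketch only shows the Cohen's d computation itself.

```python
# Toy effect-size calculation of the kind RCTs report: task scores
# with vs. without GPT-4 access. Scores are invented for illustration.
from statistics import mean, stdev

with_gpt4 = [78, 85, 81, 90, 74, 88]
without   = [65, 72, 70, 61, 75, 68]

def cohens_d(a: list[float], b: list[float]) -> float:
    """Standardized mean difference using a pooled sample SD."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

d = cohens_d(with_gpt4, without)  # large by conventional thresholds
```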


"There has never been a solution to the hallucination problem, and it’s not clear that there will be until we find a new paradigm, as Altman himself partly acknowledged last week, when he did his best Gary Marcus impression. (I hope that’s what got him fired…) OpenAI might find that new paradigm some day, but someone else (e.g., DeepMind or Anthropic) might well get there first."

That will never come. Hallucination is just material for fine-tuning the statistical output of LLMs; it objectively comes from differences in people's life experiences and from their lack of openness to new points of view.

It's not a problem; it's a window of opportunity to expand our scientific horizons. "Imagination is more important than knowledge. Knowledge is limited. Imagination encircles the world." - Albert Einstein. It's a translation scheme:

Source Terms - Target Terms - Translation - Human ->

Meaning - Scheme - Translation - Human -> (in English)

Смысл - Схема - Перевод - Человек (in Russian: Meaning - Scheme - Translation - Human)

Epistemological General Intelligence System - Building a Knowledge-based General Intelligence System, Michael Molin - https://docs.google.com/presentation/d/1VCjOHOSostUrtxieZvOjaWuTNCT59DMF


A new paradigm is here https://alexandernaumenko.substack.com/p/is-generalization-about-similarities

It is not a "quick fix" because the approach is orthogonal to DL. There is a lot of work to be done.


Nice summary. Although, don't you think OpenAI researchers would have been working on other AI areas anyway, gradually transitioning from LLMs alone to other paradigms?


This definitely feels like a rant. I'm pretty pissed they got rid of the women on the OpenAI board. If AI startups are taken over by Big Tech, is their funding even real? We've been told a lot of the $13 billion from Microsoft is just Azure credits.


"as Altman himself partly acknowledged last week, when he did his best Gary Marcus impression. (I hope that’s what got him fired"

The OpenAI debacle was obviously not about that, and it was obviously not about you. You write lucid stuff, except when you again veer into such things.


The larger question is if there is a good business model around generative AI.

Copilot for Microsoft 365 will start being offered for $30 a month. If enough people in a company say such an assistant saves them time each day, a company may be inclined to buy it, especially if longer term it can help them control labor costs.
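The back-of-the-envelope math is easy to make explicit. Assuming a fully loaded labor cost of $60/hour and 21 workdays a month (both hypothetical figures, not anything Microsoft publishes), a $30/month seat breaks even after well under two minutes of saved time per day:

```python
# Break-even check for a $30/month assistant seat.
# The $60/hour loaded labor cost and 21 workdays are assumptions.
seat_cost = 30.0     # USD per user per month
hourly_cost = 60.0   # fully loaded employee cost per hour, assumed
workdays = 21        # working days per month, assumed

cost_per_minute = hourly_cost / 60  # $1.00 per employee-minute
break_even_minutes_per_day = seat_cost / (cost_per_minute * workdays)
# ≈ 1.4 minutes of saved time per day pays for the seat
```

Which is exactly why the "saves them time each day" bar is so low for an employer.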


My impression is that a large number of people have found ChatGPT* useful in a variety of ways.

So if OpenAI goes belly up, is there a company/product left to serve that market? Or could the shell of OpenAI be capable of continuing to offer access to their dated product?

Perhaps LLMs trained on curated data in a relatively limited domain, by a knowledgeable user community, would be more reliable and useful.
