31 Comments

Lucid and sound. And again I’ll point to the idea/expression fallacy as a strong point for the unwitting and unwilling suppliers of professional expressive content to the models. Author class action lawsuit #5 today (unless I’ve missed any) adds the non-fiction writers to the claimants of those proposed billions.

Not even Big 4 consultancy fever-dream projections of gen-AI's total market value come anywhere close to the value of the total misappropriated intellectual property, meaning that this is a value *transfer* scheme, not value creation.


Thank you for a nice summary, Gary!

I feel like this is a sign the DL field is finally feeling the forcing function of hard financial requirements on compute, and learning that it will indeed have to consider other approaches in addition to DL, if it wants to stay in business at all.

Nov 22, 2023 · Liked by Gary Marcus

The most likely IP is Berners-Lee's Web 3.0 semantic AI model (SAM). No pun intended on the two Sams. The moat would be the depth of general knowledge, which would have to be as broad as whatever the LLM writes about, falsely or not. How could it fact-check otherwise? The failures of Watson, Cyc, and the Allen Institute have been noted in your book. What a win might look like is combining SAM as the prompt writer and alignment layer guiding the LLM. SAM reads; the LLM writes.
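A minimal sketch of that division of labor, assuming a hypothetical knowledge_base.lookup() for the "SAM reads" side and a generic llm_complete() callable for the "LLM writes" side; neither is a real API, and this is only an illustration of the idea, not Berners-Lee's actual Semantic Web stack:

```python
# Illustrative sketch of "SAM reads, the LLM writes": a structured-knowledge
# layer retrieves vetted facts and builds the prompt; the LLM is asked to
# write only from those facts. `knowledge_base` and `llm_complete` are
# hypothetical placeholders, not real APIs.

def sam_read(query: str, knowledge_base) -> list[str]:
    """Retrieve vetted facts (e.g., triples or cited statements) for the query."""
    return knowledge_base.lookup(query)  # assumed interface

def llm_write(query: str, facts: list[str], llm_complete) -> str:
    """Ask the LLM to answer using only the supplied facts."""
    prompt = (
        "Answer the question using only the facts below. "
        "If the facts are insufficient, say so.\n\n"
        "Facts:\n- " + "\n- ".join(facts) +
        f"\n\nQuestion: {query}"
    )
    return llm_complete(prompt)  # assumed interface

def answer(query: str, knowledge_base, llm_complete) -> str:
    return llm_write(query, sam_read(query, knowledge_base), llm_complete)
```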


ChatGPT is the best thing since sliced bread. Every member of our household uses it daily for problem solving, search and as a source of knowledge.

And GPT-5 with internet-based RAG will be even better.


+1. I use ChatGPT a lot too, as Google keeps failing to give me the results I'm looking for. LLMs have their uses, and they don't need to be perfect; they're not meant to control our nuclear arsenals.

Nov 22, 2023 · edited Nov 22, 2023

-100. I try to use ChatGPT from time to time, but every time it hallucinates, and it's not even on shrooms. If it were, maybe I could have benefited from its insight.


That is BS.

Nov 22, 2023 · edited Nov 22, 2023 · Liked by Gary Marcus

It was a joke implying that ChatGPT is not a reliable source of information, which is what Gary Marcus says as well. At least that's my experience. I use it for programming from time to time, and every single time I have to thoroughly double-check the answer by going back to Google and the product documentation. Then what's the point? In my context, ChatGPT is nothing but a glorified search engine that most of the time doesn't tell the truth. Your experience is different, it appears. Good for you and for your dear ones. But never forget that truth is not the same as what gets returned as the result of a statistical analysis. It may be close, but it's not the same.


What's the Melanie Mitchell paper you're referring to?

author

arXiv last week. tried to link it


As far as I can tell, all OpenAI has really done (wrt ChatGPT) is to push someone else's data through someone else's model using someone else's money, after which the best and most imaginative thing they could come up with was to do more or less exactly the same thing, only bigger (several times). Each iteration was then accompanied by exponentially increasing PR, which generated an exponentially increasing volume of news stories about OpenAI. Accordingly, OpenAI is now the most *famous* name in AI, as opposed to the most innovative (which for my money would be Google DeepMind). Nevertheless, this strategy ultimately secured OpenAI a tentative $10 billion investment from Microsoft, as well as the Hawking Fellowship. Or am I being too harsh...?

author

10b that was mostly server credits, perhaps at list price


"There’s still no clear business model, and systems like GPT-4 still cost a lot to run."


Lol we get it, you hate AI


I don't know if OpenAI is worth 86 billion dollars. However, I do want to push back on the idea that there is no business model for GPT-4, or that other LLMs (such as Grok) can easily replace it. A few angles to consider: a) For text production and revision, or idea generation, GPT-4 is miles ahead of 3.5 or any other offering. Ethan Mollick has written a bunch of posts making this point quite convincingly, and it also fits my experience as well as performance on standardized tests. b) At least 6-7 RCTs have been carried out comparing people (e.g., case analysts, creative writers, coders, etc.) who had access to GPT-4 with those who didn't, and they all show substantial performance improvements. Now, there are limitations to these studies (and the one we are currently running might deviate a bit), but it is still remarkable in social science to see such consistent results. c) We do have examples of industries that clearly have been severely impacted by generative AI: https://www.ft.com/content/b2928076-5c52-43e9-8872-08fda2aa2fcf. d) More generally, as someone working at a business school, I care less about an LLM's ability to do well on standardized tests, solve math problems, or advance science. GPT-4 is useful and is used in the production of text segments (be it emails, grant applications, or reports), can provide feedback on written text (in particular to non-native English speakers), can generate ideas (as the RCTs show), etc. So, while companies might not be able to use it to control robots, manage production, replace CEOs, or solve science problems, it is still easily worth a three-digit figure every month to many highly educated employees (I'd pay that for access to GPT-4, in particular a fine-tuned one). I find that a great business case. Whether OpenAI or another (open-source) approach will win, I have no idea. But there are billions of dollars to be made, yearly, already now. How much the current kind of generative AI can improve is then another question.

author

agree that there are billions, but are there tens or hundreds of billions, as envisioned by the valuations? that i am not sure of, esp given open source and the lack of a moat


To have $1+ billion in revenue in its first effective year of operation, with most of that year spent setting up business models, is not too bad. Also, I'm not sure about the lack of moat. It's been a year now, and OpenAI has been consistently and clearly in the lead. Maybe Google Gemini will make a serious charge, but given their very different business model and implementation of generative AI in their products, there'll still be plenty of room for OpenAI's approach.


"There has never been a solution to the hallucination problem, and it’s not clear that there will be until we find a new paradigm, as Altman himself partly acknowledged last week, when he did his best Gary Marcus impression. (I hope that’s what got him fired…) OpenAI might find that new paradigm some day, but someone else (e.g., DeepMind or Anthropic) might well get there first."

That will never come. Hallucination is just material for fine-tuning the statistical outcome of LLMs, and objectively it comes from differences in people's life experiences and from their not being open to new points of view.

It's not a problem; it's a window of opportunity to expand our scientific horizons. "Imagination is more important than knowledge. Knowledge is limited. Imagination encircles the world." - Albert Einstein. It's a translation scheme:

Source Terms - Target Terms - Translation - Human ->

Meaning - Scheme - Translation - Human -> (in English)

Смысл - Схема - Перевод - Человек (in Russian)

Epistemological General Intelligence System - Building a Knowledge-based General Intelligence System, Michael Molin - https://docs.google.com/presentation/d/1VCjOHOSostUrtxieZvOjaWuTNCT59DMF


Take a look -

Semantic Analysis: An Example - window of opportunity

General Intelligence System - https://activedictionary.com/ - Deep exploration

Time is the actualization of space. (in English)

Время - актуализация пространства. (in Russian)

window of opportunity - (in Russian)

https://vk.com/thematrixcom?w=wall451874784_5972%2Fall

References:

PowerThesaurus - https://www.powerthesaurus.org/window_of_opportunity/synonyms

WordHippo - https://www.wordhippo.com/what-is/another-word-for/window_of_opportunity.html

General Intelligence System - https://docs.google.com/presentation/d/1VCjOHOSostUrtxieZvOjaWuTNCT59DMF


A new paradigm is here https://alexandernaumenko.substack.com/p/is-generalization-about-similarities

It is not a "quick fix" because the approach is orthogonal to DL. There is a lot of work to be done.


Nice summary. Although don't you think OpenAI researchers would be working on other AI areas and gradually transitioning from LLMs alone to other paradigms?

author

possibly but don’t see a ton of evidence for that, other than the great hire of Noam Brown


I am quite sure OpenAI and Google are furiously throwing everything they can think of at the chatbot problem, while keeping mum. That's a lot of scientific and engineering talent, a massive number of data annotators, and lots of low-hanging fruit in terms of approaches.


This definitely feels like a rant. I'm pretty pissed they got rid of the women on the OpenAI board. If AI startups are taken over by Big Tech, is their funding even real? We've been told a lot of the $13 billion from Microsoft is just Azure credits.


"as Altman himself partly acknowledged last week, when he did his best Gary Marcus impression. (I hope that’s what got him fired"

The OpenAI debacle was obviously not about that, and it was obviously not about you. You write lucid stuff, except when you again veer into such things.


He made a tongue-in-cheek joke.


The larger question is whether there is a good business model around generative AI.

Copilot for Microsoft 365 will start being offered at $30 a month. If enough people in a company say such an assistant saves them time each day, the company may be inclined to buy it, especially if, longer term, it can help control labor costs.
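A quick back-of-envelope sketch of that purchase logic: how much time does the assistant need to save before the seat pays for itself? The $30 price is from the comment above; the hourly cost and workdays are illustrative assumptions, not Microsoft figures.

```python
# Rough break-even sketch for a $30/month assistant seat. The hourly cost
# and working days per month are illustrative assumptions.
seat_price = 30.0    # USD per user per month (price quoted above)
hourly_cost = 50.0   # assumed fully loaded cost of an employee, USD/hour
workdays = 21        # assumed working days per month

minutes_per_day_to_break_even = seat_price / hourly_cost * 60 / workdays
print(f"break-even: ~{minutes_per_day_to_break_even:.1f} minutes saved per workday")
# ~1.7 minutes/day under these assumptions
```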


My impression is that a large number of people have found ChatGPT* useful in a variety of ways.

So if OpenAI goes belly up, is there a company/product left to serve that market? Or could the shell of OpenAI be capable of continuing to offer access to their dated product?

Perhaps LLMs that were trained with curated data in a relatively limited area by a knowledgeable user community would be more reliable and useful.


The question is, though, whether the usefulness people have found has financial value commensurate with the cost of running the model.

On the other hand, from reading the semiconductor industry news, said industry is not only planning for there being humongous demand for next-gen AI chips, they're also actually starting to make said chips. So maybe that flooded market will bring costs down. Either way (no sales or bargain sales), the investors ain't gonna be happy.


Searching Google for the cost to run ChatGPT - it pops up with $700,000 a day.

The source article is dated 8/14/23 - that would be $21 million a month.

180.5 million users, 1.5 billion visits in September.

They would be breaking even at 1.5 cents a visit.

At least the sign looks right.
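As a sanity check, the same arithmetic in code; the $700,000/day and 1.5 billion monthly visits are the figures quoted above, and the 30-day month is an assumption:

```python
# Back-of-envelope check of the break-even-per-visit figure above.
daily_cost = 700_000              # USD per day, per the 8/14/23 article
monthly_cost = daily_cost * 30    # assumes a 30-day month: ~$21 million
monthly_visits = 1_500_000_000    # ~1.5 billion visits in September
cost_per_visit = monthly_cost / monthly_visits
print(f"~{cost_per_visit * 100:.1f} cents per visit")  # ~1.4 cents, roughly the 1.5-cent figure above
```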

author

hard to know exactly the current numbers but maybe right order of magnitude, independent of training costs? and gpt-5 would cost more
