Watched the interview before you promoted it on Substack, and it was a good interview. Eisman had a general understanding of AI and LLMs, but you provided a more detailed and nuanced understanding, as well as some history, for those who listen. I will recommend it to friends and acquaintances looking for an explanation of the current situation and future prospects.
A corking interview 👌
Truly one of the best I've ever watched. Eisman asked great questions, and Gary was on fire. I've watched many of Gary's interviews, and I learn something new every time. It was a joy to be able to watch two highly intelligent and knowledgeable people carry on an animated, respectful, and inspiring conversation.
Great interview! Really liked the novelty perspective. It’s Kafkaesque: genAI is opening up new avenues and bringing novelty, and now it’s up to humans to understand that, at the very time we’re turning to genAI to innovate 🤯
Brilliant interview, thanks Gary and Steve!
Business documents from 10 pages to 5,000 pages are written in a combination of legal and business English, with links from objects to context, so word association doesn't work. The scoundrels knew this. The lure of easy money was too strong.
Watched the episode and I was really glad you appeared in Eisman's show!
Yay 🙌 I watched this and knew I recognised your name from reading your Substack 😊 great interview!
Gary, watch out, now. The "kill the messenger" model is out there. It's one of those models that already lives in some very powerful human beings' minds (e.g., Trump and all his cabinet), even though it need not.
But in THIS model, "KILL" can mean that, when the investment money begins to disappear, it will be the fault of Gary Marcus. Personal attacks follow. It's an UGLY HEAD that will probably rear up sooner rather than later in those who will be influenced in a very bad way, in direct relationship to the popularity of that interview. Kudos.
There are bunches of long-standing truths out there, BTW, but they hide. It's just getting into a good communications vehicle that makes the difference in a world culture as complex as ours. And I think you have done it with this interview, which was truly chock full of it . . . truth, that is. Much appreciated. C. King aka Catherine Blanche
Your interview on the Eisman show was a must see, hear, listen... It was measured, reflective, and snarkless...
...but also worrisome for future portfolios. :)
This was a really awesome interview. Mr. Marcus is exceptional in the way he uses words, cites examples, and interacts with the person he's conversing with. I learned a helluva lot I didn't know before. I have a couple of random questions - maybe showing my ignorance by asking:
1. I'm surprised the hallucination/false-information problem persists... when AI makes ANY assertion, why can't it do some basic checks on its basis? E.g., when it states Harry Shearer was born in London, why can't it check where on the internet it says that? Or if it proposes a chess move, why can't it check whether the move is in accordance with the rules of chess?
2. Gary stated, ~45:23, "The limited partners are going to lose a lot of money in the end...." Will they lose real money, or money on paper (i.e., the valuation of their venture, not dollars they actually put in)?
3. I'm a worrier. I worry TPTB are so sure AI is the next big thing, and that the US must lead or risk falling behind China, that they'll commit a lot of government money and other resources, so much so that they'll make the same threat they've made in the past with banks: if we don't keep pouring money into it, if we don't prop up the NVIDIAs of the world (the way we did with GM), the financial system will collapse. Is this a valid concern?
Hat tip also to Steve Eisman for asking solid, intelligent questions and keeping the discussion moving.
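On point 1, the contrast is worth making concrete: checking a discrete claim against a trusted store is trivial to program outside the model; the hard part is that the model has no such store and doesn't know which of its word sequences are checkable claims. A toy sketch in Python (the reference store, key names, and values are invented for illustration):

```python
# Toy illustration of post-hoc fact checking: a claim is compared
# against a hypothetical trusted reference store before being emitted.
REFERENCE = {
    "harry_shearer_birthplace": "Los Angeles",
}

def check_claim(key: str, claimed_value: str) -> bool:
    """Return True only if the claim matches the reference store."""
    return REFERENCE.get(key) == claimed_value

print(check_claim("harry_shearer_birthplace", "London"))       # False
print(check_claim("harry_shearer_birthplace", "Los Angeles"))  # True
```

The check itself is one line; what an LLM lacks is the structured store and any mechanism for mapping its own fluent output onto keys like these.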
Re hallucinations, I think this is difficult because the model doesn’t really “know” that it made an assertion, so it can’t check it. It just found, somehow, that some words strung together were statistically likely. I think this is why, when an LLM is challenged on something, it is often insistent that it is right: it keeps finding that association somewhere in its data.
The LLM only has words and letters at its disposal. We have actual ideas in our minds which we code as words. The words are not the ideas themselves.
You can read about this philosophical argument on Wikipedia; search for “Chinese Room”. This is an old argument which has been discussed a lot. In this thought problem, a person who speaks only English can process written Chinese messages and produce answers by following a lot of complex rules, measuring the various lines written on the paper and then producing a bunch of markings which Chinese readers can read. (The unspoken assumption is that it is possible to code knowledge such that it can be processed this way.)
This may be why Gary is promoting the idea of world models, which consist of facts against which new ideas can be tested. I would say the unspoken assumption here is that there is a generalized way to digitize knowledge.
Regarding your third point, this seems to be the way of the world. Many people want a short cut to riches. They want to kill the goose laying the golden eggs. They will spend vast fortunes, ruin the air and the water, just to be king of the mountain.
Much thanks for your explanations.
I'm still playing catch-up so I'm going to have to give some of it a think.
I can heavily recommend Gerben Wierda’s writing on the topic: https://ea.rna.nl/the-chatgpt-and-friends-collection/
@Gary Marcus Having watched your interview and thought through your discussion on ‘novelty’, would I be correct in saying that LLMs are structurally incapable of providing outputs that do not exist in training data, embeddings, vector graphs, RAG data and the like? They can create combinations and permutations of these ‘data points’, but cannot create anything new that is unknown within that conglomerated data?
Current AI cannot create something wholly new because it lacks both an intuitive understanding of how things function and any information beyond what it is given in text or image form.
That said, very little out there is wholly new. Almost all of the effort of people is spent on observation, repetition, synthesis, adaptation, and correction. A building may be wholly new, but still built with known construction materials and refinements of existing methods.
The current phase is about agents that learn not data but workflows, strategies, and examples of when to use which tool for what, and how to handle failure. This will go a long way.
It's interesting, because LLMs' capabilities are in the category of novelty. If we want to use, embed, and adopt these novel capabilities, it's still up to humans to innovate and find the true applications.
Are workflows, strategies and examples data?
Sure, workflows are data, just of the meta kind: like software vs. the inputs to that software. LLMs are able to separate, somewhat, the methods from the specific inputs and apply them in new contexts, and also to synthesize methods by recombination.
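That software-vs-inputs analogy can be made concrete with a toy Python sketch (the step names, handlers, and file name are all invented for illustration): the workflow is plain data, and a generic runner interprets it, so "learning a workflow" is still learning data, just data one level up.

```python
# A workflow expressed as plain data: an ordered list of steps.
workflow = [
    {"step": "extract", "args": {"source": "report.txt"}},
    {"step": "summarize", "args": {"max_words": 50}},
]

def run(workflow, handlers):
    """Generic runner: interprets workflow data using a table of handlers."""
    results = []
    for item in workflow:
        handler = handlers[item["step"]]         # look up the method...
        results.append(handler(**item["args"]))  # ...and apply it to its inputs
    return results

# Hypothetical handlers standing in for real tools.
handlers = {
    "extract": lambda source: f"extracted text from {source}",
    "summarize": lambda max_words: f"summary in <= {max_words} words",
}

print(run(workflow, handlers))
```

Swapping in a different `workflow` list changes the behavior without touching the runner, which is the sense in which the method layer is "just data of the meta kind."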
Thank you for the steer. A very interesting interview
Gee Gary, great. I would love to hear you more in addition to reading you. Is that available?
You can search for Gary Marcus on YouTube. There are a lot of interviews and discussions to be found there, though none that I've watched (a lot!) were as good as the one with Eisman, who is as exceptional as Gary in intelligence, curiosity, judgment, and willingness to question popular trends rooted in ignorance and hubris. Eisman emerges as a standout in Michael Lewis's book The Big Short. He and Gary are both heroes of mine.
I noted he did not contradict your portrayal of the motivations of VC managers - that is, as long as there's a plausible 'story', there are fees to be had, what's not to like?
Many of us suspected just that.
RCTweatt: I don't want to give the devil more than his/her due, but we ARE in an age of specialization, which means that being a CEO or money manager probably means not having a specialist's handle on many of the arenas of investment they control. And IF they don't have a good set of backup resources, finger-on-the-pulse specialists on staff who are not afraid of losing their jobs because they go against the tide of otherwise-smart stupid people out there in the pick-your-bubble and/or rabbit-hole universe (whichever suits your desires), people who are also smart but not wise, THEN you get what you pay for, so to speak.
I was glad to see Gary expose the "get-your-2%-fee-and-run," who-cares-what-happens-then universe, which can be described as a person choosing to work in this particular amoral model. Not all, but probably way too many "people" in that universe.
New York Times gift article: happenings at Davos regarding AI:
https://www.nytimes.com/2026/01/20/business/davos-trump.html?unlocked_article_code=1.GFA.oR5g.dk26baRKoOCw&smid=url-share
This is awesome! Thank you