88 Comments
May 5 · Liked by Gary Marcus

Sam is a pseudo-philosopher in a world that has forgotten how to think critically. Thanks for this. Best Gary thus far. Look forward to more.


Well said.


“all of this has happened before. all of this will happen again.”

Yep, I’ve been there. Working on AI research in 1990-1991, just before the second AI Winter.

May 5 · Liked by Gary Marcus

There is a major flaw that gets less attention: GenAI can't certify itself. It is a black box, and even if it seems to reach two-sigma reliability, the enormous human labor costs of proving that cannot guarantee it will respond correctly on the very next prompt. And if it takes more labor to check it than it saves, it is hardly a general intelligence.


Altman has ninja-level self-aggrandizement and media-manipulation skills, that I will grant. Whether he actually cares about anything he claims to care about, I have serious doubts. You don't have to be a psychological genius to know UBI would be a wildly inadequate replacement for any job that provided meaning and purpose. But you can't keep pushing a "full speed ahead to AGI" agenda while acknowledging UBI won't work, so Altman just doesn't address it. It's one of many, many ways he's fundamentally dishonest.


Isn’t this what any CEO does given enough cultural capital? I don’t know why we are surprised when we suddenly find out, again, that capitalism doesn’t have our best interests at heart! No offense, Gary. I love your reporting and I love what you do!


Capitalism is an imprecise word for plutocracy. It is imprecise because it treats the various window dressings as especially meaningful, which helps plutocrats keep people distracted and confused. Capitalism. Communism. The results are the same: plutocracy. Law's first and foremost mandate is to protect elite privilege. That is why it chiefly exists. Everything else serves that primary purpose.


Thanks. Very insightful.

May 5 · Liked by Gary Marcus

And dude - worth $7 T - as you know, in tech that means $700 T valuation !! What a fucking circus show - curious to see who falls for this shizz

May 6 · Liked by Gary Marcus

This is a great post, although, unfortunately, that is because what it describes is so infuriating.

I am indeed reminded of Musk's claims to put a man on Mars and to release fully self-driving cars next year, going on for years now. Why are they taken seriously despite spouting easily refuted nonsense? Because they are very rich. That means hardly anybody dares to check notes from two or five years ago and bring up the broken promises, because then they would lose access to interviews or exchanges of favours.

One of the most frustrating aspects of the AI and singularity discourse is the idea that AGI will solve problems, when those problems are nearly always either unsolvable as long as the laws of physics persist (immortality, interstellar travel) or have been solved decades ago but face issues of political will (global warming, poverty).

One of the IMO most frustrating aspects of public discussion in the last twenty years or so is the pseudo-profundity expressed with "intelligence is a property of matter". I have seen the same with consciousness, for example. The move here is to completely ignore the established meaning of words. That is simply not what intelligence and consciousness mean, end. of.


I do want to add /some/ nuance here, because "intelligence is a property of matter" has at least some validity when intelligence is seen through the lens of collective systems and non-equilibrium dynamics. Of course, this is merely one way to define intelligence, and its usefulness may be most actionable not in constructing a robot god but in better understanding the possible hidden competencies and behaviors of things other than us.

I'd really recommend following Michael Levin, who tackles the broader subject of diverse intelligence in a truly experimental, scientifically formal way: https://thoughtforms.life/ Even if you don't agree with him (he has the honest humility to start from the premise of 'we don't know!'), there is active research done by several other labs around this topic, and it's super fascinating!

But that said, most of the current discourse is done in hand-wavy ways whose only bar for rigor is whether it produces sufficient marketing buzz, so I get the feeling.


There are three possible sophistries here:

First, the argument from gradualism. A human has lots of intelligence, a dog has less, and the gradient just continues across beetles and plants down to single electrons, with less and less but still a smidgen of intelligence. This argument is easily reduced to absurdity by applying it to other qualities - the same logic applies to "generosity is a property of matter" or "life is a property of matter", but no, an iron bar simply is neither generous nor living nor intelligent. That is not what those words mean. At some point the gradient ends in zero, and that end is a long way before we get to atoms.

Second, the cognitive fallacy of concluding that because A is a precondition for B, every A is part of B: every intelligent (conscious, living) system is composed of matter, so all matter is part of an intelligent (conscious, living) system. That doesn't follow, to put it mildly. Every car has tires, but not every tire is part of a car.

Third, and most relevant, simply redefining the word intelligence (or consciousness, or life) to sound profound while talking nonsense. There isn't even much to be refuted in this case because it is simply twisting words, but it helps to understand how words or concepts arise. It is not the case that some philosopher invented intelligence out of thin air, and now it is up to us to compare that concept to parts of the world and see if it applies. Instead, our ancestors needed a term to describe the difference between Bertha, who easily grasps new ideas and easily solves problems, and Fred, who needs a pocket calculator to add 2 and 2 and still gets it wrong three times out of four, or between Bertha and her pet snake. It is simply not the case that the word as defined by our ancestors and still used by us today is such that it applies to all matter.

May 5 · Liked by Gary Marcus

Hey Gary,

I would try to add more structure (sub-chapters) and references. He is not the first to use this approach; Steve Jobs did the same to a lesser extent, and snake-oil vendors have done it too.

May 5 · Liked by Gary Marcus

I would also add a couple of elements:

(1) AGI is set as a goal, but today there is no agreement on what AGI really is.

(2) The whole structure of OpenAI as a nonprofit company that makes a profit is unclear, to say the least.


(1) It is clear to many of us what AGI is. For example: https://arxiv.org/html/2311.02462v2

(2) A capped profit entity is described here: https://openai.com/our-structure

Not trying to be a jerk, hope the links help.


I am aware of both items. Concerning AGI, there is no agreed definition. Concerning (2), the point is that OpenAI's strategy doesn't seem consistent with its mission. Elon Musk took legal action against them for this reason.


Here's a simple question, Gary: Is there anything analogous to this in the past 50 years of human history? Another instance where the public was bamboozled so thoroughly over a flawed, half-baked technology that dominated the news media...but then finally fizzled into something that became about a tenth as useful as its proponents' original (wild) claims?

Previously, it seems to me that such things have usually died—sometimes ignominiously—before ever making it to market.

May 5 · Liked by Gary Marcus

Like ChatGPT, Altman relies on all of us confusing confidence with competence. My question is whether he will ever truly have to answer for any of it.


Great piece! He’s betting the farm on emergent behaviour magically appearing if an LLM is large enough.


Thanks for calling it like it is, seeing through the illusion, Gary*.

Even before I read past the first few paragraphs, I wrote:

The Faith lives on… In a shining future. Requiring a certain ignoring of the present. Or of facts. Kind of like a religion, hmm.

So this pattern has been around for a while. First noticed it in spades with the Singularity, life extension, futurism folk.

Good luck trying to persuade them to drop the hopeful goose that, one day, over the rainbow, will lay golden AGI eggs and at the same time quell existential fears.

(Yet Altman’s eyes look worried. What’s going on behind the facade?)

(*Although in my opinion the hope for AGI is even more distant than Marcus thinks).


A follow-up to my comment.

From Gary Marcus, 12/28/23:

Systems like DALL-E and ChatGPT are essentially black boxes. GenAI systems don’t give attribution to source materials because at least as constituted now, they can’t. (Some companies are researching how to do this sort of thing, but I am aware of no compelling solution thus far.)

Unless and until somebody can invent a new architecture that can reliably track provenance of generative text and/or generative images, infringement – often not at the request of the user — will continue.


Yes, at minimum you need quality data, the ability to signal provenance, and explainability. Instead we have a black box filled with garbage and zero accountability.


The inability to provide the sources is a feature, not a bug. If those systems told you whose work they were cribbing from, those people would be able to sue for compensation.

Also, building those things into the system would be so expensive that a ChatGPT subscription would have to cost hundreds of dollars a month. Can you imagine the labor costs of reviewing and checking attribution on all the data that went in?

May 6 · Liked by Gary Marcus

Gary, I wonder if you've read this essay, The Myth of the Secret Genius, by Brian Klaas: https://www.forkingpaths.co/p/the-myth-of-the-secret-genius

author

Thanks for the tip!


That's a great essay. The just-world fallacy is important to point out, and to put what he wrote about the normal distribution of talent another way: as I get older, I have grown increasingly skeptical of the concept of genius in and of itself. I have certainly run into some people who are a bit dim, I have run into many who are uninformed, and I have run into people who have significantly greater talent in some area than I have, be it the ability to learn languages, memorise facts, network with others, build statistical models, or play instruments. But I have yet to meet a single person I would consider a genius.

Much more plausible that there are simply those who have a good idea when they are in the right place at the right time plus enough talent to see that idea through. But, crucially, there would have been thousands of others who would also have had the same idea and the same talent but weren't in the right place at the right time. For every Elon Musk or Sam Altman there are millions who would perform better in their roles if they had been given the opportunity, but they had to knit carpets at age twelve or were told that women can't do tech or simply decided to apply their talents to something less harmful to society than concentrating billions of dollars in wealth in one hand.


Genius absolutely exists but Einstein could have been born to poverty and ended up a bricklayer.

Just because someone is a genius does not mean they will be recognized and supported by social conditions in a manner necessary to even slightly meet their potential. It increases the likelihood but certainly does not guarantee it, especially as one moves further and further away from centers of power/privilege. The food one is given (along with womb teratogens, and even DNA damage to the father from things like working at a steel mill) impacts IQ. The quality of sleep one can have in one's dwelling. Et cetera.

The abuse of Alan Turing due to heterosexism shows that even having one's genius recognized by society is sometimes not enough to be supported enough to fulfill one's potential, too.

Finally, how "genius" an idea is seen to be and how much a particular person is allowed to be seen as its originator is also highly rooted in privilege much of the time. A poor person's incredibly insightful idea elicits a shrug if the shadowbanning algorithms even allow anyone to see it.

May 6 · Liked by Gary Marcus

Altman once tweeted "i am a stochastic parrot and so r u" and that seems to say it all.


I’ve noticed how Mustafa Suleyman applies many of the same rhetorical gymnastics (I covered his latest TED talk for my newsletter).
