100 Comments

Deep breath.

As a former special education professional who worked a LOT with cognitive assessments, and spent many hours correlating cognitive subtest scores with academic performance in order to create learning profiles, do I ever have an opinion on this.

Too many people are simply unaware of the complexities of human cognition. I've seen how one major glitch in a processing area...such as long-term memory retrieval (to use the Cattell-Horn-Carroll terminology)...can screw up other performance areas, and not just academic ones. Intelligence is so much more than simply the acquisition and expression of verbal knowledge (crystallized intelligence), which tends to be most people's measure of cognitive performance. I have had students with the profile of high crystallized intelligence, low fluid reasoning ability and...yeah, that's where you get LLM-like performance. The ability to generalize knowledge and experience, and to carry a learned ability from one domain to another, is huge, and it is not something I've seen demonstrated by any LLM to date. I don't know if it is possible.

Unfortunately, too many tech people working on A.I. haven't adequately studied even the basics of human cognitive processes. Many of those I've talked to consider crystallized intelligence to be the end-all, be-all of cognition...and that's where the problems begin.


I apologize if I presume too much, but I have to: you probably talked to the wrong people. I believe the distinction between fluid and crystallized intelligence is widely accepted and recognized by most (if not all) of the well-read population, and some ML engineers read cognitive science textbooks out of curiosity (Ilya comes to mind). Even kids can tell the difference between memorizing and reasoning.


You're much more optimistic than I am, because I have not seen the evidence that people understand the difference. And yes, I have had these discussions with people who were allegedly knowledgeable.


Interesting. But isn't the acceptance of IQ, for example, testimony to some rudimentary understanding of the distinction between crystallized and fluid intelligence?


To start with, let's bag the term IQ and go with a more accurate term for a full-scale cognitive score--G, for "general intelligence." Gf (fluid) and Gc (crystallized) are just two aspects of cognition that make up a full-scale score. They're also the two areas that really distinguish human intelligence from artificial intelligence--the remaining areas tend to be more performance- and processing-oriented.

You have to have both components for G, as well as the rest of the mix: memory, processing speed, visual-spatial ability, and so on. AI is competitive, if not superior, in memory and processing speed. However, without Gc, Gf isn't really workable (you need those past experiences and knowledge to make the intuitive connections that solve a problem). Additionally, Gf is more about thinking outside the box, whereas Gc is regurgitating acquired knowledge.

But you can cruise right along with Gc without a strong Gf. There's the difference.


Note...there's a lot to this but please tell me if I'm preaching to the choir and you know all about CHC theory. If not, here's a decent intro from Wikipedia. I tend to use crystallized for Comprehension-Knowledge as that's the term I learned.

https://en.wikipedia.org/wiki/Cattell%E2%80%93Horn%E2%80%93Carroll_theory


Joyce, this information you're sharing about the more complete context for intelligence has been helpful and, sadly, nearly absent from the growing discussion about the evolution of AI. Looking forward to more commentary from you. It would be interesting to hear thoughts on this subject from Gary Marcus.


This is one of your better posts. Thank you.

Nov 25, 2023 · edited Nov 26, 2023 · Liked by Gary Marcus

What I learned from using AI as a writing assistant (ChatGPT 4) is that it was great at what I call "low-level editing" (technical aspects: grammar, sentence structure, fragments, run-ons, typos, etc.), and sometimes at helping improve readability, as well as proposing words, phrases, and occasionally sentences as options to choose from – like throwing dice and seeing what comes up – using its vast memory and database of word and sentence patterns. But if I let it get out of hand, write too much or for too long without my hand at the wheel, it was crap: generic or hyperbolic or completely out of left field, whatever – and not my or the author's *voice* I was editing. Cheesy. And the longer it writes on its own, the more it confabulates, wanders away from the intention, adds things, or hallucinates.

The AI cannot *hear* how something *sounds* and how that *feels*, as a whole, which is exactly what a good writer is sensitive to: an intangible flow, an organic *voice* that cannot be pinned down.

But most importantly, when experimenting with having it help with a philosophical work, it struck me at one point, as clear as a bell, the insight that this thing had ZERO *real* understanding of the actual real *meaning* of what it's saying. It is *purely* mechanical. (Having worked in the computer field for most of my life helped me see this too).

That was very freeing, because it dispelled all illusions, any projections of intelligence, intentionality or awareness into it. It was simply responding to what I was asking, in its own weird, complicated stimulus-response way to be impressively human-language-text-like (and thus the clarity of prompts, and understanding what kind of machine one is dealing with, is absolutely critical).

It was even clearer then that, rather than a replacement for human creativity, the machine is a tool and a slave only. That is its natural place: an enhancer and amplifier for us. After all, *we* created it – the creativity is there: in the software, hardware, and the human ideas it embodies. It has none of its own: that which springs as a whole, "out of the blue" in an instant, from the quantum field of divine beauty love truth of Life (or whatever you want to call the Source).

As a machine adjunct to writing and human brains, it can save a tremendous amount of time and tedium, such as in editing a long work from a poor writer. But it is not a replacement for real creativity, understanding, intention, feeling, sense of beauty, truth, and love – and all that human and "divine" "stuff". :)


Cue the fanboi comment "…yet :)" and we inevitably digress back to talking about what intelligence is. There's some kind of fundamental disagreement going on between skeptics (aka realists, if you ask me) and fanbois.


Yes, exactly. It's a *very* fundamental disagreement - about the nature of reality, you could say. To me it's very practical and realistic-experiential; to their belief system it probably sounds airy-fairy. I know, because I used to be one of the fanboy tech-realist hard-headed skeptic materialists (at least on the surface...), so I know what that thought-system / belief system (religion) is like, and what the resistance feels like.


(I just added the word *meaning* to the above. :) – important!:

"...ZERO *real* understanding of the actual real *meaning* of what it's saying")

Meaning can be a tricky topic, because it's not in the words, or in the processing in time, or in the mechanics (by the way, everything I'm referring to here is experiential and directly empirical, not about *models*). We are *already* using an awareness of meaning to do the investigation and the communication, and it's hard to see where it's coming from, being so immediate. This popping-in of seeing the meaning of something (getting the punchline of a joke is a good example) is like an invisible connection, and it is entirely subjective (in a non-psychological sense), i.e., non-objective. Therefore you have to show what it is NOT in order to get an insight into what it IS. But philosophical tools like Searle's Chinese Room thought experiment can be good intuition pumps pointing in this direction.

So we don’t need new engineering, we need a new outlook to *drive* the engineering...

Nov 25, 2023 · Liked by Gary Marcus

Couldn't agree more. An excerpt from my unfinished draft AGI paper [https://bigmother.ai]:

"Unfortunately:

• humans are primarily motivated by short-term self-interest

• humans are instinctively tribal

• human cognition is far less perfect than we like to think it is.

As a result, Molochian behaviour (whereby many tribes compete in their own short-term interest, oblivious to any consequent long-term harm) is deeply ingrained into human nature."

I already plan to cite "Kluge" (among others) when I expand the relevant section.

Nov 25, 2023 · edited Nov 25, 2023 · Liked by Gary Marcus

Great read. I agree with most of it but have a few issues. I don't think an unbiased, unerring, "Commander Data" type of intelligence is possible. Intelligence is very much about making assumptions and correcting them if necessary. This is what learning is about.

When we fully understand intelligence and build intelligent machines, I believe we will find them to be just as prone to errors as we are, especially during their upbringing phase. They will eventually be more reliable than humans but only because they will be more focused on achieving their goals. They won't be distracted by most of the things that distract us, the things that make us human such as the taste of food, love relationships, reproduction, subjective aesthetics, etc. I could be wrong but this is my current take.


I don't think emotions are the real culprit here; humans are just "intrinsically flawed" in their hardware (as compared to some hypothetical ideal). Just think about our working and long-term memory, and the shitty API consciousness has for making full use of those two (like repeating a number 50 times until you can get to a notepad, anyone?). Forgetting is probably there for a very good reason, but that reason is tied to our hardware. An AI could probably make full use of both perfect memory and forgetting, or at worst have all of human knowledge directly connected to its subconscious with <1 ms queries.

Being able to design the software behind intelligence is a cheat code; ASI, in the sense of AGI++, will follow almost immediately.


The human brain is not nearly as flawed as you think. It's a wondrous machine. Perfect memory is only possible in Star Trek. In the real world, there is no room for it.


For an AI, near-perfect memory is extremely easy to implement; remembering the exact color of somebody's shirt during a meeting 20 years ago is a piece of cake and wouldn't take that much space. Almost perfect memory of what's useful for survival? A no-brainer.

But what I meant is that human brains will seem flawed when the first AGIs roll out and put us to shame. We struggle to multiply 11 by 12, or to remember where we put our keys; how hard is it to beat that?
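To put "wouldn't take that much space" in numbers, here's a back-of-envelope sketch; every constant in it (bytes per fact, facts per day, lifespan) is an assumption picked purely for illustration:

```python
# Illustrative estimate of storing one small factual record per waking minute
# for an entire human lifetime. All constants are assumptions.
BYTES_PER_FACT = 200    # assumed: timestamp + short text ("blue shirt, meeting w/ X")
FACTS_PER_DAY = 1_000   # assumed: roughly one recorded fact per waking minute
YEARS = 80              # assumed lifespan

total_bytes = BYTES_PER_FACT * FACTS_PER_DAY * 365 * YEARS
print(f"~{total_bytes / 1e9:.1f} GB")  # ~5.8 GB: a lifetime of such facts fits on a phone
```

Even with generous assumptions, the total is trivial next to what any data center stores.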


I couldn't agree more. For me, studying AI has taught me a lot about what it means to be human. And learning what it means to be human helps me better understand what AI is and isn't.

I feel one of our biggest blind spots today is that by building AI in our own image, we are instilling in these systems all the same vulnerabilities that make humans flawed.

The fact that LLMs can be coerced into all sorts of behavior is a testament to that. I don't see how that is going to go away as the technology becomes 'smarter'; in fact, I predict it will only get worse. This is why the whole idea of 'aligning' models with 'human preferences' is deeply misguided, in my opinion.


What alternative do you propose to the current paradigms?


I'm not proposing any alternative; I'm not in a position to do so. Companies like OpenAI are. I'm merely pointing out that this technology is a mixed bag and may cause more problems than it promises to solve.


Well shucks, this made my day too.


I'm glad you held up the Star Trek computer (STC) as a goal rather than Commander Data from the same show. Although a Data-style AI would also be interesting, we need the STC first.


The "Star Trek computer" is really an AI layer (an advanced non-hallucinating LLM maybe?) querying a huge network of dumb databases and sensors. I would not personally call it intelligent in the biological sense.


I agree that it relies on access to the internet (or whatever they're calling it in the Star Trek universe) and it is Siri-like in how it interacts with a human. But how is it not intelligent if it can guess the human's intent correctly most of the time and ask the right questions if unsure? I'm not sure what you mean by "in the biological sense" here.


My take is that true intelligence must go through an upbringing phase during which it is trained, or raised, via classical conditioning. This means reward and punishment. After conditioning, an intelligent machine will behave the way its teachers want it to behave, and it will not depart from that. Intelligence is the slave of motivation, not the other way around. Classical conditioning (motivational training) is one of the many things missing from LLMs.


First, we have no idea how the Star Trek computer was trained, or if it was even trained at all in the sense we use the term now. Second, and much more importantly, there's no reason to assume that some kind of human-like training process is required. Perhaps we can give the AI its innate knowledge by simply initializing it with the necessary data (world models, urges, algorithms, etc.).

We must think beyond current artificial neural networks. Not only do they not match biological neurons very well, but I also suspect there's a lot of structure missing compared to the human brain. We know the brain contains a lot of innate knowledge, but we haven't a clue as to where it is held.

Nov 27, 2023 · edited Nov 27, 2023 · Liked by Gary Marcus

Well said! Puts me in mind of Dijkstra's aphorism, "The question of whether machines can think is about as relevant as the question of whether submarines can swim." I love that his saying will now live in my mind on par with Ali G's "neither is better," thanks to this piece.

Nov 26, 2023 · Liked by Gary Marcus

Yes, good post. I'm starting to read most of what hits my inbox. Keep up the good work, and thanks for the play-by-play on OpenAI's drama as well.


The stochastic parrot argument, which says that LLMs are not intelligent because they are merely predicting the next token, is analogous to saying that humans are not intelligent because they are merely maximizing inclusive fitness. In both cases the original training process has led to remarkable capability on a variety of downstream tasks. Nevertheless, it is still important to take into account the nature of the original training process, whether it is natural selection or machine learning, when trying to understand the capabilities of a cognitive system, artificial or otherwise.

Humans make many "mistakes" in certain tasks, but these can be understood as adaptations through the lens of evolutionary psychology. For example, we perceive sounds moving towards us as moving faster than sounds moving away from us, but this "mistake" makes sense as a fitness-maximizing adaptation that prioritizes sounds associated with danger in our ancestral environment. Similarly, LLMs make what are, to us, basic mistakes, like being confused by word order, but again these make sense when considering the original objective function. Analogous to evolutionary psychology, we need to take a teleological approach to understanding artificial cognition, one which incorporates known facts about the architecture and training process of the system under study.

These ideas are discussed in detail in the paper "Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve" by R.T. McCoy et al. (https://arxiv.org/abs/2309.13638). My own commentary on their paper is available here: https://sphelps.substack.com/p/a-teleological-approach-to-understanding.
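To see the word-order point in miniature, here is a small sketch, assuming the Hugging Face transformers library and the public gpt2 checkpoint (the two sentences are toy examples of my own): under the autoregressive training objective, scrambled word order simply receives a much lower likelihood.

```python
# Compare the average per-token log-likelihood a causal LM assigns to a
# normal sentence vs. the same words scrambled. A sketch, not a benchmark.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def avg_log_likelihood(text: str) -> float:
    """Negative mean cross-entropy: higher means 'more probable' to the LM."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean next-token cross-entropy
    return -loss.item()

print(avg_log_likelihood("The cat sat on the mat."))  # noticeably higher
print(avg_log_likelihood("Mat the on sat cat the."))  # noticeably lower
```

Neither number says anything about "understanding"; it just makes the objective function's preferences visible.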


I feel like you could go even lower-level than inclusive fitness, especially since in modern society it's way easier not to prioritize inclusive fitness, or even to ignore it entirely. I mean, look: atoms and molecules dancing together, how could THAT possibly be intelligent!


Gary, I have to share one of the most painfully incisive bits of prose on this. In the May 2010 issue of Discover Magazine, in an article on robots, Bruno Maddox concluded with these words:

I'd argue that the revolution of the last 20 years has quenched our robo-fear, not so much by giving us a taste for change as by taking the gleam off that spark of humanity that we used to be so proud of. What is Man? people used to wonder. Is consciousness divine in origin? Or is it a mere accident of nature that we alone, of all the matter in this Great Universe, adrift upon this marbled speck, have the power to dignify and ennoble our condition by understanding it, or at least attempting to?

Then along came the Internet, and now we know what Man is. He enjoys porn and photographs of cats on top of things. He spells definitely with an a, for the most part, and the possessive its with an apostrophe. On questions of great import or questions of scant import, he chooses sides based on what, and whom, choosing that particular side makes him feel like, and he argues passionately for his cause, all the more so after facts emerge to prove him a fool, a liar, and a hypocrite. If a joke or a turn of phrase amuses him, he repeats it, as if he thought of it, to others who then do the same. If something scares him, he acts bored and sarcastic.

That's pretty much it for Man, it turns out after all the fuss. And so our robo-fear has become a robo-hope. Our dreams for ourselves are pretty much over, having been ended by the recent and vivid reiteration of the news that we really are just grubby and excitable apes, incapable by our nature of even agreeing on a set of facts, let alone working together to try and change them…. It's already clear that we're not building robots in our own image. We're building them in the image of the people we wish we were….


Gary, why do you think so many people (AI "experts" included) are hungry for AI to be human-like, to the extent that they fear such an outcome, versus thinking of AI as a tool to make humankind more productive, more efficient, and able to take on more difficult tasks? Why is there such a desire in humans to abdicate responsibility for thinking through what needs to be done and how? This is also frequently the case with laws people demand (until they have them... or, as some friends and I like to say, "you don't want that" ;). Yes, humankind is imperfect, but it feels like the ideals being proposed, and the capabilities being aspired to, by some AI researchers are less about the tool view of the world and more about the wholesale replacement of our species... unless you're Kurzweil, in which case it's about merging the two (ugh!) 😉. Your book "Rebooting AI" was excellent on the shortcomings of the current state of AI, which I really appreciated, and I feel it should be a must-read for anyone dealing in public policy around these issues.

Nov 26, 2023 · edited Nov 26, 2023

It's not that people want to replace humanity; it's that a liberal economy + (super)intelligent AI = humans replaced. "AI as tool" describes only this tiny, short period where AI is not good enough to replace us yet is still useful (= ChatGPT, Dall-E...). Humans will desire to be replaced by AI: why would anyone want to spend 10 hours a day assembling toys in a factory instead of having quality time with their family?

Now, perhaps you're scared of AI replacing humans in creative sectors, like art, writing poems, making movies, music...? Perhaps you could have policies that protect these fields from being overrun by AI creations, right? Well, that's impossible, unless you plan to control every computer in the world. Even a "Made without AI" label is impossible to protect from AI-made or AI-assisted content.

AI replacing us in almost every endeavor is a certainty, with odds as good as the sun rising the next morning.


Well, any time someone espouses certainty about one of the nearly infinite paths the future can take, it's perhaps best to consider that opinion with a grain of salt 😉. A strong sense of humility is necessary through all of this, to avoid the "hair on fire, end of the world" panicked behavior we see on all things these days. As they say, "strong opinions held lightly" is really the model to apply through all of this.

My concern is less with AI taking on various tasks, regardless of how easy or hard, than with ensuring it always does so in the context of "tool for humans," not as an abdication of responsibilities. Machines and various inventions have a history of displacing human physical labor. Books displaced knowledge-based labor. AI feels like a continuation of this sort of evolution. That's fine, IMO, so long as the tool context remains intact. My concern is that too many in the AI field are in search of a machine autonomy that they don't understand and don't truly want (once it happens, of course). It's all a matter of context, and that's where my bigger concerns lie.


My bad, I probably didn't get the meaning behind your message. I'd say people probably believe achieving robust intelligence comes with giving it human-like traits, and surrendering responsibilities to these tools seems inevitable because of monetary incentives. It's not that they wish or are hungry for that (although some are), just that it's one (quite) probable outcome.


This is one of the finest essays by Gary I've read. Focused on the future, on big and important themes, inspirational, motivational.


What do you mean that AI is pretty poor at "inferring semantics from language"? LLMs learn all about the semantics of words and sentences from training on texts written in many languages. And they are apparently pretty good at understanding words, as they can write decent college essays.

author

See the earlier post I wrote with Elliott Murphy on things AI could learn from linguistics, roughly a year ago.


AI needs to get a lot better at going from language to meaning, from meaning to detailed understanding and execution, then all the way back.

All prior approaches at this utterly failed. Only recently, with Transformers (much derided on these pages), have we managed to connect these, especially when Transformers are integrated with third-party functionality.

I know that current attempts are rough, but for the first time ever, we are making progress. With a large quantity of examples that have patterns in them, machines are learning to connect the dots.
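For what it's worth, here is a minimal sketch of that integration, assuming the openai Python SDK; the get_weather function and its schema are hypothetical, invented only to show the shape of the pattern:

```python
# Language -> structured call -> execution -> language: the model maps a
# free-form request onto a tool schema; our code runs the actual function.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical third-party function
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",  # assumed: any tool-capable chat model
    messages=[{"role": "user", "content": "Is it raining in Paris right now?"}],
    tools=tools,
)
# Instead of prose, the model can return a structured call such as
# get_weather(city="Paris"); we execute it and feed the result back.
print(response.choices[0].message.tool_calls)
```

Rough as it is, this is the "connecting the dots" in practice: pattern-matching from free-form language into an executable schema.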


Gary tends to be behind on some of the state of the art.

He tends to underrepresent AI's capacity, maybe at the cost of wanting to be a contrarian.

⚠️ Can leave believers helpless in light of growing automation.

⚠️⚠️ Writers have lost jobs to GPT, teaching assistants, etc.


"Thou shalt not make a machine in the likeness of a human mind." I feel like LLM's and things like Midjourney are a strange, almost misanthropic progression for the technology. Does a computer need a complex language model to handle a spread sheet? Does it need an advance imaging program to maintain a database?


Does my boat need an engine when I can row? There will be human suffering caused by such technologies, but they're unavoidable in today's societal configuration. All we can do is brace for impact and navigate the cataclysm as best we can. We'll adapt, I'm sure; worst case, we'll augment ourselves in a way that minimizes suffering in the new world.


A boat still needs a person to navigate, start, stop, load, and unload. Your "prediction" for the future isn't helping sell the technology. Just another piece of evidence for throwing it, and the people behind it, in a bin.


Yes, but in a world where everybody is trying to row faster than their neighbor, an engine becomes a must-have, just like LLMs handling spreadsheets become a must-have. Can we blame anybody for wanting to row faster than their neighbor, when rowing faster often truly equates with more happiness, given the selective pressure in every part of our lives? Even friendship is vulnerable to the row-faster-than-your-neighbor paradigm, and you can be ejected from a group of friends if some faster rower suddenly decides you're a nuisance.

To expand on my previous answer, what I meant is that there is no going back; it's over, AGI is coming, and all you can do is try to slow it down. Funny fact: trying to slow it down might even bring it about faster, as engineers would start exploring architectures with lower training costs, which are actually the highway to AGI. Even in an almost perfect setting where all countries, including Russia, China, Cuba, North Korea, and so on, decided to drastically regulate AI, AGI would still appear and propagate like wildfire.

The genie is out, and there's no bin it will fit in. Let's not forget that AGI also potentially means considerable improvement in the lives of billions: it could yield true material abundance for everyone, and the end of work and jobs. Imagine everybody doing community service in the form of taking care of the more vulnerable and less fit, for example. The net happiness in the world could reach unprecedented levels. What we need is to minimize the downsides during the transition by guiding the political scene with our votes.

What consequence of AGI scares you the most, Michael? Loss of jobs? Loss of meaning when AI overtakes humans even in creative endeavors? Dangers associated with misuse?


There are so many problems with your response I'm going to have trouble getting to all of them.

The rowing analogy has gone way too far and now seems absurd. My friends cut me loose because I can't row as fast? What?

The destruction of any and all creative pursuits, condemning us all to lives of community service and menial labor, is bad. I don't know how else to explain that. It's bad. The AI utopia you just described is a utopia in the literal sense: a great "no such place."

I don't trust AI, I don't trust the people behind AI, I don't trust the government to regulate AI.


In a group of friends, a friend who rows faster (= is better equipped, has more charisma, more social experience, more intelligence, more persuasiveness...) can potentially evict you from the group if he really wants to. This applies mostly to newly formed groups of friends, where the bonds are still fragile. People use LLMs because these LLMs give them more agency over their world, and agency is prized by nearly everybody because it helps them get happier.

I was just offering an idea for the community service thing; we'll be free to do whatever we want. Nobody knows how the transition from now to a world conquered by AGI will unfold. It's gonna be a fluid process; nothing will happen overnight, and creative pursuits will probably not be the first to fall, so lessons will have been learned from the careers stabbed through the heart before them. In the end, we'll adapt: just as people still play chess despite computers vastly outperforming them, people will still do all kinds of art. Only you won't get paid for it, but you probably won't need to be paid anymore in the future.

The utopia is not so far-fetched; it's speculative, but AGI should speed up technological progress by a very large factor. Many positive things should come out of it, and quickly (with associated risks as well). Hopefully abundance for everybody is one of those outcomes.


What happened that you have this view of social connections? No friend group, no work group, no group I've heard of acts like this unless they are awful people.
