100 Comments

Deep breath.

As a former special education professional who worked a LOT with cognitive assessments, and spent many hours correlating cognitive subtest scores with academic performance in order to create learning profiles, do I ever have an opinion on this.

Too many people are simply unaware of the complexities of human cognition. I've seen how one major glitch in a processing area...such as long-term memory retrieval (to use the Cattell-Horn-Carroll terminology)...can screw up other performance areas that aren't all academic. Intelligence is so much more than simply the acquisition and expression of acquired verbal knowledge (crystallized intelligence), which tends to be most people's measure of cognitive performance. I have had students with the profile of high crystallized intelligence and low fluid reasoning ability, and...yeah, that's where you get LLM-like performance. The ability to generalize knowledge and experience, and to carry a learned ability from one domain to another, is huge, and it is not something I've seen demonstrated by any LLM to date. I don't know if it is possible.

Unfortunately, too many tech people working on A.I. haven't adequately studied even the basics of human cognitive processes. Many of those I've talked to consider crystallized intelligence to be the end-all, be-all of cognition...and that's where the problems begin.


This is one of your better posts. Thank you.

Nov 25, 2023 · edited Nov 26, 2023 · Liked by Gary Marcus

What I learned from using AI as a writing assistant (ChatGPT 4) is that it was great at doing what I call "low-level editing" (technical aspects, grammar, sentence structure, fragments, run-ons, typos, etc.), and sometimes at helping improve readability, as well as proposing words, phrases, and occasionally sentences as options to choose from – like throwing dice and seeing what comes up – using its vast memory and database of word and sentence patterns. But if I let it get out of hand, write too much or for too long without my hand at the wheel, it was crap: generic or hyperbolic or completely out of left field, whatever – and not my or the author's *voice* I was editing. Cheesy. And the longer it writes on its own, the more it confabulates, wanders away from the intention, adds things, or hallucinates.

The AI cannot *hear* how something *sounds* and how that *feels*, as a whole, which is exactly what a good writer is sensitive to: an intangible flow, an organic *voice* that cannot be pinned down.

But most importantly, when experimenting with having it help with a philosophical work, it struck me at one point, as clear as a bell, the insight that this thing had ZERO *real* understanding of the actual real *meaning* of what it's saying. It is *purely* mechanical. (Having worked in the computer field for most of my life helped me see this too).

That was very freeing, because it dispelled all illusions, any projections of intelligence, intentionality or awareness into it. It was simply responding to what I was asking, in its own weird, complicated stimulus-response way that manages to be impressively human-language-text-like (and thus the clarity of prompts, and understanding what kind of machine one is dealing with, is absolutely critical).

It was even clearer then that rather than a replacement for human creativity, the machine is a tool and a slave only. That is its natural place: an enhancer and amplifier for us. After all, *we* created it – the creativity is there: in the software, hardware, and the human ideas it embodies. It has none of its own: that which springs as a whole, "out of the blue" in an instant, from the quantum field of divine beauty love truth of Life (or whatever you want to call the Source).

As a machine adjunct to writing and human brains, it can save a tremendous amount of time and tedium, such as in editing a long work from a poor writer. But it is not a replacement for real creativity, understanding, intention, feeling, sense of beauty, truth, and love – and all that human and "divine" "stuff". :)

Nov 25, 2023 · Liked by Gary Marcus

Couldn't agree more. An excerpt from my unfinished draft AGI paper [https://bigmother.ai]:

"Unfortunately:

• humans are primarily motivated by short-term self-interest

• humans are instinctively tribal

• human cognition is far less perfect than we like to think it is.

As a result, Molochian behaviour (whereby many tribes compete in their own short-term interest, oblivious to any consequent long-term harm) is deeply ingrained into human nature."

I already plan to cite "Kluge" (among others) when I expand the relevant section.

Nov 25, 2023 · edited Nov 25, 2023 · Liked by Gary Marcus

Great read. I agree with most of it but have a few issues. I don't think an unbiased, unerring, "Commander Data" type of intelligence is possible. Intelligence is very much about making assumptions and correcting them if necessary. This is what learning is about.

When we fully understand intelligence and build intelligent machines, I believe we will find them to be just as prone to errors as we are, especially during their upbringing phase. They will eventually be more reliable than humans but only because they will be more focused on achieving their goals. They won't be distracted by most of the things that distract us, the things that make us human such as the taste of food, love relationships, reproduction, subjective aesthetics, etc. I could be wrong but this is my current take.


I couldn’t agree more. For me, studying AI has taught me a lot about what it means to be human. And learning what it means to be human helps me better understand what AI is and isn’t.

I feel one of our biggest blind spots today is that by building AI in our own image, we are imbuing these systems with all the same vulnerabilities that make humans flawed.

The fact that LLMs can be coerced into all sorts of behavior is a testament to that. I don’t see how that is going to go away as the technology becomes ‘smarter’; in fact, I predict this will only get worse. This is why the whole idea of ‘aligning’ models with ‘human preferences’ is deeply misguided, in my opinion.

Nov 26, 2023 · Liked by Gary Marcus

Well shucks, this made my day too.

Nov 25, 2023 · Liked by Gary Marcus

I'm glad you held up the Star Trek computer (STC) as a goal rather than Commander Data from the same show. Although a Data-style AI would also be interesting, we need the STC first.

Nov 27, 2023 · edited Nov 27, 2023 · Liked by Gary Marcus

Well said! Puts me in mind of Dijkstra's aphorism, "The question of whether machines can think is about as relevant as the question of whether submarines can swim." I love it that his saying will now live in my mind on par with Ali G's "neither is better", thanks to this piece.

Nov 26, 2023 · Liked by Gary Marcus

Yes, good post. I'm starting to read most of what hits my inbox. Keep up the good work, and thanks for the play-by-play on OpenAI's drama as well.


The stochastic parrot argument, which says that LLMs are not intelligent because they are merely predicting the next token, is analogous to saying that humans are not intelligent because they are merely maximizing inclusive fitness. In both cases the original training process has led to remarkable capability on a variety of downstream tasks. Nevertheless, it is still important to take into account the nature of the original training process, whether it is natural selection or machine learning, when trying to understand the capabilities of a cognitive system, artificial or otherwise. Humans make many "mistakes" in certain tasks, but these can be understood as adaptations through the lens of evolutionary psychology. For example, we perceive sounds moving towards us as moving faster than sounds moving away from us, but this "mistake" makes sense as a fitness-maximizing adaptation that prioritizes sounds associated with danger in our ancestral environment. Similarly, LLMs make what are, to us, basic mistakes, like being confused by word order, but again these make sense when considering the original objective function. Analogous to evolutionary psychology, we need to take a teleological approach to understanding artificial cognition, one which incorporates known facts about the architecture and training process of the system under study. These ideas are discussed in detail in the paper "Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve" by R.T. McCoy et al. (https://arxiv.org/abs/2309.13638). My own commentary on their paper is available here: https://sphelps.substack.com/p/a-teleological-approach-to-understanding.


Gary, I have to share one of the most painfully incisive bits of prose on this. In the May 2010 issue of Discover Magazine, in an article on robots, Bruno Maddox concluded with these words:

I’d argue that the revolution of the last 20 years has quenched our robo-fear, not so much by giving us a taste for change as by taking the gleam off that spark of humanity that we used to be so proud of. What is Man? people used to wonder. Is consciousness divine in origin? Or is it a mere accident of nature that we alone, of all the matter in this Great Universe, adrift upon this marbled speck, have the power to dignify and ennoble our condition by understanding it, or at least attempting to?

Then along came the Internet, and now we know what Man is. He enjoys porn and photographs of cats on top of things. He spells definitely with an a, for the most part, and the possessive its with an apostrophe. On questions of great import or questions of scant import, he chooses sides based on what, and whom, choosing that particular side makes him feel like, and he argues passionately for his cause, all the more so after facts emerge to prove him a fool, a liar, and a hypocrite. If a joke or a turn of phrase amuses him, he repeats it, as if he thought of it, to others who then do the same. If something scares him, he acts bored and sarcastic.

That’s pretty much it for Man, it turns out after all the fuss. And so our robo-fear has become a robo-hope. Our dreams for ourselves are pretty much over, having been ended by the recent and vivid reiteration of the news that we really are just grubby and excitable apes, incapable by our nature of even agreeing on a set of facts, let alone working together to try and change them….It’s already clear that we’re not building robots in our own image. We’re building them in the image of the people we wish we were….


Gary, why do you think that so many people (AI “experts” included) are so hungry for AI to be human-like that they fear such an outcome, versus thinking of AI as a tool to make humankind more productive, more efficient, and able to take on more difficult tasks? Why is there such a desire in humans to abdicate responsibility for thinking through what needs to be done and how? This is also frequently the case with laws people demand (until they have them...or, as some friends and I like to say, “you don’t want that” ;). Yes, humankind is imperfect, but it feels like the ideals being proposed, or the capabilities being aspired to, by some of the AI researchers are less about the tool view of the world and more about the wholesale replacement of our species...unless you’re Kurzweil, in which case it’s about merging the two (ugh!) 😉. Your book “Rebooting AI” was excellent on the shortcomings of the current state of AI, which I really appreciated, and I feel it should be a must-read for anyone dealing in public policy around these issues.


This is one of the finest essays by Gary I've read. Focused on the future, on big and important themes, inspirational, motivational.


What do you mean when you say that AI is pretty poor at "inferring semantics from language"? LLMs learn all about the semantics of words and sentences from training on texts written in many languages. And they are apparently pretty good at understanding words, as they can write decent college essays.


"Thou shalt not make a machine in the likeness of a human mind." I feel like LLM's and things like Midjourney are a strange, almost misanthropic progression for the technology. Does a computer need a complex language model to handle a spread sheet? Does it need an advance imaging program to maintain a database?
