84 Comments
Apr 7 · Liked by Gary Marcus

It certainly seems the negative societal impacts of generative AI are far outpacing any potential benefits, and at a staggering speed. The pollution from LLMs threatens the internet as a useful communication medium, sucking up human-generated content and then turning out a firehose of generated swill that is at best mediocre and unreliable.

If someone *wanted* to harm society and the economy in one efficient stroke, I doubt they could have come up with a better move than having OpenAI release ChatGPT in late 2022, with grandiose claims (that continue not to hold up), and set off the rat race that's currently playing out.

Shakespearean tragedy seems too small to describe this. This is like the Iliad, or the Mahabharata: humankind letting loose its worst instincts and causing mass suffering and harm.


Funny how, if you slog through LeCun's most recent appearance on the Lex Fridman podcast, LeCun is now very skeptical of LLMs as the path to AGI or seriously advanced AI. The most dangerous thing about AI development is that it promotes people who are highly technically proficient, which LeCun clearly is, but also unbelievably intellectually dishonest. They repeatedly hype AI's alleged capabilities while disparaging those concerned about safety and reliability. When the safety concerns become impossible to deny, the AI hype people move on and pretend they knew all along that, for example, LLMs are unreliable. No! You were shouting down the people saying exactly that just a few months ago as "doomers"!

The people with tech skills AND shameless hype get billions in seed capital, while the people warning about safety concerns get belittled and scorned by people like Marc Andreessen, who claims AI will be the silver bullet for literally every problem humanity has. Meanwhile, LLMs can be hacked by people who know nothing about AI, by prompting a model with a few sentences it can't handle, or with a 30-year-old computer graphic!

Apr 7 · Liked by Gary Marcus

Spot on. Your list of AI deficiencies and inadequacies echoes my own. The only thing I would add is that hallucinations should not be imputed to a system that lacks any concern for realness and truth, and has no care whatsoever for anything other than its own feedback loop (feeding into your argument around echo chambers). The most pernicious effect, I agree, is that of contamination. Imagine, therefore, if two LLMs started talking to each other and the fruits of their exchange became the dataset of a third.


All this is driven largely by the vast amount of money being thrown at the field. In that sense, it's a gold rush and no one really cares if it's fairy gold that might simply vanish tomorrow. Not as long as they've stuffed their bank accounts first.

I'm afraid that the only way to derail this train is to come up with something that can outperform it, or perform more or less as well for a lot less investment of money and training data.


Gary, I am completely with you on this! Since most LLMs are based on pattern matching via attention, which is based on cross-correlation, the downsides of LLMs are not surprising.
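For readers who want the concrete version, here is a minimal NumPy sketch of scaled dot-product attention (the textbook formulation, not any particular model's code). The dot products between queries and keys are exactly the similarity/cross-correlation matching described above; all shapes and data are illustrative.

```python
# Minimal sketch of scaled dot-product attention, the pattern-matching
# core of Transformer LLMs. Shapes and data are illustrative only.
import numpy as np

def attention(Q, K, V):
    d_k = K.shape[-1]
    # Similarity scores: every query is dot-producted (cross-correlated)
    # against every key.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns similarities into mixing weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # The output is a similarity-weighted average of the values:
    # pattern matching, not explicit reasoning.
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 query positions, width 8
K = rng.standard_normal((6, 8))  # 6 key positions
V = rng.standard_normal((6, 8))
print(attention(Q, K, V).shape)  # (4, 8)
```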

Apr 7 · Liked by Gary Marcus

“a pile of unreliable systems that are unlikely to live up to the hype.” Every single place I turn, the buzzword of AI is used. When the hype falls flat, I think it will be more than egos that get bruised. People are investing in AI tools thinking that this is a great product with vastly more pros than cons.


Edsger Dijkstra - "goto statement considered harmful"

That's an unvarnished truth from a computer science god.

That didn't mean one could never use a goto statement in non-critical code or in one's own private code.

Simple John - "language generators considered harmful"

That doesn't mean they can't be used for innocuous limited distribution writings or for your personal entertainment.

Otherwise they must be banned.

Our children will not stand a chance when 999 out of a thousand inputs to their brains are word stew (up from 9 or 99).

Banning LLMs and language generation would absolutely improve life for all but the stockholders and the narcissists. ABSOLUTELY. Does anyone disagree?

I've not touched a GPT or image generator. They're not human. They have zero soul.

I let a staff member run a letter to customers through Gemini. It came back nicely formatted, reasonably capturing the transactional aspect of the letter, but it obliterated what I think of as my humanity.

I wound up using Gemini's first sentence and otherwise stuck to my guns.

Soul, baby. Do you really want our kids to see 99.9% soulless output every time they look at our adult world?

Is banning radical? So is war. Some things are worth fighting for.


Greed for compute/electricity as well of course...


The solution is not ideas, it is scale. However deep in the weeds the LLM industry is, it figured out the scale problem in human knowledge representation. As you and most researchers know, semantic representations, whether CYC or Watson, have not scaled despite decades of ontology curation and lexical clustering. Humans hit a wall when they try to scale.

The holy grail has been fully automated construction of knowledge graphs. The data structure for the Semantic Web is many decades old, with Sir Tim Berners-Lee getting it fully defined as Web 3.0; RDF and the W3C standards were the result. Filling in that data structure has failed as a large-scale human activity, even when limited to narrow specialized areas like health. Ask Watson.
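To make concrete what that data structure looks like: RDF encodes knowledge as subject-predicate-object triples. Here's a minimal sketch using Python's rdflib; the namespace and the medical "facts" are invented purely for illustration.

```python
# Minimal sketch of the RDF triple model, using rdflib.
# The namespace and facts below are invented for illustration.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/health#")  # hypothetical vocabulary

g = Graph()
g.bind("ex", EX)

# Each fact is one (subject, predicate, object) triple.
g.add((EX.aspirin, RDF.type, EX.Drug))
g.add((EX.aspirin, EX.treats, EX.headache))
g.add((EX.aspirin, EX.interactsWith, EX.warfarin))

# Turtle is the usual human-readable serialization.
print(g.serialize(format="turtle"))
```

Three triples take three lines of curation; the wall described above is that useful coverage of even one narrow domain takes billions of them.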


And just to take things from a 1950s interpretation of 1940s neuroscience ...

"The past 40 years have witnessed extensive research on fractal structure and scale-free dynamics in the brain. Although considerable progress has been made, a comprehensive picture has yet to emerge, and needs further linking to a mechanistic account of brain function. Here, we review these concepts, connecting observations across different levels of organization, from both a structural and functional perspective. We argue that, paradoxically, the level of cortical circuits is the least understood from a structural point of view and perhaps the best studied from a dynamical one. We further link observations about scale-freeness and fractality with evidence that the environment provides constraints that may explain the usefulness of fractal structure and scale-free dynamics in the brain. Moreover, we discuss evidence that behavior exhibits scale-free properties, likely emerging from similarly organized brain dynamics, enabling an organism to thrive in an environment that shares the same organizational principles. Finally, we review the sparse evidence for and try to speculate on the functional consequences of fractality and scale-freeness for brain computation. These properties may endow the brain with computational capabilities that transcend current models of neural computation and could hold the key to unraveling how the brain constructs percepts and generates behavior. "

Grosu GF, Hopp AV, Moca VV, Bârzan H, Ciuparu A, Ercsey-Ravasz M, Winkel M, Linde H, Mureșan RC. The fractal brain: scale-invariance in structure and dynamics. Cereb Cortex. 2023 Apr 4;33(8):4574-4605. doi: 10.1093/cercor/bhac363. Erratum in: Cereb Cortex. 2023 Sep 26;33(19):10475. PMID: 36156074; PMCID: PMC10110456.
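For anyone unfamiliar with the jargon: "scale-free dynamics" typically means a power spectrum following a power law, 1/f^β, which shows up as a straight line on a log-log plot. Below is a rough sketch of that check on a synthetic signal; the pink-noise generator and every parameter are illustrative, not taken from the paper.

```python
# Rough sketch: testing a signal for scale-free (1/f^beta) dynamics
# by fitting a line to its log-log power spectrum.
# The synthetic pink-noise signal and parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 2**16

# Synthesize approximate 1/f ("pink") noise by shaping white noise
# in the frequency domain: amplitude ~ f^-0.5, so power ~ 1/f.
freqs = np.fft.rfftfreq(n, d=1.0)
spec = rng.standard_normal(len(freqs)) + 1j * rng.standard_normal(len(freqs))
spec[1:] /= np.sqrt(freqs[1:])
spec[0] = 0.0
signal = np.fft.irfft(spec, n=n)

# Fit log(power) against log(frequency); for scale-free dynamics the
# slope is approximately -beta (here beta should come out near 1).
power = np.abs(np.fft.rfft(signal)) ** 2
mask = freqs > 0
slope, _ = np.polyfit(np.log(freqs[mask]), np.log(power[mask]), 1)
print(f"estimated spectral exponent beta ~ {-slope:.2f}")
```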


"there systems" -> "their systems" (I'm assuming you can remove this comment and I don't know if you see anything else as quickly)


I would have to disagree with most of this.

Perhaps I'm the only one actually experimenting with LLMs at scale over multiple decades, but the current versions are staggering. Most of the quoted issues are not entirely relevant anymore - the old complex chestnut "Time flies like an arrow" is easily comprehended.

The first few thousand novels I generated in the past had issues with what I called the "physics" of reality - impossible descriptions - but no longer.

The unreliability - I suppose nobody has heard of an "unreliable narrator" - is a matter of naivety in fact-checking. Grammar, factual statements - show me a human who is 100% accurate in non-fiction, please.

Major American newspapers happily misrepresented certain recent conflicts without correction, no AI needed.

The internet was long, long ago (speaking of the 1980s) polluted by opinion masquerading as fact, which grew exponentially without AI intervention. Tantalizing misinformation resonated in a manner akin to a laser with mirrors on either end, stimulating and amplifying crud until it burst out, decimating facts in its path.

Most of these critiques are of the internet, not of AI or constructed LLMs.

The only major failure I continue to see exhibited in the fiction and nonfiction I generate - novels, screenplays, papers, training and analysis texts - is embodiment-related.

Encoded sensations within perceptual systems that share cognitive strata with abstract reasoning don't yet translate into LLMs - perception of time, or physical position (proprioception), or the similar nonlinguistic perceptual models we hold.

My regression test set for seeing how generations are doing includes hardcore erotica (quite good now); when moments arrive that are purely sensory, glitches appear in the human-body reasoning.

We live with cognitive systems that have encoded reality, which we access through consciousness and which are modelled by multiple overlapping sensory/feedback loops. LLMs are already "multimodal" - visual, perhaps auditory - and lack only a dozen more sensory encodings to make them even more stunning: chronosensory, chemosensory, proprioceptive, nocisensory, interosensory, thermo-, hygro-, equilibrio-, mechano-, and perhaps electro-, magneto-, and spatiosensory encodings during training are the only way to add the dimensions required to encode and connect embodiment - that, and homeostatic feedback loops like fatigue, thirst, hunger, temperature, immune systems, reproductive hormones, and so on.


The source of the hallucinations is you, human.


The prospect of people becoming mere fact-checkers for AI is remarkably dehumanizing.

I have yet to see a product of AI that isn't derivative, pedestrian drivel in an unctuous voice of fake sincerity.

The AI gold rush is this year's cryptocurrency - a new recipe for irresponsibility and falsehood.


> Gary Marcus desperately hopes the field of AI will again start to welcome fresh ideas.

And I see it as a Human Language Project for all humankind.


The thing is, though, AI automation of the white-collar world is upon us. It may be a good idea or a bad idea; we may like it or hate it, be confused or clear, enthusiastic or bored. Whatever our personal situation, AI automation of the white-collar world is still going to proceed, and it will proceed for the same reasons agriculture was mechanized and factories went robotic. This process of automation is now more than a century old, at the least. Our opinions on the current automation transition don't really matter, because we have little power to change the course of history.

I've been yelling about the overheated knowledge explosion for years now. Even if all my rants were published on the front page of the New York Times, it wouldn't make a bit of difference. Such things are bigger than any of us. They're bigger than all of us.

We are entirely within our rights to yell about AI. But doing so makes about as much sense as yelling at the weather. What does make sense is trying to figure out how we're going to adapt to the inevitable.
