
No one disputes the fact that Yann LeCun is a praiseworthy deep learning pioneer and expert. But, in my opinion, LeCun's fixation on DL as the cure for everything is one of the worst things to have happened to AGI research.

Deep learning has absolutely nothing to do with intelligence as we observe it in humans and animals. Why? Because it is inherently incapable of effectively generalizing. Objective function optimization (the gradient learning mechanism that LeCun is married to) is the opposite of generalization. This is not a problem that can be fixed with add-ons. It's a fundamental flaw in DL that makes it irrelevant to AGI.

Generalization is the key to context-bound intelligence. My advice to LeCun is this: please leave AGI to other, more qualified people.


The LLM charade continues... hopefully not for long.

Mar 4, 2023 · Liked by Gary Marcus

I think that Ian Bogost came up with the best single sentence that explains the problem and its dangers:

"Once that first blush fades, it becomes clear that ChatGPT doesn’t actually know anything—instead, it outputs compositions that simulate knowledge through persuasive structure." ("Generative Art is Stupid," _The Atlantic,_ 2023-01-14.)

I initially found ChatGPT interesting, but at this point I find it quite frightening. As electronic communications and the Internet have become more and more prevalent over the past few decades, we seem to have moved from a world where the primary problem is finding good information when it is scarce to a world where the primary problem is filtering bad information out of a huge flood of information both true and false, and the latter seems to me a much more difficult problem. Programs that rapidly generate more information will only exacerbate this flood and make finding good information more difficult yet. (And of course they'll produce ever more bad information as they are trained on the flood of other bad information.)

Nov 21, 2022 · Liked by Gary Marcus

This feels like they're trying to use an axe to turn a screw. LLMs are simply the wrong tool for the job if you're trying to make factually correct statements about reality or conduct any kind of logically sound reasoning.


There is a need for a conceptual model of the world, i.e. its scientific picture - t.me/thematrixcom


"Is this really what AI has come to, automatically mixing reality with bullshit so finely we can no longer recognize the difference?"

Well, that's consistent with the FB story.


Only barely more impressive than https://thatsmathematics.com/mathgen/ (which did fool a journal once)


But can you point out profound research efforts for true knowledge representation that really rethink AI from the bottom up?

It seems like the vast majority of ML/AI research is focused on beautifying bullshit.


I cannot stop laughing as I contemplate Terence Tao's discovery of Lennon-Ono [sic] complementarity according to the AI. Despite claims that ChatGPT with GPT-4 is much improved, I think this ACM post remains valid https://cacm.acm.org/blogs/blog-cacm/270970-gpt-4s-successes-and-gpt-4s-failures/fulltext

Jan 12, 2023 · edited Jan 12, 2023

Galactica, ChatGPT, and all other LLMs are con men in the most literal sense, and this ain't comfy. If it has the logic skills of a teenager but the BSing skills of a professor, this would unironically make smart people worth more than marketers. Also, time to use this to "Turing test" academic frauds. https://threadreaderapp.com/thread/1598430479878856737.html https://davidrozado.substack.com/p/what-is-the-iq-of-chatgpt https://en.wikipedia.org/wiki/Sokal_affair

Also not to doot my own hoot, https://bradnbutter.substack.com/p/porn-martyrs-cyborgs-part-1


Marcus writes:

"And, to be honest, it’s kind of scary seeing an LLM confabulate math and science. High school students will love it, and use it to fool and intimidate (some of) their teachers. The rest of us should be terrified."

Yes, you should be terrified because AI is going to make you obsolete. You are in effect promoting the source of your own inevitable career destruction.

As an example, here's an article by an academic philosopher:

https://daily-philosophy.com/jasper-ai-philosophy/

The article says:

"I tried out Jasper AI, a computer program that generates natural language text. It turns out that it can create near-perfect output that would easily pass for a human-written undergraduate philosophy paper."

So, how long will it be until AI can write near-perfect output that will easily pass for PhD-level philosophy papers? I don't claim to know the timing, but isn't such a development inevitable?

What's going to happen when those who fund the ivory tower can't tell the difference between articles written by humans and those generated by AI? The answer is, the same thing that happened to blue collar workers in factories.

In the coming era we won't need any of you to write us articles about AI, because AI will do a better job of that, at a tiny fraction of the cost. And we won't need you to further design AI either, because AI will outperform you there as well.

What we are witnessing in all these kinds of discussions all across the Net are very intelligent, well-educated people with good intentions who don't yet grasp that they are presiding over their own career funerals.


Thanks Gary for your recent presentation - https://www.youtube.com/watch?v=xE0ycn8dKfQ - I absolutely agree about the need for deep understanding and conceptual knowledge for further development of AI - that's what I'm working on in my project - t.me/thematrixcom


The criticism here isn't what I expected. Being able to invent some plausible-sounding bullshit is a characteristic of human domain experts. The real test is whether the explanations of real phenomena are accurate.


I think this is an easy problem to solve though, right? You just need to pass the output through a second neural network that checks for truthfulness and labels how confident it is in the accuracy, just like AlphaFold does. How do you tell what is accurate? You cross-check multiple different sources, and if they all give similar answers then you can say confidently that it is true. Or you generate several different outputs with minor differences in the prompting. This is basically the only way we can tell whether things are true today, unless you are a front-line researcher. There is no reason this would be particularly hard for a neural network. Obviously this will only give you the accepted wisdom, but asking for anything more is obviously not feasible at this point.
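
(For what it's worth, here is a minimal sketch of that cross-checking idea in Python. The `generate` callable is a hypothetical stand-in for whatever LLM API you use, and majority voting over exact string matches is a deliberate simplification; a real checker would need semantic comparison rather than literal equality.)

```python
from collections import Counter
from typing import Callable, Optional, Tuple

def consensus_answer(generate: Callable[[str], str],
                     question: str,
                     threshold: float = 0.6) -> Tuple[Optional[str], float]:
    """Ask the model the same question with slightly varied prompts,
    then accept the majority answer only if agreement is high enough."""
    variants = [
        question,
        f"Answer briefly: {question}",
        f"In one sentence: {question}",
        f"Fact check: {question}",
        f"According to reliable sources, {question}",
    ]
    answers = [generate(p).strip().lower() for p in variants]
    best, count = Counter(answers).most_common(1)[0]
    confidence = count / len(answers)
    # Only report the answer when a clear majority of samples agree.
    return (best if confidence >= threshold else None), confidence
```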


I'm not sure if you are tilting at windmills full of straw men, or just missing the point. Perhaps I am simply unaware of grandiose claims for LLMs that are meant to be taken seriously, as opposed to a bit of gee-whiz hype.

The thing to be impressed by here is what these models do with so little. At 120 billion parameters, Galactica is working with something in the neighborhood of 0.01% to 0.1% of the capacity of the human brain. At inference time, it generates these texts in a matter of seconds. They certainly can't afford to run this model on the latest, greatest supercomputers, so it is safe to say the texts are generated with an even lower percentage of the equivalent human brain compute in the same time period. And a human writing such texts would likely spend time at least on the order of hours, perhaps at least 15 minutes on a short summary of a familiar topic. I would be surprised if the texts are generated with any more than 1/1,000,000th of the compute a human mind would apply to such tasks, more likely 1/1,000,000,000th (one billionth). Finally, the efficiency with which both the parameters and the compute are utilized likely falls short of biological brains by at least an order of magnitude.
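
(A quick back-of-the-envelope check of that capacity figure, assuming the commonly cited estimate of roughly 10^14 to 10^15 synapses in a human brain; the synapse counts are my assumption, not something from the comment above:)

```python
# Rough capacity comparison: model parameters vs. human synapses.
params = 120e9  # Galactica's parameter count

for synapses in (1e14, 1e15):  # assumed range of human synapse counts
    ratio = params / synapses
    print(f"{params:.0e} params / {synapses:.0e} synapses = {ratio:.4%}")
# Prints roughly 0.12% and 0.012%, matching the 0.01%-0.1% range above.
```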

Galactica and other LLMs do what they do with the equivalent computing resources of a literal and proverbial birdbrain. And a very, very small birdbrain at that.

What if you were told that Galactica was in fact a clump of cultured neurons the size of a small pea?

Get annoyed all you like with the "hype". But the achievement is astounding by any measure.


I asked the model for a "Wikipedia article on Tom Cruise's genetic code"

It came up with this, which is totally fictional:

Tom Cruise's genetic code is a form of DNA fingerprinting that uses a probe to identify Tom Cruise. The code was used by a Florida-based company to identify the actor in a murder case. The company, GenePrint, was sued by Cruise in 2005.

Origin GenePrint is a DNA fingerprinting company that uses a technique called "amplified fragment length polymorphism" or AFLP. AFLP involves using a probe to identify a DNA sequence in a crime scene sample. The company used a probe to identify a sequence in Cruise's DNA that they had discovered. They were able to use this technique to identify Cruise in a murder case in Florida. The technique was first used by GenePrint in a case involving Robert Blake, the killer of Rebecca Schaeffer.

Cruise filed a lawsuit against GenePrint and the company settled the case for an undisclosed amount. The case was dismissed because of the settlement.
