96 Comments
Mar 24, 2023 · Liked by Gary Marcus

Not only that, but OpenAI is misleading the public by naming their company "open," gaining trust and confidence they do not deserve.

Mar 24, 2023 · edited Mar 24, 2023 · Liked by Gary Marcus

Gary, I want to start by saying thank you. In general, your tone and assertions anger me, AND they also force me to look at AI / AGI in a critical way, with honesty -- confronting my own inner hype cycle / desire for AGI experience -- and that is a priceless gift. Now, to the specific points of this post, which are, btw, EXCELLENT:

Your characterization of MSFT's monstrous 145-page "research" report as a "press release" is genius. A perfect turn of phrase. It caught me off guard, then I chuckled. Let's start by blaming arXiv and the community. Both my parents were research scientists, so I saw firsthand the messy reality that divides pure "scientific method idealism" from the rat race of "publish or perish" and the endless quest for funding. In a sense, research papers were *always* a form of press release, ...BUT...

they were painstakingly PEER-REVIEWED before they were ever published. And "publication" meant a very high bar. Often with many many many rounds of feedback, editing, and re-submission. Sometimes only published an entire year (or more!) after the "discovery". Oh, and: AUTHORS. In my youth, I *rarely* saw a paper with more than 6 authors. (of course, I rarely saw a movie with more than 500 names in the credits, too... maybe that IS progress)

Here's the challenge: I actually DO agree with the paper's assertion that GPT4 exhibits the "sparks of AGI". To be clear, NOT hallucinating and being 100% accurate and 100% reliable were never part of the AGI definition. As Brockman has recently taken to saying, "Yes, GPT makes mistakes, and so do you." (The utter privilege and offensiveness of that remark will be debated at another time.) AGI != ASI != perfectAI. AGI just means HLMI: Human Level Machine Intelligence. Not Einstein-level. Joe Six Pack level. Check-out clerk Jane level. J6 Storm the Capitol level. Normal person level. AGI can, and might, and probably will be highly flawed, JUST LIKE PEOPLE. It can still be AGI. And there is no doubt in my mind that GPT4 falls *somewhere* within the range of human intelligence, on *most* realms of conversation.

On the transparency and safety sides, that's where you are 100% right. OpenAI is talking out of both sides of its mouth, and the cracks are beginning to show. Plug-ins?!!?! How in god's name does the concept of an AI App Store (sorry, "plug-in marketplace") mesh with the proclamations of "safe deployment"? And GPT4, as you have stated, is truly dangerous.

So: Transparency or Shutdown? Chills reading that. At present, I do not agree with you. But I reserve the right to change my mind. And thank you for keeping the critical fires burning. Much needed in these precarious times...

Mar 24, 2023 · edited Mar 24, 2023 · Liked by Gary Marcus

For a budding AGI, GPT-4 is kind of dumb. The other day I was using Bing chat to learn about the relationship between study time and LSAT scores. After several mostly useless replies, Bing got confused and talked about the correlation between LSAT scores and first-year law school performance. Just when I thought it had gotten back on track, it got confused again and talked about the relationship between study time and SAT scores.

I tried to use it to troubleshoot some code. It told me, "Ah, you wrote 5 words here instead of 6." That was irrelevant to the problem (the code divided text into three-word fragments, and Bing implied that the number of words in the text had to be a multiple of 3). When I showed Bing that the sample text in fact had 6 words (1. This 2. Sentence 3. Has 4. Exactly 5. Six 6. Words), Bing told me I was wrong and that punctuation doesn't count as a word.
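
For the curious, the kind of script I was debugging looked roughly like this (a minimal, hypothetical sketch, not my actual code; the point is that nothing about it requires the word count to be a multiple of 3):

```python
import re

def three_word_fragments(text: str, size: int = 3) -> list[str]:
    """Split text into consecutive three-word fragments; punctuation is not counted as a word."""
    words = re.findall(r"[A-Za-z0-9']+", text)
    # Any leftover words simply form a shorter final fragment.
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

print(three_word_fragments("This sentence has exactly six words."))
# ['This sentence has', 'exactly six words']
```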

Most maddeningly, I was playing trivia with Bing, which got boring fast because it asks the same 7 or 8 questions over and over again, but on one question, where the correct answer was A, I answered "A" and Bing responded, "Incorrect! The correct answer is A."

Mar 24, 2023 · edited Mar 24, 2023 · Liked by Gary Marcus

FYI: a version of the sparks-of-AGI paper is available through arXiv that shows more evaluation in the commented-out portions (h/t https://twitter.com/DV2559106965076/status/1638769434763608064). The comments show some preliminary eval for the kinds of safety issues you've been worried about, Gary: yes, there are still issues (for toxicity, whatever version this paper discusses does worse than GPT3).
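
For anyone who wants to check that commented-out material themselves, arXiv lets you download the LaTeX source of a submission, and the comment lines can be skimmed with something as simple as this (a rough sketch; the file name is a placeholder for whatever the source tarball actually contains):

```python
# Print every LaTeX comment line (lines starting with "%") with its line number.
# "sparks_of_agi.tex" is a placeholder name, not the actual file in the arXiv tarball.
with open("sparks_of_agi.tex", encoding="utf-8") as f:
    for lineno, line in enumerate(f, start=1):
        if line.lstrip().startswith("%"):
            print(f"{lineno}: {line.rstrip()}")
```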

Granted, this version of GPT4 is an "early" version. Not sure what exactly that means, but field-internal gossip suggests it is post-RLHF but pre-rule-based reward modeling.

I think this means there's a worm in the apple, and simply shining the apple's skin isn't gonna fix it.

Mar 24, 2023 · Liked by Gary Marcus

From an email to a friend:

The current situation pisses me off. The technology is definitely interesting. I’m sure something powerful can be done with it. Whether or not it WILL be done is another matter.

The executives are doing what executives do, pushing stuff out the door. I have no idea what these developers think about all this. But my impression is that they know very little about language and cognition and have an unearned disdain for those things.

You know, "They didn't work very well in the past, so why should we care?" Well, if you want to figure out what these engines are doing – perhaps, you know, to improve their performance – perhaps you should learn something about language and cognition. After all, that's what these engines are doing, "learning" these things by training on them.

The idea that these engines are mysterious black boxes seems more like a religious conviction than a statement of an unfortunate fact.


Ah, here goes the corporate PR machine again... The smokescreens are getting thicker by the day and the gaslight shines brighter with every tweet. Thank you for continuing to speak up!

Mar 24, 2023 · Liked by Gary Marcus

Such breathless self-promotion is misleading and damaging. Feynman's famous quote about nature vs. PR ("For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled") applies here as well.

Thank you for continuing to point out the reality!

Also: https://rodneybrooks.com/what-will-transformers-transform/

Mar 24, 2023 · Liked by Gary Marcus

"When people who can’t think logically design large systems, those systems become incomprehensible. And we start thinking of them as biological systems. And since biological systems are too complex to understand, it seems perfectly natural that computer programs should be too complex to understand.

We should not accept this. That means all of us, computer professionals as well as those of us who just use computers. If we don’t, then the future of computing will belong to biology, not logic. We will continue having to use computer programs that we don’t understand, and trying to coax them to do what we want. Instead of a sensible world of computing, we will live in a world of homeopathy and faith healing."

Leslie Lamport, "The Future of Computing: Logic or Biology", 2003.


Great article - also loved Rebooting AI, which I picked up just before this latest AI frenzy. It armed me brilliantly to confront some of the wild-eyed (frankly embarrassing) enthusiasm I am witnessing in my job and more generally.

My take on all of this so far is that it's as though NASA scientists sat down in 1961 and agreed that the most common-sense way to get to the moon was to glue ladders end to end until we inevitably got there - and then proceeded to shout down anybody who pointed out inconvenient facts like gravity, atmosphere or orbital mechanics.

Mar 24, 2023 · Liked by Gary Marcus

Do people still tweet? I'm asking only half jokingly!

Coca-Cola could take a page from Microsoft's book and declare sparks of AGI in its drink: Artificial Goodness Injected!

Besides Gary's good points about the AGI claims, I'm wondering what happens in the next few years when this technology is in widespread use. How many jobs will be lost or transformed?


Thank you. Clearly argued points, as usual, especially your point about the arrogance and haughtiness of some of the players in the LLM game. But I disagree with the title of your article. I don't think that it's an either-or situation. I expect neither sparks of AGI nor the end of science resulting from this mess. If anything, LLMs are great examples of how not to solve AGI and how not to do science. There is nothing in LLMs that is intelligent. No LLM-based system will ever walk into an unfamiliar kitchen and make coffee. Language is not the basis of intelligence but a communication and reasoning tool used by human intelligence. Heck, no DL-based system will ever be truly intelligent in my opinion. DL is a dead-end technology as far as AGI is concerned.

I was worried about the impact of LLMs before but, lately, I have developed a new understanding. Silicon Valley, and Big Tech in general, get the AI they deserve: fake malevolent AI. They don't have the moral fiber to create anything other than fake AI. Good science is not about profits. More than anything else, good science is grounded in benevolence, and uncompromising honesty and integrity. Yes, AGI is coming but it won't come from that morally bankrupt bunch.


Now we're getting somewhere. Here are a few topics related to this article that AI commentators might choose to expand upon.

1) What are the compelling benefits of AI which justify taking on _yet another_ significant, or maybe existential, risk? Are any of these companies willing to answer this? Do they dodge, weave and ignore, or do they have a credible case to make? Can't seem to find a writer who will address this specific question. Help requested.

2) The biggest threat to the future of science is in fact science itself. This article seems to illustrate this principle, in that people we can't trust to be objective are determined to give us ever more power at an accelerating rate, and there is as yet no proof that we can handle either the power, or the rate of delivery. In fact, neither we in the public, nor the AI developers, even know what it is that society is being expected to successfully manage.

3) Apologies for this one. Those working in the AI industry would seem to be the last people we should be asking whether the AI industry should exist. They aren't evil, but how could they possibly be objective about the future of an industry their careers and incomes depend on? This is true of any emerging industry of vast scale, such as genetic engineering.

When considering the future of any technology, what if we're looking for answers from exactly the wrong people?


MSFT only cares about shareholder returns. They (and others like them) have multiple billions in corporate debt that must be tied to some fabled valuation to keep the game moving. The Wall Street chatter, Gates's "blog", early-access reports and "leaks" are all just modern adverts.


The fact that they don't share the datasets used, besides limiting our capacity to replicate their experiments, has another consequence: we cannot determine potential copyright infringements by tools that are basically processing and regurgitating (almost) the whole of human-created text.

Mar 25, 2023 · edited Mar 25, 2023

The problems of GPT4 do not follow from it being bad at what it does; they come from it being good at what it does.

GPT4 is creating impressive results. Part of that comes from us humans being impressionable. Part may be because of sloppy science with data sets, who knows. The social construct of science is one of the few ways we can try to guard against bad science. But the social system is far from perfect: much bad science slips through the cracks, either because humans (including peer reviewers) are fallible or because the message ends up in a place and form that makes it look like science (e.g., the autism-vaccine nonsense started from something published in The Lancet; and this paper is on arXiv but not peer-reviewed in a real journal, though I suspect it will sail past peer review easily enough).

So, this seems to be a formidable (and at the very least disruptive) new tool in the arsenal, and alongside the useful things people will want to use it for, it is very likely going to produce a lot of 'bad use', 'toxic waste' and (social and physical) 'environmental damage'. It's like the early days of the chemistry revolution. Chemical warfare came from that. Information warfare will get an enormous boost from this.

Oh, and a good example of bad use: if it is good enough for coding, how long before someone loads all the open source from GitHub into it and makes it look for weak spots and exploits? How long before it is part of phishing and scamming?

And because it is good at what it does (btw, definitely not AGI-like), there will probably be an arms race to use it. Think of asset managers not wanting to lose the competition with other asset managers and having the means to invest. People will strongly feel that they risk 'missing the boat'. See Dave Karpf's story about crypto from last year.

We will probably not have enough realism as humans to prevent the bad things. Science as a social construct made of fallible humans is too weak to prevent these disasters, assuming the technology has indeed become powerful enough to affect society and doesn't flame out when it escapes from the marketing world it now lives in.


Marcus writes, "Everything must be taken on faith..."

I know you meant this in a more limited way, but your words shine a light on a much larger picture.

The science community wants us to take the philosophical foundation of modern science on faith, just as they do. That foundation is the typically unexamined assumption that our goal should always be ever more knowledge delivered ever more quickly.

The spectacular success of science over the last century has created a revolutionary new environment, which the science community doesn't wish to adapt to. They want to keep on doing what they've always done in the past, the way they've always done it, and they want us to believe on faith that their simplistic, outdated 19th century "more is better" knowledge philosophy is suitable for the 21st century, a very different environment.

THE PAST: A knowledge scarcity environment, where seeking more knowledge without limit was rational.

TODAY AND TOMORROW: A radically different knowledge environment where knowledge is exploding in every direction at an accelerating rate. In this case seeking more knowledge without limit is irrational, because human beings are not capable of successfully managing unlimited power.

If you doubt that claim, please remember, we are the species with thousands of massive hydrogen bombs aimed down our own throat, an ever-present existential threat that we typically find too boring to bother discussing. This is the species to which the science community wishes to give ever more power, at an ever-accelerating rate. Rational???

It's not just these AI companies who want us to take their products on faith. It's much bigger than that.

You know how, during the Enlightenment a few centuries ago, a variety of thinkers began to challenge the unquestioned blind-faith authority of The Church? It's time for another one of those.
