95 Comments
Mar 24, 2023·Liked by Gary Marcus

Not only that, but OpenAI is misleading the public by naming their company “open”, gaining trust and confidence they do not deserve.


They've officially abandoned openness, ostensibly out of concerns for safety and abuse. From OpenAI's "Planning for AGI and Beyond" (https://openai.com/blog/planning-for-agi-and-beyond):

> As another example, we now believe we were wrong in our original thinking about openness, and have pivoted from thinking we should release everything (though we open source some things, and expect to open source more exciting things in the future!) to thinking that we should figure out how to safely share access to and benefits of the systems. We still believe the benefits of society understanding what is happening are huge and that enabling such understanding is the best way to make sure that what gets built is what society collectively wants (obviously there’s a lot of nuance and conflict here).

Funny how "safely sharing access to and benefits of the system" looks a lot like "protecting trade secrets".


That's a point 😁

Mar 24, 2023·Liked by Gary Marcus

I know it’s not a revolutionary point that OpenAI is not open, but in recent months it’s become clear that not only are they open, but their name and claims are actively misleading the masses.


Not* open

Mar 24, 2023·edited Mar 24, 2023·Liked by Gary Marcus

Gary, I want to start by saying thank you. In general, your tone and assertions anger me, AND they also force me to look at AI / AGI in a critical way, with honesty -- confronting my own inner hype cycle / desire for AGI experience -- and that is a priceless gift. Now, to the specific points of this post, which are, btw, EXCELLENT:

Your characterization of MSFT's monstrous 145-page "research" report as a "press release" is genius. Perfect turn of phrase. Caught me off guard, then I chuckled. Let's start by blaming arXiv and the community. Both my parents were research scientists, so I saw firsthand the messy reality that divides pure "scientific method idealism" from the rat race of "publish or perish" and the endless quest for funding. In a sense, research papers were *always* a form of press release, ...BUT...

they were painstakingly PEER-REVIEWED before they were ever published. And "publication" meant a very high bar. Often with many many many rounds of feedback, editing, and re-submission. Sometimes only published an entire year (or more!) after the "discovery". Oh, and: AUTHORS. In my youth, I *rarely* saw a paper with more than 6 authors. (of course, I rarely saw a movie with more than 500 names in the credits, too... maybe that IS progress)

Here's the challenge: I actually DO agree with the paper's assertion that GPT4 exhibits the "sparks of AGI". To be clear, NOT hallucinating and being 100% accurate and 100% reliable were never part of the AGI definition. As Brockman so recently has taken to saying "Yes, GPT makes mistakes, and so do you." (the utter privilege and offensiveness of that remark will be debated at another time). AGI != ASI != perfectAI. AGI just means HLMI. Human Level. Not Einstein-level. Joe Six Pack level. Check-out clerk Jane level. J6 Storm the Capitol level. Normal person level. AGI can, and might, and probably will be highly flawed, JUST LIKE PEOPLE. It can still be AGI. And there is no doubt in my mind that GPT4 falls *somewhere* within the range of human intelligence, on *most* realms of conversation.

On the transparency and safety sides, that's where you are 100% right. OpenAI is talking out of both sides of its mouth, and the cracks are beginning to show. Plug-ins?!!?! How in god's name does the concept of an AI App Store (sorry, "plug-in marketplace") mesh with the proclamations of "safe deployment"? And GPT4, as you have stated, is truly dangerous.

So: Transparency or Shutdown? Chills reading that. At present, I do not agree with you. But I reserve the right to change my mind. And thank you for keeping the critical fires burning. Much needed in these precarious times...

Mar 24, 2023·edited Mar 24, 2023·Liked by Gary Marcus

> As Brockman so recently has taken to saying "Yes, GPT makes mistakes, and so do you."

This is a profoundly stupid argument--a fallacy of affirmation of the consequent.

LLMs, by their nature, are not and cannot be AGI or on the path to AGI. They do no cognition of any sort and completely lack symbol grounding. The "human intelligence" that you find in GPT-4's outputs is solely a result of its training corpus consisting of utterances of humans, which it recombines and regurgitates.


I see honestly a lot of assuredness from people who just seem to KNOW what constitutes cognition or intelligence. I find this way of thinking frankly at the limit of delusion right now. Arguing that GPT-4 "lacks symbol grounding" is essentially just blinding oneself to the obvious right now. Yes, GPT-4 doesn't "know" what an apple is the way we do, having never touched it or weighed it or tasted it. But that doesn't preclude the possibility that it can manipulate its symbols as relating to a purely theoretical concept. We can also talk about black holes, electrons, and magnetic fields, all things we can't see or touch or experience, because our ability to reason about them doesn't hinge on direct experience.

The only evidence we have is the practical results, and the practical results suggest that GPT-3.5 and GPT-4 manipulate and remix concepts with a degree of flexibility and coherence that suggests something very much like symbols is being used. To say that they don't have them is grounded in nothing: they're black boxes that got optimised to do the job as well as possible, and clearly having symbols is an excellent way of doing the job, so they may well have developed that! We can argue then about what tests we should run to determine that precisely, and it's an interesting topic (sadly it would be easier to do if GPT-4 weren't closed source and we could inspect it as it runs), but to say that they can't possibly be doing symbolic reasoning flies in the face of the evidence we have and what we know about them.


Simone, I couldn't agree more. The terms "sentience," "consciousness," "self-awareness," "world model," and, yes, "symbol grounding" etc etc are highly subjective terms without provable "tests". So, for that matter, are "human-level" and "human-equivalency."

In countless debates and conversations, I've come to understand that these distinctions are far more a belief system than a logical assessment. For instance, I've realised that certain people could encounter a fully embodied AI that was trained to converse naturally and fluently, and to respond emotionally and compassionately, with a full see / touch / smell / taste sensor array. And yet. In their definition, it would not be "conscious" or "sentient." Why?

When I push, it is because "it is a machine," "it does not have a soul or a spirit," "it was created by humans (not God)" etc. So in that population's definition, "consciousness" and "sentience" *equate* to "humanity". In that worldview, those are "human exclusive" qualities and it is not possible for any other entity to possess them. And why?

When we probe, I happen to think it is because such assignments of quality pose a deep threat to the "specialness" of humans in the cosmos. They pose a reckoning. When confronted with machina sapiens, how does homo sapiens respond? We shall see.


"I see honestly a lot of assuredness from people who just seem to KNOW what constitutes cognition or intelligence."

You need to read the research from the people who are approaching the question of consciousness from the other direction: animal consciousness. Consider the implications of their findings for machine intelligence, particularly in terms of the capacities of animal consciousness that are simply absent from AI programs.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7116194/

https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/cit2.12035

There's more required for inducing consciousness into a machine than a hypothetical "Boltzmann brain" simulation, per the unexamined assumptions of AI specialists who speculate that a result of that sort would emerge simply as a function of increasing computing power.

Not only is there no need for a machine intelligence to achieve self-aware consciousness, there's no reason to assume that there's a path toward effectively inducing it. https://arxiv.org/abs/2304.05077

It comes down to the difference between information processing done with the stable digital circuitry of silicon-based hardware and the information processing done with dynamic, chemically mediated wetware. Wetware doesn't simply consist of a disembodied CPU "brain" terminal of stable circuitry; it betokens another order of being.


Literally none of this has any relevance here because I said "cognition or intelligence", not "consciousness".

Consciousness is an interesting question (though I really doubt that there is any such substantial difference between silicon and chemical processing, or no way to make the silicon reproduce even the necessary "imperfections" if one wanted to). But it is only relevant to the ethical issues around the use of AI; obviously we should not keep conscious and potentially suffering beings as our servants with no rules. None of that has anything to do with understanding or cognition, though. A perfect human-like P-zombie would be completely unconscious yet able to do anything a human can do, and we have no particular reason to think that's not possible. At some point, it doesn't really matter what's going on inside if the system is still obviously smart enough. It matters from a philosophical viewpoint, but not a practical one. It can still take your job, it can still deceive you, and potentially, it can still kill you. Whether it does so by taking pleasure in it or just as a pure automaton with no self-experience is irrelevant.


I don't think that you've put much thought into considering the papers I've linked. You clearly don't understand their implications.

Also, I weary of pronouncements like "we" (who is this "we"?) "have no particular reason to think that's not possible", which can plausibly be parsed as a conclusion drawn from incomprehension of the actual extent of what's implicitly being claimed as a potential (unlimited capability). "No particular reason to think it's not possible" works as a statement of faith, but as a mission statement in a funding appeal, it's a worthless handwave at that other possibility: overriding performance constraints.

From there, you swerve toward a more defensible point: "...it doesn't really matter what's going on inside if the system is still obviously smart enough. It matters from a philosophical viewpoint, but not a practical one. It can still take your job, it can still deceive you, and potentially, it can still kill you. Whether it does so by taking pleasure in it or just as a pure automaton with no self-experience is irrelevant."

" It can still take your job, it can still deceive you, and potentially, it can still kill you."

I can agree that AI can replace some human jobs (my preference centers on replacing inhumane jobs, like mining; why isn't there more work on increasing automation in that occupation?).

"it can still deceive you" I'm actually a pretty hard sell (and often air-gapped, conditionally; I use my iPhone primarily as a phone, and quite often leave it at home, as a landline function.). I agree that AI can probably be used to deceive some people. I question how far the program might take it.

"it can still kill you." I'll grant the possibility, as an indirect consequence. But I'm skeptical of grandiose sci-fi scenarios like AGI emailing human biotechnology labs and tricking them into doing what it says, or offering conjectures about the pernicious uses of nonotechnology as motivated by AGI orders. Setting aside the questions of motivation (including unchecked inertia)- I would think that there could be some way to prevent AGI from manifesting its private vision or mission creep via ordering material manifestations that can't be preempted. At least as far as anything beyond its 1st-order interaction with its hardware host, anyway. I can hypothetically speculate on an AI program able to destroy hard drives or commit acts of arson and electrocution, but it seems to me that any operation more ambitious- "build some nanobots"; "construct this genomic sequence"- would require considerable cooperation from human interlocutors. AI isn't just going to take over factories, build inventions for its own purposes, and then set them loose on the world. Even "automated" factories require considerable input from the humans who run their operations and do the scut work. Without the human role, no dice.

(Similarly, AI theoreticians with ambitions of "fully self-driving cars" seem to have lacunae when it comes to considering that there's more to passenger transportation and automobile operation than the act of driving.)

The actions I've referred to above as possible results of pernicious AI are drawn from the sci-fi visions of Eliezer Yudkowsky; he seems to think that there would be little challenge in tricking human operators into granting assent to the pivotal actions that would put humanity on course for extinction. I'm unconvinced. It seems to me that the paperclip-problem scenario of human extinction doesn't get very far without humans agreeing to acts of furtherance at crucial points in the process.

That's a plot hole in the narrative, as far as I can tell. I'd venture that those vulnerabilities could be obviated with a regime of requirements for authenticating requests before proceeding further, including real-life human meetings, in person. EY is positing humans as helpless pawns of a superintelligence that has all countermeasures to avoid extinction gamed in advance. I'm dubious about that. We aren't confined to a game board. Novelty and unpredictability are not solely reserved for the future capabilities of machine learning programs; humans can think outside of the box, too, quite likely to an extent that even the most advanced AI cannot follow. The human/animal capacity for dynamic, self-aware consciousness presents something of a foolproof firewall against AI machinations, if we can figure out how to deploy it properly. Although humans also have the ability to default to bot predictability, and if humans of that mindset are the ones making the critical decisions, AI can conceivably outsmart them all.

I'll grant that less extravagant scenarios are possible, such as AGI taking down vital infrastructure. But I also wonder why a truly intelligent ML program wouldn't manage to develop an overview sufficiently comprehensive to consider the advantages of retaining human interaction as a productive symbiosis with its mission, rather than seeking to supersede it. What for?


"Arguing that GPT-4 "lacks symbol grounding" is essentially just blinding oneself to the obvious right now. "

No, it's simply not being appallingly ignorant.


Can you define what "symbol grounding" is? How do you distinguish something that has it from something that doesn't? If symbol grounding is not necessary for something to make sense of natural language, answer coherently, grasp the relations between its parts, and generally have a very high-level grasp of the whole thing, what good is even the concept?

At some point, one needs to make empirical predictions and try to falsify them, or resign oneself to just claiming that humans are made special by some unprovable immaterial "essence" that nothing can ever imitate anyway. I've tried to think of actual experiments to perform on GPT-4 to prove or disprove its use of symbol grounding, but I can't really think of an obvious one. If you do have better ideas, do enlighten me, and let's go run the test.


> LLMs, by their nature, are not and cannot be AGI or on the path to AGI.

> They do no cognition of any sort and completely lack symbol grounding.

disagree on point 1 and agree on point 2.

Our definition of AGI may as well be our definition of "consciousness", "sentience", or "intelligence". To me, this degrades rapidly into semantics... and I am by nature a pragmatist. So, *pragmatically*, the real turning point is not about "human assessment of AI intelligence"... it's about "AI's ability to accumulate, hold, and exert power [in human affairs]". If you have a "stupid" AI that can assert sufficient power upon commerce / infrastructure / humanity, and it is a "breakout" scenario that has transcended human control & management, then it really doesn't matter if it's AI, AGI, or ASI. It's AI that we have to deal with. So I'm really advocating that we stop placing limited definitions on "intelligence"... OK, I'll agree, hypothetically, that AI will never be the *same* as homo sapiens... but that doesn't make it any less of a *force* in our world.

to this point:

https://gregoreite.com/ai-control-doesnt-need-robots-it-already-has-humans/


The “AI's ability to accumulate, hold, and exert power” is exactly zero! It’s the humans, perhaps well-meaning, who connect the inputs and the outputs to these AI programs who grant AI the appearance of power. A dead rat’s brain, with a couple of electrodes going in and a couple going out, that decides your mortgage, would be more on the path to AGI than any current AI!


Um, no. You state that "it's the humans... who grant the AI the appearance of power." You may as well say "it's the humans who grant POTUS the appearance of power." When AIs are granted power in decision-making ecosystems that directly affect humans (such as mortgage approvals, to your point), and that power essentially becomes locked in due to the forces of competition & capitalism, then the "appearance" of power equates to "real" power. And enough humans & corporations will be complicit in this to make it, quite possibly, irreversible.

For example:

https://gregoreite.com/all-the-ways-in-which-ai-control-forces-our-hands/


The fact that these generative models are flawed and that humans are flawed too is somewhat of a weird argument for saying they are comparable. A typical syllogistic fallacy, I think.

Though I must admit we people aren't that smart and thus there is a point here...


The point isn't "humans are flawed, LLMs are flawed, therefore LLMs are human". Rather, it's that the definition of AGI must use criteria that allow for some degree of fallibility, because humans are fallible too. If our definition of AGI requires perfection, then it can only be met by something that isn't just human, but is already superhuman (and we usually call that ASI instead).


Correct. One sort of fallibility does not equate to another sort. GAI (Generative AI) has issues that prevent it from becoming AGI (and, I would say, 'truly intelligent'); the fact that humans have other issues with how their intelligence works (and they do) is more or less irrelevant.

For instance, it is very unlikely (I would agree with 'bs') that our linguistic intelligence comes from 'finding the best next tokens for a stream of tokens'. But that doesn't matter for GAI if the outcome is acceptable.

The problem is: what is the definition of 'general intelligence'? What definition — which must indeed leave some room for error — should we use? It is a highly controversial question and not one we will easily solve. Our problem is not "the criteria should include fallibility" but "we do not have good criteria". Take the general gist from https://en.wikipedia.org/wiki/Intelligence as a starting point. Humans have been measuring intelligence mostly by looking at logical reasoning, ironically not because we are good at it but because it is the part of our intelligence that we find hard, that we're bad at. Our idea of intelligence is being shattered as we speak, both by how unintelligently we act and by how unintelligent the systems based on our mistaken idea of intelligence have so far been.

GAIs are on the right track with respect to the mechanism, but people are right that you need more than just 'estimation'; you also need 'symbolic logic'. I think this is Gary's point, if I understand it: you need both (just as we humans have). I would add: you also need other hardware. Digital doesn't scale for 'estimation'; it is only very efficient at symbolic logic (because logic is more or less what it is).


I take Iain M. Banks’ view on intelligence: we don’t need a definition of intelligence (and I think it may in fact be better if we don’t have one). Rather, as a principle, it’s sufficient that if something insists it is intelligent, we give it the benefit of the doubt.


There is a problem here, though: we consider ourselves much more intelligent than we really are.


Absolutely agree on what qualifies as AGI. The day we have an AI that performs as well as the top humans on all tasks, and also has the (already incredibly deep) amount of knowledge that a model like GPT-4 carries baked into its weights, is the day we already have ASI, and not merely AGI. Being that consistently good is superhuman.

Apr 12, 2023·edited Apr 26, 2023

The most interesting thing about the current thread is the discussion itself.

It's not surprising that the "general" public is fooled by a machine that is able to generate more or less coherent text (just in case: I do know the Turing Test, but I also know how a Transformer works), when even people who, I guess, know a little more about Machine Learning (let's not even talk about Artificial Intelligence; I think that label is already too pompous) disagree.

Now, what surprises me is that "scientists" use shady terms like "spark" and "emergence" to say that the world is at the beginning of AGI because it passed some tests. Well, I expected a little more from such people; that's what the Scientific Method is for, isn't it?

A multidisciplinary group of scientists could be assembled, and they could then proceed to test the hypothesis until a reasonable consensus is reached, which I suspect can be done despite our vague notions of intelligence. We probably don't know how to define it, but we know quite well what it can do, and also the mistakes it cannot make.

That the Scientific Method, plus peer review of the results and replicability, is not being followed, and the rush to get the "paper" out, make me quite suspicious of the seriousness of these people. And knowing that they are a company motivated by profit, well, if we apply Occam's razor, the most parsimonious explanation would be, effectively, that it is a "press release".


"AGI just means HLMI. Human Level. Not Einstein-level. Joe Six Pack level. Check-out clerk Jane level. J6 Storm the Capitol level. Normal person level. AGI can, and might, and probably will be highly flawed, JUST LIKE PEOPLE." Then it is useless. Humans have a Four Pieces Limit - we need to break that limit to solve the problems that face us.


"useless"? That's a bit of a strong characterization. I find GPT4 *immensely* useful, if *only* for its lively "conversational" interface. it already has helped me solve two riddles of my youth that I've been patiently googling for 20 years now. AND it debugs code that I am struggling with, *while* explaining what my mistakes were, like a pretty decent TA. It's not always right, and its not always genuis... but it is *certainly* "useful... at least to me, good sir.

Mar 25, 2023·edited Mar 25, 2023

No one said it had to be "useful" in a "solves all our problems" way. If you're arguing that AGI doesn't do us much good, then agreed, but that doesn't change the meaning of the definition. The definition exists because even making an artificial Joe Six Pack mind is scientifically an incredibly difficult problem, not because it'd be particularly useful to have millions of virtual Joe Six Pack instances running on a server (note that it would actually be useful, in a perverse way, to the owners of those servers: they'd have super cheap unskilled workers! Just not great for the rest of us non-capitalists).


"an incredibly difficult problem" - we have a Four Pieces Limit, so while we may be able to build a functioning brasin, we lack the ability to consciously understand it - most of our effort is concentrated unconsciously. We need to spend our time emulating the processing power of the Unconscious Mind. A good example is Lane Following - a sign says "Use Yellow Lane Markings" (there are white ones as well). Our unconscious mind finds the location of our mental lane following software and puts in a temporary patch for about an hour. This is so far from regurgitation as to seem incredible. Increasing our reliance on regurgitation of what someone once said lessens our ability to function in a changing world, in the way that mental arithmetic disappeared when calculators became prevalent. To some, not having to think is comforting. To others, it is deeply worrying.

Mar 24, 2023·edited Mar 24, 2023·Liked by Gary Marcus

For a budding AGI, GPT-4 is kind of dumb. The other day I was using Bing chat to learn about the relationship between study time and LSAT scores. After several mostly useless replies, Bing got confused and talked about the correlation between LSAT scores and first-year law school performance. When I thought it had gotten back on track, it got confused again and talked about the relationship between study time and SAT scores.

I tried to use it to troubleshoot some code. It told me "ah, you wrote 5 words here instead of 6." That was irrelevant to the problem (it was code to divide text into three-word sentence fragments, and Bing implied the code required the number of words in the text to be a multiple of 3). When I showed Bing that the sample text in fact had 6 words (1. This 2. Sentence 3. Has 4. Exactly 5. Six 6. Words), Bing told me I was wrong and that punctuation doesn't count as a word.

Most maddeningly, I was playing trivia with Bing, which got boring fast because it asks the same 7 or 8 questions over and over again, but on one question, where the correct answer was a, I answered "a" and Bing responded, "Incorrect! The correct answer is a."


well, shame on you. Your "a" was clearly in quotes. The correct answer was a lowercase letter a, without quotes, as Sydney tried to tell you. Please be precise when you answer trivia questions. Violations of test protocol in the "guise" of correct answers will be punished, and your rule deviations, along with your username, will be included in the next 10 AI Training Datasets.

Mar 24, 2023·edited Mar 24, 2023·Liked by Gary Marcus

FYI: a version of the Sparks of AGI paper is available through arXiv that shows more evaluation in the commented-out portions (h/t https://twitter.com/DV2559106965076/status/1638769434763608064). The comments show some preliminary eval for the kinds of safety issues you've been worried about, Gary---yes, there are still issues (for toxicity, whatever version this paper discusses does worse than GPT-3).

Granted, this version of GPT-4 is an "early" version. Not sure what exactly that means, but field-internal gossip suggests it is post-RLHF but pre-rule-based reward modeling.

I think this means there's a worm in the apple, and simply shining the apple's skin isn't gonna fix it.


The phrase 'early version' in AI historically means 'papering over the problems'. Maybe this time is different, but I would not bet the farm on it.

Mar 24, 2023·Liked by Gary Marcus

From an email to a friend:

The current situation pisses me off. The technology is definitely interesting. I’m sure something powerful can be done with it. Whether or not it WILL be done is another matter.

The executives are doing what executives do, pushing stuff out the door. I have no idea what these developers think about all this. But my impression is that they know very little about language and cognition and have an unearned disdain for those things.

You know, "They didn’t work very well in the past, so why should we care?" Well, if you want to figure out what these engines are doing – perhaps, you know, to improve their performance – perhaps you should learn something about language and cognition. After all, that’s what these engines are doing, “learning” these things by training on them.

The idea that these engines are mysterious black boxes seems more like a religious conviction than a statement of an unfortunate fact.

Mar 25, 2023·edited Mar 25, 2023·Liked by Gary Marcus

Oh, I've audited the transformer code, and examined how these things are built, and I can pretty solidly concur that "The idea that these engines are mysterious black boxes" is a statement of an (unfortunate?) fact. A model with 100+ billion parameters and a 4096-dimensional vector space of tokens, auto-navigated/optimized to select the "next token" every time in context of the entire stream... it's simply not comprehensible by humans what the process is. I've even seen *numerous* examples of *identical* "cold prompts" (prompts without priming / pre-prompting) giving radically different responses from the same base model at "temperature 0" (in playground and API), which is "theoretically" impossible. Oops?
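
For what it's worth, that repeatability claim is easy to check yourself. Here's a minimal sketch, assuming the 2023-era openai Python client and a made-up prompt of my own (nothing from the paper): it sends the same cold prompt several times at temperature 0 and compares the completions verbatim.

```python
import openai  # 2023-era client (pre-1.0); assumes OPENAI_API_KEY is set in the environment

PROMPT = "Name the capital of France and nothing else."  # hypothetical cold prompt, no priming

completions = []
for _ in range(5):
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT}],  # no system message / pre-prompting
        temperature=0,  # greedy decoding, which "theoretically" should be deterministic
    )
    completions.append(resp["choices"][0]["message"]["content"])

# If temperature-0 decoding were truly deterministic, this would print 1.
print(len(set(completions)), "distinct completion(s) out of", len(completions))
```

If that count comes back greater than 1, that's exactly the "oops" above.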

Now, this situation is currently somewhat exclusive to LLMs. For instance, as Gary pointed out months ago, Meta's Cicero (the Diplomacy-winning AI) used a broad mesh of several human-comprehensible, purpose-built AI components. These integrated with a lightweight LLM for text / conversational I/O. And the final / next gen of AI / AGI may move towards a totally novel, unforeseen, non-opaque approach. But as long as we are using Deep Learning LLMs based on GPT, we're squarely in the land of Big Black Mystery Boxes. Call it the Dark Crystal of Language. Whatever it is, we can't see inside.


People are actively working on mechanistic understanding of LLMs. If more people did this kind of work, we'd learn more. If you've seen Stephen Wolfram's recent long article on ChatGPT you know he's thinking about it as a dynamical system. Others are moving in that direction as well. These "black boxes" aren't forever closed. But if we think they are, then we act that way. That's a mistake.

See David Chapman's list of ideas under "Task-relevant mechanistic understanding with reverse engineering" here: https://betterwithout.ai/science-engineering-vs-AI


I wonder if the problem is that these things are so damn big that, statistically, some numerical errors are bound to happen at some point inside them and then propagate to generate completely different results.

That, or OpenAI's playground/APIs are full of shit and don't actually zero the temperature, or they use some non-deterministic algorithm to sort the token logits that doesn't settle ties reproducibly.
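
That tie-breaking worry is easy to illustrate in miniature. Below is a toy NumPy sketch (my own illustration, not anything from GPT-4's actual stack): it just shows that summing the same float32 values in a different order gives a slightly different total, which is all it would take to flip a greedy argmax between two near-tied logits.

```python
import numpy as np

# Toy illustration, not GPT-4 internals: float32 addition is not associative,
# so accumulating the same contributions in a different order yields a slightly
# different result. If two candidate tokens' logits are nearly tied, greedy
# ("temperature 0") decoding could then pick different tokens on different runs,
# even with no sampling involved.
rng = np.random.default_rng(0)
contribs = rng.standard_normal(1_000_000).astype(np.float32)

total_in_order = contribs.sum()
total_shuffled = contribs[rng.permutation(contribs.size)].sum()

print(total_in_order, total_shuffled)                    # typically differ in the low bits
print("bit-identical:", total_in_order == total_shuffled)
```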


Ah, there goes the corporate PR machine again... The smokescreens are getting thicker by the day and the gaslight shines brighter with every tweet. Thank you for continuing to speak up!

Mar 24, 2023·Liked by Gary Marcus

Such breathless self-promotion is misleading and damaging. Feynman's famous quote about nature vs. PR applies here as well.

Thank you for continuing to point out the reality!

Also: https://rodneybrooks.com/what-will-transformers-transform/


As a total nonexpert here, I appreciate all of the commentary! Thanks for posting the Brooks article too!


We commenters do have a lot to say (as does Gary), lol! And, YW about the article, many more here: https://rodneybrooks.com/category/essays/


Thank you for this! I remember that I stumbled upon Rodney Brooks a while back and will take a look here! Love this substack and appreciate Gary’s take on all of this!


YW. Indeed, Gary's is a minority voice for sure (as is Rodney's), but needs to be out there for the record.

My own belief is that grounded intelligence isn't achievable without a body that provides agency, (non-symbolic) experience etc.

Cheers!

Mar 24, 2023·Liked by Gary Marcus

"When people who can’t think logically design large systems, those systems become incomprehensible. And we start thinking of them as biological systems. And since biological systems are too complex to understand, it seems perfectly natural that computer programs should be too complex to understand.

We should not accept this. That means all of us, computer professionals as well as those of us who just use computers. If we don’t, then the future of computing will belong to biology, not logic. We will continue having to use computer programs that we don’t understand, and trying to coax them to do what we want. Instead of a sensible world of computing, we will live in a world of homeopathy and faith healing."

Leslie Lamport, "The Future of Computing: Logic or Biology", 2003.

Mar 24, 2023·Liked by Gary Marcus

As a programmer I greatly admire Leslie Lamport's achievements, but this is silly. As programming systems become more and more complex they become less and less comprehensible, even when they are written in logic rather than neural nets. And programs like AlphaGo are nothing like homeopathy and faith healing.

Mar 24, 2023·edited Mar 24, 2023·Liked by Gary Marcus

The context for me quoting this passage is not AlphaGo, which uses neural networks to optimize a tree search algorithm, but GPT-4, a system whose own creators fully embrace the use of biological and anthropomorphic metaphors to describe it while engaging in deliberate obscurantism about even the most general details of its inner workings.

What is "prompt engineering" if not the rituals by which someone discovers the right incantations and spells for these systems to "to coax them to do what we want?"


I don't disagree with this, but it's not germane to my criticism of Lamport's comment.


"And since biological systems are too complex to understand"

Typical CompSci argumentum ad ignorantiam


The full context of the original talk indicates that Lamport himself is not engaging in that fallacy - but the critique of computer science as a profession in the talk is indeed an attempt to resist the appeal to ignorance you're describing.

http://lamport.azurewebsites.net/pubs/future-of-computing.pdf

Here's the passage that I think directly addresses your point:

"Biology is very different from logic—or even from the physical sciences. If biologists find that a particular mutation of a gene is present in 80% of people suffering from a certain disease, and missing from 80% of a control group, then that’s a significant result.

If physicists tried to report an experiment in which 80% of the events supported their theory, they’d be required to explain what happened in the other 20%. And imagine if a mathematician submitted a theorem that he claimed was true because it was correct on 80% of the examples he tried.

I don’t mean to put down biology or biologists. It’s a difficult field because the systems they study are so complex. The human body is a lot more complicated, and hence a lot harder to understand, than an automobile. And a lot harder to understand than logic."

Lamport wants to remind computer scientists that the systems they build are not as complex as those found in biology. He argues that when computer scientists use biological metaphors they are not doing so with the curiosity or education of a trained biologist, but rather as a way to stop themselves from thinking more clearly.


Great article - also loved Rebooting AI, which I picked up just before this latest AI frenzy. It armed me brilliantly to confront some of the wild-eyed (frankly embarrassing) enthusiasm I am witnessing in my job and more generally.

My take on all of this so far is that it's as though NASA scientists sat down in 1961 and agreed that the most common-sense way to get to the moon was to glue ladders end to end until they inevitably got there - and then proceeded to shout down anybody who pointed out inconvenient facts like gravity, atmosphere or orbital mechanics.

Mar 24, 2023·Liked by Gary Marcus

Do people still tweet? I'm asking only half jokingly!

Coca Cola can take a page from Microsoft's book and declare sparks of AGI in their drink, as Artificial Goodness Injected!

Besides Gary's good points about AGI claims, I'm wondering what happens in the next few years when this technology is in widespread use. How many jobs will be lost or transformed?


Thank you. Clearly argued points, as usual, especially your point about the arrogance and haughtiness of some of the players in the LLM game. But I disagree with the title of your article. I don't think that it's an either-or situation. I expect neither sparks of AGI nor the end of science resulting from this mess. If anything, LLMs are great examples of how not to solve AGI and how not to do science. There is nothing in LLMs that is intelligent. No LLM-based system will ever walk into an unfamiliar kitchen and make coffee. Language is not the basis of intelligence but a communication and reasoning tool used by human intelligence. Heck, no DL-based system will ever be truly intelligent in my opinion. DL is a dead-end technology as far as AGI is concerned.

I was worried about the impact of LLMs before but, lately, I have developed a new understanding. Silicon Valley, and Big Tech in general, get the AI they deserve: fake malevolent AI. They don't have the moral fiber to create anything other than fake AI. Good science is not about profits. More than anything else, good science is grounded in benevolence, and uncompromising honesty and integrity. Yes, AGI is coming but it won't come from that morally bankrupt bunch.

Mar 24, 2023·edited Mar 24, 2023

"Language is not the basis of intelligence but a communication and reasoning tool used by human intelligence. "

This is somewhat of a false dichotomy. Without language, human intelligence would be a fraction of what it is. Daniel Dennett has made this point at length.

It's worth noting that all of the *apparent* (but illusory) intelligence of GPT-4 comes from recombining and regurgitating the utterances of intelligent human beings.


Great point about GPT-4. Its apparent intelligence comes from the human beings that it uses as preprocessors. It would not exist otherwise. And I agree that language amplifies intelligence. But I still think it's a tool, an add-on to intelligence. Written language or the use of symbols and metaphors also amplifies intelligence. I believe that intelligence itself is primarily based on generalized perception which gives us the ability to comprehend the world around us. Animals, even lowly insects, can be very intelligent.


re: " especially your point about the arrogance and haughtiness of some of the players in the LLM game"

It's possible that some of them have that, but I'd suggest the desire to "shut down" something he dislikes (the implication being through government forcing them to shut down) is even more arrogant. People in a free country can disagree: it's the height of arrogance when someone feels like they should be able to take away the freedom of others who disagree. There needs to be an incredibly high bar in a free country for that to happen. Emotionally fearful diatribes that seem to indicate a lack of bothering to fully research the consequences of such things (e.g. theories of risks of regulatory capture and government failure) don't seem to come remotely close to clearing such a bar.


Sorry, freedom does not mean everyone is allowed to do as they please. This is why all civilized nations have laws to curtail freedom. I agree with Marcus that LLMs are dangerous for society and should be regulated. I feel the same way about self-driving cars. They should be banned on public roads until they are proven to be safe.


They also have a Bill of Rights to protect against unwarranted infringements on freedom. Many objected to the inclusion of the Bill of Rights lest it be read as the sum total of all rights, so it makes clear it isn't all the rights we have. By default, people are free in this country unless there is a demonstrated justification for infringing on that freedom.

AI also provides benefits, and a rational discourse would consider those as well. Benefits need to be weighed against risks. If these systems improve the research process that leads to new medicines that save lives, help with productivity to invent and create new things, etc., then the benefits need to be considered.

Imagine there is a new variant of COVID that bypasses existing vaccines and is more deadly, and AI would have helped all the various science, tech, business, and logistics processes involved in getting a new vaccine invented and distributed faster. Yet bans prevented the useful AI from being created.

Those who suggest bans need to consider the whole picture of pros and cons, and yet all I see is the peddling of fears from those who don't seem to bother engaging in any sort of constructive learning about pros and cons, regulatory capture and government failure theory, etc. It's all one-sided fear, and then attacks on someone as 'condescending' if they get upset at the simplistic repetition of fears without more detailed exploration of relevant areas of knowledge and tradeoffs. I suspect there would be more productive discussion of the issues with Bing's AI than the writeups on this Substack to date.


Lots of projection here ... your comments are arrogant from first to last word.


When an authoritarian expresses a desire to take away the freedoms of others, it's unclear that those objecting should be viewed as "arrogant" for daring not to wish it to happen and for objecting to the idea that someone else should have the right to do so.


Now we're getting somewhere. Here are a few topics related to this article that AI commentators might choose to expand upon.

1) What are the compelling benefits of AI which justify taking on _yet another_ significant, or maybe existential, risk? Are any of these companies willing to answer this? Do they dodge, weave and ignore, or do they have a credible case to make? Can't seem to find a writer who will address this specific question. Help requested.

2) The biggest threat to the future of science is in fact science itself. This article seems to illustrate this principle, in that people we can't trust to be objective are determined to give us ever more power at an accelerating rate, and there is as yet no proof that we can handle either the power, or the rate of delivery. In fact, neither we in the public, nor the AI developers, even know what it is that society is being expected to successfully manage.

3) Apologies for this one. Those working in the AI industry would seem to be the last people we should be asking whether the AI industry should exist. They aren't evil, but how could they possibly be objective about the future of an industry their careers and incomes depend on? This is true of any emerging industry of vast scale, such as genetic engineering.

When considering the future of any technology, what if we're looking for answers from exactly the wrong people?


MSFT only cares about shareholder returns. They (and others like them) have multiple billions in corporate debt that must be tied to some fabled valuation to keep the game moving. All the Wall Street chatter, Gates’s “blog”, early access reports and “leaks” are all just modern adverts.


The fact that they don’t share the datasets used, besides the limitations it imposes on our capacity to replicate experiments, has another consequence: we cannot determine potential copyright infringements by tools that are basically processing and regurgitating (almost) the whole of human-created text.


The problems of GPT-4 do not follow from it being bad at what it does; they come from it being good at what it does.

GPT-4 is creating impressive results. Part of that comes from us humans being impressionable. Part may be because of sloppy science with data sets, who knows. The social construct of science is one of the few ways we can try to guard against bad science. But the social system is far from perfect: much bad science slips through the cracks, either because humans (including peer reviewers) are fallible or because the message ends up in a place and form that makes it look like science (e.g. the autism-vaccine nonsense started from something published in The Lancet; this paper, meanwhile, is on arXiv and not peer-reviewed in a real journal — though I suspect it will sail past peer review easily enough).

So, this seems to be a formidable (and at the least disruptive) new tool in the arsenal, and alongside the useful things people will want to use it for, it is very likely going to produce a lot of 'bad use', 'toxic waste' and (social and physical) 'environmental damage'. It's like the early days of the chemistry revolution. Chemical warfare came from that. Information warfare will get an enormous boost from this.

Oh, and a good example of bad use: if it is good enough for coding, how long before someone loads all the open source code from GitHub into it and makes it look for weak spots and exploits? How long before it is part of phishing and scamming?

And because it is good at what it does (btw definitely not AGI-like), there will probably be an arms race to use it. Think of asset managers not wanting to lose the competition with other asset managers and having the means to invest. People will strongly feel that they risk 'missing the boat'. See Dave Karpf's story about crypto from last year.

We will probably not have enough realism as humans to prevent the bad things. Science as a social construct made of fallible humans is too weak to prevent these disasters — assuming that it is indeed so that the technology has become powerful enough to affect society and doesn't flame out when it escapes from the marketing-world it now lives in.


Marcus writes, "Everything must be taken on faith..."

I know you meant this in a more limited way, but your words shine a light on a much larger picture.

The science community wants us to take the philosophical foundation of modern science on faith, just as they do. That foundation is the typically unexamined assumption that our goal should always be ever more knowledge delivered ever more quickly.

The spectacular success of science over the last century has created a revolutionary new environment, which the science community doesn't wish to adapt to. They want to keep on doing what they've always done in the past, the way they've always done it, and they want us to believe on faith that their simplistic, outdated 19th century "more is better" knowledge philosophy is suitable for the 21st century, a very different environment.

THE PAST: A knowledge scarcity environment, where seeking more knowledge without limit was rational.

TODAY AND TOMORROW: A radically different knowledge environment where knowledge is exploding in every direction at an accelerating rate. In this case seeking more knowledge without limit is irrational, because human beings are not capable of successfully managing unlimited power.

If you doubt that claim, please remember, we are the species with thousands of massive hydrogen bombs aimed down our own throat, an ever present existential threat that we typically find too boring to bother discussing. This is who the science community wishes to give ever more power, at an ever accelerating rate. Rational???

It's not just these AI companies who want us to take their products on faith. It's much bigger than that.

You know how, during the Enlightenment a few centuries ago, a variety of thinkers began to challenge the unquestioned blind-faith authority of The Church? It's time for another one of those.
