I was eating popcorn, minding my own business, watching these four swings as they happened in real time, plus some others (e.g. Christopher Manning's take), and realized that these takes are all, to varying degrees, "self-certain, hostile, smug" --- and with little substance. It's sad to see that the supposed elite ranks of academia, at the forefront of our technology, have abdicated their duty of civil debate on ideas and resorted to thinly veiled ad hominem attacks (e.g. Aaronson's piece) to one-up each other in a never-ending status game.
I respect Bender's work a lot, so it's disappointing to see a take from her that offers little substance beyond an impeachment of the NYT's credibility. Same with Sejnowski's and Aaronson's takes -- their tones change remarkably, from civil to hostile, when it comes to anything Noam Chomsky.
There is a reason that Envy is one of the seven cardinal vices.
Granted, I don't agree with Chomsky on a number of things, but he is always willing to engage in civil debate over ideas, and tries to offer substance rather than dog whistles. That NYT op-ed is too writerly, no doubt, but the substance is there, and it's up to the debunkers to come up with worthy counterarguments.
Hats off to good ol' Noam -- still slaying it at 94, and I hope there will be many more!
I see that the NYT piece wasn't written diplomatically, although I do wonder if the public conversation needs some authoritative figures to really stick the boot in to make it all seem less one-sided. Otherwise, I worry it won't be too long before we have inexpert government panels hearing evidence from morons who claim that LLMs are conscious, with the 'moderate' position being that LLMs 'only' have natural language understanding. Those who would detach us from reality in this way are holding most of the money, so if anyone has an opportunity to trade on reputation, I think it's better to sound too negative than to equivocate.
I have to say, it saddens me that Chomsky's name has become so toxic (though this has been a problem for a long time). I was really expecting the article to be more controversial when I saw that Bender had a problem with it but it was so ordinary. I didn't realize we'd reached such a point that Chomsky's natural allies can't help but debase themselves like this, presumably for reasons of status.
Also, it seems time and again that the critics of Chomskyan linguistic theory know little of its details. I think this is largely wilful: they're allergic to the complexity of serious linguistic study, while they're thrilled with the toys and baubles of big-data processing, so they hope desperately that they're onto something without having to make effortful intellectual commitments. I realize this is a smug take, but I was an anti-Chomskyan once, until I put in the effort to learn more about what I had been quick to dismiss, and then it dawned on me that I'd dismissed it just because I wanted it to be wrong at the time.
Everywhere you look, AI boosters betray their biases about how they want language to behave, while showing next to no knowledge about properties that have been written about since the 60s.
Let's be honest about these attacks. The current crop of chatbots is part of a multi-billion-dollar industry trying desperately to find a use for its investment. People have bought into the hype, including academics, and now even the slightest criticism makes them hysterical.
I posted this on Scott A's blog...
***
A scenario reminiscent of Searle (but not quite): I'm left alone with a mansion full of books (filled with words, no pictures) in Ethiopian, which I don't read, write, or speak. The books supposedly contain all manner of knowledge, from metaphysics to archeology to CS. I'm locked in the mansion for several years and told to absorb it all.
Would I be able to?
***
Words by themselves mean jack, same with their ordering. Same with two sets of them in two different languages (equivalent to so-called multi-modal AI). LLMs cannot know the way embodied beings can -- which is, wordlessly.
ChatGPT had no trouble with a simple dropping-an-apple question; it addressed every aspect that the opinion piece claimed it couldn't, and then some. In case anyone is wondering, the apple would accelerate toward the ground at 9.8 meters per second squared. I didn't ask about gravity, or bounce, or height, or surface, or bruises, or splatter, etc. Chat added the science and a few flourishes without further prompting. Programs process information, and this one does the job extremely well. It's not human, but who actually thinks that? Can't we be happy for 5 seconds about a fun new tool that anyone can use for good not evil (because that's a human choice), which also happens to be a tool representing much-awaited progress in AI? Or was this only ever going to be okay if it arrived in a fully evolved form? (No one would live to see that.)
Gary, how about providing a link to the mailing list.
The most interesting thing about ChatGPT is that it has mastered language. While it makes factual errors all the time, I haven't seen any grammatical errors. This suggests that it has either memorized all of the patterns of English grammar or that it has discovered the corresponding rules and can apply them. Either way, it provides an alternative answer to what Chomsky defined as one of the central questions in language.
The Chomsky piece was a disjointed bag of arguments coupled with derogatory rhetorical flourishes. I hope I never write an article like that.
Thanks for your excellent posts -- we need more voices like yours.
At some point I hope you'll give your opinion on Stephen Wolfram's take: that it's true LLMs are "just big statistical models," but that maybe humans are doing something similar whether we realize it or not, and that LLMs may lead the way toward a true understanding of grammar and thinking. (I'm wildly paraphrasing, but that's why I'd love to hear your response to this unfortunately very lengthy argument: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/)
It would be really great if such a brilliant mind as Gary Marcus, to whose newsletter I have subscribed and whose books I am reading and enjoying, would at some point stop ranting about how ChatGPT is bad (which I got within the first day of trying it), or how large language models are bad, or how Hinton is bad, or LeCun is bad, etc., and would instead write interesting things about symbolic AI, other types of AI, or other scientific topics. Even though I agree with most if not all of Gary's points, it is just tiring to read newsletters containing nothing but ranting.
Here's a more academically written objection that online people have been pointing to in the last couple of days:
"Modern language models refute Chomsky’s approach to language", Steven Piantadosi. March 2023
https://lingbuzz.net/lingbuzz/007180
As a layperson it's certainly interesting to see the intellectual disagreements unfold.
The current mode of "AI" seems to be based on hope and faith: one hopes or supposes that there is something like a Principle of Cognitive Emergence embedded in the structure of reality, and that if you make a transformer model large enough, this Principle will somehow take over and AGI will just self-assemble, without your having to understand what you're doing or for that matter any of what is happening. It's the cognitive-science equivalent of putting "and then a miracle occurs..." in a mathematical proof, though possibly much more dangerous. This is, in a nutshell, what Chomsky is criticizing.
Taking a charitable perspective, we can say all this is an interesting *experiment*. After all, maybe there *is* a Principle of Cognitive Emergence embedded in the structure of reality, and building bigger LLMs at a superexponential rate is one way of testing for it. This is closely analogous to building bigger and bigger accelerators to test for increasingly obscure types of elementary particles. But so far, there's no sign of anything like sentience or "AGI" -- just superficially dazzling but deeply flawed models, cobbled together out of completely inscrutable masses of attention weights.
So the result of the experiment seems to be negative--and soon, again like the accelerator case, we will run out of resources to build the next bigger experiment.
Continuing the particle analogy, developing ChatGPT is like finding the Higgs boson but not supersymmetry or strings; the narrower framework we already had is somewhat validated, but the radical breakthroughs we hoped for are nowhere at hand. Building another "accelerator" to continue the search would take half the planet. Our triumph, paradoxically, marks a setback.
It really becomes more and more clear that the problem is we don't even know what "intelligence" is. As Chomsky points out, how then are we supposed to build the thing? The emergentist miracle has appeared, sort of, but fallen short.
Meanwhile, almost everything we thought about intelligence has been quietly flipped. We now see that the "easy problems are hard", and vice versa. We abandoned rules for statistics; now it appears time for rules to rule again (maybe). After years and years studying cognitive biases and emphasizing all the ways humans are nasty, hopelessly error-prone and even plain stupid, it turns out even this stupidity contains a sort of screwy genius we have no idea how to replicate.
If all this is the case, it is hard to imagine a better moment for a taking-stock of the whole Enlightenment project. We say we will rely on reason to order the world, but in truth we cannot figure out even the silliest instinct or piece of common sense.
In a way, it is difficult to overstate how *humorous* the whole situation is. Come! Let us recline, cognac in hand, play some ChatGPT outtakes and bloopers on an infinite loop, and toast to the irrepressible brilliance that, it turns out, peeks out of even the meanest human idiocy.
Chomsky's original contention was, and still remains, that grammaticality is universal. I wrote my own take on ChatGPT as it relates to Indian classical music, one which is highly grammatical, but where stringing together text "stochastically" to generate something intelligible is still hard.
https://www.classicalweekly.org/p/experiments-with-ai-and-indian-classical
I also believe that models like BERT and GPT don't need to tell us why the world is the way it is. They're not meant to be oracles. It's clever engineering, and in reality an auto-complete system built on human knowledge. That in itself is praiseworthy, but also a problem, because it's too powerful to be used as a general-purpose utility -- strongly evidenced by folks reporting limits in logic, basic counting, mixing reality with fantasy, randomness, etc.
As a computer scientist, I've seen the evolution of generative frameworks (e.g. Ruby on Rails) where boilerplate is abstracted away. This allows programmers not to worry about low-level primitives and only to worry about expressing their ideas in a way that builds on the boilerplate. GPT will similarly solve a class of problems, leading to productivity gains, but it's in no way AGI, as it's marketed.
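To make that boilerplate-abstraction point concrete, here is a minimal, hypothetical sketch (in Python rather than Ruby on Rails; the `route` decorator and `dispatch` function are invented for this illustration): the "framework" owns the plumbing, and application code expresses only the idea.

```python
# Toy "framework": the dispatch boilerplate lives in one place,
# so application code expresses only the idea (the handler logic).

routes = {}  # maps a path to the function that handles it

def route(path):
    """Register a handler for a path; the framework owns the plumbing."""
    def register(handler):
        routes[path] = handler
        return handler
    return register

@route("/greet")
def greet(name):
    # The programmer writes only this part: the idea, not the dispatch code.
    return f"Hello, {name}!"

def dispatch(path, *args):
    """The boilerplate a framework abstracts away from every caller."""
    handler = routes.get(path)
    if handler is None:
        return "404 Not Found"
    return handler(*args)

print(dispatch("/greet", "Ada"))  # -> Hello, Ada!
print(dispatch("/missing"))       # -> 404 Not Found
```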
I long to leap over all these details in the direction of a bottom line.
What are the compelling benefits of AI which justify the creation of what could be yet another substantial risk to the stability of the modern world?
Unless we can arrive at a good answer to that question, what's the point of all the rest of it?
Just as many of the critiques of Chomsky are aimed at a mischaracterization of his views, it seems both sides are essentially talking past each other.
Of course ChatGPT will make errors, and different kinds of errors from those humans make. Humans will never quickly compute the cube root of a large integer to arbitrarily many digits (while even a cheap calculator can) and, although calculation ability in general correlates with "intelligence," we discount the cheap calculator as intelligent even though it avails itself of a specific capability humans are known to lack. Similarly, we don't critique a human toddler who's just learned to walk for being unable to run a marathon. Large Language Models are in their absolute infancy: anyone expecting them to leap into the kind of fully analogical, embodied "thinking" humans can do simply by access to endless text corpora and even more compute cycles is being unreasonable.
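For what it's worth, the calculator point is easy to make concrete. A minimal Python sketch (the function name and digit count are my own choices), using the standard `decimal` module and Newton's method to get the cube root of an integer to as many digits as you like:

```python
from decimal import Decimal, getcontext

def cube_root(n, digits=50):
    """Cube root of a positive integer n to `digits` significant digits."""
    getcontext().prec = digits + 5        # extra guard digits for rounding
    x = Decimal(n)
    r = x ** (Decimal(1) / 3)             # rough start via Decimal power
    for _ in range(100):                  # Newton: r <- (2r + n/r^2) / 3
        r_next = (2 * r + x / (r * r)) / 3
        if r_next == r:
            break
        r = r_next
    getcontext().prec = digits
    return +r                             # unary plus rounds to `digits`

print(cube_root(2))  # 1.259921049894873164767... to 50 digits
```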
Minsky put forward the key idea, long ago, that the brain is a "meat machine," and it's unclear what is to be gained by suggesting that the "software" running on that corporeal platform is poorly mimicked AT PRESENT by LLMs; or, worse, that there is some ghost in the machine. Of course Chomsky is not arguing the latter, but people who work on LLMs are agnostic about their being a model for the mind, just as Deep Blue is not a model of how humans play chess.
That said, what a lot of serious people in computational linguistics seem to be reacting to is a kind of biological primitivism, or even elitism, that smacks of magical thinking. I'm a huge admirer of Chomsky and took a course with him many years ago, but his piece is quite weak by his standards, and I don't see much in it, or in Gary's remarks, that attempts to pinpoint what about the computational approach is necessarily lacking as another path to "thinking" (whatever that may be). If all Chomsky is saying is that LLMs are intrinsically unable to say much of importance about the mind, he has made that point persuasively for a long while. But that's a bit like saying a jet is bad at rock climbing: obviously true, and not really the right question.
I haven't checked all the comments, but nobody seems to have remarked that ChatGPT doesn't understand the "tough movement" sentences that Chomsky alludes to. Chomsky's point is that English speakers have reliable intuitions about the meanings of sentences like the following:
1. Sam is too stubborn to talk. => Sam is (or should be) talking
2. Sam is too stubborn to talk to Sandy. => Sam is (or should be) talking
3. Sam is too stubborn to talk to. => Sam is the person being talked to, by some unspecified person
That is, the preposition "to" causes an argument structure reversal in the embedded clause in just this context. Chomsky's theory of grammar claims that this happens because of an interaction of several simpler processes. His point was that examples like (3) are so rarely presented in a context that makes the argument reversal apparent that a child would not be able to learn the argument reversal directly from data.
I haven't done a comprehensive study with ChatGPT, but it has either incorrectly identified Sam as the talker or said that it is unclear who is talking in (3). Interestingly, ChatGPT does usually get (1) and (2) correct.
If you want to try this yourself, swap the adjective "stubborn" and the names "Sam" and "Sandy" with other adjectives and names, because these examples appear in linguistics papers that ChatGPT may have read during training.
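If you want to automate that swap, here is a small, hypothetical Python sketch (the frames, adjectives, and names below are placeholders of my own) that generates fresh variants of the three sentence patterns, so you can probe a model with examples unlikely to appear verbatim in its training data:

```python
import itertools

# The three frames from the examples above; {A} is the adjective,
# {X} and {Y} are the two names.
frames = [
    "{X} is too {A} to talk.",         # {X} is the understood talker
    "{X} is too {A} to talk to {Y}.",  # {X} is the understood talker
    "{X} is too {A} to talk to.",      # {X} is the one being talked to
]

adjectives = ["proud", "shy", "grumpy", "tired"]
name_pairs = [("Maria", "Victor"), ("Priya", "Tomas")]

for adj, (x, y) in itertools.product(adjectives, name_pairs):
    for frame in frames:
        sentence = frame.format(X=x, Y=y, A=adj)
        # Paste the question into the chat interface by hand, or send
        # it through whatever API client you prefer.
        print(f'In the sentence "{sentence}", who is doing the talking?')
```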
ChatGPT is 2 to 0 over Chomsky, on his own examples.
Dare I say it? Speaking of large language models, we're all ignoramuses. I will say that the best physics indicates that the only phenomena in reality are relations. Singular objects do not exist. There are only conversations. High time we started conversing more intelligently.
Querying ChatGPT with a couple of the examples in the Noam Chomsky op-ed "The False Promise of ChatGPT" that were said to show limitations didn't show them:
me: what happens if I hold something in my outstretched hand and then let go of it?
chatGPT: If you hold something in your outstretched hand and then let go of it, the object will fall towards the ground due to the force of gravity.
As soon as you release the object, the only force acting upon it is gravity, which causes the object to accelerate towards the ground. The acceleration of the object due to gravity is approximately 9.81 meters per second squared (32.2 feet per second squared) near the Earth's surface.
The speed at which the object falls towards the ground will depend on a number of factors, including the mass of the object, the height from which it was dropped, and the presence of any air resistance or other external forces.
Once the object reaches the ground, it will experience an impact force which may cause it to bounce, deform, or break depending on its material and shape.
me: What happens when I query John and John is too stubborn to talk to?
chatGPT: If you try to query John and he is too stubborn to talk to you, you will not be able to get a response or answer from him.
Stubbornness is a trait where someone is determined to stick to their own ideas or beliefs, even if it goes against what others are saying or what is generally accepted as true. In this case, if John is too stubborn to talk to you, he may refuse to engage in a conversation or respond to your queries, even if it is important or necessary.
There could be various reasons why John is being stubborn, such as if he is upset or angry with you, if he feels misunderstood or disrespected, or if he is simply a stubborn person by nature. If you need to communicate with John and he is not responding, you may need to find a different approach or try to address his concerns or issues in a way that he is more receptive to.