ChatGPT gives one answer. If you ask it about its sources - it can't tell you. It also has some subjects that have obviously been manipulated to parrot woke nonsense. If you use it to write something - if someone suspects it came from ChatGPT - they can submit a part of the writing to ChatGPT and ask if it wrote it. It will answer Yes if it did. Maybe this is not universal - but it was for several cases.
Finally - and most important - Google gives pages of results for a search. One word can get you started. I typically scan the results - and look at several. As noted above - ChatGPT gives one answer.
Update - ChatGPT will no longer acknowledge if it wrote something.
Thank you for continuing to pound much needed sense into this dangerous LLM phenomenon. Like self-driving cars, this technology is not yet ready for widespread public distribution and should be banned on the internet until it becomes mature and reliable. ChatGPT is what I call fake AI.
ChatGPT is arguably worse than self-driving cars, in that it's immediately obvious when your Tesla does something bone-headed; in fact you're told repeatedly to keep your hands on the wheel and be on guard, it's really easy to override the thing, and so on.
The bot, on the other hand, frequently asserts that it's correct when it's in fact dead wrong, can't separate fact from fiction, you cannot convince it otherwise, and it will not learn from its "mistakes" because mistakes require a domain to be wrong in – a concept that the OpenAI people apparently think would fall from the sky when they train a network that's deep enough to have them.
Surprise, that doesn't work. Nontrivial neural networks need pre-existing structure and iterative feedback and refinement while training to arrive at something useable. ChatGPT is a prototypical demonstration of the GIGO principle pushed to extremes.
I second that statement Matthias. I'd say it's a lot worse than self-driving cars—when you're in a car, you're operating in the real world. You are physically moving in and through the real world. You can feel the forces of acceleration, deceleration, turns, and stops. You can see (at least you're supposed to, if you're paying attention) that stopped vehicle ahead of you, and take over from the car if it doesn't respond in time.
Interacting with a chat bot gives you none of that real-world context.
What if the whole civilization is careening towards some kind of Biblical scale catastrophe fueled by out of control technological development, but we can't focus on that threat because we're so distracted by these shiny new toys? What if we're children playing with digital crayons?
I am no Luddite, but definitely worried.
I'm not a Luddite either, though I think we might learn a few things from people like the Amish who, generally speaking, seem to have turned their back on modern technology without ill effect. I don't want to be Amish, just learn from them.
I'm not a Luddite, I would just like to see us learn how to control the knowledge explosion. I would like to see us try. Hey, I'll settle for just talking about it.
It's not optional. I'm convinced that if we don't learn this essential skill, much of the rest of what we're doing and talking about could very well be pointless.
Are we confusing noise with information? I made the statement that ChatGPT was "unreliable" to a friend. He disagreed because "It scans the Internet...".
So is the issue the lack of a reliable "noise filter"? Is a noise filter even possible?
Proposed guidelines for the use of ChatGPT in writing academic papers, along with a list of papers that have done so:
https://docs.google.com/document/d/1mg5uHT3KXyAbNDo200EdQgYqs7JLg-yf-oCEzLbenP8/edit#heading=h.5nqtknt597v9
Feel free to add my essay to the references arguing against authorship.
Um, err, I should think you can do that yourself.
Good article as usual. Two thoughts:
- I bet Google is scrambling next week to add conversion of hours/minutes/seconds format to their search code. Perhaps there's some syntax ambiguity that it would cause or, more likely IMHO, it is just an oversight.
- Mentioning ChatGPT as an author might have one benefit. It will cause smart readers to double-check the paper's results.
This reminds me of the old garbage-in, garbage-out principle.
The expression "4:21 min" isn't a standard (in science) representation of time interval, so it's ambiguous.
Is it supposed to be 4.21 min = (4.21*60) ≈ 253 seconds?
Or is it 4 min 21 sec = (4*60 + 21) = 261 seconds?
ChatGPT is clearly making assumptions and improvising, when it should (at least) be asking the user to clarify or rephrase the question, which is what Google is doing implicitly by not giving a calculation.
Google doesn't need a full simplification to SI units; just a bit less ambiguity (even if there's still implied precedence) e.g.
"4min 21sec/km in min/mi" => 7.00 min/mi
As it happens, 4.21 min/km is 6.78 min/mi but ChatGPT hasn't given that answer either, so its failure clearly runs deeper than picking the wrong one in just this pair of interpretations.
We may need a revised version of the principle for LLMs: "anything-in, something-out"?
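The two readings above, and the conversions Google gives, can be checked with a quick sketch (variable names here are just illustrative):

```python
# Check the two readings of "4:21 min" per km, converted to a per-mile pace.
KM_PER_MILE = 1.609344  # exact, by definition of the international mile

# Reading 1: decimal minutes, i.e. 4.21 min/km
decimal_pace_km = 4.21
# Reading 2: minutes:seconds, i.e. 4 min 21 s per km
mmss_pace_km = 4 + 21 / 60  # = 4.35 min/km

def km_pace_to_mile_pace(min_per_km: float) -> float:
    """A pace in min/km scales up by the km-per-mile factor to give min/mi."""
    return min_per_km * KM_PER_MILE

print(round(km_pace_to_mile_pace(decimal_pace_km), 2))  # 6.78
print(round(km_pace_to_mile_pace(mmss_pace_km), 2))     # 7.0
```

Either interpretation yields a sensible per-mile pace; ChatGPT's answer matched neither.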
My New Year resolution - to co-author an AGI paper with Gary Marcus! I promise to contribute!
The problem is that since GPT-3, there are two things OpenAI and others have added: a huge number of custom instruction tasks with human-labeled data, and human feedback used to further fine-tune the initial models. Don't you think it appears more intelligent because of that?
Appearance is the key word here - it may well deepen the *illusion* of intelligence to fiddle like this.
The fact that humans are "fine-tuning" already-trained models in the first place is a pretty good sign the focus is not on actually giving them true intelligence. That would require working at a more conceptual level.
It's a complex tool. On the one hand it's essentially a language calculator and would you credit e.g. your graphing calculator for the calculations in a paper? Of course not.
On the other hand, it is capable of novel language constructs, willingly writing to order, e.g. poetry of all kinds, even critical of itself. To wit, when prompted to write a limerick on the dangers of using ChatGPT to write scientific papers it aptly produced this:
There once was a scientist so green
Who used ChatGPT to write his thesis scene
But its knowledge was old
And the facts he was told
Were untrue, so his work was not clean!
What is most worrying isn't ChatGPT being anthropomorphized or being credited with results which are wrong.
What is more worrying is that Microsoft just poured another $10 billion into OpenAI, and other big money is following it in, taking close to full control (49%).
https://www.theneurondaily.com/p/microsoft-openai-deal
Apparently MS has plans to integrate it into MS's software ecosystem like the Bing search engine and their Office suite of products.
Whenever there is massive funding involved with Big Tech you know something will screwup down the line!
My biggest worry on the ethical front is that apps like ChatGPT will be exposed to extremely sensitive personal/private information which normally should not be given to any software program owned by a megacorporation, especially the likes of Microsoft!
There are already news stories of crisis counselors and some therapists using this with their patients to aid in treatment, which is extremely dangerous. Not only because of the inherent inaccuracies of this limited model, but because such information will be collected by a third party.
The ultimate fear I have is that chatbots like ChatGPT are the perfect "prison warden" for humans. The CCP would love such a software agent, which could keep an eye on the activities of every citizen 24/7 and report to the authorities any behavior or actions deemed forbidden, such as searching for information on taboo political topics.
Like him or hate him there is a good reason why Elon Musk left OpenAI and I suspect he knew which companies would come knocking on the door sooner rather than later.
Why do people keep asking ChatGPT math questions, when he clearly can't do any kind of math, and then complain all over Twitter that he's fake?? ChatGPT is a great compiler of various texts, like simple essays, emails, brainstorming ideas, etc. And he's perfect at that.
"He"?
Fact is, the code should recognize that math questions are outside its area of expertise. As are any other factual questions. Ask it for restaurant recommendations in NYC and it will happily give you some, except that none of the establishments it recommends ever existed.
I assume co-writing an article with human supervision might still be fine? Academic writing style can be quite difficult for someone new to the field (or international ESL)! And being able to present a series of points and have it rewrite into a form that would be acceptable prose for an academic paper and academic reviewer standards seems quite valuable.
Not objecting to using the tools under VERY CLOSE supervision. But I am objecting to authorship and blind anthropomorphization.
An actual human contrarian wonders...
The premise behind your piece would seem to be that relying on ChatGPT would undermine the credibility of science, and thus presumably scientific advancement. Typically such a development is seen as undesirable.
What if the key challenge of the 21st century is to somehow gain control of the knowledge explosion, that is slow it down, so that it proceeds at a pace which we can confidently manage? What if maturity would involve ending the simplistic teenager mindset that assumes without questioning that we ought to create as many powers of vast scale as we can as fast as we possibly can?
If there is any truth in that last paragraph, then any factor which interferes with uncontrolled scientific advancement might have a silver lining?
ChatGPT's falsehoods can nevertheless be useful (a) as a simple test for human involvement, (b) as a zeroth-order approach to the superimposed underlying truth, and (c) as something to build upon in its output, to get things done on a higher level. I called this superposition "Schrödinger Facts" in our last post at the Sentient Syllabus Project (https://sentientsyllabus.substack.com/p/chatgpts-achilles-heel). I absolutely agree that ChatGPT has no ideas "of its own" and can't be a co-author. Thanks!
Forget about the math. ChatGPT didn't catch that I'd inverted the second ratio. When I asked "What is a 4.21 min/km running pace in MI/min?" It very confidently responded that "A 4.21 min/km running pace is equivalent to a 6.6 mi/min pace."
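For reference, a correct conversion to the inverted unit comes out nowhere near the bot's 6.6; a minimal sketch (names here are just illustrative):

```python
# A pace of 4.21 min/km, converted correctly to min/mi and then inverted to mi/min.
KM_PER_MILE = 1.609344  # exact conversion factor

pace_min_per_km = 4.21
pace_min_per_mile = pace_min_per_km * KM_PER_MILE  # per-mile pace, about 6.78 min/mi
speed_mi_per_min = 1 / pace_min_per_mile           # inverting a pace gives a speed

print(round(pace_min_per_mile, 2))  # 6.78
print(round(speed_mi_per_min, 3))   # 0.148
```

So mi/min, being a speed rather than a pace, should be a small fraction for any human runner; the bot apparently just reused a pace-like number.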
"ChatGPT, could you write me a Willie Nelson themed headline for my piece about the perils of co-authorship with chat bots on scientific papers?" Hilarious 😂😂
Glad someone got my allusion