We should be very cautious about using AI chatbots based on LLMs in medicine or in any other life-and-death matters. There is no way (no matter the technique: convoluted prompt engineering, RAG, LoRA, etc.) to « guarantee » that an LLM-based AI system (not "an AI", please, let's avoid anthropocentrism) will not confabulate ("hallucination" is a poor name, since hallucinations are sensory problems; the generation of disheveled or incoherent text is confabulation).
Forget Elon Musk's prediction: if AI were able to perform arbitrary tasks at the level of an adult human of below-average intelligence by the end of next year, I'd call it a miracle.
I don't mean to be rude or unempathic, but have you seen actual stupid people?
I have worked at all levels of education, from tutoring fifth- and sixth-graders in a variety of subjects to teaching freshman physicists and advising PhD students in AI and the actuarial sciences - and I explicitly do NOT allude to the saddening perils of dyscalculia or dyslexia here - there are in fact some burdensomely challenged minds out there, and the ongoing complication of life is a real challenge for everybody with substandard abstraction skills.
That's not to say that AI can surpass them even in the most mundane of tasks - I'm confident that right now it cannot - but repetitive, benign automation, i.e. creating things that are precisely *without* any requirement of creativity[1], is something that will help them greatly, if only to pass exams they shouldn't.
This of course also means that the sparks of genius in our own lives are few and far between: most things we do are habitually automated, almost mechanical tasks (you're truly lucky if the share of boring work is low, as with reading this Substack or my own professional work) - so, long story short: it's very easy to confuse the fact that humans can be geniuses with the fact that most of the time, most of us are not.
And *still* AI can't do *any* of the stuff I do to a level I'd be content with.
[1] As an aside, I'm puzzled by the absence of a sensible translation of the German word "Schöpfungshöhe" - there is "threshold of originality", of course, but that is very technical and fails to capture the transcendent subtext of the German term.
Good points!
The education system had better pivot to teaching philosophy and cog sci from day one - PDQ. I took philosophy in school in West Germany and never understood why it isn't taught here in elementary school.
I happen to know someone who knows Elon personally, and I can say with 99.9% confidence that he's saying that because he's promoting an AI company now. It's not a scientific point in the first place. When Google bought DeepMind, Elon was bidding on it, and after he lost to Google he went to the press and said "AI was summoning the devil," intending (as I've been told) to cast a shadow on the merger. He's not taking the bet because he makes more money not responding.
I'm an electrical engineer and they still can't do my job, despite me trying to offload it onto them frequently. 😅
Working on it, but only with organically coded software.
I don't understand why we give him so much attention. I don't see his opinion as any better than a random person's. He just craves attention, and this sort of bet perpetuates that. He's in it for the ego, not to be right.
He's incredibly productive and creative -- a once-in-a-generation talent. His opinion on AGI may not be anything special, but he deserves lots of special attention...like Leo Messi ;)
Elon is no Messi. Messi is once in a generation
Cannot compare the two!
Agree completely
Cannot compare the two what? Aside from the prior case Elon hasn’t paid commercial lease payments. Hasn’t paid severance at Twitter. He stiffs people. Just like Trump.
Can't compare Elon to Leo. Messi is incomparable and stands worlds apart.
Those are excellent challenges that cover a very wide range of capabilities.
I suspect that many of them will not be matched by machine intelligence for decades, not years.
Yes, specifically the ones associated with Moravec's paradox.
If Elon took the bet and lost, he would not pay. You would have to take him to court. There is precedent.
• Find and fix a subtle bug in a complex computer program.
There is this myth that successful entrepreneurs are superior risk/reward estimators. Research has actually shown that they're worse than average at estimating the risk of failure (they underestimate it) and that they overestimate their chances of success. That is an important reason why they're entrepreneurial in the first place. But for each successful one, we have truckloads of unsuccessful ones (whom we hardly take into account).
Elon is an extreme example. Skill and luck have been key to his success, but so has a lot of 'entrepreneurial naïveté', which in this area is on full display.
I'd even go as far as saying that the whole entrepreneur voodoo is all about survivorship bias; in all the (auto)biographies I haven't found a single commonality that would be universally applicable besides "working all the time" - and I reject the latter not because I think it's useless (it's not), but because it will cost you dearly.
I'm a successful entrepreneur, and very optimistic ;) I've failed a lot and gotten better at estimating risk!
Elon is not being naive about AI. One thing I think many people miss is that Elon is a rather good promoter and marketer. He wears the mantle of engineer and most people see him that way, but most of the things he says publicly (in my opinion) are intended for marketing purposes.
Elon is in a category of his own. Most successful entrepreneurs are normal, like the heads of Amazon, Microsoft, Google, and OpenAI.
Elon's "crash and burn" approach worked well for rockets, at least with Falcon 9.
His inability to estimate risks has been a big liability at Tesla and Twitter.
There seem to be two extreme views sometimes: either (1) AI will very soon be very smart and kill us all, or (2) it is all a scam, stealing, parroting.
Yet the tech marches on. Some examples:
Waymo self-driving cars can handle freeways, rain, and night driving, and more cities are being added.
Chatbot competition is intense, and vendors will be pressured to do better modeling. AlphaGeometry is a good example of how an LLM can work with other methods to fix its issues.
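To make the "LLM plus other methods" point concrete, here is a minimal, generic sketch of the propose-and-verify pattern such hybrids rely on. This is not AlphaGeometry's actual code; llm_propose and symbolic_check are hypothetical stand-ins for a language model and a symbolic deduction engine.

```python
# Generic sketch of a neuro-symbolic loop: a language model proposes candidate
# steps, and a symbolic engine checks them. Hypothetical stand-ins only, not
# the real AlphaGeometry API.
import random

def llm_propose(problem: str, rejected: list) -> str:
    """Stand-in for an LLM call: suggest a candidate auxiliary construction."""
    candidates = [
        "add midpoint M of segment AB",
        "draw the circle through A, B, C",
        "drop a perpendicular from C to AB",
    ]
    remaining = [c for c in candidates if c not in rejected]
    return random.choice(remaining or candidates)

def symbolic_check(problem: str, step: str) -> bool:
    """Stand-in for a symbolic deduction engine: verify the step rigorously."""
    return step == "add midpoint M of segment AB"  # placeholder verification rule

def solve(problem: str, max_attempts: int = 10):
    rejected = []
    for _ in range(max_attempts):
        step = llm_propose(problem, rejected)   # neural part: creative guessing
        if symbolic_check(problem, step):       # symbolic part: rigorous checking
            return step                         # only verified steps are accepted
        rejected.append(step)                   # failed guesses inform the next try
    return None

print(solve("Prove the triangle midsegment theorem"))
```

The point of the pattern is that the deterministic checker, not the LLM, decides what counts as correct, which is one way to contain the model's tendency to confabulate.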
I agree. Furthermore, even poor AI systems, combined with social media, can have serious consequences for our democracies by undermining trust through misinformation, disinformation, fake news, and deepfakes. Don't forget that trust is the cement of our societies.
They could also be used to teach critical thinking skills...
While I don't want to say they cannot, I think getting it right is very hard.
And although it is true that teaching critical thinking human-to-human is (obviously!) flawed as well, I see absolutely no evidence that a chatbot would be better at this.
Is this a joke? You want a robot to teach critical thinking?
Maybe Elon Musk has listened carefully to the recent declarations of Yann LeCun. At the Meta AI Innovation Day in London and Paris a few days ago, LeCun heavily criticized LLMs, pointing out their inherent limitations and weaknesses, and stating that this technology is clearly not a path towards AGI. By the way, he said almost the same things that you, Gary, have written in this blog many, many times. Just one quotation from his speech: “they (LLMs) hallucinate answers... They can't really be factual”. He also declared that AGI will not be reached within the next few years and proposed a shift to a new approach called “Objective-Driven AI”. That amounts to the change of paradigm you have been advocating for a very long time.
https://www.forbes.com/sites/bernardmarr/2024/04/12/generative-ai-sucks-metas-chief-ai-scientist-calls-for-a-shift-to-objective-driven-ai/?sh=78f953b8b82b
https://www.numerama.com/tech/1669388-yann-le-cun-lia-generative-est-50-fois-moins-intelligente-quun-enfant-de-4-ans.html
Superhuman AGI is nigh enough for me as long as it arrives in the next 10 years, plus or minus one or two as a grace period. 3-5 ideally. What are you guys' definitions of nigh?
I think it'll show up about 5-10 years after people start working on self-improving systems (which hasn't happened yet). My question: Will we get compassionate AGI? Or non-compassionate, but still intelligent AI? I think the difference is a big deal, for my part.
A good example of what AGI will not do: produce any non-trivial step toward the solution (if there is one) of any of the famous unsolved mathematical problems, like Catalan's or Riemann's conjectures.
I agree that Elon's prediction seems wildly over-optimistic (although you also seem to be interpreting it in its strongest possible form, whereas some weaker interpretation might be defensible, as your postscript implicitly acknowledges).
However, there are other reasons that Elon should not take the bet. A million dollars is, or should be, almost inconsequential to Elon. It is not worth thirty minutes of his time. Even ten million dollars is not worth taking up his morning. If he has to spend longer than that arguing about what the rules of the bet should be, or what the result of the bet was, it would be a waste of his time. (Granted, Elon has arguably wasted time on stupider things, but he shouldn't.)
If I were in Elon's position, I would also worry that accepting one public bet would encourage a hundred other people to try and make public bets with me, which would generally be annoying and a waste of my time.
Really, someone of Elon's wealth should only be making million-dollar bets in cases where he really wants to lose the bet (so he is creating an incentive for someone else to make him lose the bet).
I really doubt Elon's time is spent with that kind of discipline and forethought. He's a genius, hyper-focused, but that doesn't mean he's clever about using his time. For example, buying Twitter ;)
When people afford you a life without many limits, it becomes very difficult to recognize limits.
Especially if they are within, I might add.
Elon is one of the most eccentric and optimistic prophets alive. Given the propensity of such people to be a bit “off” (?):
It might be wise to offset this pronouncement with some healthy skepticism.
« Great wits are sure to madness near allied, and thin partitions do their bounds divide. » - John Dryden