91 Comments
Apr 12 · edited Apr 12 · Liked by Gary Marcus

For the use of AI chatbots based on LLMs in medicine or any other life-and-death domain, we should be very cautious. There is no way (no matter the technique: convoluted prompt engineering, RAG, LoRA, etc.) to "guarantee" that an LLM-based AI system (not "an AI," please, let's avoid anthropomorphism) will not confabulate. "Hallucination" is a poor name, since hallucinations are sensory phenomena; the generation of disheveled or insane text is confabulation.
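A minimal sketch of why retrieval alone cannot provide such a guarantee, assuming hypothetical embed() and generate() helpers standing in for whatever embedding model and LLM endpoint one actually uses: retrieval only narrows the prompt, while the final step is still free-form generation, which nothing constrains to stay faithful to the retrieved passages.

from typing import Callable, List, Sequence

# Toy retrieval-augmented generation loop (hypothetical helpers, not a real library API).
def retrieve(query_vec: Sequence[float],
             doc_vecs: List[Sequence[float]],
             docs: List[str],
             k: int = 3) -> List[str]:
    # Rank documents by dot product with the query embedding and keep the top k.
    scored = sorted(zip(doc_vecs, docs),
                    key=lambda pair: sum(q * d for q, d in zip(query_vec, pair[0])),
                    reverse=True)
    return [doc for _, doc in scored[:k]]

def rag_answer(question: str,
               docs: List[str],
               embed: Callable[[str], Sequence[float]],
               generate: Callable[[str], str]) -> str:
    # Build a "grounded" prompt from the retrieved passages...
    context = "\n".join(retrieve(embed(question), [embed(d) for d in docs], docs))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    # ...but nothing here verifies that the generated text is entailed by the context,
    # which is exactly the gap described above.
    return generate(prompt)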


Forget Elon Musk's prediction: if AI is able to perform arbitrary tasks at the level of an adult human of below-average intelligence by the end of next year, I'd call it a miracle.


I happen to know someone who knows Elon personally, and I can say with 99.9% confidence that he's saying that because he's promoting an AI company now. It's not a scientific point in the first place. When Google bought DeepMind, Elon was bidding on it too, and after he lost to Google he went to the press and said that AI was "summoning the devil," intending (as I've been told) to cast a shadow on the acquisition. He's not taking the bet because he makes more money by not responding.

Apr 13 · Liked by Gary Marcus

I'm an electrical engineer and they still can't do my job, despite me trying to offload it onto them frequently. 😅


I don't understand why we give him so much attention. I don't see his opinion as any better than any random person's. He just craves attention, and this sort of bet perpetuates that. He's in it for the ego, not to be right.


Those are excellent challenges that cover a very wide range of capabilities.

I suspect that many of them will not be matched by machine intelligence for decades, not years.


If Elon took the bet and lost, he would not pay. You would have to take him to court. There is precedent.


• Find and fix a subtle bug in a complex computer program.
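A hypothetical miniature of how subtle such a bug can be, in Python: the first function looks correct and passes a one-off test, yet its mutable default argument silently shares state across calls; the second shows the usual fix.

def append_reading(value, readings=[]):
    # Bug: the default list is created once and shared by every call.
    readings.append(value)
    return readings

def append_reading_fixed(value, readings=None):
    # Fix: use a None sentinel and create a fresh list per call.
    if readings is None:
        readings = []
    readings.append(value)
    return readings

assert append_reading(1) == [1]
assert append_reading(2) == [1, 2]        # state leaked from the first call
assert append_reading_fixed(1) == [1]
assert append_reading_fixed(2) == [2]     # fresh list each time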


There is this myth that successful entrepreneurs are superior risk/reward estimators. Research has actually shown that they are worse than average at estimating the risk of failure (they underestimate it) and that they overestimate their chances of success. That is an important reason why they are entrepreneurial in the first place. But for each successful one, we have truckloads of unsuccessful ones (whom we hardly take into account).

Elon is an extreme example. Skill and luck have been key to his success, but so has a lot of "entrepreneurial naïveté," which in this area is on full display.


There seem to be two extreme views sometimes: either (1) AI will very soon be very smart and kill us all, or (2) it is all a scam, stealing, parroting.

Yet, the tech marches on. Examples:

Waymo self-driving cars can now handle freeways, rain, and night driving, and more cities are being added.

Chatbot competition is intense, and vendors will be pressured to improve their models. AlphaGeometry is a good example of how an LLM can work with other methods to fix its weaknesses.
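A rough sketch of the general pattern AlphaGeometry exemplifies; this is not its actual code, just the generic "model proposes, symbolic checker verifies" loop, with propose() and verify() as hypothetical stand-ins for a language model and a deterministic checker.

from typing import Callable, Optional

def solve_with_verifier(problem: str,
                        propose: Callable[[str, int], str],
                        verify: Callable[[str, str], bool],
                        max_attempts: int = 10) -> Optional[str]:
    # Ask the model for candidate solutions and keep only one the checker accepts.
    for attempt in range(max_attempts):
        candidate = propose(problem, attempt)   # free-form output, may be wrong
        if verify(problem, candidate):          # symbolic check, no trust placed in the model
            return candidate
    return None                                 # no verified solution within the budget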

Apr 13 · edited Apr 13

Maybe Elon Musk has listened carefully to the recent declarations of Yann LeCun. At the Meta AI Innovation Day in London and Paris a few days ago, LeCun heavily criticized LLMs, pointing out their inherent limitations and weaknesses and stating that this technology is clearly not a path towards AGI. By the way, he said almost the same things that you, Gary, have written on this blog many, many times. Just one quotation from his speech: "they (LLMs) hallucinate answers... They can't really be factual." He also declared that AGI will not be reached within the next few years and proposed a shift to a new approach he calls "Objective-Driven AI." That is the kind of paradigm change that you have advocated for a very long time.

https://www.forbes.com/sites/bernardmarr/2024/04/12/generative-ai-sucks-metas-chief-ai-scientist-calls-for-a-shift-to-objective-driven-ai/?sh=78f953b8b82b

https://www.numerama.com/tech/1669388-yann-le-cun-lia-generative-est-50-fois-moins-intelligente-quun-enfant-de-4-ans.html


Superhuman AGI is nigh enough for me as long as it arrives in the next 10 years, plus or minus one or two as a grace period; 3-5 ideally. What are your definitions of "nigh"?


A good example of what AGI will not do: produce any non-trivial step toward the solution (if one exists) of any of the famous unsolved mathematical problems, like Catalan's or Riemann's conjectures.
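For reference, the Riemann conjecture mentioned here (the Riemann hypothesis) asserts that every non-trivial zero of the zeta function lies on the critical line; in standard notation:

\zeta(s) = \sum_{n=1}^{\infty} n^{-s} \quad (\operatorname{Re} s > 1), \text{ continued analytically to } \mathbb{C} \setminus \{1\};
\text{conjecture: every non-trivial zero } s_0 \text{ of } \zeta \text{ satisfies } \operatorname{Re} s_0 = \tfrac{1}{2}.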


I agree that Elon's prediction seems wildly over-optimistic (although you also seem to be interpreting it in its strongest possible form, whereas some weaker interpretation might be defensible, as your postscript implicitly acknowledges).

However, there are other reasons that Elon should not take the bet. A million dollars is, or should be, almost inconsequential to Elon. It is not worth thirty minutes of his time. Even ten million dollars is not worth taking up his morning. If he has to spend longer than that arguing about what the rules of the bet should be, or what the result of the bet was, it would be a waste of his time. (Granted, Elon has arguably wasted time on stupider things, but he shouldn't.)

If I were in Elon's position, I would also worry that accepting one public bet would encourage a hundred other people to try and make public bets with me, which would generally be annoying and a waste of my time.

Really, someone of Elon's wealth should only be making million-dollar bets in cases where he really wants to lose the bet (so he is creating an incentive for someone else to make him lose the bet).


When people afford you a life without many limits, it becomes very difficult to recognize limits.


Elon is one of the most eccentric and optimistic prophets alive. Given the propensity of such people to be a bit "off":

It might be wise to offset this pronouncement with some healthy skepticism.
