27 Comments
Apr 9 · Liked by Gary Marcus

We are constructing a real-time, real-world experiment to discover just how destructive stochastic parrots spewing word salad, combined with the argumentum ad populum fallacy, can be.


Re the "scarlet letter": This has become the new normal in science fiction magazines over the last year. The use of AI in SF art or storytelling is toxic. The highest-reputation, best-known, best-paying magazines won't accept AI submissions, with language like this quote from Asimov's: "Statement on the Use of “AI” writing tools such as ChatGPT: We will not consider any submissions written, developed, or assisted by these tools. Attempting to submit these works may result in being banned from submitting works in the future." And if they accidentally run an AI-generated cover, SF presses have been known to _withdraw it and apologize._


I’m very pro-AI progress, but I really appreciate the counterbalance to the hype that you, @michael Spencer, and @alberto Romero have been offering recently. Very important to keep a sane, sober perspective on everything.


Another cracker of a post! I'm helping our academics to deal with AI and to see it from various angles. Posts like these are necessary counterweights to the hyperbolic gushing of the AI companies themselves.


I'm sure lawyers, judges, investors, and bankers will find screwed-up document chronology (say, during the discovery process of a huge lawsuit) and 85% factual accuracy at best totally okay for summarizing long documents, and that it totally won't generate liability. NOT!

Apr 10 · Liked by Gary Marcus

I'm not a mathematician or a statistician, but it seems to me that if you try to scale up to 450 or 4,500 incident-free minutes, aren't you just increasing the chances of a screw-up? Incident-free minutes of any duration may only mean you got lucky. Correct?
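
That intuition can be checked with simple probability arithmetic: if each minute independently carries some small chance of an incident, the probability of a fully incident-free run shrinks geometrically with its length, so longer runs are indeed more likely to contain a screw-up (and a short clean run is weak evidence of safety). A minimal sketch, with a made-up per-minute incident probability:

```python
# If each minute independently carries incident probability p, the chance of
# an entirely incident-free run of N minutes is (1 - p)**N.
# p = 0.001 is a made-up illustrative number, not a measured rate.

p = 0.001  # assumed per-minute incident probability

for minutes in (45, 450, 4500):
    p_clean = (1 - p) ** minutes  # probability the whole run is incident-free
    print(f"{minutes:>5} min: P(no incident) = {p_clean:.3f}")
```

With these illustrative numbers, the chance of a clean run falls from about 0.96 at 45 minutes to about 0.01 at 4,500 minutes.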


And they are incapable of solving the simplest of cryptograms, an art that can be taught to a 10-year-old. The "make a guess - evaluate the guess - refine the guess" cycle is impossible for a token muncher to achieve.
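
That guess-evaluate-refine cycle is straightforward to write down as a classic hill-climbing loop over a substitution-cipher key. A toy sketch (the word list, ciphertext, and iteration budget are all illustrative, and a serious solver would score guesses with n-gram statistics rather than a tiny dictionary):

```python
import random
import string

# Tiny illustrative word list; a real solver would use n-gram frequencies.
WORDS = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}

def apply_key(text, key):
    # key[i] is the guessed plaintext letter for ciphertext letter chr(ord("a") + i).
    return text.translate(str.maketrans(string.ascii_lowercase, "".join(key)))

def score(text):
    # Evaluate the guess: count decoded tokens that are real words.
    return sum(w in WORDS for w in text.split())

def solve(cipher, iters=50_000, seed=1):
    rng = random.Random(seed)
    key = list(string.ascii_lowercase)
    rng.shuffle(key)                              # make a guess
    best = score(apply_key(cipher, key))
    for _ in range(iters):
        i, j = rng.sample(range(26), 2)           # refine: swap two letters
        key[i], key[j] = key[j], key[i]
        s = score(apply_key(cipher, key))         # evaluate the new guess
        if s >= best:
            best = s                              # keep it if no worse
        else:
            key[i], key[j] = key[j], key[i]       # otherwise revert the swap
    return apply_key(cipher, key)

# Demo: scramble a pangram with a random key, then try to recover it. Short
# texts make the search landscape flat, so restarts or more iterations may
# be needed before every word decodes correctly.
secret = random.Random(7).sample(string.ascii_lowercase, 26)
cipher = "the quick brown fox jumps over the lazy dog".translate(
    str.maketrans(string.ascii_lowercase, "".join(secret)))
print(solve(cipher))
```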

As far as full self-driving goes, it is going to crash and burn in the same place the DARPA autonomous vehicles did: machine vision. The universe is analog; neural nets are digital and "see" only a sample of what's out there. They are therefore irretrievably vulnerable to spoofing and jamming.


I want to give a callout to Spaceweather.com, which was one of the first with a no-LLM message, and I have cherished them ever since.

https://spaceweather.com/

"This is an AI Free Zone! Text created by Large Language Models is spreading rapidly across the Internet. It's well-written, artificial, frequently inaccurate. If you find a mistake on Spaceweather.com, rest assured it was made by a real human being."


Hypothesis regarding genAI summarization of scientific literature: summarization tools such as ScopusAI will be biased toward selecting literature that reflects the biases of non-expert internet users if the model was trained on poor-quality internet data.

This seems to be true when I put things like “is aluminum in deodorant safe” into the tool (it is almost certainly fine for nearly everyone, but people on the internet think it’s not). Someone with more time than me could write a paper.

Meanwhile, apparently it’s exciting that these tools, which were almost certainly trained on the NYTimes crossword puzzle, can solve the NYTimes crossword puzzle. Please hold your applause.


Here is a link to the Steven Overly podcast: https://www.politico.com/podcasts/tech.

The image in this post is just an image, and not a link to the podcast.


I enjoy how you skewer LLMs, but putting self-driving cars in the same category seems odd. Can the cars know when they're possibly approaching an edge case, for a large proportion of those cases? A car that can mostly self-drive but admits that it needs a bit of help every so often is still an incredibly useful car. If there are a few "false positives," so be it. I don't understand why we demand perfection from this when we shrug off the numerous deaths caused by human error in cars as the price of doing business. If a self-driving car isn't perfect but causes fewer deaths, injuries, and less general mayhem than an average human driver (per mile driven), then statistically speaking it would be saving lives. Every article I read about the subject fails to answer that question and, in my view, it's the only one that matters.
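
The per-mile comparison asked for here is plain expected-value arithmetic. A minimal sketch: the human baseline below approximates the oft-cited US figure of roughly 1.3 fatalities per 100 million vehicle miles, while the self-driving rate is a made-up assumption for illustration, not a measurement:

```python
# Expected-fatality arithmetic behind "fewer deaths per mile driven".
HUMAN_RATE = 1.3e-8  # fatalities per mile, human drivers (approximate US figure)
AV_RATE = 0.9e-8     # fatalities per mile, hypothetical self-driving fleet (assumed)

miles = 1e11         # miles shifted from human to autonomous driving

expected_human = HUMAN_RATE * miles
expected_av = AV_RATE * miles
print(f"human-driven: {expected_human:.0f} expected fatalities")
print(f"self-driven:  {expected_av:.0f} expected fatalities")
print(f"net change:   {expected_av - expected_human:+.0f}")
```

If the assumed rates held, shifting those miles would prevent about 400 deaths; the whole argument turns on whether the measured per-mile rate is actually lower, which is exactly the question the commenter says goes unanswered.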


@Gary what do you think about the claims made by Ethan here? https://www.oneusefulthing.org/p/what-just-happened-what-is-happening/comments


I am already getting numb to all the ridiculous BS currently happening in the AI field, and it's going to get a lot worse once they really hit the wall.

Apr 9 · edited Apr 9

Tesla has been misleading people about the abilities of their cars for decades. They should get what they deserve.

As for self-driving cars in general, nobody, including people who do not like the current methods, has an easy path forward.

The best approach, as for any very large and complex project, is to divide it into manageable pieces, do as much honest physics modeling as you can, get the best sensors, use the best methods, exercise caution, and scale up gradually. It is a problem well worth solving.


Meanwhile, even though self-driving cars were definitely going to be ready by 2017, 2019, 2021, or maybe never, we'll see AGI by next year, according to Musk: https://www.theguardian.com/technology/2024/apr/09/elon-musk-predicts-superhuman-ai-will-be-smarter-than-people-next-year

I wish he could just go back to his Boring business: get autonomous self-driving working down there, and roll out AI in some subterranean community to find out what the societal tradeoff between gain and damage is going to be.
