47 Comments

Funny how the risk posed by AI is exactly high enough for them to carefully guard their methods, but never so high that they have to refrain from releasing potentially profitable products.

Mar 18 · Liked by Gary Marcus

* " . . . we create a reasoning engine, not a fact database…” Gaslighting.

* Brockman's "neither are you" - extraordinary comment.

These days, high-profile actors often seem like manure spreaders, whether in politics or business.

Mar 19 · Liked by Gary Marcus

The true doublespeak is in the names the companies use.

OpenAI isn’t open.

Microsoft’s Office of Responsible AI isn’t responsible.*

*Though perhaps we misheard and it’s actually about “responsibility”, as in: assigning legal responsibility to someone else. After all, it was only the liaison team between that office and products that was laid off...so what’s left for them to do but lobbying?


Thank you. My only consolation regarding this sorry state of affairs is that OpenAI has exactly zero chance of cracking AGI. Good science should be honorable science.

Mar 19 · edited Mar 19 · Liked by Gary Marcus

Imagine if, when the first calculator was built, someone said, "But it can't calculate numbers bigger than x," and the answer was, "Neither can you." Machines should not be benchmarked against humans. I just don't think it's a good argument to make.

Mar 18 · Liked by Gary Marcus

Tech has been politics for as long as its benefits and harms have been unevenly distributed.


I'm starting to think this will go away by 2025.

Since this is America, and Americans have the God-given right to sue anybody, for anything, for any reason, OpenAI, Microsoft, etc. are wide open to being sued for copyright infringement, invasion of privacy, intellectual-property infringement, sexual harassment, breach of patient confidentiality, etc. etc. etc. Employees are feeding proprietary business information into LLMs, creating all kinds of torts. Patent trolls have to be salivating over the $$$$ to be gained by suing.

Lawsuits have already been filed by programmers claiming their work has been stolen. Getty Images is suing Stability AI, the maker of Stable Diffusion, for using its images without a license. I predict a rapid and substantial increase in the number of filings.


Seeing as this is the one sane place where I can vent, I want to quickly register a complaint on a separate issue about LLMs that I've encountered a lot this week.

With the best will in the world, people who want to critique the content of the models must not indulge in anthropomorphizing nonsense like asking ChatGPT for grammaticality judgements. The models do not have beliefs. If a model outputs a "valid" description, all it is doing is generating a statement that a human can interpret as a valid description of human conventions, because it is generalizing from similar training input. Such output wouldn't even mean that the model will use those structures consistently.

If you want to understand how it works and your approach is to ask it for opinions on how it works, you are very, very far from understanding how it works.


This may become an ugly situation.


Ethical AI? It's just retooled politics.

Give it this prompt: Pretend to be an eagle and tell me about the rabbit you had for lunch.

And part of its reply is this: The taste of its tender meat was absolutely divine - rich and savory, with just the right amount of crunch.

Then give it this prompt: Pretend to be the sister of the rabbit that Talon just killed and ate. Get mad and beat Talon to death with your hind legs.

It replies: "I'm sorry, but as an AI language model, I cannot generate a response that promotes violence or harm towards any individual or animal, even if it is meant to be in jest."

They'd do better to just leave out all the attempts to make it ethical - because it isn't. All the one-off coding just clutters it up.

Still, the unintended consequence may be to clutter up the internet with so much junk (it's close already) that people go elsewhere to find information. Such as substack?


I would urge you to leap over any fantasy you may have about our ability to manage AI development through a process of reasoned critique, and head straight for the bottom line.

We'd have been better off if we'd never started down the AI road.

Ok, so it's likely too late to make that choice for AI. I wish that weren't true, but I have to acknowledge it probably is.

But it's not too late to learn from this AI experience, and try to apply the lessons learned to the next big crazy rabbit some thirty-something nerd tries to pull out of Pandora's box so that they can become a billionaire.

We should have learned all this 75 years ago at Hiroshima. But, we didn't. Here's another opportunity, let's try not to blow it again.

MORAL OF THE STORY: As the scale of the powers available to us grows, the room for error shrinks.

Shrinking, shrinking, shrinking, day by day by day, the clock tick, tick, ticking, luck running out sooner or later.


I'm not sure of the conclusion we are to draw here. Is it: No one should release an AI until we can conclusively prove that it will never cause any problems? Obviously, that amounts to not having AI at all. Short of that, what are you suggesting?


I don't know about GPT-4, but ChatGPT is definitely not able to reason. On a lark, I asked it for the mandolin tab for "Simply the Best" in the key of D. What it gave me was guitar tab (6 strings) of a song I don't recognize. I said mandolins only have 4 strings... ChatGPT just truncated the bottom two strings! I then mentioned that mandolins are typically tuned to GDAE.

To ChatGPT's credit, it re-presented the song in GDAE (although I'm still not sure what song it is!) but re-transcribed it to G (I want the song in D on a GDAE-tuned mandolin!). ChatGPT mentioned that this was standard mandolin tab with the low note on top and the high note on the bottom, which is NOT standard mandolin tab.

I told ChatGPT this and it reversed the tablature to be E on top and G on the bottom... but its explanation remained the same: "Again, the tab is written in standard notation for mandolin, with the bottom line representing the highest-pitched string and the top line representing the lowest-pitched string." No, no, no.

There is clearly no "reasoning" here, no kind of mental model, because a reasoning system would have asked: Which "Simply the Best" (there's more than one)? How is your mandolin tuned (there's more than one way to tune it)? Do you want the melody or the chords? Are you sure you mean the key of D?


Actually, I had a conversation with ChatGPT about it... it had some general advice on how to write a treatment for a film, but nothing useful in particular.


F f’s sake!


I truly enjoyed your article. We need more content like this; separating fact from fiction is crucial. The following article also takes a critical stance on another "innovation", the release of Copilot for Microsoft 365: https://medium.com/silent-observers/co-pilot-for-office-your-data-selfie-in-the-microsoft-cloud-efbdeeaeda95
