31 Comments
Mar 30 · edited Mar 30

Everyone in the corporate world does the same thing. What makes us think that OpenAI should be held to different expectations, or to a different moral standard? The fact that they (probably genuinely) want to build good AGI should not be confused with the view that "everything they say must be seen in this light of purity and 'truth', otherwise they are not building good AGI". Reality is dark and gray and dirty and only sometimes clean. We should not be naive. We are not uncovering anything of value by attacking OpenAI. These are just normal human dynamics, and they are not bad by the standards by which we humans have chosen to run "modern societies" (or compared to those of the "third world").

Implausible deniability. “It wasn’t welded down, so it can’t be theft.”

It is sad that most people do not call them out on the half-truths and lies they tell so openly. The media companies are eating it up and are likely in cahoots with them!

Thank you so much for exposing their BULLSHIT!

I remember being told about them when they started, as a possibly good place to work. I looked at them and said no. That non-profit stuff was bogus from the start. I expected legal trouble. Well, I was wrong: they avoided that, and people became rich. I'm glad I said no regardless. I have a life.

OpenAI pretended to be "open" and "research-friendly" until they realized they had found something incredibly valuable. Then, with the help of Microsoft, it all became about ROI.

I get it. But they should stop pretending...

Mar 16 · Liked by Gary Marcus

Reading this, and the linked convo between Gates and Altman, I can't help rephrasing Conrad's exclamation, "The hubris! The hubris!"

tl;dr:

Our system won't work (and, more importantly, we won't make any money) if we can't steal other people's work, but we can't admit that because we'd be sued into oblivion.

This largely reflects the hopeless inefficiency of these learning models. They have to hoover up enormous quantities of data because nobody has yet figured out how to learn this stuff efficiently.

"Other than that, Mrs. Lincoln, how was the play?"

Would be interesting to see BitTorrent traffic into OpenAI's IP addresses.

Mar 15 · Liked by Gary Marcus

No, we can't tell you what we trained it on, because we know we weren't allowed to take it. And besides, that would make it easier for you to know that the emergent capabilities we are claiming are really just data leakage. Look, shiny!

Is this the worst example of misdirection and misleading "openness" from modern companies? It certainly seems to be one of the worst in years, other than perhaps FTX and Theranos.

I've been wondering a lot lately about which of the products I use are training these models. So many of their licenses allow my data and usage to be used for "internal purposes". For example, are all our Google Meet calls being used to train Gemini (or when will they be), and would Google even have to notify us if that were the case? Would training Gemini be equivalent to recording a meeting without consent, or do they (likely) see this as different?

Follow the money, grasshopper. Because it's a learning curve until it's an invoice, then it's a mistake.

This emperor, on the other hand, is almost *all* clothes.

I thought you had a positive one coming next, @Gary Marcus, or was that the best one can say at this point? 😁

Author

The positive one got preempted by the OAI news!

No news, no money. Have to crank the handle.

Looking forward to the bigger piece.
