Everyone in the corporate world does the same thing. What makes us think that OpenAI should live up to different expectations or occupy a different moral ground? The fact that they (probably genuinely) want to build good AGI must not be conflated with the view that "everything they say must be seen in this light of purity and 'truth', otherwise they are not building good AGI". Reality is dark and gray and dirty and only sometimes clean. We should not be naive. We are not uncovering anything of value by attacking OpenAI. These are just normal human dynamics, and they are not bad by the current standards by which we humans have chosen to operate "modern societies" (or bad compared to those in the "third world").
Implausible deniability. “It wasn’t welded down, so it can’t be theft.”
It is sad that most people do not see through the half-truths and lies that they tell openly. The media companies are eating it up and are likely in cahoots with them!
Thank you so much for exposing their BULLSHIT!
I remember being told about them when they started, as a possibly good place to work. I looked at them and said no. That non-profit stuff was bogus from the start. I expected legal trouble. Well, I was wrong: they avoided that, and people became rich. I'm glad I said no regardless. I have a life.
OpenAI pretended to be "open" and "research-friendly" until they realized they had found something incredibly valuable. Then, with Microsoft's help, it all became about ROI.
I get it. But they should stop pretending...
Reading this, and the linked convo between Gates and Altman, I can't help rephrasing Conrad's exclamation, "The hubris! The hubris!"
tl;dr:
Our system won't work, and more importantly we won't make any money, if we can't steal other people's work. But we can't admit that, because we'd be sued into oblivion.
This largely reflects the hopeless inefficiency of these learning models. They have to hoover up enormous quantities of data because nobody has yet figured out how to learn this stuff efficiently.
"Other than that, Mrs. Lincoln, how was the play?"
Would be interesting to see BitTorrent traffic into OpenAI's IP addresses.
No, we can't tell you what we trained it on, because we know we weren't allowed to take it. And besides, that would make it easier for you to know that the emergent capabilities we are claiming are really just data leakage. Look, shiny!
Is this the worst example of misdirection and misleading "openness" from modern companies? It certainly seems to be one of the worst in years, other than perhaps FTX and Theranos.
I've been wondering a lot lately about which of the products I use are training these models. So many of their licenses allow my data/usage to be used for "internal purposes". For example, are (or when will) all our Google Meets be used to train Gemini, and would Google even have to notify us if that is the case? Would training Gemini on a meeting be equivalent to recording it without consent, or do they (likely) see this as different?
You inspired this poll: https://x.com/garymarcus/status/1768623567892291780?s=61
Follow the money, grasshopper. Because it's a learning curve until it's an invoice, then it's a mistake.
This emperor, on the other hand, is almost *all* clothes.
I thought you had a positive one coming next, @Gary Marcus. Or was that the best one can say at this point? 😁
The positive one got preempted by the OAI news!
No news, no money. Have to crank the handle.
Looking forward to the bigger piece.