110 Comments
Ken Kovar

OpenAI was a non-profit from the start; I guess it will end its life that way 😎

Annie

This idea has given me warm fuzzy feelings 😅

Eddy Borremans

Non-profit is something different from negative profit.

Bill Donahue

Not to get all accountant-y, but OpenAI is and will likely always continue to be a non-profit ;-)

Aaron Turner

All mouth and no trousers.

hossboss

all ass no face.

John Dorsey

OpenAI turning to erotica is a complete joke. Didn't Altman proclaim all these lofty aspirations for how AI would benefit humanity? How is churning out erotica going to benefit humanity?

Ken Kovar

You have to ask? 😂

Bruce Cohen

I wish it was a joke. Just in the last year we’ve seen thousands of ChatGPT users who’ve fixated on the nonexistent person on the other end of the conversation as if they were a therapist, a confessor, or a lover. Now imagine users whose sexual fantasies have been fulfilled, maybe even physically manifested, by internet-enabled sex toys. They’ll be a captive audience, unwilling or unable to untangle themselves from the addiction imposed by the chatbots. I think that’s what Sam has in mind, and I’m not at all sure there’s any way to build guardrails to prevent it.

Bryan Steele

If social media design is about creating product addiction, then why should we expect AI to have been built with a different mindset? If mindset-capture is the point, that answers a lot of the questions about how and why AI works the way it does. Maybe these morons thought they could build a product with a more sophisticated kind of addiction, to be sold as a cure for social ills. It's not much of a stretch; it's like using doctors in white lab coats to sell cigarettes.

Bruce Cohen

They’ve been building towards this addictive capability for a long time, almost 2 decades. When Rosalind Picard published Affective Computing in 2007, she was persuaded to join a startup whose first product was a service to enhance the ability of focus groups to capture the attitudes and opinions of subjects using biometric data. Their longer-term goal was to complete the loop by using that data to control attitudes and opinions. And here we are.

Now imagine a chatbot trained for speech recognition that also picks up voice features such as stress, pitch, and cadence and infers emotional state from them. This idea keeps me up at night.

To her credit, Picard left that startup citing ethical concerns.

Bryan Steele

Thank you for that context. I just don't understand why people are in denial about human nature and what happens when you allow small groups to concentrate power.

Renato M. E. Sabbatini, PhD

All this is going to happen. But, again, there will be many competitors, not least the big porn sites that lost no time in adding AI-generated porn, and that have 90% free access and millions of subscribers, a base built on decades of presence and investment. So “erotica” (aka porn, which Altman pathetically renames to make it more palatable) is not going to be a big revenue source for OpenAI, and it will only mar the brand’s perception among serious users and regulators. To anyone, this seems just a desperate, counterproductive move. It shows that OpenAI is destroying its most vaunted ethical guardrails.

C. King

John Dorsey and Bruce Cohen: Yes--they don't call it "addiction" for nothing. (I doubt there is ANYONE in advertising who remains unaware of that little human ditty as something to take advantage of. And BTW, let's get rid of those pesky regulations . . . bribe them, threaten to kill their children, whatever.)

Also, John: Don't I remember some "for the good of all" promises from some years back flowing from the mouth of someone named "Zuckerberg"?

The Mask Paradox

Yes and that sort of went away with the update in June.

Bryan McCormick

There is a greater fall yet to come. The "F" word - Fraud - is making the rounds in finance. A lot of people who should have known better walked into the propeller blades eyes wide open. Warehouses full of never-to-be-racked H100s while Huang pivots to LLM robots? Good luck with that, since China has a five-year lead on making things that work - fulfilled promises rather than hucksterism. Keep an eye out for talk of "double counting" and "strategic supply shortages". My biggest concern? That politicians get talked into the trillion-dollar backstop story and We The People yet again end up footing the tab. Why? The AI Race Gap, which does not exist in any real on-the-ground way, cannot be tolerated. National priority, you see. Get your placards ready.

Oaktown

How about calling your reps NOW and warning them you will not support them if they vote to bail out the richest sociopaths on the planet with OUR tax dollars. Never again!!!

Cooper

Seeing Huang on stage the other day with those robots was actually comical. People aren’t buying this time 🤣

Gerben Wierda

“Slippery Sam”.

Though he turned out to be right when he said hallucinations aren’t a bug, they’re a feature. And he was right when he stated LLMs would become very convincing before they actually became very good. Ilya is clearly naive (in the mould of his professor Hinton). Sam seems a weird, slippery mixture of scam artist and naive shaman.

Amy A

I’m starting to think Oleg is sama.

Oleg Alexandrov

I've been called a chatbot before, so I guess this is progress.

Now, seriously, this is a discussion on AI. If you have counter-points, make them. If you're trying to make wisecracks, do it at your own expense.

Oleg Alexandrov

No. Not tone policing. Picking at people. This is a technical discussion. Have technical points.

Oleg Alexandrov

Sam has been very bold, with the introduction of chatbots, the shift to reasoning models, and now with his crazy bet of growing by orders of magnitude in a handful of years. The dude is a risk-taker. Maybe this time he'll try to grab more than he can handle. We'll see.

Gerben Wierda

If you’re a risk taker with ‘other people’s money’, are you really a risk taker?

You know what is funny? People think entrepreneurs are good at taking risks. I read about research years ago (probably in New Scientist) showing that entrepreneurs are actually very poor at risk assessment. They systematically underestimate risks (which is why they take them more easily). That holds too for the risks they take with society. I guess swallowing loads of ketamine, or having strong convictions about the antichrist or the supremacy of pure selfish individualism, doesn’t help too much either.

Oleg Alexandrov

And yes, risking other people's money is what risk takers have to do. You don't have that money. You have the vision. You sell your vision to other risk takers. I personally believe Altman is overshooting here. But this is how things work.

Oaktown

OK then. Let the VC investors accept the consequences of their bad investments and Scam Altman accept the consequences for his lies; no way can they make a case for a taxpayer bailout. People who have gobs of money typically spend it wastefully and unwisely because they can afford to lose it. So lose it. Maybe it will improve their judgment.

Oleg Alexandrov

The "ketamine" guy is somebody else, whom we know to be a nut.

Investors know what they are getting into. They don't need our sympathy. They usually spread their risks, and end up a lot richer than any of us.

To add: sure, society suffers too when some people take huge risks. Nothing new; it goes back a hundred years. Laws are made on occasion to constrain at least the cheating. Tightening things way too much would result in a lack of innovation.

Alex Tolley

"Investors know what they are getting into."

I don't think so. There is herd following (FOMO). Theranos is a fairly recent example of a lack of true due diligence by major investors. WeWork, mentioned by Gary, is another. If investors really knew what they were doing, we wouldn't have stock bubbles and crashes.

Oleg Alexandrov

Investment is high-risk, high-reward. Investors are people too. Some make really dumb mistakes.

There is fraud, there is WeWork, and there are people who get very good returns if they play their cards right. Somebody who hangs on for 20 years or more likely got some things right.

Martin Machacek

I guess investors are humans after all :). They are vulnerable to following fads, some more than others, which (among other things) makes the difference between good (successful) investors and the rest.

Oaktown

Well, let him take risks on his own dime and at his own expense. He has a lot of nerve trying to make a case for a government bailout. His bad judgment, his BS hype, and he can suffer the consequences for all of it.

Stephen Bosch

I once asked Sam Ctrl-Alt-Delete-man how much energy was needed to train and operate AI, and where it was going to come from.

He said "one nuclear power plant should suffice."

An unserious answer from an unserious person.

Oaktown

Wanna bet he's never even considered how to dispose of the toxic radioactive waste from nuclear power plants?

TK-2042

Fuck off dawg, that's not even a problem. You bury it in a mountain.

Oaktown

You clearly don't know what happened to that idea, nor do you know what you're talking about. Look up Yucca Mountain and read all about it. Better do your research before you insult and swear at people: https://en.wikipedia.org/wiki/Yucca_Mountain_nuclear_waste_repository

hugh

How about “Sam Slopman”?

Jeremy Harshman

Can we speed it up please.

Gramsci

How can all of these people earn such obscene amounts of money to create bullshit? Are they stupid, which invalidates any reasonable defense of such earnings, or are they just con men? Since some of them, such as those running Alphabet and Microsoft, have advanced degrees from some of the best universities, con men is the only answer.

C. King

Gramsci: AND/OR they are naively hopeful with $$$ for eyes, and do not understand (and apparently did not learn) what intelligence and knowledge really are in real people or how they work.

Bill Johnston

As I commented a few weeks ago on a previous Altman fail, it couldn't happen to a nicer guy. :) Now, if only the market can find a way to encourage the other 'LLM tech geniuses' to be more responsible and less focused on profiteering... As the Nobelist Christian Lous Lange observed more than 100 years ago, 'Technology is a useful servant but a dangerous master.'

C. King

Bill Johnston: You could say the same about politicians.

Bill Johnston

True! The more dangerous thing about the tech bros, though, is that the politicians want to follow them (and their money) rather than leading independently.

C. King

Bill Johnston: Exactly that. And there is a public hazard in the circle of corrupt power going on with Big Money putting "paid lobbyists" in the mix. (aka: BM)

Bill Johnston

FWIW, here's a link to a story I saw yesterday that suggests there are other ways to attack the situation, pun intended: https://www.theregister.com/2026/01/11/industry_insiders_seek_to_poison/

Belden Menkus

Love the insightful analysis and countertrend views. Re Sam Altman, he is of a type: charismatic, an amazing ability to get people to like him and go along with his plans, paints an exciting big vision that seems wondrous, slippery on the details, fast and loose with the money. (I suspect others could add to the profile.) The type is easy to spot once you know what to look for - and with very few exceptions - they all go wrong, usually walking away and blaming someone else. Warning notices should be sent to all Boards everywhere, with a legally mandatory reading at the start of each Board meeting.

Cristian

Can’t wait for OpenAI IPO… when the smoke and mirrors are gone.

Dirk Groot

Thanks for the interesting read! I was wondering, to what extent does the "making ends meet" problem also apply to other big AI companies like Google, Anthropic, Meta, etc?

Gary Marcus

Google has huge income separately, so not a problem for them, or for Microsoft, Amazon, or Meta.

Anthropic could face challenges.

Amy A

Risk to them in a stock pullback, I’d wager.

Cristian

They are facing the same problem, but they have deep pockets.

Gerald Harris

Gary, like you and learning from you, I was one of the early people in the cautious crowd on OpenAI and LLMs. I heard Sam speak at the Commonwealth Club in 2024 with Joy Buolamwini from MIT and found his performance troubling. I was also in attendance at an event sponsored by Peter Leyden (the Great Progression series) at Shack 15 in San Francisco where a Google executive said that LLMs were just "auto-complete on steroids."

I questioned in one of my posts whether OpenAI could learn from the experience of America Online (which was an early promoter of email as the new/new thing). It was obvious to me that there were no barriers to entry to compete with OpenAI on LLMs and no reason they would not become a commodity. It was also concerning to me to call LLMs artificial intelligence when the key intelligence involved was that of the user (interpreting the results coming out of an LLM).

How much of a financial disaster this turns out to be may be buffered by the fact that the power industry could not ramp up as fast as sought, and maybe the rush to build data centers can be slowed. Trump mentioned that he does not want regular consumers to foot the power bills (already happening). So we need to continue to be vigilant in watching this unfold.

Aaron G

Brutal. You missed the Musk lawsuit.

It seems that if Altman had kept OpenAI as a non-profit, none of this would have happened. His own employees and investors would not have scattered to make their own firms...

Lesson -- NGOs should stay NGOs.

Gary Marcus

I linked the Musk lawsuit.

Ken Kovar

But the field was too hot for OpenAI to stay a non-profit organization; too much investment was and is needed to develop the models!

Aaron G

I do not know that with certainty. Feeding America raised $3.9 billion in goods, services, and dollars in 2022. Who knows what a more integrity-driven OpenAI NGO would have done within the hot investment climate. Goodwill and integrity are potentially more valuable, yet underappreciated.

Ken Kovar

The problem is that tech and nonprofits are not a good mix when money and investment are needed. Open source and things like net neutrality are examples of tech-oriented non-profits, but AI is not. There is too much competition and money to be made 😏. And would the investors in AI, like Microsoft, wait patiently until OpenAI said it was ready? The board actually tried to oust Altman, and it was the board that ended up replaced! So no, the non-profit model was not a good fit 😁

Gramsci

Someone needs to turn their eye on AI in medicine (and I am certainly not capable of it). Your MRI doesn't take as long because AI has enhanced whatever the machine has done so far. Will this lead to over-diagnosis and over-treatment? Both false positives and false negatives? Will it just create lots of needless anxiety, since treatments, which may carry high risks of their own, don't start until symptoms occur, as opposed to an incidental finding?

What about AI diagnosis? It may be accurate to say that at this moment AI can diagnose better than physicians. However, AI is "trained". Who will advance that training if new research obsoletes the original AI training? Doctors won't be able to, since they have abdicated that skill to AI, and it atrophies without use.

Ken Kovar

That’s a real problem if the doctors really stop using their skills 🤔

Aaron G

The FDA is watching. I am waiting for the grey area that has occurred, with users treating it as a medical device. That line has been crossed.

Oaktown

The FDA has been gutted by worm brain.

Aaron G

Summaries... they unintentionally cross the diagnosis line.

Amy A

AI is only more accurate in highly synthetic settings that don’t reflect real-world interactions. People keep repeating it as a fact, and it isn’t. Otherwise the AI salesmen would have fired their doctors.

Jonathan Grudin

Is there convincing evidence that AI can diagnose better than physicians? And in what areas? In 2016, Geoff Hinton predicted that radiologists would very soon be out of business. He was wrong, and in 2023 he acknowledged that and said, “I believe that in 10 years they'll be routinely used to give a second opinion and maybe in 15 years they'll be so good at giving second opinions that the doctor's opinion will be the second one.” And radiology was the area that looked most promising.

Oleg is not very convincing on the data, but I agree that it is preferable to be civil, stick to data, and not exude hatred toward others. Three or four people were dumped on today, not because their views are exceptional but because they have been famous. The historical symbolic-AI leaders that Gary admires were duplicitous in hyped but successful efforts to get government funding. They too became famous. But in my experience they weren't bad people; their karmic balances were in the normal range.

Gramsci

(Who is Oleg?) From what I have read, AI can supposedly read scans better because of the human tendency to be biased. But to be honest, I'm beginning to distrust a lot of things. Have a look at Retraction Watch and see how much science has become junk. Or search "Implausible results in human nutrition research" by John Ioannidis. Now you can find research that says consuming high-cholesterol foods doesn't matter, as your body can regulate cholesterol. Those whose bodies don't may need a statin. Saturated fats may increase LDL. It turns out, for the most part, that everyone's mileage may vary. In addition, most studies now talk in terms of risk factors, risk ratios, and confidence intervals. I'd rather take my chances in a casino; fair dice will always have just one way to roll boxcars regardless of who is rolling.