36 Comments

Gary, The "non-profit" that owns a for-profit (allowed by naive US law) has to reach a $2M revenue to trigger additional audited financial statements. See OpenAI's non-profit arm reported revenues of just $44,485 in its latest US tax filing, despite its for-profit business likely making billions from ChatGPT --

https://regmedia.co.uk/2023/12/12/openai_non_profit_irs_2022.pdf

For God's sake, a poor US family making $44,485 pays taxes!

https://www.cnbc.com/2023/12/12/openai-nonprofit-arm-45000-in-2022-revenue-company-worth-billions.html
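The arithmetic of that disclosure threshold is stark. Here is a toy sketch of the comparison in Python, assuming (my reading of the comment, not a statement of actual IRS rules) that the $2M trigger refers to gross annual revenue:

```python
# Toy comparison of the reported figure against the audit trigger.
# The $2M threshold is the figure cited above; whether it maps to
# gross revenue under actual IRS rules is an assumption here.
AUDIT_THRESHOLD = 2_000_000
reported_revenue = 44_485  # non-profit arm, 2022 IRS filing

if reported_revenue < AUDIT_THRESHOLD:
    pct = reported_revenue / AUDIT_THRESHOLD
    print(f"${reported_revenue:,} is {pct:.1%} of the threshold; "
          "no additional audited statements are triggered.")
```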

We need an Open Letter to the IRS, the FTC, Congress... sign me up! The story needs to move from LLM mistakes to swindling the state and the US government.

Now imagine when they replace millions and millions of jobs. Can those workers really make $45k a year anymore, working for $16 an hour? Can they even buy a house? Can they afford rent for a family?

Software companies will move away from human employees to AI doing the work. Why? Just to drive revenue and profits. That's it, no other reason. Money is the end of us all.

Quite interested to see what the lawsuits and the investigation into why Altman got (temporarily) fired reveal. The more I learn about him, the more I form the impression that he's a master manipulator. Of course, that's the thing about being even a very skilled liar: once you're under enough scrutiny, the lies come to light.

Gary, it is valuable that you are highlighting OpenAI's hypocrisy, though in the end it won't be any worse than any other for-profit company in placing its profits above humanity's well-being.

But your focus on this issue can help everyone see that for-profit companies will always do this, even as they explain how reasonable they are being. For me, that is the message that needs amplification.

great stuff!

Liking this without reading first because of how perfect the cover gif is

OpenAI's mission is one of Goliath-scale AI-washing plus tax-exemption-washing by a kind of private non-profit institution we've never before encountered. We got lucky: in just 14 months we've learned how shaky a foundation a "mission" alone can be, and how Altman and the boys frankensteined a dozen separate entities and shell companies together to pull off the biggest software heist in modern history.

AI does little to augment humans: they want it to replace humans and drive us extinct. It's poison.

I find it interesting that Andrew Ng spoke at a WEF conference in Davos recently. To me, AI and the people pushing all this garbage (though I don't think all AI applications are garbage) are tied to the transhumanist agenda that the WEF promotes. If you follow Yuval Harari, who is a (or the) WEF philosopher, he keeps saying that people have lost their purpose. It's an agenda that tries to discard humanity and human potential, and the fact that these AI leaders lie with impunity is no surprise, nor is their utter disregard for copyright. Truth is not valued.

I think AI is a tool, and currently those pumping money heavily into this technology don't have nice and fluffy interests ("for the good of mankind"); they actually promote a violent and anti-human use of it under the pretense that this technology "enhances" our lives.

Surely there are tax implications here?

Presumably they have spent seven years getting the tax benefits of being a "non-profit" while internally they knew they were building a for-profit product. Will (or should) the Feds and California be doing a stringent review of their tax filings, given that email sent three weeks after launch?

They got away with it, and they're still getting away with the goods. (See my earlier comment: the non-profit arm's reported $44,485 stays far below the $2M audit trigger.)

The "non-profit" that owns a for-profit (allowed by naive US law) has to reach a $2M revenue to trigger additional audited financial statements. See OpenAI's non-profit arm reported revenues of just $44,485 in its latest US tax filing, despite its for-profit business likely making billions from ChatGPT --

https://regmedia.co.uk/2023/12/12/openai_non_profit_irs_2022.pdf

Expand full comment

Audit them anyway

While I liked most of this article, I have to call BS on the Microsoft "whistleblower." It's bad that the AI is reproducing copyrighted material, but we already knew it was doing that. Most of the other ways the whistleblower described it as being "harmful" are only harmful if your conception of "harm" is seriously deranged.

Apparently, the image generator will produce sexualized and violent images. So what? How is that harmful? I can go to any website and stream or rent gruesome slasher movies, raunchy '80s comedies with tons of nudity, or ultraviolent underground comic books. When I was a child I loved drawing gory images of dinosaurs fighting each other. Why is it harmful for an AI to make images that human beings have been making for centuries? Why is it bad that humans will be able to generate the kinds of images that many people enjoy drawing and looking at? What if, instead of trying to prevent the generation of "harmful" images, we stopped listening to the people who keep telling us these images are harmful? They clearly don't know what they are talking about.

Apparently the image generator will produce images of teenagers using guns. Again, so what? I took marksmanship classes as a pre-teen. Is "Red Dawn" a harmful movie that needs to be banned now? What about Star Wars? Luke and Leia were teenagers when they first took on the Empire with blasters.

The image generator will produce images of teenagers using drugs, too? Again, so what? Teenage drug use is an important social issue that many artists and filmmakers have tackled over the years. Is "The Basketball Diaries" a harmful movie now?

It is kind of funny that someone apparently fed some extreme pro-life political cartoons into the image generator, so it produces gory images when prompted with "pro-choice." That seems like a standard LLM problem you get when you feed it the entire Internet. But why does it matter that it can be used to generate images of Disney characters taking sides in the Israel-Hamas conflict? I can do that in two minutes with Photoshop.

These "guardrails" that are not working are not actually preventing "harm." They are preventing the image generator from making images that upset a certain type of easily angered, easily offended, and highly neurotic person. They are "harmful" in the sense that that type of person claims anything that upsets them is "harmful" because they want an excuse to force the entire world to accommodate their emotions. Such people are vile tyrants, it is a great thing that the guardrails created to allow them to force their neuroses on other people are not working.

You raise interesting points, and it's definitely an important topic. But I think the whistleblower was not taking issue with whether this usage is okay from an ethics perspective; rather, the model's guardrails are defined so as to prevent these kinds of images from being generated, and the guardrails are very easy to bypass, which casts doubt on the controllability of the technology as a whole.

If there were no guardrails at all, the model would probably be deemed unsafe for the general public. The issue, as I see it, is that Microsoft SAYS there are guardrails, thus creating an illusion of safety, whereas in truth the guardrails are very ineffective.

Windows also comes with a web browser pre-installed.

Both can be used by children to access unlawful and disturbing content.

One comes with parental controls and whitelisting.

The other comes with an "E for Everyone" rating.

It's good to call out the two-facedness. Someone needs to do it.

Perhaps I'm just in a pessimistic mood, but one could argue that there is no realistic way for a non-profit to sustain, over the long term, the kind of extremely deep pockets needed to develop AI (in the supposed AGI direction). The other option is for a government program to develop it (like the first moon shot), and we know that's not going to happen, except perhaps in secret at the NSA. So in an unfortunate sense, the overall funding trend is not that surprising. That being said, I still think Altman is a slippery devil (so to speak), and shit rolls downhill from management's attitude.

The rest of the dumpster fire is the result of putting their "huge bet," as Sutskever put it, behind LLMs as the path to AGI (and not caring about copyright issues, being too big a gorilla to have to).

Again, just my two cents.

That's why I said "supposed". :)

Modern free-market economies and 'evolution' are closely linked. 'Evolution' is a term that came from the economics side (following Adam Smith et al.; Darwin even disliked it) before it was adopted on the biological side.

What both have in common is that nature is 'red in tooth and claw,' and the same is true of unconstrained free markets. Or: if you want ethics, don't look to the free market; look to politics. As Adam Smith observed in the 18th century, in the end entrepreneurs are in it for the money, period, society be damned. If you want society, you need to constrain entrepreneurs.

I suspect both Musk and Altman believe most in themselves, so much so that they are convinced that whatever is in their own best interest is for the greater good. Not a criticism, necessarily, just a reality for people with excessive power and little to no accountability.

And then we treat everything as a binary, instead of realizing that balance comes from creative business and effective government regulation. It’s exhausting.

Ask ChatGPT to complete this sentence:

“to use AI to improve people’s lives and unlock a better future”

You get this:

"To use AI to improve people’s lives and unlock a better future, we must prioritize ethical considerations, promote inclusivity, and ensure transparency in its development and deployment."

Ask Chat who the "we" is and you get this:

The "we" in the sentence refers to a collective group of individuals, organizations, and stakeholders involved in the development, deployment, regulation, and use of artificial intelligence (AI) technologies. This collective "we" could include AI researchers, developers, policymakers, industry leaders, ethicists, advocates, and members of the broader society who are engaged in discussions and actions related to AI. The use of "we" implies shared responsibility and collective action toward ensuring that AI is used in ways that benefit humanity and contribute to a better future.

You would think that ChatGPT was trained by the spin-masters who write political speeches. Let's all get together around the campfire and parrot the narrative, just don't stop clapping...

Seems to me that OpenAI believes that they should be the "we". The only we.

It's just like the reality of "ethical AI": it always turns out to be the regurgitated politics of the developers.

The strength of open source has always been the Bazaar. It's exactly what makes Freedom of Speech so vital.

OpenAI does believe that AI will dictate the future. By rejecting open source they are saying they know best what the future should be.

Better, perhaps, if "we" ask it what "improve people's lives," "benefit humanity," and "contribute to a better future" mean.

Exactly. It's something like laws against hate speech: the problem is who defines what hate speech is.

Money corrupts everything.

In general, I am both drawn to and repulsed by Gary's determination to deny all happiness while the world builds around him. It's a tough job being the curmudgeon. The ratio of "this sucks" articles to "written by Gary Marcus" bylines is pretty high. As an exercise, I'd love to see a "Gary's Optimism" column: once a week, something that made Gary experience joy or wonder. I know it's in there. And I think that is why Gary is tough on his own peers: he sees something and doesn't think it's happening correctly. I'll stick around and keep reading. But it's not easy.

This is a very unfair perspective. You are cherry-picking statements and situations that fit your narrative, i.e., what you already want to feel and believe: that you do not trust or like OpenAI or the evolution of its mission. What about the stories of the thousands or millions of people whose lives have changed for the better with the help of GPT-4 (ChatGPT)? That is, learning new things, escaping the psychological trap of modern recommendation systems, spending less time on social media, not being exposed to toxic or harmful content (or being hypertargeted with obnoxious or coercive recommendations and ads) when searching or navigating the web. ChatGPT has improved my life and the lives of others.

I do not blame them for moving from a fully open-source perspective to a hybrid or even fully closed one. Just because you want to profit from your research and knowledge does not mean you want to harm the world. No one is turning a blind eye to internalized bias, and you should know that it is extremely difficult to deal with.

It's important to approach criticism of OpenAI and GPT-4 with a balanced perspective. While the evolution of OpenAI's mission from a fully open-source model to a more nuanced approach raises legitimate concerns, it's important to recognize the broader context of technological and ethical challenges in AI development. OpenAI's transition reflects a complex landscape in which innovation must be balanced with safety, ethical considerations, and financial sustainability.

The positive impact of GPT-4 on individuals and society is undeniable. From educational advancements to mental-health support and improved productivity, GPT-4 has enabled significant benefits. The path to responsible AI is iterative, requiring constant vigilance and adaptation. Criticism of OpenAI's profit motive within a capitalist framework overlooks the need for financial resources to drive innovation and societal benefit. Ethical AI development and the pursuit of profit are not mutually exclusive.

Finally, the role of AI in society transcends the binary of AI versus human value. AI technologies such as GPT-4 serve as tools to augment, not replace, human creativity and intelligence; the goal seems to be to harness the potential of AI to complement and enhance the human condition.
