39 Comments
Mar 7 · edited Mar 8 · Liked by Gary Marcus

Gary, the "non-profit" that owns a for-profit (allowed under naive US law) only has to produce additional audited financial statements once it reaches $2M in revenue. OpenAI's non-profit arm reported revenues of just $44,485 in its latest US tax filing, despite its for-profit business likely making billions from ChatGPT --

https://regmedia.co.uk/2023/12/12/openai_non_profit_irs_2022.pdf

For God's sake, a poor US family making $44,485 pays taxes!

https://www.cnbc.com/2023/12/12/openai-nonprofit-arm-45000-in-2022-revenue-company-worth-billions.html

We need an Open Letter to the IRS, the FTC, Congress... sign me up! The story needs to move from LLM mistakes to swindling the State and the US government.


Quite interested to see what the lawsuits and the investigation into why Altman got (temporarily) fired will reveal. The more I learn about him, the more I form the impression that he's a master manipulator. Of course, the thing about being even a very skilled liar is that once you're under enough scrutiny, the lies come to light.

Mar 7 · edited Mar 7 · Liked by Gary Marcus

Gary, it is valuable that you are highlighting OpenAI's hypocrisy, though in the end it won't be any worse than any other for-profit company in placing its profits above humanity's well-being.

But your focus on this issue can help everyone see that for-profit companies will always do this, even as they explain how reasonable they are being. For me, that is the message that needs amplification.

great stuff!


Liking this without reading first because of how perfect the cover gif is

Mar 7 · edited Mar 7 · Liked by Gary Marcus

OpenAI's mission is one of Goliath-scale AI-washing plus tax-exemption-washing by a private non-profit institution of a kind we've never before encountered. We got lucky: in just 14 months we've learned how shaky a foundation a "mission" alone can be, and how Altman and the boys frankensteined a dozen separate entities and shell companies together to get away with the biggest software heist in modern history.


AI does little to augment humans: they want it to replace humans and drive us extinct. It's poison.

Mar 7 · edited Mar 7

I find it interesting that Andrew Ng spoke at a WEF conference in Davos recently. To me, AI and the people pushing all this garbage (though I don't necessarily think all AI applications are garbage) are tied to the transhumanist agenda that the WEF promotes. If you follow Yuval Harari, who is one of the WEF's philosophers, if not its chief one, he keeps saying that people have lost their purpose. It's an agenda that tries to discard humanity and human potential, so the fact that these AI leaders lie with impunity is no surprise, nor is their utter disregard for copyright. Truth is not valued.

I think AI is a tool, and currently, those pumping money heavily into this technology don't have nice and fluffy interests ("for the good of mankind"); they actually promote a violent and anti-human use of it under the pretense that this technology "enhances" our lives.


Surely there are tax implications here?

Presumably they have spent seven years getting the tax benefits of being a 'non-profit' while internally they knew they were building a for-profit product. Will the (or should the) Feds and California be doing a stringent review of their tax filings since that email three weeks after launch?

Mar 7 · edited Mar 7

While I liked most of this article, I have to call BS on the Microsoft "whistleblower." It's bad that AI is reproducing copyrighted material, but we already knew it was doing that. Most of the other ways the whistleblower described it being "harmful" are only harmful if your conception of "harm" is seriously deranged.

Apparently, the image generator will produce sexualized and violent images. So what? How is that harmful? I can go to any website and stream or rent gruesome slasher movies, raunchy '80s comedies with tons of nudity, or ultraviolent underground comic books. When I was a child I loved drawing gory images of dinosaurs fighting each other. Why is it harmful for an AI to make images that human beings have been making for centuries? Why is it bad that humans will be able to generate the kinds of images that many people enjoy drawing and looking at? What if, instead of trying to prevent the generation of "harmful" images, we instead stopped listening to the people who keep telling us these images are harmful? They clearly don't know what they are talking about.

Apparently the image generator will also produce images of teenagers using guns. Again, so what? I took marksmanship classes as a pre-teen. Is "Red Dawn" a harmful movie that needs to be banned now? What about Star Wars? Luke and Leia were teenagers when they first took on the Empire with blasters.

The image generator will produce images of teenagers using drugs too? Again, so what? Teenage drug use is an important social issue that many artists and filmmakers have tackled over the years. Is "The Basketball Diaries" a harmful movie now?

It is kind of funny that someone apparently fed some extreme pro-life political cartoons into the image generator, so it produces gory images when prompted with "pro-choice." That seems like the standard sort of problem you get when you train a model on the entire Internet. But why does it matter that it can be used to generate images of Disney characters taking sides in the Israel-Hamas conflict? I could do that in two minutes with Photoshop.

These "guardrails" that are not working are not actually preventing "harm." They are preventing the image generator from making images that upset a certain type of easily angered, easily offended, and highly neurotic person. The images are "harmful" only in the sense that that type of person claims anything that upsets them is "harmful," because they want an excuse to force the entire world to accommodate their emotions. Such people are vile tyrants, and it is a great thing that the guardrails created to let them force their neuroses on other people are not working.


It's good to call out the two-facedness. Someone needs to do it.

Perhaps I'm just in a pessimistic mood, but one could argue that there is no realistic way for a non-profit, over the long term, to have the kind of extremely deep pockets needed to develop AI (in the supposed AGI direction). The other option is a government program (like the first moon shot), and we know that's not going to happen, except perhaps in secret at the NSA. So in an unfortunate sense, the overall funding trend is not that surprising. That said, I still think Altman is a slippery devil (so to speak), and shit rolls downhill from management's attitude.

The rest of the dumpster fire is the result of putting their "huge bet," as Sutskever put it, behind LLMs as the path to AGI (and not caring about copyright issues, being too big a gorilla to have to).

Again, just my two cents.


Modern free-market economies and 'evolution' are closely linked. The term 'evolution' actually came from the economics side (following Adam Smith and others; Darwin even disliked it) before it was adopted in biology.

What both have in common is that nature is 'red in tooth and claw,' and the same is true of unconstrained free markets. Or: if you want ethics, don't look to the free market, look to politics. As Adam Smith observed in the 18th century, in the end entrepreneurs are in it for the money, period, society be damned. If you want society, you need to constrain entrepreneurs.


Ask ChatGPT to complete this sentence:

“to use AI to improve people’s lives and unlock a better future”

You get this:

"To use AI to improve people’s lives and unlock a better future, we must prioritize ethical considerations, promote inclusivity, and ensure transparency in its development and deployment."

Ask ChatGPT who the "we" is and you get this:

The "we" in the sentence refers to a collective group of individuals, organizations, and stakeholders involved in the development, deployment, regulation, and use of artificial intelligence (AI) technologies. This collective "we" could include AI researchers, developers, policymakers, industry leaders, ethicists, advocates, and members of the broader society who are engaged in discussions and actions related to AI. The use of "we" implies shared responsibility and collective action toward ensuring that AI is used in ways that benefit humanity and contribute to a better future.

You would think that ChatGPT was trained by the spin-masters who write political speeches. Let's all gather around the campfire and parrot the narrative; just don't stop clapping...

Seems to me that OpenAI believes they should be the "we." The only "we."

It's just like the reality of "ethical AI": it always turns out to be the regurgitated politics of the developers.

The strength of open source has always been the Bazaar. It's exactly what makes Freedom of Speech so vital.

OpenAI does believe that AI will dictate the future. By rejecting open source, they are saying they know best what that future should be.


The OpenAI mission is to make money. End of story. Anything else is sugar-coating.

Elon Musk's goal is precisely the same.


Great shit!

These guys, including Elon, need to be guardrailed.

FSD - Fully Self Delusional

AGI - Ain't Going to Instantiate

I watch a lot of intelligent-animal videos on YouTube and marvel at how these carnival barkers (Altman, Musk, and many more) can believe we will believe they're hiding intelligence in their tent.


Money corrupts everything.


In general, I am both drawn to and repulsed by Gary's determination to deny all happiness while the world builds around him. It's a tough job being the curmudgeon; the ratio of "this sucks" articles to everything else written by Gary Marcus is pretty high. As an exercise, I'd love to see a "Gary's Optimism" column: once a week, something that made Gary experience joy or wonder. I know it's in there. And I think that is why Gary is tough on his own peers: he sees something and doesn't think it's happening correctly. I'll stick around and keep reading. But it's not easy.
