27 Comments

Self-regulation of potentially the most powerful technology in existence. How the fuck did we get here?

1. Capitalism becoming more powerful than politics (and thus setting the rules). 1.a. HGI (i.e., people believing the free market is best for everything, including ethics).

Usually this is fine, or at least "we can muddle through," but in this situation, definitely not.

A repost for those who are not satisfied with the current situation:

Please join us in #PauseAI if you want to take action - our Discord is https://discord.gg/qh2f2FwB

And you are needed right now, because our leaders are not being the adults in the room. Despite 80% of Americans supporting AI regulation, thanks in part to seeing how Sam Altman operates, DC is fast-tracking the AI race instead, with nary a thought given to regulation; in fact, preemptive anti-regulation clauses are being inserted into upcoming bills.

This is completely against us, and completely for the tech companies, for people like Altman. This is literally your disempowerment happening RIGHT NOW (imagine how we might be treated after we are further disempowered).

But you are not powerless. Take action with us.

Call your representatives. Send them emails. Let them hear us, and not just the big tech money.

At this point, we need effective grassroots action. Again, our Discord is here and together we can make a difference:

https://discord.gg/qh2f2FwB

Finally, the objective analysis we all needed. Keep it coming.

I just drilled down through the links to discover the entire list of members of the DHS "AI Safety & Security Board". Here is the link: https://www.dhs.gov/news/2024/04/26/over-20-technology-and-critical-infrastructure-executives-civil-rights-leaders

Not a single person from academia on the board. I am shocked but not surprised.

"Not a single person from academia on the board" - you say that like it's a bad thing.

Actually, I was wrong; there is one. Of the 19 others, it's a mixture of profound technical ignorance and deep conflicts of interest. Like having fossil fuel executives and lobbyists at a climate change conference. Or allowing the accused to sit on their own jury.

Yeah, that's pretty bad, I agree. We need more ethicists and hardcore scientists (with deep expertise in how technology affects lives, not just the "freaky lab coat and math" side of things) on these kinds of committees. I would have chosen CTOs and CSOs (chief security officers) instead of CEOs. Even the CSO choice is arguable, since many of them come from military and federal backgrounds, and it would be hard to weed out those with an itch to kill or harm (not to overgeneralize the CSO role, but isn't that what Hollywood and Netflix tell us, that CSOs tend to be spies? And then people who actually work in government tell us that "whatever you see in Hollywood or Netflix about XYZ is literally like that in real life," so I'm just saying).

I am not sure that having only scientists is a good thing either. However much you know about AI safety or deep reinforcement learning, it's the product, ethics, and policy people who tend to know how technology affects real life.

It needs to be a mix of people; that's the point, not just powerful CEOs. It's actually shameless to pick only, or mostly, CEOs for these kinds of committees.

Toner is believable because her story has been consistently backed up by others.

Altman, meanwhile, sure likes to play dumb whenever he looks bad. I'm sure he'll be shocked, shocked, shocked if it turns out he does have equity.

And Swisher can make hay when she finally has her revelation that Altman has changed and is no longer worthy of trust (that's been her arc with Musk; why fix what's working?).

I think it's worth linking to the full podcast episode. Not only did the board learn about the launch of ChatGPT from Twitter, but, according to Toner, Altman also neglected to tell the board that he owned the OpenAI Startup Fund.

Link: https://open.spotify.com/episode/4r127XapFv7JZr0OPzRDaI?si=9kCQAEzMTFa7EaypeH0zjQ

You're a good man, Gary Marcus. Thanks for your solid reporting on the ongoing mess at OpenAI and its head honcho, Mr. Altman.

Yeah, this is all really fucked up. And we are now fucked as a result -- no safety team at OpenAI and full steam ahead on AGI. Yup. Fucked.

Luckily, there's some reason to believe that OpenAI, in fact all the current players, aren't going to be able to build AGI. Well, first they'd have to figure out what it is.

I suppose Sam's no-good, very-bad news week continues.

In terms of regulation, ironically, Sam is a big champion of more AI regulation, though (I expect) largely because it'll shut out competitors by raising costs and barriers. I've been told that's too cynical a take on him, but a lot of what you've covered suggests he isn't as earnest as he makes himself out to be.

On a slight tangent, unfortunately, I can't really see regulators striking the right balance, including not doing dumb stuff like cementing in big incumbents. So, among the choices, self-regulation may be the best we'll get.

I sometimes feel, watching a sports event, "Is it possible for both teams to lose?"

That's how I feel watching Helen Toner vs. Sam Altman. I despise them both.

For Toner, I picture the hieratic Harvard professors described by Taleb:

https://albertcory50.substack.com/p/lecturing-birds-on-how-to-fly

She's a parasite on the world of technology. In Taleb terms, she can tawk and that's it.

Altman would require a much longer Note, so maybe some other time.

She seems like a good egg to me, and her TED talk was totally reasonable. I have seen no reason to doubt her good faith.

I wasn't impugning her character so much as saying she's one of those "elite" persons who think they're born to run things and have never actually accomplished anything. Exactly the sort of person Taleb was talking about.

Frankly, the November 2023 events at OpenAI highlight significant shortcomings in board governance and competence. The board demonstrated a lack of inquisitiveness and seemed to expect complete transparency from the CEO without proper engagement, which is not how effective oversight works. This is especially true given that OpenAI operates with a unique structure: a non-profit board overseeing a for-profit entity. In my view, Sam Altman's primary accountability should be to Microsoft and the investors involved in the capped-profit arm, as they have a direct stake in the commercial success of the organization.

In addition, the board's behavior suggests a disconnect and a lack of relationship-building. It seems their interactions were primarily virtual, through platforms like Google Meet, rather than the face-to-face meetings that are critical to nurturing trust and open communication. Expecting Sam to be forthcoming with a board that seems disengaged is unrealistic. Furthermore, we know that Ilya Sutskever, a key figure in the company up until the misstep, was aware of ChatGPT's development; he even commented on its potential reception, indicating prior knowledge. So it's implausible that the board first heard about ChatGPT through social media. More plausibly, Sam and Ilya underestimated the impact ChatGPT would have and therefore did not prioritize informing the board about it.

A proactive board would routinely ask critical questions at every meeting: "Are there any safety and security issues we should be concerned about? Are there any upcoming product launches we need to know about? Sam, you mentioned X before, but now you're saying Y -- could you clarify that for us?" Such questions are essential to ensure alignment and understanding within the organization. Instead, the board's decision to abruptly terminate Sam without clear communication or justification, especially given OpenAI's high valuation and the potential backlash, indicates a significant governance failure, and outright incompetence.

It is puzzling how individuals who seem to lack basic competence (firing Sam over Google Meet, which, sorry, was cowardice) rose to such influential positions on the previous OpenAI board. I would say Ilya did have the competence but was coerced into the decision by the other board members.

Sam Altman is widely respected and had the support of 95% of his team, a testament to his leadership and vision. Ilya Sutskever remained an important asset to the organization right up until his misstep. The board's unrealistic expectations and misunderstanding of their role, given OpenAI's unique structure, contributed to this debacle. Moving forward, it is critical that Sam address the concerns of Microsoft and other key investors to restore stability and confidence.

Man, I want the fearless, high-agency attitude that Sam has. I need a mentor like that in my life. He is not the bad guy here. This is what it takes to develop AGI, in my opinion. The guy survived a coup; he is a great leader, and he must be a good person for everyone to stand up for him like they did (financial motives aside, if he were bad, people would not have stood up for him).

Maybe we should be listening to Sam instead of the people who got terminated or quit their mission.

A machine world like the Metaverse, run by Zuckerberg, and a machine AI to inhabit it, run by Altman. What could go wrong for any of us? Sigh.

I used to be a big fan of Kara and met her a couple of times, but as time goes on I see her more as an enabler of the "tech-bro" culture she professes to hate, and a big part of the hype machine for AI and other tech that is probably not needed, given the problems in the world.

A focus on particular people, companies, and technologies is a loser's game. Even if we were to fix Altman, and fix OpenAI, and fix AI overall as a technology...

The knowledge explosion will keep right on going, giving us ever more things we have to fix. More, and more, and more. Faster, and faster, and faster. Bigger, and bigger, and bigger.

A focus on particular people, companies and technologies is not a solution, it's just another problem, because such a limited focus distracts us from confronting the larger picture which will decide our fate.

You’re a good man, Gary Marcus. Thanks for your upright coverage of all this!

Honestly, the more likely explanation for why you were blocked by Swisher is that X and Substack force your content into the feed whether we follow you or not. I have the same issue; I just haven't bothered to block you, though I have tried everything else I can think of to stop your tweets from ending up at the top of my feed, and alas, there you are whenever I open the X app. Also, I am fairly confident Kara doesn't care one bit about what you think of her, so don't worry, she isn't "punishing" you. But it's funny to think you actually worry that she is 😂😂

If you had done 30 seconds of research on X you would have seen that this was wildly out of line, and ignorant to boot. Another irresponsible comment like this and you won't be welcome here anymore.

Kara blocking Marcus is not consistent with your contention that she doesn't care what he thinks of her. She reports on AI and he is an AI guru, like what he says or not. My guess is that Kara Swisher is trying to transition from reporter to pundit, and she sees Marcus as an obstacle.

We were in the middle of a discussion on the matter when she blocked me; the motive seemed very clear. It wasn't a random moment.
