Kara Swisher, Sam Altman, and the OpenAI Board
Helen Toner finally explains what the board was thinking
I’ve written here before about Altman and his apparent lies, and why I thought the (nonprofit!) board was right to call him out, and occasionally about how Kara Swisher blocked me on X for saying so. (Her view was that Sam was basically innocent and the board was cloddish, writing “A clod of a board stays consistent to its cloddery”—and that they had no legitimate reason to question Sam’s candor.) Every bit of evidence that has come out since has seemed to support my take and undermine Kara’s.
But until now we never heard directly from anyone on the board.
Helen Toner, fired from the board in the post-Sam-firing fallout, has finally spoken, on The TED AI Show podcast, and it’s a doozy. I’ve transcribed one bit:
For years, Sam had made it really difficult for the board to do that job [of following the non-profit mission for humanity] by, you know, withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board.
I can’t share all the examples, but to give an example of the kind of thing that I’m talking about, it’s things like when ChatGPT came out in November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter.
That’s insane. And underscores everything I have said about the company straying from its nonprofit mission.
But that’s not all:
On multiple occasions he gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically impossible to know how well those safety processes were working or what might need to change.
That’s pretty scary too, given what OpenAI aspires to build.
Toner continues, bearing out reporting from multiple outlets:
I wrote this paper which has been, you know, way overplayed in the press… The problem was that after the paper came out Sam started lying to other board members in order to push me off the board. So it was another example … that just like really damaged our ability to trust him.
She also revealed that the board had contemplated firing Sam over trust issues even before that. When the board said Sam was “not consistently candid,” what the board meant was that Sam was not consistently candid.
Small wonder that a lot of the safety team left over the last couple of months.
Toner was pushed out for her sin of speaking up (so was McCauley, and perhaps Sutskever as well). I was punished by Swisher for my sin of speaking up for Toner and the other members of the board.
All of that is, to use precise technical language, “fucked up.” Prominent journalists like Swisher beating up whistleblowers, and those who simply ask that we reserve judgment, is not cool.
And it’s problematic not just for the individuals involved, but for all of humanity. Because we can’t have the CEO of so potentially powerful a company behaving that way.
And we can’t have “independent” media running interference for their powerful friends. As Paris Marx pointed out, Swisher is chummy enough with Altman that Altman interviewed her on her book tour; she should have disclaimed her friendship and recused.
To my knowledge, Swisher hasn’t commented publicly on Toner’s revelations. I hope she will take the interview as a chance to rethink her position.
§
Fortunately Swisher’s conflicted-by-friendship performance was not typical; others at The New York Times, Business Insider, The Wall Street Journal, The Information, and The New Yorker dug more deeply, eventually unearthing and documenting Sam’s deceitful moves against Toner.
Putting Toner’s disclosures together with the other lies from OpenAI that I documented the other day, I think we can safely put Kara’s picture of Sam the Innocent to bed.
When the board said Sam was “not consistently candid”, they meant, shockingly, that Sam was not consistently candid.
End of story.
§
But wait, that’s not quite all. The thing that has really stuck in my craw (part of my list from the other day) was the way Altman lied by omission while standing next to me under oath at the Senate: he said he had no equity in OpenAI and that he just loved his job, even as he held a stake in Y Combinator, which held a stake in OpenAI, and owned an investment firm called the OpenAI Startup Fund.
Breaking news item 2 (counting Toner’s statement as item 1) is that, perhaps under pressure, Sam has now divested his stake in that investment firm (under what terms I don’t know—did he profit?).
Hooray for that divestment, both of control and equity! And kudos to Axios’s Dan Primack who first noted the issue in February. Probably Sam would still own it if not for Primack’s digging, and we would have one more conflict of interest to worry about.
But there’s another item I just spied this afternoon in The Information, in an article called “OpenAI CEO Cements Control as He Secures Apple Deal.”
Reading between the lines, Sam may well wind up with direct equity in the end after all.
Perhaps little with Sam is ever quite as it seems.
Gary Marcus would prefer not writing about Altman, but Altman is consolidating more and more power and seeming less and less on the level. Particularly as long as the US is relying essentially on self-regulation and Sam is aspiring to build ever more powerful models with less and less independent oversight, that should concern us all, deeply.
Self-regulation of potentially the most powerful technology in existence. How the fuck did we get here?