Discussion about this post

Aaron Turner:

So many questions!

(A) Should the public be worried? -- The public should be worried (in particular) about any AI system that is deployed at scale, so whatever the major AI labs such as OpenAI are doing should be top of this list. All contemporary AI (including the major LLMs such as ChatGPT) is at best only minimally aligned with human values. BY DEFINITION, a minimally-aligned AI system will inexorably inflict societal harm (in parallel with any benefits) when deployed. Therefore a minimally-aligned AI system deployed at global scale (such as ChatGPT) will necessarily inflict societal harm at global scale. It's difficult to predict in advance exactly what form that harm will take; we will doubtless find out gradually over the next 10-20 years. The only good news in this regard is that contemporary LLMs are simply too dumb by themselves to represent a significant x-risk.

(B) Will the (new) board at OpenAI take note? -- Internally, for sure; these are not stupid people.

(C) Will they do anything to address the situation? -- Only if they perceive it to be a problem relative to their (highly rationalised) objectives. It may take external pressure, such as robust legislation, to significantly temper their (and thus the company's) behaviour (ditto re the other AI labs, of course).

(D) Will OpenAI’s status as a nonprofit remain in good standing? -- In my assessment, OpenAI is already firmly under the control of profit-motivated interests. It's entirely possible, however, that key people within the company, even board members, have not quite worked this out yet.

(E) Will [this] help Elon Musk’s case? -- Quite possibly.

(F) Does Sam care? -- I believe he cares deeply. I also believe HE believes he's doing the right thing, which (when Kool-Aid is involved) is not necessarily the same thing as actually doing the right thing.

(G) Is this what he wanted? -- I suspect not, but HE needs to work this out (ditto re the rest of the board).

(H) Is OpenAI’s push to commercialization coming at the expense of AI safety? -- 10,000%.

Joy in HK fiFP:

One has to look long and hard, magnifying glass in hand, to find examples of a profitable business willing to sacrifice revenue over safety concerns. Ford Pinto, anyone?

31 more comments...
