20 November 2023
Dear European leaders,
The recent events at OpenAI are likely to lead to considerable, unpredictable instability.
The schisms on display there highlight the fact that we cannot rely purely on companies to self-regulate AI, when even their own internal governance can be so deeply conflicted.
Please don't gut the EU AI Act; we need it now more than ever.
Sincerely,
Gary Marcus
Gary Marcus is a leading expert on AI who testified before the US Senate Judiciary Subcommittee. An Emeritus Professor at NYU, he is the author of five books and the founder and CEO of two AI companies, one of them acquired by Uber.
People have been warning about global warming since 1896 (https://en.wikipedia.org/wiki/History_of_climate_change_science), and yet today the UN reported that we're now on track for nearly 3 degrees of warming by the end of this century (https://www.reuters.com/sustainability/climate-energy/climate-track-warm-by-nearly-3c-without-greater-ambition-un-report-2023-11-20/).
The tendency of humans (and tribes thereof) to be motivated primarily by short-term self-interest is so deeply ingrained that, I strongly suspect, we're going to make all the same idiot mistakes with AI/AGI, no matter the frequency or strength of the warnings.
The alignment problem isn't just about technology; humans are misaligned with other humans.
The big question is what specific, sensible, useful rules, at this stage, would help keep the tech safe without being rules for rules' sake.