11 Comments
Nov 20, 2023

People have been warning about global warming since 1896 (https://en.wikipedia.org/wiki/History_of_climate_change_science), and yet today the UN reported that we're now on track for 3 degrees of warming by the end of this century (https://www.reuters.com/sustainability/climate-energy/climate-track-warm-by-nearly-3c-without-greater-ambition-un-report-2023-11-20/).

The tendency for humans (and tribes thereof) to be primarily motivated by short-term self-interest is so deeply ingrained in human nature that (I strongly suspect that) we're going to make all the same idiot mistakes with AI/AGI, no matter the frequency or strength of the warnings.

The alignment problem doesn't just extend to technology --- humans are misaligned with humans.


The big question is what specific, sensible, useful rules at this stage will help keep the tech safe while not being rules for rules' sake.


Gary is right to sound the alarm to governments, and in particular in Europe, since the U.S. is largely making only tentative moves (Biden's executive order a few weeks ago). The Nov/Dec issue of Foreign Affairs has an article by James Manyika and Michael Spence. Manyika is at Google and Stanford, and Spence is a Nobel Prize-winning economist. Similar to what Mustafa Suleyman argues in his book, "The Coming Wave," there is a need for active government regulation and oversight of the use of AI systems. Manyika and Spence go as far as to point out that, left to capitalism alone, AI will be used to cut labor and jobs and enrich a small elite, leading to more income inequality (which might lead to dangerous forms of social activism).

The recent comments by the Biden administration on Elon Musk's endorsement of antisemitic content are an example of how weak government can be against tech giants with power in the marketplace and in the minds of consumers/voters. Suleyman expresses deep concern about how slow governments and legislation can be, and thus suggests more consumer activism and civic engagement. By going to Microsoft and leaning toward the capitalist/money-making side of AI development, Altman has heightened the need for regulation. (Altman may believe that there is no way to stop the AI/capitalist juggernaut, so he may as well join it and try to limit the damage as he becomes richer. One thing that is unclear in his career decisions is what AI-related investments are already in his private portfolio, and how those are influencing his choices.) Also, forms of resistance are needed by those likely to be affected, like artists, authors and others whose intellectual property is being sucked up without compensation or credit.


The EU can't be relied upon even to wipe its own ass nowadays.


Politicians determining technology does NOT amount to an improvement. I suspect the main problem with OpenAI was that the odd non-profit/profit combo was never likely to work.


According to Perplexity AI, the EU AI Act could result in total regulatory capture, freezing development and the pace at which real AGI could be achieved before our competitors on the world stage get there.

https://www.perplexity.ai/search/16593d70-46a2-42f6-aae8-bf1a79c1c6bd

Moreover, a "risk vs. safety" scale is one that lends itself to bureaucratic paralysis, a sickness which has prevented larger institutions like Microsoft and Google from arriving at an LLM comparable to GPT-3 or GPT-4 on their own over the last six years.

The stifling of open development of AI will only force emergent AGI underground, outside of any scrutiny whatsoever.


Some thought should also be given to what the Chinese government will do. Their move against cryptocurrencies was, to my mind, one of the starting events that eventually led to an unraveling of that game (though it is still being played, due to the criminal and war-related activity it can support). The Chinese have kept ChatGPT out and have made it clear that some key tech decisions will be made in the interest of the state and not by private industry. Benefits will have to be shared with all citizens (of course, people can disagree on how they are doing this and on its sincerity). But China is a key player for obvious reasons: its population, the size of its market, its tech industry. The Chinese put their foot down on experiments with bio-engineering a few years ago, when one of their scientists experimented with human embryos. Putting a brake on capitalist methods is what the Chinese are good at.


I think it is immediately obvious that AI companies cannot regulate themselves. I went over OpenAI's proposal for "responsible AI" in detail and found their principles laughable. AI can never be regulated, and it's ridiculous to think that it can be.

While I applaud your stance of caution, I think we need more than caution. We need to step up and demand more severe actions towards AI, with the ultimate aim of killing it entirely.

Nov 21, 2023 (edited)

Europe making its own rules, even good rules, or any other regional bloc in some part of the world making its own rules, is not satisfactory. Global, worldwide solutions between blocs are necessary. I will now partly repeat a comment I have already made. A wild development of AI-driven systems by private companies, according to their own particular targets, is the worst prospect possible. The only way to keep AI applications under some control is to submit them to governments' supervision, to the states' agencies. This is not a very good prospect, because of the various political, national, ideological and religious aspects of ruling in different countries, but it is the least bad one in my opinion.

Malevolently, improperly or recklessly used, advanced AI systems can become a weapon of mass destruction, not in the purely material sense but in the social and economic ones. So they must be handled as such a potential weapon, through international regulations and treaties, and be discussed by governments in a global forum, just as they negotiate about trade and other critical geopolitical issues. Governments monitoring and controlling each other over AI, in order to maintain a geopolitical equilibrium and a global socio-economic stability that is in fact beneficial to all countries, seems to me the only realistic long-term solution.
