42 Comments
Sep 23, 2023 · Liked by Gary Marcus

“Society is placing more and more trust in a technology that simply has not yet earned that trust.”

Society does not place trust; it simply gives way, abandoning itself to this technology. That is because of the fundamental weaknesses underlying our society: corporate greed, the indolence of authorities, and users' pursuit of less effort and more comfort. I am afraid that, globally, our society will quickly be very happy with AI and will not want to hear about the critical threats. Companies will make big money, governments will have the ultimate tool for controlling their populations, and ordinary people will feel supported and smarter with it. People will be pleased by AI systems, will get used to them, and will come to depend on them. The game already seems to be over.


"As a society, we still don’t really have any plan whatsoever about what we might do to mitigate long-term risks. Current AI is pretty dumb in many respects, but what would we do if superintelligent AI really were at some point imminent, and posed some sort of genuine threat, eg around a new form of bioweapon attack? We have far too little machinery in place to surveil or address such threats."

This is the real existential threat, in my opinion. It is impossible to detect that a superintelligent AI is imminent; scientific breakthroughs do not forecast their arrival. It is also possible that some maverick genius working alone, or some private group, has clandestinely solved AGI unbeknownst to the AI research community at large and the regulatory agencies. A highly distributed AGI in the cloud would be impossible to recognize. I lose sleep over this.

Sep 22, 2023 · Liked by Gary Marcus

The sheer size of these LLMs (GPT-3 has 175 billion parameters), trained on trillions of elements of human-generated text, could well be a dead end. Retentive Networks may solve the 'cost of generating' problem that transformer architectures have, but that would require completely new, extremely expensive pre-training and fine-tuning runs. My guess is that this is not really economically feasible (which is more likely an argument for no GPT-5), so it seems possible that what exists now is what this phase will produce. It is not unlikely that GPT fever will break, but it might still take years given the amount of conviction out there.
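To make the 'cost of generating' point concrete, here is a minimal toy sketch (plain NumPy, a single head, made-up dimensions and decay constant, no real model): a transformer must attend over an ever-growing key/value cache, so each new token costs work proportional to the tokens already generated, while a retention-style recurrence folds history into a fixed-size state, so each new token costs the same.

```python
import numpy as np

d = 8  # toy model width, chosen arbitrarily

def transformer_decode_step(q, K_cache, V_cache):
    """One decode step with causal attention over the whole KV cache.
    Work grows linearly with the number of tokens generated so far."""
    scores = K_cache @ q / np.sqrt(d)            # touches every past token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V_cache

def retention_decode_step(q, k, v, state, decay=0.9):
    """One decode step with a retention-style recurrence: history is folded
    into a fixed-size (d, d) state, so the work per token is constant."""
    state = decay * state + np.outer(k, v)
    return q @ state, state

rng = np.random.default_rng(0)
K_cache, V_cache = np.empty((0, d)), np.empty((0, d))
state = np.zeros((d, d))
for n in range(1, 6):
    q, k, v = rng.normal(size=(3, d))
    K_cache, V_cache = np.vstack([K_cache, k]), np.vstack([V_cache, v])
    _ = transformer_decode_step(q, K_cache, V_cache)   # O(n) per token
    _, state = retention_decode_step(q, k, v, state)   # O(1) per token
    print(f"step {n}: transformer attends over {len(K_cache)} cached tokens; "
          f"retention state stays {state.shape}")
```

The sketch says nothing about output quality, only about decode-time arithmetic, and it ignores the pre-training cost entirely, which is exactly the part that makes a RetNet-style redo so expensive.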

I am going to pay attention to this in my upcoming talk in London at EAC Europe 2023.


Thanks for helping to keep such concerns alive. A few thoughts...

You write, "As a society, we still don’t really have any plan whatsoever about what we might do to mitigate long-term risks."

There is a plan; it just has not been made conscious and explicit.

Here, once again, we can learn from the history of the first technology of existential scale, nuclear weapons. Nukes present the most serious and imminent risk to our civilization and, generally speaking, we're largely ignoring them. So the plan for the nuke threat, a plan which is not conscious and explicit, is to wait for the next detonation to blow up our denial disease. And then maybe we'll pay attention.

My guess is that we will follow this same path with AI. We won't take the AI threat seriously until some real world event converts our relationship with that threat from abstract and merely intellectual, to the emotional realm where we actually live and make decisions. Here's evidence to support this claim...

All the talk about AI governance gives the impression that we are starting to take the AI threat seriously. Please observe how easily this fantasy can be demolished.

If you will, imagine that I run an AI service that America and the EU decide should be illegal. Here's my response. I pay off some third world politician to give me cover, move my headquarters and servers to that impoverished country where there basically is no rule of law, and then keep right on providing my AI service to all of humanity over the Internet.

How do those drafting AI laws in the West intend to enforce whatever AI regulations they come up with on nuclear weapons states who decline to participate? How do they intend to enforce their AI regulations on Mexican drug cartels who are largely beyond the reach of any government? How do they intend to enforce their regulations on millions of their own citizens who will increasingly flood the Net with their own AI creations?

Whatever form AI governance may take, it will be like the lock on your front door. The lock keeps your nosy neighbors out of your house. It's worthless against anyone willing to break a window.


Here's what we should try to govern. The knowledge explosion process that gave birth to AI. Here's why.

Please note the following pattern...

1) Nuclear weapons - no idea how to make them safe

2) Genetic engineering - no idea how to make it safe

3) Artificial Intelligence - no idea how to make it safe

4) And AI isn't the end of history. It can't be the last power of vast scale which we will discover. More is surely coming.

What is the predictable logical outcome of a pattern of developing powers of vast scale, which we have no idea how to make safe? Once this question is answered honestly, then our governing strategy should become clear. We need to focus not so much on particular threatening technologies one by one by one, but on the process generating more threatening technologies than we know how to make safe. How do we govern THAT?

While this is a historic question none of us can answer at this time, there is a place to start. Most of the threat these technologies present arises from a single, easily identified source: violent men. They are the ones most likely to convert technologies that could be very useful into dangerous threats.

Here's another way to look at the source of the threat. We insist on developing revolutionary new powers as fast as we possibly can, but we refuse to adapt to the new environment created by that process by making revolutionary changes in the way we think. This is a classic wishful-thinking, "have our cake and eat it too" human logic failure.

Governing bodies, experts, commentators, and the general public will all say that meeting the threat presented by violent men is too hard, and that we can't be bothered to think that much.

And nature will say, "Ok, no problem, it's your choice. If you don't want to adapt to a changing environment, we the natural world have a plan for that."

Apologies for making this general point repeatedly, here and elsewhere. The way to stop that from happening again is to either meet and defeat these claims, or ban me.

Without attending to the process that gives birth to technologies of vast scale...

AI governance is meaningless.


Are you worried about the potential for AI to misinform *you*? Or are you just worried about it for other people? The first case sounds like something that should concern the rest of us. The second just sounds arrogant and elitist. Passing legislation to protect *you* is one thing. Passing it to protect the rest of us seems patronizing.


(Sure, misinformation is just one of the AI risks that are reasonable to worry about, but it's the one most people talk about when the topic comes up.)

I stopped worrying as much about misinformation when I realized I could ask an LLM for a list of the factual claims and the opinions in an article or YouTube video transcript. Making the emotional tone and production of an article or video irrelevant is a huge step towards a healthier personal information diet.
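Here is a minimal sketch of that workflow, assuming the OpenAI Python client (openai>=1.0), an API key in OPENAI_API_KEY, and an illustrative model name; any capable chat model would do, and the output is of course itself model-generated, so it can be wrong.

```python
# Ask an LLM to separate checkable factual claims from opinions in a saved
# article or transcript. The prompt wording and model name are illustrative.
from openai import OpenAI

client = OpenAI()

INSTRUCTIONS = (
    "Read the text below and return two plain lists:\n"
    "FACTUAL CLAIMS: statements that could in principle be checked.\n"
    "OPINIONS: value judgments, predictions, and emotional framing.\n"
    "Ignore tone and production; report only the content."
)

def claims_and_opinions(text: str, model: str = "gpt-4o-mini") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    transcript = open("transcript.txt").read()  # e.g. a saved video transcript
    print(claims_and_opinions(transcript))
```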

The AP and news orgs need to make plugins and data sets for "things that actually happened" and you can wipe away at least one class of misinformation.

Are we going to do this? Probably not until there's a huge misinformation event with real consequences, but then we'll adjust because human brains are learning/don't-touch-that-hot-stove machines.


What is your opinion on the European approach? A complete legal framework: you want to do business here? Comply, please.

Sep 22, 2023 · edited Sep 22, 2023

Bard is indeed erratic. It could access my drive and answer basic questions, but it gets easily confused.

However, I think we should still be incredibly excited.

The LLM is the first serious attempt at a very large-scale, flexible reasoning engine. It is in a whole new league compared to image classifiers, language translation, symbolic reasoning, and any pure software solution.

This will take time. 5-10 years. The same as we see with self-driving cars. AGI is not around the corner.

Yet think of the potential. Companies can and do hire millions of workers to tune these systems' outputs, purely via demonstrations based on text, images, and video. An LLM can learn from all existing knowledge (video is next), and it can co-opt existing software as tools. It can write code to verify itself, and it can learn to find citations to support its claims.
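As a toy illustration of the "write code to verify itself" idea, here is a minimal sketch, assuming the OpenAI Python client and an illustrative model name; it runs model-written code, so in anything real the check belongs in a sandbox.

```python
# Ask the model for an answer plus a one-line Python check of that answer,
# then run the check. eval() of model output is only acceptable as a toy.
from openai import OpenAI

client = OpenAI()
question = "What is 17 * 23?"

reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            f"{question}\n"
            "Reply with exactly two lines:\n"
            "ANSWER: <the answer>\n"
            "CHECK: <one Python expression that evaluates to True "
            "if and only if the answer is correct>"
        ),
    }],
).choices[0].message.content

answer_line, check_line = reply.strip().splitlines()[:2]
check = check_line.split(":", 1)[1].strip()
print(answer_line)
print("self-check passed:", eval(check))  # sandbox this in any real setting
```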

This is all very cool, and the paradigm has shifted, compared to just 5 years ago.


A view from an Australian defence official about the negative effects of AI:

https://cosmosmagazine.com/technology/ai-truth-decay-policy-general/

Maybe cohesive, down-to-earth, non-greedy places like Australia will choose to regulate and protect themselves against LLMs.


Marcus writes, "Next time: what should we do?"

ANSWER: We could shift our focus from AI (and any other particular technology) to the knowledge explosion process generating all threatening technologies of vast scale. We could address the SOURCE of the problem, instead of SYMPTOMS of the problem.

EXAMPLE: We might imagine that we are standing at the end of an Amazon warehouse assembly line, and the products rolling down the line keep getting bigger and bigger, and coming faster and faster. At first we can try to meet this challenge by working smarter and harder. But if the assembly line keeps accelerating, sooner or later we would have to shift our attention from the individual products to the assembly line itself.


The EU regulation will indeed become law in all of the union the moment it is finished, though. Which is pretty close?


Thanks for this article. It's an excellent rundown of the state of things, especially regarding policy and internal industry development.

Inspired me to write this article evaluating AI from a UX perspective. Includes nuts-and-bolts recommendations that real product managers can add to real backlogs. Would love your thoughts.

https://open.substack.com/pub/lotusrose/p/improving-the-ai-experience?r=1x82u&utm_campaign=post&utm_medium=web


AI is the new Internet, ChatGPT is the new Google, and actually, AI adds nothing to what humans have created.

The point is the same for people: is there something useful stored on the Internet that helps people be happier and communicate with others joyfully and effectively in their own languages? That is the goal for humankind, the same one as 30 years ago when the Internet was created.

General Intelligence System - https://www.linkedin.com/pulse/general-intelligence-system-michael-molin-2f/


Why not use a concurrent data source that is reliable, say Wikipedia, to allow the model to fact-check its outputs and make sure they are coherent with objective reality as written by humans? The model infers an output and then double-checks its information against reliable data? Something like retrieval-augmented generation? If the model makes up a citation to a non-existent paper, it has to cite a paper that is publicly available on the internet or in a published journal? Or are you saying that this is simply not possible with any LLM that uses a transformer architecture, due to the way they fundamentally operate?

See https://research.ibm.com/blog/retrieval-augmented-generation-RAG
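For concreteness, here is a minimal sketch of the Wikipedia-grounded version of that idea, assuming the requests and OpenAI Python libraries and an illustrative model name; a real system would retrieve full passages with a proper index rather than page summaries, and grounding reduces, but does not eliminate, made-up citations.

```python
# Minimal retrieval-augmented-generation sketch: search Wikipedia, hand the
# retrieved text to the model, and require it to answer only from that text,
# citing page titles. Endpoints are Wikipedia's public APIs; the model name
# is illustrative.
import requests
from openai import OpenAI

client = OpenAI()
WIKI_SEARCH = "https://en.wikipedia.org/w/api.php"
WIKI_SUMMARY = "https://en.wikipedia.org/api/rest_v1/page/summary/{}"

def retrieve(query: str, k: int = 3) -> list[dict]:
    """Fetch the top-k Wikipedia page summaries for a query."""
    hits = requests.get(WIKI_SEARCH, params={
        "action": "query", "list": "search", "srsearch": query,
        "format": "json", "srlimit": k,
    }).json()["query"]["search"]
    pages = []
    for hit in hits:
        title = hit["title"].replace(" ", "_")
        summary = requests.get(WIKI_SUMMARY.format(title)).json()
        pages.append({"title": hit["title"], "text": summary.get("extract", "")})
    return pages

def grounded_answer(question: str, model: str = "gpt-4o-mini") -> str:
    pages = retrieve(question)
    context = "\n\n".join(f"[{p['title']}]\n{p['text']}" for p in pages)
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content":
                "Answer ONLY from the sources below and cite page titles in "
                "brackets. If the sources do not contain the answer, say so."},
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(grounded_answer("When was the EU AI Act first proposed?"))
```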


I am grateful that you signed the pause letter.
