36 Comments

“Society is placing more and more trust in a technology that simply has not yet earned that trust.”

Society does not place trust; it simply gives way, abandoning itself to this technology because of the fundamental weaknesses underlying our society: the greed of companies, the indolence of authorities, and users' pursuit of comfort with ever less effort. I am afraid that, globally, our society will quickly be very happy with AI and will not want to hear about the critical threats. Companies will make big money, governments will gain the ultimate tool for controlling their populations, and ordinary people will feel supported and smarter. People will be pleased by AI systems, will get used to them, and will come to depend on them. The game already seems to be over.

Expand full comment

“Society is placing more and more trust in a technology that simply has not yet earned that trust.”

To put it more broadly...

Society places its trust in a "more is better" relationship with knowledge, which made sense in the long era of knowledge scarcity but has become irrational in a very different modern era characterized by knowledge exploding in every direction at an ever-accelerating pace.

We want our 21st-century technology to race ahead, but we want the way we think to remain in the 19th century.

Expand full comment

Our basic way of thinking goes back farther than the 19th century. I think humankind's main guideline since the start of the first civilizations can be expressed with the following motto: “To be as comfortable as possible with as little effort as possible.” That is why we are so vulnerable to technology; we can't resist it even when it comes with very serious, even potentially catastrophic drawbacks. Human intelligence has always been directed at making work less arduous and more beneficial. AI responds perfectly to this orientation and appears to be the possible ultimate solution to this demand. Ultimate not only in the sense of best but also of last. If AI really improves into some kind of AGI, that will be our last major achievement as humans. Afterwards, artificial intelligence will progressively replace human intelligence, with or without our consent, and the machine will become the creating force, shaping a new world. But what will the machine's main guideline be?

Expand full comment

Yes, agreed, the "more is better" relationship with knowledge goes back much farther than the 19th century. Perhaps the 19th century was the last century in which such a simplistic philosophy made sense.

My guess is that if AGI comes, it will amplify both the best and worst of us, its parents, in the same way we humans scale up the best and worst of our ape ancestors.

Expand full comment

"As a society, we still don’t really have any plan whatsoever about what we might do to mitigate long-term risks. Current AI is pretty dumb in many respects, but what would we do if superintelligent AI really were at some point imminent, and posed some sort of genuine threat, eg around a new form of bioweapon attack? We have far too little machinery in place to surveil or address such threats."

This is the real existential threat, in my opinion. It's impossible to detect that a superintelligent AI is imminent; scientific breakthroughs do not announce their arrival in advance. It is also possible that some maverick genius working alone, or some private group, has clandestinely solved AGI unbeknownst to the AI research community at large and the regulatory agencies. A highly distributed AGI in the cloud would be impossible to recognize. I lose sleep over this.

Expand full comment

"It is also possible that some maverick genius working alone or some private group may have clandestinely solved AGI unbeknownst to the AI research community at large and the regulating agencies."

That's not unlikely. When you hang out on AGI forums, you notice all kinds of people: some sound like lunatics convinced that "my new super neural network is ALL you need," but others seem far more grounded, with sane scientific doubt and good technical skills, and seem to intuitively see a path forward and are working on it. And those are just the vocal and gregarious ones; there must be more. Sure, intuition misled many in the 1960s, but the newcomers seem to have some awareness of the wrong assumptions made back then. One simple difference from before: video game players are naturally trained in ontology engineering, since video games represent simple ontologies that players have to deconstruct and parse to beat the game.

And that's only the beginning, the AGI problem will probably attract more and more minds as time goes by, if only because of the media coverage.

In my opinion, reality is not that complicated. The first layers of the onion are thin but tough to get through, yet those first layers determine the subsequent ones. AGI is coming, and it might be coming faster than we thought.

Expand full comment

Yes. There is excellent reason to believe that the fundamental principles of AGI are simple and relatively easy to implement. It can probably be demonstrated on a small scale with a desktop computer. Scaling it is a mere engineering problem with known solutions. It can happen later.

There is no reason in my opinion that a single individual (an AGI Newton if you will) cannot crack AGI with a small budget. And it can happen at any time. One thing is certain: AGI will not come from the deep learning community.

Expand full comment

Agree on the whole, but I wouldn't be so certain about the deep learning community not achieving AGI, because whatever the path to AGI is, their combined pressure is so strong that once they hit a wall, they'll naturally flow in different directions and perhaps spill over into the symbolic spectrum (think Yann LeCun). I don't think they should be underestimated; listen to Ilya's technical talks, he seems to be a monster of intelligence and insight.

And let's not forget that the neural networks in the human brain achieved AGI. I see (neuro)symbolic architectures as shortcuts and, in a way, as potentially above-human-level architectures.

Expand full comment

Well, I don't want to get into a long discussion due to lack of time but I am convinced that the continued success of deep learning is the main reason that the mainstream AI community will never crack AGI. They are stuck in a local optimum of their own making. The only way to be free of this optimum is to refuse to go in. Deep learning will not be a part of the AGI solution in my opinion.

Expand full comment

The sheer volume of the LLMs (GPT-3's 175 billion parameters) trained on trillions of elements of human-generated text could well be a dead end. Retentive Networks may solve the 'cost of generating' problem that transformer architectures have, but that would require completely new, extremely expensive pre-training and fine-tuning runs. My guess is that this is not really economically feasible (which is more an argument for no GPT-5), so it seems possible that what there is now is what this phase will produce. It is not unlikely that GPT fever will break, but it might still take years given the amount of 'conviction' out there.
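
To make the 'cost of generating' point concrete, here is a toy sketch (my own illustration, not RetNet's actual math): a transformer decoder has to attend over an ever-growing cache of past tokens at each generation step, while a retention-style recurrence carries a fixed-size state, so its per-token cost does not grow with context length. The hidden size and decay value below are arbitrary assumptions.

```python
# Toy illustration only (hypothetical sizes; not RetNet's actual formulation).
import numpy as np

d = 64                                   # hidden size (assumed)
rng = np.random.default_rng(0)

def transformer_step(new_x, kv_cache):
    """Attend over everything generated so far: work grows with sequence length t."""
    kv_cache.append(new_x)               # the cache keeps every past token
    K = np.stack(kv_cache)               # shape (t, d)
    scores = K @ new_x / np.sqrt(d)      # O(t * d) work at step t
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ K                   # weighted sum over past tokens, shape (d,)

def retentive_step(new_x, state, decay=0.9):
    """Recurrent retention-style update: fixed-size state, O(d^2) work per token."""
    state = decay * state + np.outer(new_x, new_x)   # (d, d), independent of t
    return state @ new_x, state

cache, state = [], np.zeros((d, d))
for _ in range(4):
    x = rng.normal(size=d)
    y_attn = transformer_step(x, cache)      # cost rises as the cache grows
    y_ret, state = retentive_step(x, state)  # cost stays constant
```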

I am going to pay attention to this in my upcoming talk in London at EAC Europe 2023.

Expand full comment

Thanks for helping to keep such concerns alive. A few thoughts...

You write, "As a society, we still don’t really have any plan whatsoever about what we might do to mitigate long-term risks."

There is a plan, it's just not been made conscious and explicit.

Here, once again, we can learn from the history of the first technology of existential scale, nuclear weapons. Nukes present the most serious and imminent risk to our civilization and, generally speaking, we're largely ignoring them. So the plan for the nuke threat, a plan which is not conscious and explicit, is to wait for the next detonation to blow up our denial disease. And then maybe we'll pay attention.

My guess is that we will follow this same path with AI. We won't take the AI threat seriously until some real world event converts our relationship with that threat from abstract and merely intellectual, to the emotional realm where we actually live and make decisions. Here's evidence to support this claim...

All the talk about AI governance gives the impression that we are starting to take the AI threat seriously. Please observe how easily this fantasy can be demolished.

If you will, imagine that I run an AI service that America and the EU decide should be illegal. Here's my response. I pay off some third world politician to give me cover, move my headquarters and servers to that impoverished country where there basically is no rule of law, and then keep right on providing my AI service to all of humanity over the Internet.

How do those drafting AI laws in the West intend to enforce whatever AI regulations they come up with on nuclear weapons states who decline to participate? How do they intend to enforce their AI regulations on Mexican drug cartels who are largely beyond the reach of any government? How do they intend to enforce their regulations on millions of their own citizens who will increasingly flood the Net with their own AI creations?

Whatever form AI governance may take, it will be like the lock on your front door. The lock keeps your nosy neighbors out of your house. It's worthless against anyone willing to break a window.

Expand full comment

There could be international cooperation and talks about the uses of these technologies. We don’t know what forms agreements would take, but doing nothing isn’t an answer either.

Expand full comment

Here's what we should try to govern. The knowledge explosion process that gave birth to AI. Here's why.

Please note the following pattern...

1) Nuclear weapons - no idea how to make them safe

2) Genetic engineering - no idea how to make it safe

3) Artificial Intelligence - no idea how to make it safe

4) And AI isn't the end of history. It can't be the last power of vast scale which we will discover. More is surely coming.

What is the predictable logical outcome of a pattern of developing powers of vast scale, which we have no idea how to make safe? Once this question is answered honestly, then our governing strategy should become clear. We need to focus not so much on particular threatening technologies one by one by one, but on the process generating more threatening technologies than we know how to make safe. How do we govern THAT?

While this is a historic question none of us can answer at this time, there is a place to start. Most of the threat these technologies present arises from a single, easily identified source: violent men. That's who is most likely to convert technologies that could be very useful into dangerous threats.

Here's another way to look at the source of the threat. We insist on developing revolutionary new powers as fast as we possibly can, but we refuse to adapt to the new environment created by that process by making revolutionary changes in the way we think. This is a classic wishful-thinking, "have our cake and eat it too" human logic failure.

Governing bodies, experts, commentators, and the general public will all say that meeting the threat presented by violent men is too hard, and that we can't be bothered to think that much.

And nature will say, "Ok, no problem, it's your choice. If you don't want to adapt to a changing environment, we the natural world have a plan for that."

Apologies for making this general point repeatedly, here and elsewhere. The way to stop that from happening again is to either meet and defeat these claims, or ban me.

Without attending to the process that gives birth to technologies of vast scale...

AI governance is meaningless.

Expand full comment

It seems like a society of control and surveillance is inevitable; I'm sure access to lots of resources, chemicals, and materials will be forbidden to everybody in the far future as more and more accessible destructive technologies are uncovered.

Just in case, are you aware of the vulnerable world hypothesis?

Expand full comment

I wasn't aware of this hypothesis until your comment, so thanks. A quick Google search revealed...

=============

https://nickbostrom.com/papers/vulnerable.pdf

"The vulnerable world hypothesis (VWH) is the view that there exists some level of technology at which civilization almost certainly gets destroyed unless extraordinary preventive measures are undertaken."

=============

That makes sense to me.

What I imagine happening, at some date I certainly can't predict, is the same thing that happened with the Roman Empire. The Roman Empire dominated the ancient world for centuries. Those living at that time probably thought it would last forever. But eventually the Roman Empire collapsed under the weight of its own internal contradictions. A period of darkness followed. And from that darkness emerged a new, more advanced world order.

My guess is that this process of collapse and renewal will be repeated many times over thousands of years before we finally learn how to live on this planet.

Expand full comment

Actually, that part is a bit extreme and pessimistic. The best part of the paper, to me, is how it introduces, with great imagery, the notion of "black balls": (rough summary from rough memories) each technology we develop is a ball we pick out of a bag, and these can be white balls or black balls (and probably shades in between). A black ball would be a very destructive technology that requires very little material or knowledge to implement, like a bomb made of soap and water in the future. We haven't found any black ball yet, but that doesn't mean black balls don't exist. What happens when we find one? Are we ready to tackle the challenge of preventing almost 10 billion humans from realizing that black ball?

AI should exponentially accelerate the number of balls coming out of the bag, and AI is also a catalyst for turning white balls into black balls (for example, by giving you a step-by-step tutorial on how to create your own deadly viruses in your garage).

The paper puts it better for sure.

Expand full comment

It's not really that extreme to propose that this civilization will someday collapse (if that's what you meant) given that every civilization ever created has eventually gone away.

As to black balls, your chosen example is pretty good. CRISPR is heading in that direction.

Expand full comment

I'm really dubious about generalizing from the past to the present; the world is qualitatively different now for so many reasons (education, the internet, fewer wars, losing a war no longer necessarily involving the whole culture being burned to ashes...). I believe there aren't many scenarios that could lead to a civilization collapse. Am I wrong?

Expand full comment

Imho, yes, I'm afraid you are wrong about this. And in this case we needn't reference the past; the present will do.

Thousands of massive hydrogen bombs that are hundreds of times more powerful than the Hiroshima bomb are currently standing patiently in their silos ready to bring down this civilization in under an hour based upon the decision of a single human being.

Let's translate that into something easier to grasp.

Imagine that I've been walking around with a loaded gun in my mouth all day every day for months. The gun hasn't gone off. So I'm not interested in the gun. How do you rate my chances for survival?

There's no proof of anything here. No guarantee of any outcome. But the odds are not looking too good. If you were to meet me in this situation you might very well dial 911.

On top of the nukes we are now adding more powers of vast scale that we also don't know how to make safe. We seem to have no interest in taking control of an accelerating knowledge explosion, so it's somewhere between likely and certain that more powers of vast scale that we can't currently imagine will be born as this century continues.

So let's talk upside.

If I'm correct and the Roman Empire pattern repeats itself, that would seem to be part of a natural cycle of renewal seen throughout nature. That pattern of renewal is what brought us to the miracle of today's modern world. If the pattern continues, the collapse of this society sounds really bad, but maybe that's what's needed to get humanity to an even better society, like what happened after the Roman Empire collapsed.

One way to escape the relentlessly gloomy predictions above, an ancient way embraced by so many of our ancestors, is to put them in a larger context.

The apparent gloominess of the above analysis is based on an assumption that death is bad, the ultimate tragedy etc. Where is the proof that this assumption is true? There is none. There are lots of theories and speculation on the subject, but no proof of anything. We simply don't know what death is.

And in a situation where we are facing something both inevitable and unknown, the rational act is to lean into whatever encouraging stories we can craft, thus putting things like civilization collapse and our own personal demise into a positive larger context.

For example, if those reporting near-death experiences are encountering something real, some hint of what lies beyond this life, then civilization collapse is far less of a tragedy.

Expand full comment

Are you worried about the potential for AI to misinform *you*? Or are you just worried about it for other people? The first case sounds like something that should concern the rest of us. The second just sounds arrogant and elitist. Passing legislation to protect *you* is one thing; passing it to protect the rest of us seems patronizing.

Expand full comment

It doesn't matter if it sounds arrogant, elitist, or patronizing if it's the correct thing to do.

Expand full comment

It's neither elitist nor patronizing. The guy is discussing these laws here on Substack, with the public, in an attempt to inform them about things that are in their interest to know about. I believe your take is quite uncharitable. It isn't like he's proposing these laws without public consent either (suppose we imagine Gary Marcus were president and could do that).

Expand full comment

(Sure, misinformation is just one of the AI risks that are reasonable to worry about, but it's the one that most people talk about when the topic comes up.)

I stopped worrying as much about misinformation when I realized I could ask an LLM for a list of the factual claims and opinions in an article or YouTube video transcript. Making the emotional tone and production values of an article or video irrelevant is a huge step toward a healthier personal information diet.
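
For what it's worth, here's a minimal sketch of that idea, assuming a generic call_llm helper standing in for whatever chat client you use; the prompt wording and JSON field names are my own assumptions, not a tested recipe:

```python
import json

# Hypothetical prompt; the field names below are assumptions, not a standard.
EXTRACTION_PROMPT = """Read the text below and return JSON with two lists:
"factual_claims": statements that could in principle be checked against evidence,
"opinions": value judgments, predictions, and rhetorical framing.
Ignore emotional tone and production style entirely.

TEXT:
{text}
"""

def extract_claims(article_text: str, call_llm) -> dict:
    """call_llm is any function that takes a prompt string and returns the model's reply."""
    raw = call_llm(EXTRACTION_PROMPT.format(text=article_text))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Models don't always return valid JSON; keep the raw reply for inspection.
        return {"factual_claims": [], "opinions": [], "raw": raw}
```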

The AP and news orgs need to make plugins and data sets for "things that actually happened" and you can wipe away at least one class of misinformation.

Are we going to do this? Probably not until there's a huge misinformation event with real consequences, but then we'll adjust because human brains are learning/don't-touch-that-hot-stove machines.

Expand full comment

What is your opinion on the European approach? A complete legal framework. You want to do business here? Comply, please.

Expand full comment

it’s a pretty good start, pending details

Expand full comment

Respectfully, EU (and other) regulations are not a good start, as such laws just help prop up the illusion that AI is governable.

Our thinking on such matters is stuck in the 20th century. In the 21st century, in an interconnected planet-wide culture ever more united by the Internet and other globalizing forces, national borders are increasingly irrelevant. As an example of that irrelevancy, the national borders of the US and EU are being routinely violated by some of the poorest, most powerless people on the planet.

Almost all the discussion of AI governance focuses on the U.S. and Europe, which together represent something like 10% of the global population. Regulations established in the West have no jurisdiction over the other 90% of humanity. Much of that 90% of humanity is imprisoned by ruthless psychopaths who have few concerns other than holding on to the power they have accumulated.

A key problem with the concept of AI governance is that it maintains the fantasy that we can keep on creating powers of vast scale, and then make them safe one by one by one. There is no evidence to support this wishful thinking delusion. What's happening instead is that we are accumulating powers of vast scale which we have no idea how to make safe. What is our plan for making nuclear weapons and genetic engineering safe? There is no credible plan. We don't have a clue, not a &^$@!! clue.

And, if it's true that knowledge development feeds back on itself, resulting in ever faster further knowledge development, then over the coming decades we should expect more powers of vast scale to come online at an ever-accelerating pace. If we're unwilling to examine the process that is giving birth to these emerging powers of vast scale, there's really little point in discussing AI governance.

Please, let's look at this holistically. It's a loser's game to try to deal with emerging powers of vast scale one by one by one, because they are coming online faster than we can figure out how to make them safe. Remember, making AI safe is not success. Success requires us to make ALL powers of vast scale safe.

Expand full comment

A view from an Australian defense official about the negative effects of AI:

https://cosmosmagazine.com/technology/ai-truth-decay-policy-general/

Maybe cohesive, down-to-earth, non-greed-hog places like Australia will choose to regulate and protect themselves against LLMs.

Expand full comment

Marcus writes, "Next time: what should we do?"

ANSWER: We could shift our focus from AI (and any other particular technology) to the knowledge explosion process generating all threatening technologies of vast scale. We could address the SOURCE of the problem, instead of SYMPTOMS of the problem.

EXAMPLE: We might imagine that we are standing at the end of an Amazon warehouse assembly line, and the products rolling down the line keep getting bigger and bigger, and coming faster and faster. At first we can try to meet this challenge by working smarter and harder. But if the assembly line keeps accelerating, sooner or later we would have to shift our attention from the individual products to the assembly line itself.

Expand full comment

The EU regulation will indeed become law throughout the union the moment it is finished, though. Which is pretty close?

Expand full comment

Thanks for this article. It's an excellent rundown of the state of things, especially regarding policy and internal industry development.

Inspired me to write this article evaluating AI from a UX perspective. Includes nuts-and-bolts recommendations that real product managers can add to real backlogs. Would love your thoughts.

https://open.substack.com/pub/lotusrose/p/improving-the-ai-experience?r=1x82u&utm_campaign=post&utm_medium=web

Expand full comment

AI is the new Internet, ChatGPT is the new Google, and actually, AI adds nothing to what humans have already created.

The point is the same for people: is there something useful stored on the Internet that helps people be happier and communicate with others joyfully and effectively in their own languages? That's the goal for humankind, the same one as 30 years ago when the Internet was created.

General Intelligence System - https://www.linkedin.com/pulse/general-intelligence-system-michael-molin-2f/

Expand full comment

Why not use a concurrent data source that is reliable, say Wikipedia, to let the model fact-check its outputs and make sure they are coherent with objective reality as written by humans? The model infers an output and then double-checks its information against reliable data? Something like retrieval-augmented generation? Instead of making up a citation to a non-existent paper, it would have to cite a paper that is publicly available on the internet or in a published journal? Or are you saying, no, that is simply not possible with any LLM that uses a transformer architecture, due to the way they fundamentally operate?

See https://research.ibm.com/blog/retrieval-augmented-generation-RAG
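
In rough code, the retrieve-then-generate loop I have in mind would look something like the sketch below. search_wikipedia and call_llm are placeholders I'm assuming, not real APIs, and a real RAG system would retrieve passages via embedding search over an index rather than a keyword lookup:

```python
def answer_with_sources(question: str, search_wikipedia, call_llm, k: int = 3) -> str:
    """search_wikipedia returns candidate passages; call_llm returns the model's reply."""
    passages = search_wikipedia(question)[:k]
    context = "\n\n".join(f"[{i+1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer using ONLY the numbered passages below, and cite them like [1].\n"
        "If the passages don't contain the answer, say you don't know.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    answer = call_llm(prompt)
    # A second pass could verify that every [n] cited actually exists in `passages`;
    # that constrains the citations, though not the reasoning in between.
    return answer
```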

Expand full comment
Comment removed (September 22, 2023)

the question is how to make any of that work, given that LLMs don’t output inspectable intermediate representations

Expand full comment

I am grateful that you signed the pause letter.

Expand full comment
Comment removed (September 22, 2023, edited)

LLMs don't reason, and never will. They are a dead end on the road to AGI. It is baffling how many smart people can't see what's at the core of LLMs and instead choose to attribute some kind of human (or magical) qualities to them.

Expand full comment
Comment removed (October 6, 2023, edited)

> I will say AGI itself is a dead-end.

While I'm inclined to believe that, I'm not 100% sure, and there is still way too much development ahead before the term even makes any sense.

> LLM is the first serious attempt at a very large-scale flexible reasoning engine.

It is not a "reasoning engine" even with the most generous definition of that concept.

> LLM solves the seamless language generation problem.

Only partially for that use because it does not reason. All the "reasoning" must be provided by the user. On the other hand, as a creative *assistant* it is very valuable, but that's different from solving the problem you stated.

> LLM can be integrated with other approaches to solve its issues and lack of depth and modeling (verification, tools, simulators). It is easy to train based on text recipes created by low-skill cheap labor.

That may solve (with a high degree of reliability) very narrow cases, and of course it is helpful.

> It can write code to verify itself, and can learn to find citations to support its claims.

I can't disagree strongly enough with that statement.

My opinion is that the current generation of LLMs is interesting but certainly not worth the hype, just as cryptocurrencies weren't. I find that Excel-like software has had a much bigger impact on the world than either of those.

Expand full comment