27 Comments
Roman Peczalski:

“Society is placing more and more trust in a technology that simply has not yet earned that trust.”

Society does not place trust; it simply gives way, abandons itself to this technology, because of the fundamental weaknesses underlying our society: the greed of companies, the indolence of authorities, and users' pursuit of comfort with the least effort. I am afraid that, globally, our society will quickly be very happy with AI and will not want to hear about the critical threats. Companies will make big money, governments will have the ultimate tool of control over their populations, and ordinary people will feel supported and smarter with it. People will be pleased by AI systems, will get used to them, and will come to depend on them. The game seems to be over already.

[Comment deleted, Sep 24, 2023]
Roman Peczalski:

Our basic way of thinking goes back farther than the 19th century. I think humankind's main guideline, from the start of the first civilizations, can be expressed in the following motto: “To be as comfortable as possible with as little effort as possible.” That is why we are so vulnerable to technology: we can't resist it even when it comes with very serious, even potentially catastrophic, drawbacks. Human intelligence has at all times been directed at making work less hard and more beneficial. AI responds perfectly to this orientation and appears to be the possible ultimate answer to this demand. Ultimate not only in the sense of best but also of last. If AI really improves into some kind of AGI, it will be our last major achievement as humans. Afterward, artificial intelligence will progressively replace the human kind, with or without our consent, and the machine will become the creating force, shaping a new world. But what will be the main guideline of the machine?

Rebel Science:

"As a society, we still don’t really have any plan whatsoever about what we might do to mitigate long-term risks. Current AI is pretty dumb in many respects, but what would we do if superintelligent AI really were at some point imminent, and posed some sort of genuine threat, eg around a new form of bioweapon attack? We have far too little machinery in place to surveil or address such threats."

This is the real existential threat, in my opinion. It's impossible to detect that a superintelligent AI is imminent; scientific breakthroughs do not announce their own arrival. It is also possible that some maverick genius working alone or some private group may have clandestinely solved AGI unbeknownst to the AI research community at large and the regulating agencies. A highly distributed AGI in the cloud would be impossible to recognize. I lose sleep over this.

Peter:

"It is also possible that some maverick genius working alone or some private group may have clandestinely solved AGI unbeknownst to the AI research community at large and the regulating agencies."

That's not unlikely. When you hang out on AGI forums, you notice all kinds of people: some who sound like lunatics convinced that "my new super neural network is ALL you need," but others who seem far more grounded, with sane scientific doubt and good technical skills, and who seem to intuitively see a path forward and are working on it. And those are just the vocal, gregarious ones; there must be more. Sure, intuition misled many in the 1960s, but the newcomers seem to have some awareness of the wrong assumptions made back then. One simple difference from before: video game players are naturally trained in ontology engineering, since video games embody simple ontologies that players have to deconstruct and parse to beat the game.

And that's only the beginning: the AGI problem will probably attract more and more minds as time goes by, if only because of the media coverage.

In my opinion, reality is not that complicated: the first layers of the onion are thin but tough to crack, yet they determine the subsequent layers. AGI is coming, and it might be coming faster than we think.

Rebel Science:

Yes. There is excellent reason to believe that the fundamental principles of AGI are simple and relatively easy to implement. It could probably be demonstrated on a small scale with a desktop computer. Scaling it up is a mere engineering problem with known solutions; that can happen later.

There is no reason, in my opinion, that a single individual (an AGI Newton, if you will) could not crack AGI on a small budget. And it could happen at any time. One thing is certain: AGI will not come from the deep learning community.

Peter:

Agreed on the whole, but I wouldn't be so certain about the deep learning community not achieving AGI, because whatever the path to AGI is, their combined pressure is so strong that once they hit a wall, they'll naturally flow in different directions and perhaps spill over onto the symbolic spectrum (think Yann LeCun). I don't think they should be underestimated; listen to Ilya's technical talks, he seems to be a monster of intelligence and insight.

And let's not forget that neural networks in the human brain achieved AGI. I see (neuro)symbolic architectures as shortcuts and, in a way, as potentially above-human-level architectures.

Rebel Science:

Well, I don't want to get into a long discussion, due to lack of time, but I am convinced that the continued success of deep learning is the main reason the mainstream AI community will never crack AGI. They are stuck in a local optimum of their own making, and the only way to be free of that optimum is to refuse to enter it in the first place. Deep learning will not be a part of the AGI solution, in my opinion.

Gerben Wierda:

The sheer volume of the LLMs (GPT-3: 175 billion parameters), trained on trillions of elements of human-generated text, could just as well be a dead end. Retentive Networks may solve the 'cost of generating' problem that transformer architectures have, but that would require completely new, extremely expensive pre-training and fine-tuning runs. My guess is that this is not really economically feasible (which is, more likely than not, an argument against a GPT-5), so it seems possible that what exists now is what this phase will produce. It is not unlikely that the GPT fever will break, but that might still take years, given the amount of 'conviction' out there.
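To illustrate the 'cost of generating' point with a toy sketch (a rough caricature of the two decoding regimes, not actual RetNet code):

```python
# Toy contrast of per-token generation cost during decoding.
import numpy as np

# Transformer-style decoding: each new token attends over the whole
# cache of past keys/values, so step t costs O(t * d) and the cache
# grows without bound as the sequence lengthens.
def attention_step(q, K, V):
    scores = q @ K.T                     # compare against all t cached keys
    w = np.exp(scores - scores.max())    # softmax (stabilized)
    w /= w.sum()
    return w @ V

# Retention-style decoding: a fixed-size recurrent state is updated,
# so every step costs O(d^2) regardless of how long the sequence is.
def retention_step(S, k, v, q, decay=0.9):
    S = decay * S + np.outer(k, v)       # constant-size state update
    return q @ S, S
```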

I am going to pay attention to this in my upcoming talk in London at EAC Europe 2023.

Richard Sprague:

Are you worried about the potential for AI to misinform *you*? Or are you just worried about it for other people? The first case sounds like something that should concern the rest of us. The second just sounds arrogant and elitist. Passing legislation to protect *you* is one thing; passing it to protect the rest of us seems patronizing.

Forrest:

It doesn't matter if it sounds arrogant, elitist, or patronizing if it's the correct thing to do.

Douglas Renwick:

It's neither elitist nor patronizing. The guy is discussing these laws here on Substack, with the public, in an attempt to inform them about things that are in their interest to know about. I believe your take is quite uncharitable. It isn't as if he's proposing these laws without public consent either (suppose we imagine Gary Marcus were president and could do that).

Ian [redacted]:

(Sure, misinformation is just one of the AI risks that are reasonable to worry about, but it's the one most people talk about when the topic comes up.)

I stopped worrying as much about misinformation when I realized I could ask an LLM for a list of the factual claims and opinions in an article or YouTube video transcript. Making the emotional tone and production values of an article or video irrelevant is a huge step toward a healthier personal information diet.
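A minimal sketch of that workflow, assuming the OpenAI Python client (the prompt wording and model name are illustrative, not the only way to do it):

```python
# Hypothetical sketch: ask an LLM to separate factual claims from opinions.
# Assumes the OpenAI Python client; any chat-completion API would do.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_claims(text: str) -> str:
    """Return the model's bullet lists of factual claims vs. opinions."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Split the text into two bullet lists: factual "
                        "claims and opinions. Ignore tone and rhetoric."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# e.g. extract_claims(open("article_or_transcript.txt").read())
```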

The AP and other news organizations need to make plugins and data sets of "things that actually happened," and you could wipe away at least one class of misinformation.

Are we going to do this? Probably not until there's a huge misinformation event with real consequences, but then we'll adjust because human brains are learning/don't-touch-that-hot-stove machines.

Ton Veldhuis:

What is your opinion on the European approach? A complete legal framework: you want to do business here? Comply, please.

Gary Marcus:

it’s a pretty good start, pending details

Robert W Murphree:

A view from an Australian defense official on the negative effects of AI:

https://cosmosmagazine.com/technology/ai-truth-decay-policy-general/

Maybe cohesive, down-to-earth, non-greed-hog places like Australia will choose to regulate and protect themselves against LLMs.

Victor Bernhardtz:

The EU regulation will indeed become law, though, in all of the union, the moment it is finished. Which is pretty close?

Lotus Rose:

Thanks for this article. It's an excellent rundown of the state of things, especially regarding policy and internal industry development.

It inspired me to write this article evaluating AI from a UX perspective. It includes nuts-and-bolts recommendations that real product managers can add to real backlogs. Would love your thoughts.

https://open.substack.com/pub/lotusrose/p/improving-the-ai-experience?r=1x82u&utm_campaign=post&utm_medium=web

Michael Molin:

AI is the new Internet, ChatGPT is the new Google, and in fact AI adds nothing to what humans have already created.

The point is the same for people: is there something useful stored on the Internet that helps people be happier and communicate with others joyfully and effectively in their own languages? That is the goal for humankind, the same one as 30 years ago when the Internet was created.

General Intelligence System - https://www.linkedin.com/pulse/general-intelligence-system-michael-molin-2f/

Earth:

Why not use a concurrent data source that is reliable, say Wikipedia, to let the model fact-check its outputs and make sure they are coherent with objective reality as written by humans? The model infers an output and then double-checks its information against reliable data, something like retrieval-augmented generation? If the model makes up a citation to a non-existent paper, require it to cite a paper that is publicly available on the internet or in a published journal? Or are you saying that, no, this is simply not possible with any LLM that uses a transformer architecture, because of the way they fundamentally operate?

See https://research.ibm.com/blog/retrieval-augmented-generation-RAG
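A minimal sketch of that retrieve-then-check idea, assuming the `wikipedia` and `openai` Python packages (the prompt wording and model name are illustrative):

```python
# Hypothetical RAG sketch: ground the model's answer in retrieved
# Wikipedia text instead of letting it answer from parameters alone.
import wikipedia
from openai import OpenAI

client = OpenAI()

def grounded_answer(question: str) -> str:
    # Retrieve: a short summary of the top Wikipedia hit for the question.
    top_hit = wikipedia.search(question)[0]
    context = wikipedia.summary(top_hit, sentences=5)

    # Generate: instruct the model to use only the retrieved context.
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the context below, and cite the "
                        "page title. If the answer is not in the context, "
                        "say you don't know.\n\n"
                        f"Context ({top_hit}):\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

Even then, the model can still misread or misparaphrase the retrieved passage, which is roughly the limitation raised in the reply below.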

[Comment removed, Sep 22, 2023]
Gary Marcus:

the question is how to make any of that work, given that LLMs don’t output inspectable intermediate representations

Forrest:

I am grateful that you signed the pause letter.

[Comment deleted, Sep 22, 2023 (edited)]
Peter:

It seems like a society of control and surveillance is inevitable; I'm sure that, in the far future, access to lots of resources, chemicals, and other materials will be forbidden to everybody, as more and more accessible destructive technologies are uncovered.

Just in case, are you aware of the vulnerable world hypothesis?

[Comment deleted, Sep 24, 2023]
Peter:

Actually, that part is a bit extreme and pessimistic. The best part of the paper, to me, is how it introduces, with great imagery, the notion of "black balls" (a rough summary from rough memories): every technology we develop is a ball we pick from a bag, and these can be white balls or black balls (and probably shades in between). A black ball would be a very destructive technology that requires very little material or knowledge to implement, like a future bomb made of soap and water. We haven't drawn a black ball yet, but that doesn't mean black balls don't exist. What happens when we find one? Are we ready to tackle the challenge of preventing almost 10 billion humans from realizing that black ball?

AI should exponentially accelerate the rate at which balls come out of the bag, and AI is also a catalyst for turning white balls into black balls (for example, by giving you a step-by-step tutorial on how to create your own deadly virus in your garage).

The paper puts it better for sure.

[Comment deleted, Sep 24, 2023]
Peter:

I'm really dubious about generalizing from the past to the present; the world is qualitatively different now for so many reasons (education, the internet, fewer wars, losing a war no longer necessarily meaning the whole culture is burned to ashes...). There aren't many scenarios that could lead to a civilizational collapse, I believe. Am I wrong?

[Comment deleted, Sep 22, 2023]
Ro:

There could be international cooperation and talks about the uses of these technologies. We don't know what forms such agreements would take, but doing nothing isn't an answer either.

[Comment removed, Sep 22, 2023 (edited)]
Germán Larraín:

LLMs don't reason, and never will. They are a dead end on the way to AGI. It is baffling how many smart people can't see what's at the core of LLMs and instead choose to attribute to them some kind of human (or magical) features.

[Comment removed, Oct 6, 2023 (edited)]
Germán Larraín:

> I will say AGI itself is a dead-end.

While I'm inclined to believe that, I'm not 100% sure, and there is still way too much development ahead before the term even makes any sense.

> LLM is the first serious attempt at a very large-scale flexible reasoning engine.

It is not a "reasoning engine" even with the most generous definition of that concept.

> LLM solves the seamless language generation problem.

Only partially for that use, because it does not reason; all the "reasoning" must be provided by the user. On the other hand, as a creative *assistant* it is very valuable, but that's different from solving the problem you stated.

> LLM can be integrated with other approaches to solve its issues and lack of depth and modeling (verification, tools, simulators). It is easy to train based on text recipes created by low-skill cheap labor.

That may solve (with a high degree of reliability) very narrow cases, and of course it is helpful.

> It can write code to verify itself, and can learn to find citations to support its claims.

I can't disagree strongly enough with that statement.

My opinion is that the current generation of LLMs is interesting but certainly not worth the hype, just as cryptocurrencies weren't. I find that Excel-like software has had a much bigger impact on the world than either of those.