The fact that important scientists with divergent views share deep concern about where AI is headed is both concerning…and inspiring. Thank you both for continuing to speak up publicly.

Imho, enough with the deep concern. That doesn't appear to lead to anything.

Some AI professionals will express their deep concern. They'll write papers, give TED talks, attend conferences, discuss vague governance schemes, appear on TV, get into debates with each other, and make money doing all of it. Meanwhile, AI development will keep racing forward.

Enough with the vague hand-wringing. Do those expressing concern support putting AI development on hold while we sort out these issues, or not? Yes or no?

No more having your cake and eating it too, please.

Bad actors are much more dangerous, leading to an internet filled with uncertainty about the origins of the information we consume. Presumably software will be developed to identify human versus nonhuman origins, but the free market being what it is, such software may be priced at levels only well-off people can afford. The result: a trustworthy, reliable internet for some, a junked-up internet for the rest of us. A two-tier internet.

It is important to look at bad actors. But as history tells us, there will always be bad actors. We also need to take a systemic view and look at changing incentives.

Incentives... Can you expand on your ideas? I'm not certain what the incentives are, who or what we would target with them, or to what end. Sounds interesting.

Yes, happy to discuss more ... a bit in a hurry right now, so I'll link something I wrote on another blog ... it certainly explains the direction I am thinking in ... https://jonathanhaidt.substack.com/p/ai-will-make-social-media-worse/comment/15653357

I'd like to think that AI propaganda and misinformation would speed up a learning curve of resistance to their effects, as a byproduct of more and more people reading and viewing examples repeatedly. Most of the tactics would be the same as those of traditional propaganda: appeals to the vanity of the preconceptions held by the targeted groups, underpinned by the usual array of classic logical fallacies with such a track record of success at persuading the unwary or confirming their prejudices. Or reliance on outright falsification of events and of the roles of the actors alleged to have been involved, a problem that can most often be counteracted by independent fact-checking with keyword searches.

I'm hoping that more and more people will simply tire of the bombardment, and burn out on having themselves sent up, triggered, tricked, ego-massaged, and all the rest of it. AI propaganda and false dealing might speed up the process, by overplaying the hand. Animals eventually figure out how to detect traps and spring them; the more times the trick is played on them, the more chance of them learning how to defeat the tactic.

So it will be interesting to find out how AI media manipulation will play out; it may lose its kick after a while, as a byproduct of saturation. But I don't know, and hesitate to project an outcome. We'll be contending with it soon enough, I expect. It would be a shame if AI fallacies proved more persuasive than the humans counseling our fellows to sober up, snap out of it, and learn some elementary counterintelligence skills.

I agree. But I also think that we should reinforce this learning by making philosophy and democracy two of the core subjects in school.

One of the many things philosophy teaches is that before you criticize your opponent, you first listen and present their argument in the strongest possible terms. (Then your own argument will be even stronger.)

A course in democracy should teach students not only the variety and history of democratic systems, but also how to participate in democracy. For example, my children were already grown when I first spoke at City Council or lobbied my Member of Congress. Kids could learn to do these things in school.

Moreover, kids are tech-savvy, and tech offers huge new possibilities for democratic participation.

I'm not sure I agree with this comment, but it is certainly worth considering. For example, the abuse of ad banners has trained people to "skip" them automatically when browsing a web page. Perhaps a visual element is easier to identify (and then learn to ignore) than (mis)information disseminated through multiple channels? For example, some demonstrably false claims in American politics have been embraced by groups of significant size. Psychological and sociological factors like cognitive dissonance play a significant role here.

What are the compelling benefits of AI which justify taking on more risk right now, at a time when we already face a number of serious risks that we have little to no idea what to do about?

Why is there never an answer to this question? Why are we taking on YET ANOTHER risk?

Why are so many experts endlessly waffling, wringing their hands, and making utterly vague statements about global governance schemes and so on? What is so hard about simply saying...

"We aren't ready for AI at the moment, so let's put a hold on it for now, and shift our focus to addressing the unresolved questions."

Here's an alternate suggestion to what Marcus offers:

1) Get AI experts out of the room. People who make their living developing AI can hardly be expected to be objective on the question of whether AI development should continue.

2) Get scientists out of the picture too, for the same reason: lack of objectivity. The science community is hopelessly trapped in an outdated 19th-century "more is better" relationship with knowledge. That philosophy is blind-faith holy dogma to them, and few of them seem to even realize it. Scientists are great at science, and largely clueless about our relationship with science.

We already know that AI presents risks in both the short and long term. We the public need to decide whether we feel it's rational to take on more risk at this point in time.

If someone should argue that it's worth taking on the risk with AI, please tell us how many more risks you feel we should also accept. Is there any limit to that? Should we be mindless drone slaves and just blindly take on any risk, no matter how many, no matter how large, that some engineer somewhere decides is a cool idea?

Artificial intelligence exists because human intelligence doesn't.

Your points 1 and 2 are correct. To the people who think regulation is the cure: maybe in a different time, suggesting regulation would have been a possibility. But consider the status of the CDC, NIH, and WHO, all run by scientists.

Here's what I think the problem is.

Science tells us what we can do, while religion tells us whether we should. But it seems that the scientists in charge aren't always interested in what religion (or even ethics) has to say about their work.

So we have the NIH, which funded gain-of-function research, and probably still does.

We have the CDC, whose very name says "Control". Millions of dollars and years of research, and they were still clueless when the pandemic hit. They used censorship to cut off any debate.

Finally the WHO.

I suspect a large majority of the public feels this way.

You touched right on what I've been thinking about for a while. No one involved with these systems, as far as I can tell, has any grounding in serious religious belief. These systems *are* their religion, and beyond that, many people involved are completely unmoored from any belief system that would tell them whether something is a good idea or not. That is terrifying, and it means to me that their work should be forcibly halted at once by external means, because they cannot be trusted with what are inherently moral questions.

I agree that we cannot separate science from ethics.

This will be a challenge. We teach our kids to separate facts from opinion in every essay they have to write. But, while an important idea, this is not so simple. To give just one example, the definition of GDP is not value-free. Some things are accounted for, others are not; and the definition has also changed over time. Yet politicians talk about the need to grow GDP, when the real debate should be about what we want to grow. Nobody wants to grow pollution, for example. I went off on a tangent here, but the point is that even scientific notions such as GDP are not value-free.

Instead of blaming scientists I think everybody could be part of the cultural change that is needed. What is a good life? And how can we achieve it?

Science can play a part in answering these questions, but only a part. We need to do this all together.

Btw, this is why Aristotle calls the science of politics the most authoritative science, see http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.01.0054%3Abekker+page%3D1094a%3Abekker+line%3D20

Thank you for your thoughtful response. I would add, while agreeing with your comment, that bodies of thought such as Catholic Social Teaching, starting with the encyclical "Rerum Novarum" (1891) and continuing to the present, have an enormous amount to add to this conversation. We are not automatons, nor should we be made slaves in service of other automatons or their economic masters. Humans were made for more, and economic systems must exist to serve us and help us flourish, not the other way around. Both socialism and capitalism, Leo XIII said in Rerum Novarum, have this condition as their end state: man's complete subservience to economic necessity; both depend on our reduction to effective slavery for their own perpetuation. We must have thinkers operating from socio-religious frameworks such as this for there to be any hope that those involved in these questions are asking anything like the right questions.

To me, this is the reason that freedom of speech is the most important right. The ability and responsibility to speak precisely about issues that affect us all.

Thanks for your comment.

Thanks ... the key sentence for me is "But the power that’s being wielded now by powerful people and firms I estimate will just wreak more havoc. AI can provide advantages to those who own it. They can be more productive and get more wealth."

But my solution would be different: Limiting the power of those who own the AI.

Yeah that's a tricky one. How would enforcement work in that regime?

It could work without enforcement. Just by making them pay taxes. The reason the investment in AI has been so big is that big tech is awash with cash to spend. Since much of this cash, one could argue, is rent extracted from ordinary citizens (from those who consume the adverts and who create the content), it would only be fair to pay it back, possibly as a dividend.

Daniel: Yes, it is the new useful myth, a belief in something that anticipates benefits in the future. But as Rush said in a song, "We will pay the price, but we will not count the cost." Consequences materialize eventually, even the ones that were predicted.

"Science tells us what we can do, while religion tells us whether we should. But it seems that the scientists in charge aren't always interested in what religion (or even ethics) has to say about their work."

I am a scientist and I have a different view. We scientists have very little influence. Who lobbies the government about regulation? Who finances the campaigns of our politicians? Why do Members of Congress spend so much time on fundraising? How many of our laws are actually written by corporate lawyers?

To make any progress at all, in my opinion, civil society first needs to claw back power from corporations. We need to invent new ways of governance. We need to try to become a democracy again, with all sectors of society involved in decision making. Sure, corporate lawyers will have a seat at the table, as will scientists and religious leaders and unions and everybody else.

I think I went too far in that comment. I was thinking about the Manhattan Project, and I was wrong; your comment makes the point. It was the government that drove that project. They felt they had good reason, but maybe not.

Interesting to think about in relation to the point we seem to be at with AI.

As I recall, they weren't really sure what to expect when they exploded the first bomb. That seems to be where we are with AI, which just makes this all the more worrisome.

Rumor has it that ChatGPT developers were surprised at what it could do.

How many more surprises can we tolerate?

Thanks so much for your insight. Perhaps my comment would be better if I said: "Science tells us what we can do; religion should tell us whether we should."

I wonder if the Manhattan Project would have been canceled if ethics had been considered. But we had already fire-bombed Tokyo, something really unpleasant. Can whole cultures go crazy?

Heidegger said that what is most thought-provoking about our (modern, scientific) age is that "we are still not thinking". I would say nothing has made this observation feel more apt than the headlong rush to build out these AI systems--we have no idea how they work, or why we are doing it, yet we press on, autonomically as it were. If we are already this thoughtless, how hard can it be for the AI to surpass us?

Yes, the problem seems to be that those developing AI are very smart, but only in one narrow direction.

"What are the compelling benefits of AI which justify taking on more risk right now, at a time when we already face a number of serious risks that we have little to no idea what to do about?? Why is there never an answer to this question?"

I tried to address this in my reply. The answer is: in a market economy, we follow the incentives, whether they are beneficial in the long term or not. If we want to align our economy with what benefits humanity, we need to change the incentive structure (e.g., via taxes).

Aside: the German word for tax is "Steuer", which also denotes a device used for steering, as in steering wheel (Steuerrad).

Difficult conversations: And still I wonder, have we asked or are we interested in what God the creator has to say?

That's not an issue of "religion", by the way, but something much deeper. For do we really believe that there's no spiritual component in this matter? All technical, nothing else?

I understand GNC (guidance, navigation, and control) pretty well, yet: where is our guidance coming from, who or what is navigating, and who or what is "in control"?

Don't think we need any other wisdom? By all means, steady as she goes, proceed as before.

I remain in prayer as well as technical problem solving. Because they're not mutually exclusive.

Peace

Misinformation has been around since humans invented language. People are good at generating it and not very good at spotting it. You don't need AI to generate a lot of it, and you don't need AI to be the tool at fault for concern about it to be blown out of proportion to the threat. https://www.synthcog.blog/p/complexity-misinformation-bias

@Swag Valence — given that both Gary and Geoff are scientists in the field, and all the major AI companies have been discussing the ethics of AI for a while, I'm not sure how you draw this conclusion.

Agreed ... but society changes not so much because a new technology makes impossible things possible (we had cars before the combustion engine), but because things that were expensive before become much cheaper (maintaining a motor car is cheaper than maintaining a horse).

Economic progress proceeds by lowering transaction costs. (The rise of the internet is another good example; mail versus email is maybe the most prominent case.)

Given how much cheaper misinformation will be in times of AI, I would predict that this will have profound effects on society.

And misinformation is just one of the areas in which AI will lower transaction costs dramatically.

Yes, the transaction costs of spreading misinformation widely are lower, but people previously lived in much smaller networks, and it wasn't necessary to spread misinformation far to do damage. So now it's cheaper to reach millions, where before that was incredibly expensive. But previously, you could do a lot of damage just by spreading misinformation to your neighbors or your village.

Good point. It just wasn't called cancel culture back then. But was it the same or different? One thing that is different today is that misinformation now threatens democracy as a whole. Maybe the closest analogy we have is when the printing press became popular and Europe experienced some of the most devastating wars in history (the Thirty Years' War, the English Civil War, etc.). As an aside, thinkers like Descartes and Locke developed our modern philosophy in reaction to these wars. I think that, similarly, today we need a new philosophy.

I have to say, I don't think misinformation now threatens democracy as a whole. What's deemed misinformation frequently seems to depend a lot on one's perspective of the world. I think a much bigger threat is one or more groups, whether the government or large companies, getting to decide what is misinformation and effectively banning it. This sort of thing seems fine when you agree with the group in power and not so good when you don't. And the groups in power have a habit of changing over time.

I agree. In my mind, I was including what you describe as part of the problem.

I was wondering whether one factor playing into this is that it has never been cheaper to produce and distribute content, while our ability to consume content (by paying attention) has not increased.

So how to deal with a situation where production is increasing exponentially and our ability to consume stays constant?

The way we filter, amplify, curate and moderate content must become more and more important.

By definition, there cannot be an unbiased way of filtering.

Is there even a way to filter, amplify, curate and moderate without censoring? What would that look like?

Finally, to come back to your point, which players will have the power to decide what is filtered, amplified, curated, moderated?

All good points!

An even more fundamental core issue (IMO) is that we need to actually understand (at a mathematical, scientific, and engineering level) the AI systems that we are building and (shudder) deploying. (And I must say, despite how unfashionable it might be to do so, that symbolic AI is decades ahead of connectionist AI in that regard.) Only then will we have any chance of being able to "guarantee that we can control future systems" as you have highlighted.

At this point, if I were to take a stab at a mathematical paradigm that might be able to tackle these systems, I would probably have to go with Chaos Theory. And given how poor we still are at predicting the weather - an intractable time-horizon problem of emergent complexity that Computer Scientists once thought could be "solved" through computation - entrusting the future of our societies to an inherently chaotic mechanism that results in unexpected bursts of emergent behaviour does seem rather reckless.
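
To make the unpredictability point concrete, here is a minimal sketch (my own illustration, not anything from the discussion above) using the logistic map, the textbook example of deterministic chaos; the parameter r = 4.0 and the starting values are arbitrary choices:

```python
# Minimal illustration of sensitive dependence on initial conditions.
# The logistic map x -> r * x * (1 - x) is fully deterministic, yet at
# r = 4.0 two trajectories starting a hair's breadth apart decorrelate
# completely within a few dozen iterations.

def logistic_map(x: float, r: float = 4.0) -> float:
    """One iteration of the logistic map."""
    return r * x * (1.0 - x)

x_a, x_b = 0.4, 0.4 + 1e-10  # initial states differing by only 1e-10
for step in range(61):
    if step % 10 == 0:
        print(f"step {step:2d}: |x_a - x_b| = {abs(x_a - x_b):.3e}")
    x_a, x_b = logistic_map(x_a), logistic_map(x_b)
# The gap grows roughly exponentially (doubling per step on average)
# until it saturates at order 1, after which the two trajectories are
# effectively unrelated. More compute does not help: rounding error
# itself grows at the same exponential rate.
```

If a one-dimensional quadratic map already behaves like this, the prospects for predicting the long-horizon behaviour of a model with hundreds of billions of parameters look dim indeed.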

So far, for true believers, the hope seems to be that if we put enough safety rails in place, or filter results through intermediary plugins via "modularity", then we can just pretend it doesn't really matter that we don't understand how or why these models, as they scale, seem capable of unreasonably complex tasks alongside concurrently stunning levels of stupidity. It could almost be called anti-scientific, but I suspect profit motives have more to do with the lack of curiosity being displayed by many.

Agreed ... but ... combining connectionist and symbolic AI may just be around the corner. The idea is simple:

Connect a generating connectionist AI to a verifying symbolic AI and let them play "ping pong".

In a way this is already happening. For example, big companies like Facebook automatically run symbolic AI to validate/verify the software written by their human coders. And code-generating AI is already doing amazing things. Of course, at first there will still be humans in the loop. But I don't think it is a stretch to say that over time the human involvement will get less and less. In fact, as the software gets more and more complicated, it will become more and more difficult for humans to interfere in the *connectionist-symbolic AI ping pong*.
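
For what it's worth, here is a minimal sketch of such a generate-and-verify loop; `generate_candidate` and `verify` are hypothetical stand-ins I've stubbed out for illustration, not references to any actual generator or checker:

```python
# Hypothetical sketch of a connectionist/symbolic "ping pong" loop.
# The generator stands in for an LLM proposing code; the verifier
# stands in for a symbolic checker (e.g. a static analyser).

from typing import Optional, Tuple

def generate_candidate(spec: str, feedback: str) -> str:
    """Stub generator: produces a placeholder first, then 'revises'."""
    if feedback:
        return f"def solve(xs):\n    return sorted(xs)  # revised after: {feedback}"
    return "def solve(xs):\n    pass  # first draft"

def verify(candidate: str, spec: str) -> Tuple[bool, str]:
    """Stub verifier: rejects placeholder bodies, accepts the rest."""
    if "pass" in candidate:
        return False, "placeholder body rejected; implement the spec"
    return True, ""

def ping_pong(spec: str, max_rounds: int = 10) -> Optional[str]:
    """Alternate generation and verification until the checker accepts."""
    feedback = ""
    for _ in range(max_rounds):
        candidate = generate_candidate(spec, feedback)  # connectionist move
        ok, feedback = verify(candidate, spec)          # symbolic move
        if ok:
            return candidate
    return None  # loop failed to converge; a human would step in here

print(ping_pong("sort a list of integers"))
```

The worrying property falls out directly: once `verify` is trusted, nothing in this loop needs a human, and the human role shrinks to writing `spec`.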

In the light of scenarios such as above, do you see any chance to "guarantee that we can control future systems" without at the same time adjusting the economic incentives?

It's not just economic incentives that are the problem. It's the fact that 8 billion people are fractured across ~300 million profit-motivated companies (9 of which are tech giants with multi-billion-dollar R&D budgets), 193 strategically-motivated UN nation states, and many other different "tribes" (defined in various ways), all furiously competing against each other (i.e. pursuing their own short-term self-interest, while rationalising doing so), trapped in a potentially existential Molochian prisoner's dilemma [https://www.youtube.com/watch?v=KCSsKV5F4xc, https://slatestarcodex.com/2014/07/30/meditations-on-moloch/], all of which makes coordinating a rational, global AI strategy in the long-term best interests of all parties (i.e. all of mankind) almost impossible.

That said, I do believe there's a way through, primarily because (in my assessment) current approaches (LLMs etc) are fundamentally flawed, and will therefore plateau well short of super-intelligence, however much money you throw at scaling them (and Sam Altman more-or-less admitted as much last week), meaning that they will never be smart enough to pose an existential threat. Having plateaued, they will nevertheless generate massive wealth for the many thousands of AI opportunists out there, while at the same time inflicting massive (albeit less than existential) societal harm at global scale. With any luck, that harm (an existential near-miss, as it were) will finally induce global policymakers to properly regulate potentially super-intelligent AGI very much more strictly than would otherwise have been the case. That will then (hopefully) give us the necessary time (50-100 years IMO) to design, develop, and deploy super-intelligent AGI that is both maximally safe and maximally benevolent, and provably so.

Thanks for linking the video on Moloch with Schmachtenberger. I didn't know about it. It is pretty much what I have been thinking about as well. In terms of solutions, even if we don't know what will work best, it is not too hard to see what would lead in the right direction. We should curate a list of proposals. Here are mine to start with.

Carbon fee and dividend, AI fee and dividend, reducing tax on labour in favour of tax on resources, represent stakeholders (not only shareholders) on the board of companies, quadratic taxes, reform limited liability, tax on advertising, campaign finance reform, one person one vote, ...

Well, generating an infinite set of dystopian futures is natural. A combination of bad actors and bad design will just have to play out before any meaningful enforcement mechanisms can be determined.

You can never eliminate the human part of human tech, which is why every tech optimist faces a Groundhog Day-like nightmare cycle. It's not just bad actors, but people who are incentivized in all the wrong ways to do whatever is necessary without ever pausing to think about ethics. Ethics require deep thinking and caution, and society and economics reward action.

I honestly don’t see any way around this unless we adopt the naïveté of most DAO activists: “business would be so much better if only we got rid of all the messy people.”

DAO as in decentralized autonomous organization? I noticed this as well. Libertarians like to say that "socialism just doesn't work", but as we have learned in the 15 years since Bitcoin, the compliment can now be returned: "libertarianism just doesn't work".

The irony is that, as you say, in both cases this is for the same reason: people are too messy to conform to the (different) ideals that libertarianism and socialism require.

The deeper question then is: What should we put in place of socialism and libertarianism?

I think it is time to finally give **democracy** a chance again.

Sure, it will be messy, but then this is just the way we people are.

I still think it's pretty late for his conscience to reincarnate.

The next step beyond GPT is Auto-GPT ... a student has just shown me how it can be used to develop a pretty impressive project without a human sitting between the code and the AI.

https://github.com/Significant-Gravitas/Auto-GPT

Eighty years ago deep concerns were expressed by scientists about nuclear weapons (bad) vs nuclear power generation (good). Turns out politicians don't need big weapons to kill small people, but having a button to press makes people pay more attention to you. I suspect AI in its good and bad forms will evolve to be much the same.

There is a lot of talk about alignment and AI safety. But we live in a market economy. Incentives will determine where we go in the future. If we want to be serious about alignment and AI safety, we need to align the economy. And ask how we can restructure economic incentives to make the economy (and AI) safe for our future.
