47 Comments
May 7, 2023 · Liked by Gary Marcus

The fact that important scientists with divergent views share deep concern about where AI is headed is both concerning…and inspiring. Thank you both for continuing to speak up publicly.


Bad actors are much more dangerous, and they will leave us with an internet filled with uncertainty about the origins of the information we consume. Presumably software will be developed to identify human versus nonhuman origins, but the free market being what it is, such software may be priced at levels only well-off people can afford. The result: a trusted, reliable internet for some, and a junked-up internet for the rest of us. A two-tier internet.


What are the compelling benefits of AI that justify taking on more risk right now, at a time when we already face a number of serious risks we have little to no idea how to address?

Why is there never an answer to this question? Why are we taking on YET ANOTHER risk?

Why are so many experts endlessly waffling, wringing their hands, and making utterly vague statements about global governance schemes and so on? What is so hard about simply saying...

"We aren't ready for AI at the moment, so let's put a hold on it for now, and shift our focus to addressing the unresolved questions."

Here's an alternate suggestion to what Marcus offers:

1) Get AI experts out of the room. People who make their living developing AI can hardly be expected to be objective on the question of whether AI development should continue.

2) Get scientists out of the picture too, for the same reason: lack of objectivity. The science community is hopelessly trapped within an outdated 19th-century "more is better" relationship with knowledge. That philosophy is a blind-faith holy dogma to them, and few of them seem to even realize it. Scientists are great at science, and largely clueless about our relationship with science.

We already know that AI presents risks in both the short and long term. We the public need to decide whether we feel it's rational to take on more risk at this point in time.

If someone should argue that it's worth taking on the risk with AI, please tell us how many more risks you feel we should also accept. Is there any limit to that? Should we be mindless drone slaves and just blindly take on any risk, no matter how many, no matter how large, that some engineer somewhere decides is a cool idea?

Artificial intelligence exists because human intelligence doesn't.


Difficult conversations: And still I wonder, have we asked or are we interested in what God the creator has to say?

That's not an issue of "religion", by the way, but something much deeper. For do we really believe that there's no spiritual component in this matter? All technical, nothing else?

I understand GNC (guidance, navigation, and control) pretty well, yet: where is our guidance coming from, who or what is navigating, and who or what is "in control"?

Don't think we need any other wisdom? By all means, steady as she goes, proceed as before.

I remain in prayer as well as in technical problem-solving, because they're not mutually exclusive.

Peace


Misinformation has been around since humans invented language. People are good at generating it and they’re not very good at spotting it. You don’t need AI to be able to generate a lot of it, and you don’t need AI as the tool at fault in order to blow concern about it out of proportion to the threat. https://www.synthcog.blog/p/complexity-misinformation-bias

@Swag Valence — given that both Gary and Geoff are scientists in the field and all the major AI companies have been discussing the ethics of AI for a while, I’m not sure how you draw this conclusion.


An even more fundamental core issue (IMO) is that we need to actually understand (at a mathematical, scientific, and engineering level) the AI systems that we are building and (shudder) deploying. (And I must say, despite how unfashionable it might be to do so, that symbolic AI is decades ahead of connectionist AI in that regard.) Only then will we have any chance of being able to "guarantee that we can control future systems" as you have highlighted.


Well, generating an infinite set of dystopian futures is natural. A combination of bad actors and bad design will just have to play out before any meaningful enforcement mechanisms can be determined.


You can never eliminate the human part of human tech, which is why every tech optimist faces a Groundhog Day-like nightmare cycle. It’s not just bad actors, but people who are incentivized in all the wrong ways to do whatever is necessary without ever pausing to think about ethics. Ethics requires deep thinking and caution, while society and economics reward action.

I honestly don’t see any way around this unless we adopt the naïveté of most DAO activists: “business would be so much better if only we got rid of all the messy people.”


I still think it's pretty late for his conscience to reincarnate.


The next step beyond GPT is Auto-GPT, an autonomous agent built on top of it ... a student has just shown me how it can be used to develop a pretty impressive project without a human sitting between the code and the AI.

https://github.com/Significant-Gravitas/Auto-GPT
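
For readers unfamiliar with it: Auto-GPT is not a new model but an agent that wraps GPT-4 in a plan–act–observe loop, feeding each action's result back into the next prompt so no human has to sit in the middle. The sketch below is not Auto-GPT's actual code; it is a minimal, hypothetical illustration of that loop, where `call_llm`, `run_tool`, and `agent_loop` are made-up names and the model call is stubbed out so the example runs on its own.

```python
import json

def call_llm(prompt: str) -> str:
    # Placeholder for a real language-model call (e.g. an API request).
    # Here it simply returns a canned "finish" action so the sketch runs.
    return json.dumps({"action": "finish", "argument": "stub run complete"})

def run_tool(action: str, argument: str) -> str:
    # Dispatch the model's chosen action to a tool. Auto-GPT exposes tools
    # such as web search, file I/O, and code execution; this sketch
    # implements only a trivial file-writing tool.
    if action == "write_file":
        with open("agent_output.txt", "a") as f:
            f.write(argument + "\n")
        return "file written"
    return f"unknown action: {action}"

def agent_loop(goal: str, max_steps: int = 5) -> None:
    history = []
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            f"History so far: {history}\n"
            'Reply with JSON: {"action": ..., "argument": ...}'
        )
        reply = json.loads(call_llm(prompt))
        if reply["action"] == "finish":
            print("Agent finished:", reply["argument"])
            return
        # Each tool result is appended to history, so it shapes the next prompt.
        history.append((reply, run_tool(reply["action"], reply["argument"])))
    print("Agent stopped after reaching the step limit.")

agent_loop("Build a small demo project")
```

In the real project the stubbed call would be an actual LLM request and the tool dispatcher would cover things like browsing and code execution, which is exactly what makes the unattended loop both impressive and worrying.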


Eighty years ago deep concerns were expressed by scientists about nuclear weapons (bad) vs nuclear power generation (good). Turns out politicians don't need big weapons to kill small people, but having a button to press makes people pay more attention to you. I suspect AI in its good and bad forms will evolve to be much the same.


There is a lot of talk about alignment and AI safety. But we live in a market economy. Incentives will determine where we go in the future. If we want to be serious about alignment and AI safety, we need to align the economy. And ask how we can restructure economic incentives to make the economy (and AI) safe for our future.
