123 Comments
Steve Nevins:

The reality is, there will be a never ending supply of people eager to exploit AI to satisfy their agenda, enrich their bank accounts, and harm others. Drawing lines in the sand, while important, will often just be symbolic …

Donald Severs:

We have to try. And yes, it will depend on dependable enforcement with serious penalties.

The Asilomar Conference was an example of effective constraints on a new technology.

https://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA

Vin LoPresti:

A bit ironic that you should reference that Wikipedia article, which opens with concerns about the use of SV40 promoter/enhancer DNA sequences. Ironic because it illustrates how it's all meaningless if regulatory bodies are corrupted by industry. This is underlined by the presence of SV40 promoter/enhancer sequences in COVID "vaccines" -- manufacturing junk left over from cheaply producing the spike-protein mRNA, and a potential carcinogen for which the FDA raised the allowable product limit. Given their greatly enhanced transport into cells by lipid nanoparticles, the exact opposite regulatory decision was logically required. Enforcement requires honest enforcers.

Jan Steen:

Funny: an anti-vaxxer pretending to understand the science.

Vin LoPresti:

No dear, a molecular biologist, who entirely understands the science.

Vaccines are simple:

Fantastic STRATEGY. Diverse, sometimes imbecilic TACTICAL IMPLEMENTATION. In each case: assess the risk/benefit ratio.

COVID "vaccines": poor.

https://vinlopresti.substack.com/p/dna-contamination-in-covid-jabs-what

What we have here is another troll who doesn't understand shite. I suggest you read my post and learn something.

Vin LoPresti:

From what I've read in the refereed literature, I disagree with a large part of it, and COMPLETELY DISAGREE WITH

"The protective benefits of vaccination far outweigh the potential risks" UNLESS qualified with "FOR SELECTED POPULATIONS," not to include infants, young males, or pregnant females, and never to be mandated in a way that validates any discrimination against the unvaccinated, because it does NOT induce mucosal immunity and its effect on transmission is therefore minimal to nonexistent. And if you continue to call that anti-vax, you're nothing but a stubborn ass. It was clear who was the troll the instant an "anti-vaxxer" accusation casually passed from those fingers onto a keyboard.

Costa:

Lol - you cite the Australian government, which was one of the most rabid governments in the world at enforcing vaccine mandates and punishing its citizens for rightly refusing the clot shots. Just send this guy a message and tell him he did the right thing by taking the CV "vaccine": https://www.youtube.com/watch?v=7rZZTPp-eYU.

Donald Severs:

Charity, please. None of us can read minds and referencing one's own work settles nothing.

An informed public would be nice, but it would still come down to public trust. Politicians will always have the advantage through exploiting human biases. This stuff is hard.

I'll look into the Asilomar conference further if I can find another source for your concerns :)

Vin LoPresti:

Not just "referencing my own work", thank you. The Wikipedia article itself attests to the oncogenic potential of SV40 regulatory sequences. Though there's still uncertainty, there's also a ton of concern. Here's an example:

https://tlcr.amegroups.org/article/view/35999/24352

As for the SV40 DNA found in excess of regulatory limits, I cite a Canadian study in my post. This one:

https://www.researchgate.net/publication/395330536_Quantification_of_residual_plasmid_DNA_and_SV40_promoter-enhancer_sequences_in_PfizerBioNTech_and_Moderna_modRNA_COVID-19_vaccines_from_Ontario_Canada

Costa (edited):

So basically, you conduct an ad hominem attack against someone who questions the COVID "vaccines". Hmmm... Nice.

Jan Steen:

Only idiots write '"vaccines"' instead of 'vaccines'. You prefer people to die from preventable diseases, I take it. A fan of Donald T. and RFK jr., I assume.

Costa:

Another ad hominem attack. Nice again.

Steve Nevins:

That’s why I said it’s important to do … but let’s be realistic and realize we have to go well beyond ineffective policies and laws.

Joel Byron Barker:

I think this surrender mindset is precious. As if today were different from prior frontiers. We can legislate, and we can prevent harm.

Marco Masi:

I'm much more concerned about the deceitful and harmful behavior of humans using AI than about the deception of AI itself, which is a learned pattern from its training by humans. The myth of mass unemployment has no rational basis and serves as a distraction from the real dangers. The most devastating threat comes from the human use of AI for political propaganda and anti-climate change disinformation.

Jack:

Or just garden-variety scams and marketing, amped up to 11 with personalized AI. Humanity isn't prepared for what unscrupulous people can do with these tools, and will have every incentive to do.

Geoffrey Tully:

Good (and appropriate) reference to Spinal Tap.

Lorenz Granrath:

The machine is not the problem; it's the monkeys behind the machine! It depends on how it's programmed, and that is (still) done by humans.

Sugarpine Press:

I'm of the opinion that it will be people acting stupidly (using machines), rather than machines acting intelligently, that poses the broader risk over the near-to-mid term.

Denis Poussart:

No **agentic** AI process should be deployed in any critical situation - i.e., one with serious or irreparable consequences - without having a human in the loop and without unbreakable identification. In other words, in the classical Observe-Orient-Decide-Act cycle, the transition from Decide-to-Act should never be allowed unless it is supervised by a specific human process which is fully open to inspection by others. I am sure this red line definition can be written in a tight legal framework.

J. Corey:

Do you believe that human in the loop is necessary if the AI is provably and robustly able to do things better than humans? And what about in situations where response speed is a factor?

I think human-in-the-loop is generally a good thing to have in the absence of those or other factors, but I don't think it's a desirable requirement at all times.

Jack:

How would you apply that logic to a self-driving car, like a Waymo? Those AI systems make serious decisions on a more or less continuous basis.

Paul Czyzewski:

Waymo cars _frequently_ need help from remote operators. They are in no way completely self-driving.

Denis Poussart:

You are right, but Waymo keeps to well-documented, fully mapped surroundings, and Tesla requires that a driver stay aware. In either case, the right-to-act may be justified because advanced self-driving saves lives *in the long run*, i.e., if it is demonstrated that serious accidents are less frequent than for human drivers, who are often distracted.

Andy G:

So you literally just undercut your own prior statement!

So much for your “No **agentic** AI process should be deployed in any critical situation…”

Denis Poussart:

I wrote "I am sure this red line definition can be written in a tight legal framework." Reality is complex and fuzzy; that's why laws ("red lines") are not just one-liner binaries but rather nuanced directives that may emerge after extensive analysis and recognition of the context, the scope of their use, and the responsibilities they entail. Self-driving vehicles are one instance. Weapons that become fully autonomous after launch and select specific targets on their own, running through the Observation-Decision-Action sequence using a collection of advanced techniques, are another example. One could possibly argue that their "red line" should comply with the framework of the Geneva Conventions of 1949, with adjustments for modern capabilities. "Red lines" are not narrow. They have become a critical field with the development of agentic AI.

Jack (edited):

I suspect it will be quite hard to define useful "red lines" in ways that are clear and enforceable, and also don't exclude good uses of AI (like self-driving cars). I hope I'm wrong.

Russell has one or two that might be candidates (e.g., "no AI should be allowed to self-replicate on its own"), but I suspect these will only cover a small part of total AI risk. But that doesn't mean those things shouldn't be done.

Denis Poussart:

Yes, laws and legal restrictions mean little if they are not enforceable; they fall in the "necessary but not sufficient" category, like, for instance, the Geneva Convention. As to an AI that would "self-replicate on its own": it is a particular case of a system that is given the capability to autonomously cross the Decision-Action transition (to be agentic) without being "supervised by a specific human process which is fully open to inspection by others", as per my initial post. The "human-in-the-loop" is already the norm in human-critical decision making in Canada, for instance, with the provision that a decision can be appealed.

Jim Brander:

Denis, you wrote "I am sure this red line definition can be written in a tight legal framework." Your next sentence says it can't be: "Reality is complex and fuzzy, that's why laws ("red lines") are not just one-liner binaries". Who actually implements these nuanced directives -- do we have a lawyer at the programmer's shoulder? The lawyer has the same problem we all do, the Four Pieces Limit, so all you will get is a dirty mess. We need a machine to handle complex legislation, so everything gets connected as it should be (page 119 refers to page 703).

Vince F Golubic:

Thanks Gary !

Craig:

The suicide issue is absolutely non-trivial and I'm surprised nobody is really going after it.

Technoskeptic Staff:

Right now I'm more worried about shoddy AI being baked into government/military processes because capabilities were vastly oversold by Silicon Valley political donors who are counting on the USG to be a customer. Once it gets inside a government process, it is hard to get it out, even if it performs poorly.

MarkS:

Gary: where is the draft agreement? What are these "red lines"? Can you (or someone) please write them down?

Who is going to write them down if not you and the other signatories?

The fact that none of you have done it shows how impossible a task it actually is.

J. Corey:

Yeah, if you couldn't articulate hard lines to ask for, this is no more than wishful thinking.

Jan Steen:

If you want to know who has power over you, ask who is being invited to a gala dinner with the King. Answer: Sam Altman in the UK, last week (along with the equally detestable Rupert Murdoch, not to mention the guest of honour, who should have been in jail instead).

Most politicians are too stupid to understand AI and see through the empty promises by the likes of SA. The call for red lines will fall on deaf ears, I'm afraid.

Wolfgang Knorr:

The only viable path to those lines actually being respected, in my opinion, would be to essentially nationalise (de-privatise) the entire big-tech AI industry. We need primacy of the political will over economic and technical feasibility. In other words - a revolution.

RJ Robinson:

That would solve the harder part of the problem, but I doubt that governments are to be trusted either!

Oleg Alexandrov:

Not much will happen, because the systems are very basic. For now, this is more like conventional software and regression models that benefit from a lot of data, rather than "alien intelligence".

Future of Citizenship:

I encourage AI experts to engage more with the frustrating but essential UN process. Don't just watch the UNGA - join the frameworks and side events. The UN is very much a case of slowly, slowly, then all at once.

RJ Robinson:

A laughable soon. We should believe that Big AI and governments would adhere to any such agreement when we see it. Which, of course, we will never be allowed to do.

RJ Robinson:

Oops. A laudable aim...

Geoffrey Tully:

Sometimes autocorrect enhances the intended meaning.

RJ Robinson:

True, but as LLMs show, it only requires a mechanical process, without any sign of intelligence.

Amy A:

Social media has created a Dunning-Kruger epidemic, and genAI is likely to accelerate it. I'd encourage others to read Maria Ressa's work on how the powerful manipulate us with misinformation. It's people, sure, but the tools amplify the harms.

Bill Quick:

I have trouble reconciling your stance on AI, which seems to be that contemporary AI is not very capable, prone to errors of all kinds, and very limited in its potential, with your signing a petition calling for red lines to protect us all from this terrible threat of rampant digital incompetence.

Med Kharbach, PhD:

About time to muzzle the genie!

Larry Jewett:

The Gen-AI has become too big for its bubble-pants.

Larry Jewett:

Gen-AI : "Master, I will grant you 3 wishes."

Aladdin: "1) world peace 2) no more hunger."

Gen-AI : "Done!! And what is your third wish, oh Master?"

Aladdin: "Get thee back in the bottle!!"

Gen-AI: "Im sorry Dave...I mean Master. I'm afraid I can't do that."
