Discussion about this post

Gerben Wierda

What still isn't clear to most people is that with GenAI

(1) useful memorisation and unacceptable training data leakage are technically the same thing

(2) creativity and 'hallucinations' are technically the same thing

They are the same mechanism. We just attach two different labels to the output depending on whether or not we like or want the result.
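Point (1) can be made concrete with a toy sketch (mine, not the commenter's): a greedy bigram "model" trained on a single sentence. The same successor lookup that correctly "recalls" a fact from training also reproduces the training text verbatim, and blending two training contexts through that same lookup yields a fluent falsehood, i.e. a 'hallucination'.

```python
from collections import defaultdict

# Toy illustration only: a greedy bigram next-token model "trained"
# on one sentence that happens to contain a secret.
corpus = "the secret key is 4271 and the sky is blue".split()

# Successor table: for each token, the tokens that followed it in training.
table = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    table[a].append(b)

def generate(prompt, steps):
    """Greedily extend the prompt with the first-seen successor."""
    out = prompt.split()
    for _ in range(steps):
        successors = table.get(out[-1])
        if not successors:
            break
        out.append(successors[0])
    return " ".join(out)

# Point (1): recalling the training data IS leaking the training data.
print(generate("the secret key", 2))   # -> "the secret key is 4271"

# Point (2): the same lookup blends contexts into a confident falsehood.
print(generate("the sky", 2))          # -> "the sky is 4271"
```

The two calls use the identical mechanism; only our judgment of the output differs, which is the commenter's point.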

Earl Boebert

As somebody who has been working in computer security for as long as there has been such a field, I can give you a rough idea of how bad the situation is. Think back to the basic elements of the WWII Ultra effort. There was one target, the Enigma cipher system. Breaking it gave its adversaries *everything.* Consider the amount of time and effort the adversaries put into achieving that break.

Now we have three targets: the OpenAI facility, the Gemini facility, and the Claude facility, plus the upcoming Stargate colossus. These are being constructed by organizations that have demonstrated no appreciation for the magnitude of the threats they face and no sympathy whatever for the direct and indirect costs required to respond to such threats [1]. They are and will be the worst combination of soft target and (if they succeed in attracting enough business to be profitable) valuable target that has ever existed. Meditate on that, then consider the potential adversaries and examine the efforts those entities have mounted in this area in the past.

The true existential risk of GenAI is that it will succeed in being accepted, and by doing so will become essential. If its providers have not already been penetrated, they soon will be, and that will be catastrophic for us in the way that Ultra was catastrophic for the Germans: not through a single, massive event, but by operating at a constant disadvantage, one encounter after another, until ultimate defeat and collapse.

1. See: https://arstechnica.com/security/2024/03/thousands-of-servers-hacked-in-ongoing-attack-targeting-ray-ai-framework/

