28 Comments

Is it clear what strategic brilliance a super-AI is even theoretically supposed to bring to war? It seems fairly easy to summarise what wins wars. In decreasing order of importance: 1. superior weapons technology (guns versus spears), 2. superior numbers (the Red Army in WW2), 3. tactical acumen and superior movement that allow a side with, within reason, the smaller numbers to achieve superior numbers locally (Napoleon), and 4. encircling the enemy (Hannibal; works only if they let you do that, of course). Orthogonal to these is logistics. Its importance varies depending on how long a war lasts, how complex the technology is (do you only need food, or also petrol and spare parts?), and how acceptable it is to plunder for supplies.

Despite the glaring incompetence of many military commanders throughout history, all of these are well within the grasp of human intellect and can be, and in fact are, taught in military academies. What would a superhuman genius add to this? Some 7D chess move that would have allowed the Inca empire to win against cannons, steel, and cavalry when it lacked all of these? Some flash of inspiration that allows a thousand people armed with automatic guns to pull off a surprise victory against a nuclear bomb that detonates above their heads? A genius of organisation that somehow enables an army to drive its tanks even when they have run out of fuel?

The underlying problem is, as so often, that the AI hypesters do not understand diminishing returns, AKA the low-hanging fruit already having been harvested. Instead, they have to cling to the belief in exponential improvements, because otherwise their cultish ideology falls apart.

Really smart big-picture comment.

Also, a superior (artificial?) intelligence would hopefully arrive at the real, unique solution to that problem, one already identified in the '80s by the sci-fi movie WarGames, which was quite prophetic AND realistic (it trains, and it answers CORRECTLY).

Spoiler warning: if you haven't seen that film, this is the ending; you may ruin your chance to see an old-but-gold movie (especially today 😉).

https://youtu.be/s93KC4AGKnY

Our current AI definitely does not train "correctly." You can improve prompts by bribing it (it has no use for money), and my general understanding is that it attempts "human simulation."

As a result, it is very janky. IMHO this is in some ways even worse than the superintelligent, godlike scenario people imagine. Imagine the world being heavily damaged by an AI firmly convinced it is Joseph Stalin and must complete perfect communism. AI doesn't really seem to "correct" itself once it is set on a goal; it just comes up with, or hallucinates, more reasons to continue pursuing it.


Fully agreed. My comment was purely ironic: an imagined semi-general AI in the '80s, as presented in that good movie, was able, via retroactive training after being shown "a new concept" (that tic-tac-toe is impossible to win), to reach "an abstraction" for the other, broader problem (identifying the best tactics to win a global nuclear war: no solution, everyone loses). Quite prophetic for the '80s, though the intention lay in the moral of the ending more than in explaining that sci-fi tech. At that time, as nowadays, every AI technology and approach (DL, supervised/unsupervised systems, SVMs, etc.) is far from any capability of real thinking on par with human/animal powers of "abstraction".

That's it.

P.S. A joke is not good if it needs to be explained 😊

Loved that example :) I was laughing out loud! It perfectly illustrates the essence of LLM shortcomings in a wonderful way, making it accessible to those who have yet to fully grasp the difference between a 'meaningful series of words' and 'comprehension of time, space, and reality' (whatever those may be ;) ).

I've listened to some of the Pentagon's top AI people on very obscure podcasts y'all will never have heard of, and our military is looking to roll out AI cautiously. For now, in any case. Other countries? At a recent arms fair, the Russians were flogging something as an AI weapon, but it was really just motion sensors and ranging gear linked to an automated gun. In other words, it would shoot anything that moved. No target discrimination. Automated war crimes. Yikes.

An LLM cannot be held accountable. Therefore, an LLM must never make a management decision.

Once again, another dagger, Gary, on the most simple or "obvious" point. ChatGPT gives you the most-known answer based on the data all users have access to. So how can there be an advantage in ANY type of strategy it develops? It's counterintuitive.

Furthermore, ChatGPT can't account for randomness, so it can in fact be counterproductive when it comes to strategy.

On-point comment.

While giving a generative model control over strategic decisions seems both stupid and ethically suspect, I think there are genuine uses for generative AI in aiding the decision making of humans conducting war. As the linked article about using AI in war discusses, gen AI could help aggregate data, feeding generals and decision-makers a more comprehensive picture of the battlefield. Not, perhaps, with ChatGPT 5, but with a version not too far off.

All the failure points of current gen AI systems will still exist, of course, and their impact on battlefield use would be significantly worse. One would hope there is great hesitancy about integrating such systems into war, but I don't have much confidence that world governments will have the patience or foresight to wait, and the need to match what the "enemy" does is going to drive such integrations in the coming years, unless we place a moratorium that is globally binding.

Worse, though, even current AI systems could be used to create malware, and to generate fake data to confuse enemy sensors and intelligence. The war in Gaza is already seeing this, but imagine floods of fake videos and photos driving propaganda. These uses would, of course, be driven by strategic considerations from humans, but it isn't impossible to imagine AI agents suggesting such tactics and helping in their execution.

So while I think fears of an AI general who comes up with military strategy that wows the world are overblown, there is good reason to fear that the use of AI in war can reshape it, and do so catastrophically.

Malware developers immediately attempted to weaponize LLMs and found them disappointing, so at least that isn't much of a threat just yet. The idea was there: code that writes code. But it turned out to be too error-prone to work.

Data-faking via astroturfing and algorithmic manipulation is much more powerful and likely, and LLMs would just add to the current problems rather than pose novel ones.

Did they? Source?

https://www.scmagazine.com/native/cybercriminals-cant-agree-on-gpts

"

- Real-world applications remain aspirational for the most part, and are generally limited to social engineering attacks, or tangential security-related tasks

- We found only a few examples of threat actors using LLMs to generate malware and attack tools, and that was only in a proof-of-concept context"

It was most helpful for script kiddies, but they were also the least able to use it. As a tool for generating alluring bullshit, it has some use.

GPT-4 actually does appear to have quite a good understanding of the game, and of how unfair it is to play this way (see https://poe.com/s/E8uFr4SV23stgtxObr6X):

"f you choose after seeing my choice, you'd have a significant advantage as you could always select the winning option against my prior choice. Rock-paper-scissors is designed to be played simultaneously to ensure a fair game where neither player knows the other's choice ahead of time. That's why it's usually played with both players revealing their choice at the same time, often with a rhythmic hand motion or countdown to synchronize the reveal.

To keep the game fair, we should "reveal" our choices simultaneously. Since we're communicating via text and I can't see or predict your choice, you can simply type out your choice and "send" it to simulate a simultaneous reveal. Would you like to play another round?"
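Incidentally, a genuinely fair simultaneous reveal over text is possible without trusting either side at all: a standard commit-reveal scheme does it. Here is a minimal sketch in Python; the function names (`commit`, `verify`) are my own illustrative choices, not anything from the thread:

```python
import hashlib
import secrets

def commit(choice: str) -> tuple[str, str]:
    """Commit to a choice without revealing it; publish only the digest first."""
    nonce = secrets.token_hex(16)  # random salt so the digest can't be brute-forced
    digest = hashlib.sha256(f"{choice}:{nonce}".encode()).hexdigest()
    return digest, nonce

def verify(commitment: str, choice: str, nonce: str) -> bool:
    """Check that a revealed choice matches the earlier commitment."""
    return hashlib.sha256(f"{choice}:{nonce}".encode()).hexdigest() == commitment

# Each player publishes a commitment, then both reveal choice + nonce.
c1, n1 = commit("rock")
c2, n2 = commit("scissors")
assert verify(c1, "rock", n1) and verify(c2, "scissors", n2)
```

Since neither player can change their choice after seeing the other's commitment, typing the choices in sequence no longer confers an advantage.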

I think about this quite a lot. If Eric Schmidt is making autonomous drones, what do we expect to happen, exactly? OpenAI is going to work with the DoD because it's very profitable, and one thing will lead to another, always because "China is doing it". So in effect, by using ChatGPT we are actually supporting what military AI will become; we're funding it.

Some of Sam Altman's moves will make the world a much more dangerous place. It's pretty well guaranteed already.

The article is a little old, and I was looking for a more recent one in which the AI had learned to master the game, including lying.

https://www.science.org/content/article/ai-learns-art-diplomacy-game

I linked to my own discussion of that article.

Great article!

"Shall we play a game?" WOPR (AKA Joshua)

Ever hear of Col. John Boyd?

OODA loop, right?

Right. I heard him speak a long time ago. People, ideas, technology, in that order. OpenAI has it ass-backwards: they start with a technology, wrap it in magical thinking, and cut people out of the loop.

Actually, it's worthwhile studying the notion of the Orient step (in Boyd's Observe-Orient-Decide-Act loop, Orient is where prior experience, culture, and analysis shape how observations are interpreted) to understand why neural-net/machine-learning systems will never reach AGI. Or achieve FSD, for that matter.

ROTFL when I read the rock-paper-scissors example. Extremely funny.

Given that it seems LLMs have already plateaued, GPT-5, when it eventually arrives (if only for marketing purposes), will most likely be only a limited advance.

I aim to amuse :)

Many writers and speakers do... (i.e., yearn for the audience's applause/recognition/etc.).

Comment removed

Indeed, the chatbots are not plateauing, but that is not what I said. What I said was closer to what you said: the LLMs that are part of the chatbots are plateauing. The future indeed seems to be that LLMs become more and more an element in a more complex architecture. But that suggests that the plateauing of LLMs themselves is accepted within OpenAI/Microsoft/Google. See https://ea.rna.nl/2024/02/07/the-department-of-engineering-the-hell-out-of-ai/

Comment removed

I find it useful to remind myself (and others) that LLMs find the most likely *continuation*. Whether that continuation is indeed an answer depends on what it means. In areas where there has been a lot of training material, the massive models can be statistically constrained such that these continuations are indeed usable as answers. I see these chatbots getting better still (but with a lot of effort); fundamentally, though, they will not become AGI.
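To make the "continuation, not answer" point concrete, here is a toy sketch; the corpus and probabilities are entirely made up for illustration, and real LLMs are of course vastly more sophisticated than bigram counts:

```python
from collections import Counter, defaultdict

# Toy "language model": bigram counts over a tiny made-up corpus.
corpus = "the sky is blue the sky is blue the sky is falling".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def continue_greedily(prompt: str, n_tokens: int = 3) -> str:
    """Always append the statistically most likely next token.
    The output is the likeliest continuation, which is not the same
    thing as a correct answer."""
    tokens = prompt.split()
    for _ in range(n_tokens):
        candidates = bigrams.get(tokens[-1])
        if not candidates:
            break
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(continue_greedily("the sky"))  # -> "the sky is blue the"
```

The model says "blue" simply because "blue" follows "is" most often in what it has seen, not because it has looked at the sky; whether that continuation happens to be true is an accident of the training data.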
