32 Comments
Feb 7 · Liked by Gary Marcus

Is it clear what strategic brilliance a super-AI is even theoretically supposed to bring to war? It seems fairly easy to summarise what wins wars. In decreasing order of importance: 1. superior weapons technology (guns versus spears), 2. superior numbers (Red Army in WW2), 3. tactical acumen and superior movement that allow a side that, within reason, has the smaller numbers, to achieve superior numbers locally (Napoleon), and 4. encircling the enemy (Hannibal; works only if they let you do that, of course). Orthogonal to these are logistics. Their importance varies depending on how long a war lasts, how complex the technology is (do you only need food or also petrol and spare parts?) and how acceptable or not it is to plunder for supplies.

Despite the glaring incompetence of many military commanders throughout history, all of these are well within the grasp of human intellect and can be and are in fact being taught in military academies. What would a superhuman genius add to this? Some 7D chess move that would have allowed the Inca empire to win against cannons, steel, and cavalry when it lacked all of these? Some flash of inspiration that allows a thousand people armed with automatic guns to pull off a surprise victory against a nuclear bomb that detonates above their heads? A genius of organisation that somehow enables an army to drive its tanks even when they have run out of fuel?

The underlying problem is, as so often, that the AI hypesters do not understand diminishing returns, AKA the low-hanging fruit having already been harvested. Instead, they have to cling to the belief in exponential improvements, because otherwise their cultish ideology falls apart.

Feb 8 · Liked by Gary Marcus

Loved that example :) I was laughing out loud! It perfectly illustrates the essence of LLM shortcomings in a wonderful way, making it accessible to those who have yet to fully grasp the difference between a 'meaningful series of words' and 'comprehension of time, space, and reality' (whatever those may be ;) ).

I've listened to some of the Pentagon's top AI people on very obscure podcasts y'all will never have heard of, and our military is looking to roll out AI cautiously. For now, in any case. Other countries? At a recent arms fair, the Russians were flogging something as an AI weapon, but it was really just a motion sensor and ranging gear linked to an automated gun. In other words, it would shoot anything that moved. No target discrimination. Automated war crimes. Yikes.

Feb 7 · Liked by Gary Marcus

An LLM cannot be held accountable. Therefore, an LLM must never make a management decision.

Feb 7 · Liked by Gary Marcus

Once again another dagger, Gary, on the simplest, most "obvious" point. ChatGPT gives you the most common answer based on the data all users have access to. So how can any strategy it develops confer an advantage? It can't, by construction.

Furthermore, ChatGPT can’t account for randomness, so in fact ChatGPT can be counterproductive when it comes to strategy.

Feb 7 · Liked by Gary Marcus

On point comment

While giving a generative model control over strategic decisions seems both stupid and ethically suspect, I think there are genuine uses for generative AI to aid the humans who conduct war in their decision making. As the linked article about using AI in war discusses, gen AI could help aggregate data, feeding generals and decision makers a more comprehensive picture of the battlefield. Not, perhaps, with ChatGPT 5, but with a version not too far off.

All the failure points of current gen AI systems will still exist, of course, and their impact in battlefield use would be significantly worse, so one would hope for great hesitancy in integrating such systems into war. But I don't have much confidence that world governments will have the patience or foresight to wait, and the need to match what the "enemy" does is going to drive such integrations in the coming years, unless we establish a globally binding moratorium.

Worse, though, even current AI systems could be used to create malware, and generate fake data to confuse enemy sensors and intelligence. The war in Gaza is already seeing this, but imagine floods of fake videos and photos to drive propaganda. These uses would, of course, be driven by strategic considerations from humans, but it isn't impossible to imagine AI agents suggesting such tactics, and helping in their execution.

So while I think fears of an AI general who comes up with military strategy that wows the world are overblown, there is good reason to fear the use of AI in war can reshape it, and do so catastrophically.

GPT-4 actually does appear to have quite a good understanding of the game, and how unfair it is to play this way: (see https://poe.com/s/E8uFr4SV23stgtxObr6X)

"If you choose after seeing my choice, you'd have a significant advantage as you could always select the winning option against my prior choice. Rock-paper-scissors is designed to be played simultaneously to ensure a fair game where neither player knows the other's choice ahead of time. That's why it's usually played with both players revealing their choice at the same time, often with a rhythmic hand motion or countdown to synchronize the reveal.

To keep the game fair, we should "reveal" our choices simultaneously. Since we're communicating via text and I can't see or predict your choice, you can simply type out your choice and "send" it to simulate a simultaneous reveal. Would you like to play another round?"
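For anyone who wants to see just how one-sided sequential play is, here's a minimal sketch (the names `BEATS` and `counter_pick` are my own, not from the post): if you get to choose after seeing the opponent's move, you can simply map each move to the one that beats it, winning every round.

```python
# Why choosing second in rock-paper-scissors is a guaranteed win:
# each move has exactly one move that beats it.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def counter_pick(opponent_choice: str) -> str:
    """Pick the option that beats the move the opponent already revealed."""
    return BEATS[opponent_choice]

for move in BEATS:
    print(f"Opponent plays {move}; second player plays {counter_pick(move)} and wins.")
```

That is the whole of the "strategy" the cheating player needs, which is why the game is only fair when both choices are revealed simultaneously.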

I think about this quite a lot: if Eric Schmidt is making autonomous drones, what exactly do we expect to happen? OpenAI is going to work with the DoD because it's very profitable, and one thing will always lead to another because "China is doing it." So in effect, by using ChatGPT we are actually supporting what military AI will become; we're funding it.

Some of Sam Altman's moves will make the world a much more dangerous place. It's pretty well guaranteed already.

The article is a little old; I was looking for a more recent one in which the AI had learned to master the game, including lying.

https://www.science.org/content/article/ai-learns-art-diplomacy-game

"Shall we play a game?" WOPR (AKA Joshua)

Ever hear of Col. John Boyd?

ROTFL when I read the rock-paper-scissors example. Extremely funny.

Given that LLMs seem to have already plateaued, GPT-5, when it eventually arrives (if only for marketing purposes), will most likely be only a limited advance.
