One major problem with this adage is that it can lead otherwise peaceful nations into the nastiness of arms races, brinkmanship, or even unwanted war. The problem lies in the conservative thread that underlies this behavior: if you don’t understand the opponent, then you must prepare for the worst, since you won’t be able to read the signs that they are about to attack you.

Consider:

  • Country A, Country B and Country C are opponents.
  • Country B lies between A and C.
  • Country A has always maintained a strong conventional military presence on the border with B as a deterrent/defense against attack.
  • Country B has always maintained its land forces at the borders of A and C for the same reason.

Now suppose Country C develops and tests an IRBM (intermediate-range ballistic missile). If B and C have a longstanding grudge, then Country B is suddenly in a vulnerable position; logically, it begins to pursue IRBMs as well in order to deter C. A, however, sees B begin to develop a technology that threatens it, and by this adage must assume that B is building that capability in order to use it against A. A has the following choices:

  • Ignore the situation, which is unwise according to this node’s prescription.
  • Start developing its own IRBMs to counterbalance Country B. This might then drive Country B, in turn, to try to maintain enough forces to offset both A and C, which would mean that it had enough forces to perform a splendid first strike on either one singly... and so it goes.
  • Pre-emptively attack B using its existing forces in order to avoid falling dangerously behind in the missile race.

The conflation of capability with intent is the base cause of arms racing and conflict spiraling (where small countermoves beget additional countermoves, which beget larger moves, and so forth). One reason this sort of analysis continues to get done is that, as the adage itself admits, it is far easier to count beans than to understand an opponent. It is much easier to say to one’s superiors ‘Here is their order of battle and I’m 95% sure of the totals; if I’m wrong, it’s intelligence’s fault’ than to say ‘Here is what we think they want and what they will do.’ While the second is more useful, it is also much easier to get wrong, which no analyst wants to do and no decision maker likes to risk.
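
This spiral can be made concrete with a toy model. Lewis Fry Richardson’s classic arms-race equations capture exactly this feedback: each side’s buildup is driven by its rivals’ existing arsenals (capability), not by any judgment of their intent. Below is a minimal, discrete-time sketch of that idea for the three countries above; the reaction and fatigue coefficients, starting force levels, and rivalry map are all hypothetical, chosen only to show how small countermoves compound.

  # Toy Richardson-style arms-race spiral among A, B, and C.
  # Each country builds up in proportion to its rivals' capabilities
  # (reaction coefficient k), restrained only by the cost of its own
  # arsenal (fatigue coefficient f). All numbers here are assumptions.

  RIVALS = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}   # who counts whom as a threat
  k, f = 0.6, 0.2                                      # reaction / fatigue (hypothetical)
  forces = {"A": 100.0, "B": 100.0, "C": 120.0}        # C has just fielded IRBMs

  for year in range(10):
      updated = {}
      for country, level in forces.items():
          perceived_threat = sum(forces[r] for r in RIVALS[country])
          # Capability is read as intent: the buildup responds to the
          # rivals' arsenals regardless of what those rivals actually plan.
          updated[country] = level + k * perceived_threat - f * level
      forces = updated
      print(year, {c: round(v, 1) for c, v in forces.items()})

Because the assumed reaction coefficient outweighs the fatigue coefficient, every country’s total climbs without limit, and B, which must offset both A and C, climbs fastest: exactly the ‘and so it goes’ of the scenario above.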