I’ve been meaning to write a quick post about the question of when a nation should go to war, and when it should not — and in particular, under what conditions the United States should use large-scale military force against another country. I don’t mean the question of whether a war is legal under the international humanitarian law governing jus ad bellum. I mean the question of when large-scale military engagement is a good idea, something that the public should support. It’s not inconceivable that there are situations when military force is a good idea even though the legal basis is unclear or lacking — such as Kosovo in 1999, or maybe Libya in 2011 — and there are also, certainly, situations when the legal grounds for a war exist, but going to war would be unwise — such as attacking Russia in response to its annexation of Crimea last year.
Based on the armed conflicts involving the United States during my lifetime, it sometimes seems as though the wisdom of entering or not entering an armed conflict gets determined in retrospect, based on how the war turned out — which doesn’t seem like a useful or fair standard for judging wisdom. No one seems particularly bothered about Desert Storm, looking back, although many progressives at the time (including, for example, Joe Biden) opposed military intervention. On the other hand, many people seem to feel that the United States should have intervened in Rwanda to stop the genocide, although there was no great progressive push to do so at the time. It’s hard to avoid the conclusion, looking at attitudes toward U.S. uses of force over the last few decades, that we tend to treat decisions about wars as good decisions when they turn out well, and treat them as bad decisions when they don’t. But we often can’t know in advance how a war, or the choice not to go to war, will turn out — wars are notoriously unpredictable, and often develop their own momentum, and motivations and expectations frequently change — so how are we supposed to decide what to support beforehand?
The idea I’ve been meaning to post is an answer to this question. It’s a fairly simple one, and it may already appear somewhere in the literature on just war. But I’ve never come across it before.
The most basic split in moral philosophy is between something like consequentialist reasoning (e.g., utilitarianism) and something like principle-based (or “deontological”) reasoning (e.g., Kant). There’s a school of moral reasoning based on virtue as well, but I’ll leave that and other alternatives to the side for a moment.
A consequentialist approach to deciding when to go to war (or to support going to war) would be to try to make the best possible calculation in advance of the likelihood that the benefits of the war, in every respect, will outweigh the costs of the war, in every respect. If and only if the benefits outweigh the costs should the state go to war. A deontological approach to the decision, by contrast, might insist that a state should go to war if and only if some grave principle has been violated — for example, when a state invades another state without just cause, or when a genocide is taking place.
The weakness of the consequentialist approach is that wars, as already mentioned, are exceptionally unpredictable. They often end up inflicting far greater suffering (costs) than expected, as in the Iraq War beginning with the 2003 invasion. Sometimes they also inflict less suffering than expected — as arguably happened early on after the 2001 invasion of Afghanistan. Just as importantly, because wars often disrupt and reshape history on a large scale, their benefits are extremely difficult to calculate in advance as well. Calculating the costs and benefits of wars appears to be a good example of what Keynes called “irreducible uncertainty,” as opposed to risks whose probability can be calculated.
I remember Paul Wolfowitz making a consequentialist argument for the Iraq War in 2002. A journalist who interviewed him at the time described it this way:
Wolfowitz says that he agonizes a good deal over the dangers of dispatching Americans to war, that he respects the traditional conservatism of men in uniform who know the Antietams of the globe firsthand. Interventions that are only indirectly about American interests, like Somalia, he says, should be “as close to risk-free as possible,” and, he suggests, “maybe somewhere along the way we should have a volunteer force that is specifically volunteering for missions other than defending the country.” …
Wars that defend our safety may command a higher price. What price? Would the danger posed by a nuclear-armed Saddam be worth, say, the lives of thousands of American soldiers, if that is what the experts estimated it would take to disarm him by force?
Wolfowitz posed the question himself and answered no. Weapons of mass destruction would not be enough to justify the deaths of thousands of Americans. And in any case, thousands killed would mean the mission had gone badly wrong.
But Wolfowitz was not letting the discussion end there. Later, he e-mailed me an afterthought about that grisly calculus of going to war against Iraq.
“So if that’s what you estimate the costs of action to be, then you have to have something more on the other side of the ledger than just the possession of weapons of mass destruction,” he wrote. Whether that “something more” that would justify that greater sacrifice meant evidence that Iraq was on the verge of using its weapons, or the prospect of establishing Iraq as an outpost of democracy, or a smoking gun tying Iraq to Sept. 11, he did not specify. “In the end, it has to come down to a careful weighing of things we can’t know with precision, the costs of action versus the costs of inaction, the costs of action now versus the costs of action later.”
Here we see an illustrative example of the weakness of consequentialist reasoning as a basis for deciding to go to war.
The weakness of the deontological approach suggested above lies in the practical impossibility of assembling a list of principles whose violation would make war not only just, but a good idea. Take, for example, the principle that one country should not be allowed to invade another country without just cause or legal basis. What could be a more clear-cut deontological justification for war? But as the case of Russia in Crimea shows, sometimes starting a war on this basis would be a horrible idea.
How do we know this? Because of consequentialist reasoning.
It seems to me that my own intuitions about when to support a war are best captured by the following principle: a state should go to war if and only if the war is justified under both consequentialist and deontological reasoning.
Of course, a lot will depend on which kinds of consequentialist and deontological reasoning one uses. But that’s not my interest here. My thought is that due to the unique characteristics of war as a moral question — its capacity to feed upon itself and spiral out of control, the irreducible uncertainty of its costs and benefits, and the moral gravity of its costs — the decision to go to war should be based on two hurdles, one consequentialist and one principle-based.
If the principle-based conditions are satisfied, then we can at least know — no matter how much worse the war turns out than we expected — that the war was, in principle, just. The consequentialist side of the analysis reduces the likelihood that we will go to war when doing so would be just but unwise.
Applying this two-part principle (in a very rough way, of course, since the details of the consequentialist and deontological reasoning haven’t been specified) leads more or less to the outcomes that I find intuitively right — and in a way that using consequentialist or deontological reasoning alone does not. The two-part principle counsels against the wars that I think should have been seen as unwise, even at the time, and in favor of the wars that I think should have been seen as wise.
If the goal is avoiding regret at one’s decision to support or oppose a military action, this two-part principle seems to be the best approach, at least in my own case.
I would argue that the Iraq War beginning with the 2003 invasion lacked an adequate basis in principle — and should have failed under a consequentialist analysis as well, although few people in the United States knew enough about insurgency, counterinsurgency, and the history, ethnography, and culture of Iraq and its neighbors to competently carry out that calculation in 2003. I certainly didn’t.
A final thought: there might be a corollary to the reasoning in this post for situations where the moral (and other) stakes are very low, and where consequentialist reasoning would be very time-consuming (or unlikely to produce reliable results). It might be best in those situations to pursue neither consequentialist nor principle-based reasoning, but instead to follow some simple heuristic in making the decision — a rule-of-thumb, like ordering the most popular product on a website when you’re unfamiliar with the product category, rather than doing independent research. I suppose heuristics could be viewed as shorthand versions of principle-based or consequentialist reasoning — the consequentialist could say that she uses the heuristic because the benefits of doing so outweigh the costs, the deontologist could argue that the heuristic is itself a kind of principle — but couldn’t a case be made that heuristic-based reasoning belongs in a category of its own? This seems to be Gerd Gigerenzer’s approach.
But perhaps the use of heuristics, and for that matter “following your gut” (which seems to be most effective in situations where it is possible to develop practical expertise through repeated encounters with closely related scenarios), belong to a different conversation than the conversation in this post, which is about arriving at an optimally low level of regret concerning a certain type of moral decision (the decision to go to war), not about optimizing decision-making processes in general. Both consequentialist and deontological reasoning are forms of reason-based, deliberative decision-making, while the use of heuristics and following your gut can be seen as alternatives to that.