Estimated time to read: 10 minutes
By Marcus McNabb
Quantitative analysis is a powerful tool, but it will never precisely solve the riddle of warfare despite repeated promises to the contrary.
The US Air Force has a history of over-emphasizing quantitative analysis in warfare, seeking to reduce war to a mathematical equation that can then be solved. However, warfare is inherently a human endeavor with countless complexities. This inherent complexity means warfare does not yield to strict objective reductionism. Instead, quantitative analysis must be used for what it is – a powerful decision-making aid – but we must also realize it will never precisely solve the riddle of how to execute warfare, despite oft-repeated promises and beliefs to the contrary. What follows is an examination of historical pitfalls of over-reliance on quantitative analysis, with the aim of providing insights on how to avoid these mistakes in the future.
The period between World Wars I and II was a turbulent era for airpower, with early advocates struggling to establish a clear doctrine on the proper employment of airpower. In Great Britain, the Air Ministry recognized the need for scientific input “to make better sense of operations.” One of the key documents to emerge from the US was the Munitions Requirements for the Army Air Force from the Air War Plans Division, known as AWPD-1. This plan sought to identify how to most effectively employ airpower in “the breakdown of the industrial and economic structure of Germany” through “destruction of precise objectives.” The plan further reduced the problem to target sets with a specific number of targets and then calculated the number of aircraft required to achieve destruction of these targets. For example, AWPD-1 claimed that the destruction of 50 power plants and switching stations would reduce Germany’s electrical power capacity to approximately 20% of its current capacity, with “industrial power to key manufacturing centers…almost completely shut off.” The plan called for 32 groups of heavy or medium bombardment aircraft to accomplish this task and allowed for a 20% attrition rate per month. This number was calculated using range-based bombing accuracy, with an allowance for reduced accuracy in wartime by claiming “the force required in war time is thus…5 times that for peace bombing.” The planners applied the most advanced statistical methods available at the time to historical data and tried to forecast force capabilities and requirements. The planners even acknowledged that only a “smattering of facts” was available from which they could make deductions. Despite these acknowledgements, bomber zealots were eager to prove their theories and began to implement this plan when the US became involved in the European theater in 1942.
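The flavor of this force-sizing arithmetic can be sketched in a few lines. The multipliers below follow the ratios quoted above (a wartime accuracy penalty of 5x and a 20% monthly attrition allowance), but the target count pairing and the sorties-per-target figure are purely illustrative, not AWPD-1’s actual worksheets.

```python
# Illustrative sketch of AWPD-1-style force-requirement arithmetic.
# The 5x wartime multiplier and 20% monthly attrition come from the
# text above; the sorties-per-target figure is hypothetical.

def required_sorties(targets: int,
                     peacetime_sorties_per_target: float,
                     wartime_multiplier: float = 5.0,
                     monthly_attrition: float = 0.20,
                     months: int = 1) -> float:
    """Estimate total sorties: inflate peacetime accuracy figures for
    wartime conditions, then add replacements for expected losses."""
    base = targets * peacetime_sorties_per_target * wartime_multiplier
    replacements = base * monthly_attrition * months
    return base + replacements

# e.g. 50 power plants at a hypothetical 10 peacetime sorties each:
print(required_sorties(50, 10))  # 3000.0 (2500 base + 500 attrition)
```

The neatness of such a calculation is precisely its danger: every input is a point estimate, and as the next section shows, the real-world values for accuracy and attrition turned out to be wildly different.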
History would eventually illustrate several flaws in this planning document. First, the bombing accuracy was grossly overstated. Error by American bomber pilots in practice scenarios on bombing ranges averaged about 200 feet. Error in actual wartime was sometimes measured in miles. The figurative fog of war, as well as the literal fog of northern Europe, had a far bigger impact than planners foresaw. Second, the attrition rate was far higher than the 20% per month for which planners accounted. The most striking example of this misestimate was the Allied attack on a German ball-bearing plant in 1943 as part of the Combined Bomber Offensive. Of the 291 Allied aircraft tasked with the raid, 67 were lost and 138 were damaged, meaning roughly 70% of the aircraft were lost or damaged in a single day. Third, the entire plan was based on industrial web theory, with the assumption that striking key nodes of a country’s industrial base would cripple its capacity to conduct war. Implementation did not produce the robust results for which air planners hoped. “Bombing interrupted in arbitrary and unpredictable ways the web of supplies of materials and parts on which the whole industrial structure depended.” Despite the Air Corps’ best efforts at analysis, the evidence showed they fundamentally did not understand or correctly apply their “Industrial Web” theory, or the theory itself was fundamentally incorrect. Therefore, what began as a prescient observation of the need for more rigorous analysis ended in over-reliance on scientific deduction and dogmatic execution of an inadequate plan.
A similar attitude appeared again in the US approach to the conflict in Vietnam in the 1960s. The Secretary of Defense, Robert McNamara, “was famous for his love of quantitative analysis,” a love stemming from his time working for the Ford Motor Company. McNamara thought that with enough computers and analysts he could produce “an optimal strategy in war.” However, history would again show that there is a vast difference between an industrial production line and warfare. An industrial production line has inputs that are easily quantified and forecasted, with known outputs where variances and errors can be identified by deductive analysis of the process. The conflict in Vietnam was fundamentally different. McNamara’s analysis failed to account for human dedication to a cause: the Vietnamese were unconcerned about Communism but were fanatical about their independence, having just shed their French colonial shackles 10 years earlier. McNamara’s mental model also failed to account for the differences between conventional and guerrilla warfare. If Industrial Web theory is difficult to employ against a conventional enemy, it is nigh impossible against an enemy employing guerrilla tactics that does not rely on a vast industrial base to support its fighting force. McNamara’s belief that his model had delivered a solution to the Vietnam War enabled a rigid hierarchical leadership style that only exacerbated the problem by attempting to tactically control air campaigns while muting key inputs from the front lines. McNamara and President Lyndon B. Johnson routinely dictated target sets as well as targeting information. As a result, they were attempting to solve the Vietnam problem from half a world away with the wrong framework for the problem. Combined with a top-heavy, centralized management style that stifled disagreement, McNamara’s “optimal” strategy was an utter disaster, resulting in nearly 60,000 US troops dead and a Communist-controlled Vietnam.
The US Air Force fell victim to this mindset once again in the Gulf War in 1991. Colonel John Warden developed and applied an analytical planning process known as the Five Ring Theory, claiming its proper application in concert with modern weaponry—namely stealth technology and precision guided munitions—could push the enemy system beyond hysteresis and produce strategic paralysis. Warden’s “detached analysis and methodical approach” looks very much like an evolution of the Industrial Web theory employed in World War II. A central criticism of this theory is that it ignores the complexities of warfare and is oblivious to “the fundamentally human nature of warfare.” In the Gulf War, this manifested in the planners’ failure to account for the durability and survivability of Saddam Hussein’s grasp of power in Iraq, which was based primarily on a “balance of military, Ba’ath Party, and Tikriti tribalism.” While airpower did “contribute materially to the success of the war,” due in large part to Warden’s theories and planning acumen, the strategic paralysis he predicted did not occur.
Today’s air planners risk making these same mistakes. Consider, for example, the Course of Action (COA) Analysis step in the Joint Operational Planning Process for Air (JOPPA). Planners are encouraged to use a numerical weighting schema to objectively compare the different COAs under consideration. The COA that is eventually selected must essentially “win” this showdown over alternate COAs. The result is typically a cognitive bias—perhaps intentional, perhaps not—in which the selection criteria are chosen and weighted in such a way that they reinforce the preferred COA. Notwithstanding the lack of analytical rigor and expertise in most planning shops, this approach misses the entire point of this step in the process. In fact, this step is admittedly still a subjective comparison in which the goal is to provide a commander insight into which COA has the best chance of success relative to desired objectives and risk concerns. By taking an overly analytical approach, air planners too often focus on determining a solution to the numerical problem formulated in this step. The correct approach is instead to inform the staff on the strengths and weaknesses of each COA, highlight areas of each COA that may need further work, and identify decision points where a branch or sequel should be planned. Ultimately, this information informs the commander’s decision-making much more thoroughly than simply highlighting a COA as a winner relative to a few simple criteria.
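The weighting pitfall described above is easy to demonstrate. In the sketch below, the COAs, criteria, scores, and weights are all hypothetical; the point is simply that the “objective” winner flips when the weights change, even though the underlying scores never do.

```python
# Hypothetical COA decision matrix: scores (1-10) per criterion.
# All names and values are illustrative, not drawn from doctrine.
coas = {
    "COA 1": {"speed": 8, "risk": 4, "cost": 6},
    "COA 2": {"speed": 5, "risk": 8, "cost": 7},
}

def winner(weights: dict) -> str:
    """Return the COA with the highest weighted total score."""
    totals = {name: sum(weights[c] * score for c, score in scores.items())
              for name, scores in coas.items()}
    return max(totals, key=totals.get)

# Weighting speed heavily makes COA 1 the "objective" winner...
print(winner({"speed": 0.6, "risk": 0.2, "cost": 0.2}))  # COA 1
# ...while weighting risk heavily flips the result to COA 2.
print(winner({"speed": 0.2, "risk": 0.6, "cost": 0.2}))  # COA 2
```

Since a planning shop chooses both the criteria and the weights, the matrix can be steered toward almost any preferred answer, which is why the scores are better treated as a conversation starter about each COA’s strengths and weaknesses than as a verdict.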
Another example in the US Air Force is Air Force Instruction (AFI) 38-401, Continuous Process Improvement (CPI). This AFI mandates the use of “several widely accepted process improvement methodologies, including lean, six sigma, theory of constraints, and business process reengineering. Key principles contained in these methodologies include improving flow and reducing waste within a process, focusing on factors that degrade product quality, identifying and overcoming process constraints, and redesigning processes.” The aim of CPI is “to define problems; measure, manage, and monitor performance; and strategically align organizational goals, objectives, and project selection.” This is an important part of organizational management. However, these methodologies are all similar to McNamara’s analytical process in that they were developed for implementation in industrial factories and businesses. Thus while these tools are valuable, we must recognize the limitations of such methodologies. Just as AWPD-1 and McNamara’s “optimal strategy” each showed, none of these processes is a “silver bullet” to replace the human judgment and expertise required of leaders. Rather, these processes should inform appropriate decision-makers to challenge assumptions and support decisions using quantitative methods.
The near future is likely to see the implementation of “cognitive warfare,” in which machine learning and artificial intelligence are leveraged in gathering, transferring, filtering, analyzing, and interpreting information. But just as we have seen throughout history, this next step will not reduce warfare to an optimal solution. Increased analytical capabilities brought about through cognitive warfare will not change the reality that human nature cannot be reduced to an analytical function. Artificial intelligence “can detect structure in data, but it cannot assess or compare values within rapidly changing social contexts.” Rather, these new capabilities are best utilized in “adding the additional clarification and context we need to make wise decisions.” These methods are best utilized to help us discern sources of continuity and change, distill causal factors in complex and dynamic environments, and guide us to sound strategic, operational, and tactical decisions. The US military must remember that warfare cannot be solved like a mathematical equation, for it is inherently a human endeavor, complex and ever-changing. Analytical methods can help allocate scarce resources and can help us prepare, plan, analyze, and assess operations, but we must not allow such methods to usurp human judgment.
Marcus McNabb is an Air Force officer with over 13 years of academic and operational experience as an operations research analyst. He holds a PhD in Operations Research from the Air Force Institute of Technology and has worked in a variety of jobs including test and evaluation, staff of US Air Forces in Europe, the Air Force Nuclear Weapons Center, and the 609th Air and Space Operations Center.
The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Air Force or the U.S. Government.