We Wasted Money on the “War On Terror” Because We Ignored Data
Written by Iziah Thompson
Published 21 January 2019
(Banner image credit: Larry Downing/Reuters, via Newsweek)
Since the September 11th attacks on the U.S., the War on Terror and mass violence in America have kept the public's attention. It isn't surprising that the ongoing conversation surrounding both of these topics dwarfs that of less hot-button issues – say, tax policy – as measured by data from Google searches.
In 2017, almost half of Americans wanted to increase anti-terrorism spending, but is that warranted? We know that Americans often perceive much greater risk regarding violent crimes than really exists. According to a Pew Report, the American public perceives crime to be more rampant than the data show.
Violent crime has generally trended downward since the early 90s. But in 18 of the 22 Gallup surveys on violent crime conducted since 1993, at least 60 percent of Americans surveyed said there was more crime in the U.S. compared with the year before… The salience of mass violence could be an effect of the availability heuristic – which causes us to overestimate the likelihood of whatever is fresh in the mind – and perhaps also of the affect heuristic, our tendency to act on emotion rather than accurately calculate the true risk of things.
Given this, it’s important to consider the actual risks of issues like terrorism or anything else that harms us. Are we correctly assessing these risks? And more broadly, does government spending accurately reflect these risks?
A Misunderstanding of Risk Versus Reality
Americans should know that terrorism no longer looks the way it used to. The RAND Corporation, a leading strategic global policy think tank, recently published a report on the differences between today's terrorist recruits and those of the past. The terrorist threat increasingly doesn't fit the stereotype historically associated with violent extremism: recruits are more likely to be white or black residents of the United States than Arab. Young men with little education who converted to Islam remain susceptible to radicalization, but recruits are increasingly likely to be women.
This changing face of extremist terror is evident in the capture of Warren Christopher Clark, 34, a former substitute teacher from Texas who was fighting for ISIS in Syria. More recently, on January 9th, the first American minor — a 16-year-old boy — was caught on the remaining portion of ISIS-controlled land. It is no coincidence that as ISIS's territory dwindles, reports of American and European nationals getting involved with the violent movement increase. Research suggests an enhanced threat posed by extremists returning to their countries of origin in the wake of the Islamic State's defeat. While the majority of terrorist attacks are still carried out in the Middle East, the threat of homegrown terror is more pertinent to the United States. In fact, the emphasis on the Middle East may be siphoning attention from other types of extremism as the profile of terror changes.
A report published last year by the Anti-Defamation League (ADL) analyzed 150 terrorist acts carried out by right-wing extremists in the United States: "More than 800 people were killed or injured in these attacks." Analyzing all attacks motivated by extremist ideologies, the ADL found that from 2007 to 2016, "372 people were killed by some type of extremist violence; 74 percent of these were the result of right-wing extremists — like white supremacists and militias." They noted a spike in such attacks since the election of President Barack Obama, and that many shared in their targeting of Jewish, Muslim, and African American populations. Eleven percent of the domestic terror events they analyzed were carried out by anti-abortion extremists. The point here is that while preventative efforts should be directed towards Islamic extremism, which has motivated horrific bombings and shootings, we must also understand the opportunity cost that comes with ignoring trends in risk. The ADL report is illuminating in informing Americans about attacks that never made the nightly news outside their locality.
An Out-of-Balance Opportunity Cost
As mentioned, violence in the U.S. is trending downward. Other threats, though, have risen. Last year the Pentagon released an unclassified 11-page report on America's top national security threats. Russia and China were among the preeminent threats, surpassing acts of terrorism.
The perceived threat from China is represented by what are seen as predatory economic strategies and real military engagements in the South China Sea. The Pentagon worries that Russia wants "to shape a world consistent with their authoritarian model—gaining veto authority over other nations' economic, diplomatic, and security decisions." One major change in this report was the dropping of climate change, global warming, and the effects of sea level rise. Climate change had been mentioned in DOD reports since the Bush Administration and had risen in importance every year. This should not be taken as evidence that the threat of climate change has diminished, but as a political decision unrelated to any qualified, science-backed risk assessment. Now, having contextualized the terror risk, it's time to look at the numbers and the bigger picture.
Robust Data Betray the Fallacy
Fortunately, terrorism is far from a leading cause of death in the United States. Data from the Centers for Disease Control and Prevention show that from 1999 to 2017, heart disease, cancer, and unintentional injuries were the biggest causes of death for American men. For women, the ranking is the same except that chronic respiratory disease takes the place of unintentional injuries. It has long been known that women, on average, live healthier, longer lives than men; accidents – falls, alcohol-related car crashes, and other preventable events – plague men at much higher rates than women. When dissecting the data by race, we see that diabetes is a major killer of African Americans and American Indians, especially women.
Attacks with firearms sit far down the causes-of-death list. In comparison to other causes, being shot or shooting one's self is not a huge threat: malnutrition and car crashes each kill more Americans than gun homicides do, and deaths from gun-wielding extremists wouldn't even be visible on a graph of these leading causes.
In terms of acts of terror, there is no objective way of defining what counts as one. The National Consortium for the Study of Terrorism and Responses to Terrorism (START) seems to be an accepted authority on terrorism data. According to their data, there were 10,900 terrorist attacks worldwide in 2017, responsible in all for taking 26,400 lives (including those of the perpetrators). This marks a decrease from the year before, which was itself a decrease from 2015. In 2017, the year of the horrific Las Vegas shooting, 95 people were killed by terrorist attacks in America (granted, some do not count this as a "terrorist" event). From a policy perspective, the increase in the American death toll from 2014 to 2017 is very bad, but it must be taken in context with other threats.
Say we are considering deaths from terrorist acts from 1975 through 2015, including the 9/11 attack. Let’s narrow the scope to attacks linked to non-Americans, since this is the most politically charged area of interest. In that time span, the likelihood of an American dying from such an attack was 1 in 3.6 million, or 0.028 per 100,000 people. Now, let’s compare this to other causes:
Gun Homicides – about 4.5 per 100,000 people – 162 times greater
Traffic Deaths – about 12.5 per 100,000 people – 450 times greater
Heart Disease – about 196 per 100,000 people – 7,056 times greater
(Note: The time spans for these statistics are not the same, and the terrorism figure shrinks considerably when calibrated to a comparable window. The figures presented still get the point across.)
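For readers who want to check the arithmetic, the multiples above are just each cause's death rate divided by the terrorism rate. A minimal sketch in Python, using the rounded rates quoted above (the article's multiples of 162, 450, and 7,056 were evidently computed from a less-rounded terrorism rate, so these come out slightly lower):

```python
# Back-of-the-envelope check of the risk multiples above.
# All rates are deaths per 100,000 people, as quoted in the article.

TERROR_RATE = 0.028  # foreign-linked terror attacks, 1975-2015

other_rates = {
    "gun homicides": 4.5,
    "traffic deaths": 12.5,
    "heart disease": 196.0,
}

for cause, rate in other_rates.items():
    multiple = rate / TERROR_RATE
    print(f"{cause}: ~{multiple:,.0f}x more likely than a foreign-linked terror attack")
```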
Inherent in this entire analysis is a concept lost on many commentators on causes of death, risk, and government's place in preventing them: simply rattling off risk numbers or rates says nothing about government's success in protecting its residents, because no counterfactual is provided. A counterfactual is the condition, rate, or risk that would exist if government did nothing. In the case of terrorism, that means considering how much terrorist violence there would be absent government action. Perhaps the death toll would rival that of heart disease if government didn't do the job it currently does. For this reason, we should refrain from treating the claim that Americans are more likely to be "hit by buses or killed by vending machines" as valid proof of wasted money, because government vigilance could very well be providing the buffer between the counterfactual and the much safer reality we have. We will consider the counterfactual, but first let's look at general government spending.
Vastly Disproportionate Government Spending
Before jumping into the question "Are we putting money towards the greatest threat?", it's important to realize that gross spending does not have a 1:1 relationship with prevention. Government can do more than just spend money to mitigate risks; sometimes throwing money at a problem is not the right answer. For example, in the case of deaths from cardiovascular disease (CVD), which have been in radical decline since the '60s, improved health protocols have contributed immensely to saving lives. (There is also evidence of a stall in the CVD decline due to an increase in risk factors like obesity.) Mitigating traffic accidents is similar. In 2015, the National Highway Traffic Safety Administration released a report updating the public and officials on the state of traffic deaths and providing policies that would go a long way in preventing many accidents, including automated red-light enforcement, automated speed-camera enforcement, and alcohol interlocks. Enhancing these measures does require funding, but again, general spending numbers aren't useful by themselves. Still, even if government spending is not the silver bullet of prevention, it remains a crucial part of the conversation about risk.
The U.S. Treasury divides all federal spending into three groups: mandatory spending, discretionary spending, and interest on debt. Mandatory and discretionary spending account for more than ninety percent of all federal spending and pay for all of the government services and programs on which we rely. Interest on debt, a much smaller amount than the other two categories, is the interest the government pays on its accumulated debt, minus interest income received for assets it owns. In 2015, almost 65 percent of the budget's funds were spent before a single budget item was discussed: $2.45 trillion went to mandatory spending. Six percent, or $229.15 billion, went towards paying debt interest, and 30 percent, or $1.11 trillion, went towards discretionary spending.
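The three shares above can be sanity-checked with a few lines of arithmetic. This sketch uses only the dollar figures in the paragraph; the discretionary share comes out just under 30 percent, which the article rounds up:

```python
# Sanity check on the 2015 federal budget shares quoted above.
# Figures are in trillions of dollars, taken from the paragraph.
mandatory = 2.45        # "almost 65 percent"
interest = 0.22915      # "6 percent"
discretionary = 1.11    # "30 percent"

total = mandatory + interest + discretionary  # roughly $3.79 trillion

for name, amount in [("mandatory", mandatory),
                     ("interest on debt", interest),
                     ("discretionary", discretionary)]:
    print(f"{name}: {100 * amount / total:.1f}% of total spending")
```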
Of the discretionary budget, $66 billion, or 6 percent, was spent on Medicare and other healthcare expenses; $65 billion, or a little less than 6 percent, on veterans' benefits; and about $600 billion – almost 54 percent – on military expenditures.
On the mandatory side, approximately $986 billion was spent on Medicare and other health-related expenses. Social Security and other earned-benefit programs take up a large portion of mandatory programs: at $1.35 trillion, entitlement spending makes up almost 49 percent of it. Now, let's take a closer look, focusing on heart disease and terrorism, since the raw risk figures for the two are so wildly different and medical and military spending are both so robust.
Obviously, on the back end, we spend massive amounts of money treating and dealing with the consequences of the causes of death considered in this article, so it makes sense to focus on front-end prevention spending. We can look at National Institutes of Health funding for the disease-prevention figures. In 2017, the NIH spent $1.37 billion on heart disease research. The funds went primarily to research hospitals and medical schools to carry out studies, like Brigham and Women's Hospital's Molecular Mechanisms of Stretch-Induced Electrical Remodeling in the Heart, funded to the tune of $249,000. Some of the funding goes to private organizations as well, like the $3 million given to VADovations, Inc., a start-up producing mini blood pumps for children (their project: Small Blood Pumps for Small Patients).
The American Heart Association credits NIH funding for the severe drop in the heart disease death rate over the past century. But they also, unsurprisingly, aren't happy with current spending levels: "Even though heart disease and stroke account for 23 percent and 4 percent of all deaths respectively, the National Institutes of Health (NIH) invests a meager 4 percent of its budget on heart disease research, a mere 1 percent on stroke research and only 2 percent on other CVD research."
In sheer dollar amounts, heart disease ranks 39th among the NIH's research funding categories, so the organization has a point, given that heart disease is the leading cause of death. On the other hand, the ranking can be misleading: many of the categories funded ahead of heart disease, like nutrition and aging, are components of heart disease risk, and the single highest-funded category, prevention, actually includes studies on heart disease – take the University of Washington's Safety and Tolerability of the Nutritional Supplement, Nicotinamide Riboside, in Systolic Heart Failure, or Massachusetts General Hospital's The Role of Galectin-3 in Cardiac Remodeling and Heart Failure. So perhaps the NIH's funding priorities don't suggest that heart disease is the 39th most important problem, but rather that high-cost areas like nanobot tech may be promising enough to justify the opportunity cost of an innovative medical future.
The Expense of the War on Terror
The $600 billion in 2015 discretionary funds includes allocations to various military endeavors. But paring government counterterrorism spending down to a single number is difficult because of problems in tracking funds, defining spending items, and general transparency.
However, the Stimson Center, a nonpartisan global security policy research center, estimated that the federal government has spent about $2.8 trillion on counterterrorism activities since 9/11. Of the $600 billion spent on the military in 2015, they estimate that $146.3 billion went to counterterrorism. That number has only risen since, reaching $174.7 billion in 2017.
The fact that the U.S. has spent 16 percent of its discretionary budget on counterterrorism since September 11th may be appalling to many, because whatever we think politically about the last decade of American foreign policy, there is no place to go to find real figures on the actual cost. When it comes to counterterror spending, we can't just scroll down a page – as on the NIH's website – and find the exact projects and amounts, down to the dollar. But working with what we do have, the question remains: is it possible that the amount spent is worth it?
Was it worth it? In hindsight, no.
Because every dollar spent on anti-terror efforts or nanotechnology could be used elsewhere, it's important to think of unnecessary spending not only as fiscally wasteful but also as morally wasteful. To figure out whether the "right" amount is being spent, cost-benefit analysis (CBA) can be used to measure how the value of a policy (usually measured in statistical lives) compares to the amount spent. As one can imagine, this gets difficult with things like biomedical research: CBA relies on some understanding of how much people value the effects of policies – in this case, their lives. This may be one of the reasons people think economists are soulless robots.
But the cringing reaction to CBA may have adverse policy effects. In their book Retaking Rationality, Richard Revesz and Michael Livermore argue that this kind of economic analysis has been abandoned by those wishing to protect people's safety, to the detriment of the regulatory state. Liberals, they argue, have had a hard time with the practice of putting a dollar value on a life in order to make comparisons. But some of the cringe may be warranted: in biomedical research, or really any science where failure is not a negative thing, economic analysis does a poor job of measuring efficiency. According to Richard Harris, NPR's science reporter, "there has been no systematic attempt to measure the quality of biomedical science as a whole." And what we do know about the quality of scientific research does not bode well given our political climate. With a president looking to cut NIH funding, Harris argues, "sloppy science" and even cheating become more likely; when budgets are squeezed, there is evidence that good researchers may break convention to keep the lights on in their laboratories.
Despite the difficulty of conducting a solid economic analysis here, a recent study in the journal Science made an honest attempt. The authors found that:
- 8.4 percent of grants (approximately 30,000) were directly responsible for a patent, most of which were "Bayh-Dole" patents held by the university or hospital where the innovation was made.
- A larger proportion of the grants – 31 percent – had been cited by 81,642 private-sector patents, suggesting the research played a role in the development of those concepts.
- However, fewer than 1 percent of these grants (4,414) were directly tied to FDA-approved drugs, while 5 percent were mentioned in a patent for marketed drugs.
The futility involved in carrying out a CBA in this sector is apparent even in this limited analysis, but no one will argue that the advances made in this area of work are not valuable.
A CBA of counterterror efforts runs into an issue as well. To carry out the analysis, you compare the statistical value of a human life, multiplied by the number of lives saved, to the amount spent. The problem is that without accurate information on how many lives were saved, it's essentially up to the Department of Homeland Security to make the figure up. This isn't far from what has been happening. Alex Nowrasteh at the Cato Institute, a libertarian think tank, attempted to see whether the value of the statistical lives saved broke even with the funds spent on counterterrorism. Before any analysis, he found that in 2016, "…the Department of Homeland Security (DHS) produced an initial estimate that valued each life saved from an act of terrorism at $6.5 million, then doubled that value (for unclear reasons) to $13 million per life saved." After doing his own calculations, he found that "188,740 lives would have to have been saved by all CT [counter terrorism] spending for the value of that spending to save an equivalent value in terms of human life." Focusing on domestic counterterror spending alone, Homeland Security "would have to have saved 65,233 lives from 2002 through 2017 to break even…" Nowrasteh reasonably concludes that these break-even figures are well beyond plausible. He writes,
“…even under the most negative conditions where the marginal cost saved is equal to the statistical value of life, assumes that hundreds of thousands of Americans died because of this increased CT spending who otherwise would have lived due to improved safety elsewhere.”
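Nowrasteh's break-even logic is simple division: the lives counterterrorism spending must save to justify itself equals the spending divided by the value of a statistical life (VSL). A rough sketch using the DHS VSL and the Stimson spending estimate quoted above; note that Nowrasteh's own 188,740-life figure implies a somewhat lower spending tally than Stimson's $2.8 trillion, so these are checks on the logic, not his exact numbers:

```python
# Break-even lives saved = spending / value of a statistical life (VSL).
# VSL is DHS's doubled estimate quoted above; the spending total is the
# Stimson estimate from the article, not Nowrasteh's own tally.

VSL = 13_000_000.0  # dollars per statistical life

def break_even_lives(spending_dollars: float) -> float:
    """Lives that must be saved for spending to break even in VSL terms."""
    return spending_dollars / VSL

total_ct = break_even_lives(2.8e12)  # Stimson's ~$2.8T post-9/11 estimate
print(f"All CT spending: ~{total_ct:,.0f} lives to break even")

# Working backwards from Nowrasteh's 188,740-life figure shows the
# spending total his calculation implies (roughly $2.45 trillion):
implied_spending = 188_740 * VSL
print(f"Implied CT spending: ~${implied_spending / 1e12:.2f} trillion")
```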
Our lesson? Protect and promote data in policymaking.
Looking for a simple or straightforward way of knowing whether the government is spending the optimal amount to protect its citizens from risk is but a dream. But relying on our emotional attachment to issues, or on how available an issue is in our minds, is not the way to do it. For every issue we care about, there is another we should also care about that may be underfunded as a result of systematic partiality toward the first. In this article, heart disease and extremist violence received the most attention, and an honest look at the data tells us that we would most likely be better off spending more on the former and less on the latter. This article, and the calculations done by the Cato Institute, aren't the first to point out this overspending. I think that is because, despite an inability to perfectly quantify efficiency, people paying close attention can tell when the numbers are unreasonable. But this can only happen when good data and information are available.
Overall, this is unfortunately less an analysis of spending against causes of death and more another instance where poor data keeps the American people in the dark. Evaluating the effectiveness of government policy – especially in the domains of health and safety – is only possible when transparency is preserved. Reliable data shouldn't be an afterthought, or a faucet that can be opened and closed at convenience or to escape scrutiny.
The Stimson Center's study on counterterrorism spending concluded that without the implementation of several transparency measures, any evaluation of the efficiency of counterterrorism spending would produce problematic estimates at best. Furthermore, the State Department has funded the START program (the collector of the terrorism data used earlier in this article) for the last six years. But Erin Miller, the program manager for START's Global Terrorism Database, said in an email to the Washington Post that her team has been told the State Department "did not renew its contract." Her team has no plans to publish data for 2018.
The truth is that this lack of accurate data collection leaves an opportunity for unscientific measures and unsubstantiated facts to come to prominence. If people in power wish to exploit the biases inherent in human psychology, a data-starved world is fertile ground for the promotion of ungrounded figures that incite strong emotions. There is no reliable policy analysis without reliable access to information and transparency.
LaFree, Gary, Laura Dugan, and Erin Miller. 2015. Putting Terrorism in Context: Lessons from the Global Terrorism Database. London: Routledge.
LaFree, Gary. 2011. Using open source data to counter common myths about terrorism. Pp. 411-442 in Criminologists on Terrorism and Homeland Security. Brian Forst, Jack Greene and Jim Lynch (eds.) Cambridge University Press.
LaFree, Gary. 2010. The Global Terrorism Database: Accomplishments and challenges. Perspectives on Terrorism 4:24-46.
LaFree, Gary and Laura Dugan. 2009. Research on terrorism and countering terrorism. In Crime and Justice: A Review of Research, M. Tonry (ed.) Chicago: University of Chicago Press 38: 413-477.
LaFree, Gary, and Laura Dugan. 2007. Introducing the Global Terrorism Database. Terrorism and Political Violence 19:181-204.