Monica Toft in Texas National Security Review — Making Academic Work Relevant to Policymakers
February 23, 2018
As the international community faces tremendous change and upheaval, and the United States undergoes shifts in its foreign and domestic policies under the Trump administration, there is a critical need for sound and relevant advice on issues of national security. A key question is, what, if anything, do national security academics have to offer policymakers? The answer is, “quite a bit.” However, academics need to understand that what policymakers need is often quite different from what academics pursue and produce. The good news is that there does seem to be movement within the academy to analyze and write in ways, and on topics, that policymakers will find useful. Moreover, engaging with policymakers will only help to make academic research more relevant and interesting.
As someone who has advised the policy community for a number of years, I can attest that this community’s needs and demands are quite different from those of academics. Policymakers are often confronted with crises in quick succession. They need good, not perfect, answers fast, and they require clear predictions along with sound options for dealing with those crises. Above all, policymakers need analysis shorn of excessive detail. They will listen, and ask follow-up questions that are relevant to their needs.
Academics, for their part, face a different situation. For them it is not a matter of getting a problem solved, but of getting it right. That means spending long hours in archives, analyzing data, reading relevant literature, surveying populations, or interviewing experts and decision-makers. Very often “getting it right” means updating or improving an earlier analysis or argument, not necessarily discovering practical applications in the real world. For example, suppose ten years ago a social scientist had written an influential article on what sorts of military interventions are unlikely to succeed. In making her case, she developed and tested a statistical model, which showed that, under particular conditions, military interventions were much more likely than not to fail. That research would be useful to a policymaker grappling with whether to recommend to the president committing armed forces to help resolve a national security crisis.
However, academics might later argue there were errors in how the earlier author collected her data or in the model her statistics supported. Their research might make interesting and useful academic contributions, but it would be of interest to policymakers only if the errors suggested a different course of action. Very often, there are no new policy implications from such contributions. It is incentives like these that widen the gap between national security academics, who have the luxury of time but are under constant pressure to be more “scientific” (each generation entertaining a different conception of what “scientific” means), and policymakers, who are invariably pressed for time and as such are primarily interested in what might work or, perhaps more importantly, what will not work when attempting to solve the problem at hand.
When Data Took Over Academia
Does this mean that policymakers and academics cannot engage one another? Hardly. In fact, some of the best national security scholarship is often that which is directly informed by what is happening in the world. Perhaps the best example is Graham Allison’s Essence of Decision, a book informed by the Cuban missile crisis, which Allison crafted into a theoretically informed account that remains relevant today in both academic and policy circles.47 It remains a classic, highlighting that while external conditions and security contexts shift, perennial questions and national security dilemmas remain.
Allison’s book is also worth reviewing because of its timing. It came out before the desktop computer revolution and the spread of high-level statistical and computational modeling that followed. This is especially evident with the development of the Singer and Small Correlates of War data set, first introduced in 1963 (and still going strong).48 Within the political science discipline and the international relations subfield, a fetishizing of data became the norm as computational capacity continued to grow through the 1980s and 1990s and new data sets appeared. A similar phenomenon occurred with regional studies. Why learn a foreign language or travel overseas when you can build a data set and mine it from the comfort of your office? This trend toward data-informed analyses happened at the same time that history, particularly international and military history, diminished in stature. Just as qualitatively informed research diminished in political science, so did historical approaches, and with these trends went the ability of national security experts to speak clearly and effectively to policymakers.
This explains why Samuel Huntington, perhaps the most famous and accomplished political scientist, started the John M. Olin Institute for Strategic Studies. He became concerned that academics were becoming untethered from national security policy as they developed ever more complex models. He therefore established a fellowship program to promote the best junior scholars and their research in the hopes that scholarship would stay relevant to policy challenges and perhaps help to secure international peace and security.
The situation today is not as dire as it was in the late 1980s and early 1990s. No longer does data-driven analysis rule the day. Rather, international relations and security studies scholars recognize that sound analysis requires a variety of methods. Furthermore, history as a field of study has been undergoing a revival of sorts. Interestingly, this revival is not happening in traditional academic departments, but in schools of public policy where there is a recognition that most government decision-making — especially in national security — is conducted by way of historical reasoning and comparison. What better way to teach those skills than with historical methods and historians themselves?
Challenges Within the Academy Remain
Yet, despite some progress, significant and potentially costly gaps between the academy and the policy community remain. This is due, in no small part, to tensions within the academy itself.
It is still difficult for academics to make the case that it pays to write for the policy community. This is especially true for junior scholars who, in order to get tenure, have to publish in academic journals and with academic presses. They are not rewarded for writing policy papers or opinion editorials. They could even be harmed by publishing such content, as some colleagues may think that it is not the job of academics to engage in policy debates, or that they are wasting their time and should be devoting more of it to the scholarly enterprise. When letters are solicited for tenure reviews, these opinions may be reflected in the referees’ assessments, undermining the candidate’s case. It is clear to candidates for tenure that what matters first and foremost is the opinion of fellow academics. The admonition to “publish, and publish ‘scientifically,’ or perish” still looms large.
Unfortunately, even within public policy schools, the letters that are sought for tenure and that count the most are those from fellow academics within the different academic disciplines. They are the ones who can make the case that the candidate has (or has not) had a significant impact on the field. That is what comes first. Regardless of policy influence or experience, a junior academic will not be tenured without a substantial and significant (however defined) contribution to their academic discipline. The effect of this environment is that fewer academics consider writing for and informing policy debates and policymakers than would otherwise be the case, at least at the junior level. The structure of incentives simply militates too strongly against it.
Such incentives (or disincentives) also mean that academics within the university system are only rarely taught how to think about policy challenges and how to communicate about them effectively. Again, they are rewarded for writing for academic audiences and for ending their essays with theoretical, empirical, and methodological implications. If policy implications are included at all, they are so generic and anemic that a policymaker would have a hard time thinking through how exactly to operationalize them to effect any desired change. This is not to say that this sort of training cannot be done. Master’s students at schools of public policy and international affairs are taught policy evaluation and guided through the mechanics of writing for the policy community. PhD candidates in traditional academic departments, those with real expertise on an issue, are not. Again, their primary audience is fellow academics sitting in offices reading other academics’ research, and not many of them at that. The average American academic article garnered about three citations in 2010.49
Policymakers Need Specific Guidance, Not Generalizations
In addition to the difficulty of communicating effectively and concisely to the policy community, there is also the problem that much of what academics write continues to rely on the statistical analysis and sophisticated modeling that took off in the 1980s and 1990s. This has led to two problems. First, such work tends to offer probabilistic findings about the conditions that might contribute to war and peace rather than point predictions. Policymakers want guidance on particular cases at a particular moment, not generalizations across a set of cases. They want to know how to end the war in Syria, not how civil wars end in general. Moreover, few policymakers have been trained to recognize that probabilistic theories cannot be refuted by one or more counter-examples, which often leads them to reject scholarly models and findings.
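To make that last point concrete, consider a minimal sketch of my own; the model and the numbers are hypothetical, not drawn from any study cited here. Even if a model that puts the probability of intervention failure at 70 percent is exactly right, observing a few successful interventions is unremarkable:

```python
# Minimal sketch, not from the essay: why one or a few counter-examples do not
# refute a probabilistic claim. Assume a hypothetical model estimates that a
# certain type of military intervention fails 70 percent of the time.

p_fail = 0.70   # hypothetical model estimate: probability that a given intervention fails
n_cases = 5     # hypothetical number of observed interventions

# If the model is exactly right and the cases are independent, the chance of
# observing at least one "counter-example" (a successful intervention) is high.
p_at_least_one_success = 1 - p_fail ** n_cases
print(f"P(at least one success in {n_cases} cases) = {p_at_least_one_success:.2f}")  # about 0.83
```

A lone success, in other words, is evidence to be weighed against a probabilistic finding, not a refutation of it.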
Second, academic research has produced a multitude of data sets and corresponding analyses, each with its own definitions of war, peace, conflict, cooperation, alliances, trade embargoes, death, destruction, and the like. As a result, scholars often find themselves contradicting existing research, leaving both academics and policymakers scratching their heads about whether generalizations across a number of cases are possible at all. What is a policymaker to make of fundamental disagreements coming from academic research? If academics cannot agree on the best way to terminate a war and maintain peace, or on how to deter a rival and prevent war in the first place, why should a policymaker turn to that literature to begin with? She is already pressed for time; why add to the mix confusion over what academics have to say on the matter?
Finally, we know that most issues that concern policymakers in the national security arena involve a mix of political, economic, and social factors. Civil wars, for example, often emerge in states with fragile political institutions, failing or compromised economic systems, and cleavages that divide societies along ethnic, linguistic, racial, and/or religious lines. Yet most scholars are trained in one discipline and publish in disciplinary journals. There is little cross-fertilization or collaboration. So while economists tend to focus on the supply of goods and services in a society as the explanation for why a civil war emerges (too many workers and not enough jobs), political scientists will look to the structure of the political system, issues of equity, and whether and how elites or majorities prey upon minorities within a society, causing them to rise up and challenge the system.
This specialization is further compounded by the tendency of scholars to research and publish alone, particularly in the non-economic social sciences. This practice keeps different bodies of knowledge and expertise from informing the analysis and from providing a fuller, more accurate sense of what is happening in the world that policymakers are trying to address. Once again, the incentive structure militates against the kind of collaboration that might support informed and accessible policies.
What Can Be Done?
Given the gaps in time pressures, incentives, and interests between the academic community and the policy world, what can be done? We should first recognize that the gap is not as large as it was in the late 1980s and 1990s. Progress has been made both across the gap and within the academy.
Across the gap, perhaps most notably, there have been moves to fund academic research through the Minerva Research Initiative, which, “administered jointly by the Office of Basic Research and the Office of Policy at the U.S. Department of Defense, supports social science research aimed at improving our basic understanding of security, broadly defined.”50 Second, opportunities to inform policy circles have grown through media platforms, including the Duck of Minerva, Lawfare, and War on the Rocks; regular features by Daniel Drezner and Stephen Walt for the Washington Post and Foreign Policy online, respectively; and interest from prominent journalists and authors working to get academic research noticed and promoted, including the New York Times’ David Brooks, the Washington Post’s Shankar Vedantam, and Malcolm Gladwell.
Within the academy itself, there has been a move by academics to co-author more frequently, thereby drawing on broader expertise and avoiding a silo effect. Interdisciplinary work has been slower to gain traction, again because of the incentive structure of the tenure system, but it is not entirely absent. Universities and research institutes have been making strides to appoint “bridge” faculty who work across disciplinary departments. Some of the fastest growing institutions are schools of public policy and professional schools of international affairs, bastions of interdisciplinary work whose purpose is to ask policy-relevant questions and to answer those questions using the most appropriate methods and sources, rather than those dictated by disciplinary preferences and sometimes fads.
Conclusion
Allow me to close by commenting on a final and perhaps graver problem, one that is related to the academic-policy gap and that itself suggests a solution. In the United States, the rise of science doubters, including skeptics of evolution, climate change, and vaccination, and even those who question that the earth is a sphere, has led to an increase in the rejection of “expert” advice and guidance of all sorts in favor of a quest for better authorities.
Given that the sciences are made up of communities of people who highly value critical thinking, this rejection of science as such comes as a rather depressing shock. The current U.S. presidential administration has placed people at the heads of agencies whose primary qualification is that they previously worked to oppose the existence or core mission of those same agencies. This is the situation at the Departments of the Interior, Energy, and State, as well as the Environmental Protection Agency. Many times, objections from those discounting scientific inquiry are framed as doubt about the “science” or “the facts.”51 But the opposition is actually energized not by doubt but by certainty that an alternative authority, often an iconoclastic personality or a particular construction of a holy text such as the Bible, should be our guide for crafting policy.52
The academic instinct is to confront doubters with more facts and more research. However, this confrontation seems unlikely to work, as presenting an opinion as expert actually lowers its persuasiveness in the minds of doubting audiences. How, then, should we work to overcome this war, if you will, against science and against issue-area experts?
In order to bridge the gap between the university and the policy world, we need to alter the structure of academic promotion and research production by incentivizing rather than punishing co-authorship and cross-disciplinary collaboration. This is beginning to happen and is having a positive, if slow, impact. But we need to add advocacy training to the mix. Researchers need to learn how to persuade audiences that are skeptical of their stock in trade, and to recognize that having the right set of facts, though necessary, is not sufficient to change a doubter’s mind. That kind of training can be incorporated into both policy schools and academia more generally, so that national security students are armed not only with good facts, theories, and arguments, but also with empathy for a doubting audience and with the patience and skill increasingly needed to make the difference in the world that we all seek.