Corporate Power and Responsibility in the New Era of Techno-nationalism


by Kelsey Thurman

“Whoever becomes the leader in this sphere [AI] will become ruler of the world” – Vladimir Putin[1]

Russian President Putin’s foreboding statement highlights the sentiment underpinning the current arms race between the U.S. and other superpowers: the race to integrate the most advanced artificial intelligence (AI) into modern defense systems. Known as techno-nationalism, this attempt to dominate emerging technology is not new. What is novel, however, is a significant shift in the partnership dynamics between the U.S. Department of Defense (DOD) and defense contractors. Companies like Google LLC (Google) and Microsoft Corporation (Microsoft) are increasingly exhibiting divergent attitudes and approaches to these partnerships based on their differing understandings of corporate responsibility.

AI is not, per se, a weapon, nor is it a single technology. In terms of defense systems, Defense One, a national security commentator, defines AI as “the introduction of machine learning to cyber security and operations, new techniques for cognitive electronic warfare, and the application of computer vision to analyze video and imagery, as well as enhanced logistics, predictive maintenance, and more.”[2] In trying to keep pace with the rapid push for AI integration into defense systems, the DOD has turned to multinational technology companies, such as Google and Microsoft, to “parlay” their sophisticated commercial AI capabilities into advanced military applications and weapons.[3]

While International Humanitarian Law (IHL) provides a framework for regulating the development, review, and use of military applications, how AI-enhanced weapons systems fit into the existing IHL framework is still evolving. On the brink of this new frontier in defense weaponry, IHL does not include explicitly defined rules or principles governing the development, use, and accountability of such weapons – especially autonomous weapons.[4] In this absence of hard law, the ethics of developing and using such applications remain murky. Thus, DOD partnerships with publicly traded tech giants like Google and Microsoft (each regulated by corporate governance structures and bound by corporate ethics) have complicated this conversation.

Google and Microsoft had similar initial, yet ultimately divergent, reactions to the DOD’s calls for AI defense contract bids. Each company engaged with the DOD through these partnerships and developed a set of principles for AI development, but then faced public scrutiny for doing so. In response to this backlash, Microsoft went forward with these contracts, while Google pulled out of major AI defense contract bids – such as JEDI – citing a conflict with its AI code of ethics.[5][6]

These differing responses may result from specific points of friction between DOD project objectives and each company’s motivation for and application of its own internal ethical AI principles.

For Google – a company ranked within the top five of The World’s Most Reputable Companies (Reputation Institute, 2013-2018) – a reputation for corporate ethics is intrinsic to its success.[7] The outcry over Google’s involvement in the DOD’s Project Maven, led by Google employees, demonstrates how the expectation of Google’s high standard of ethical business practice is reinforced both from within the company and from the outside.[8] Motivated to uphold the reputation and standard of ethics for which it is known, Google has used its new AI principles to draw a boundary line around the types of AI projects it considers ethical. Using these principles as a standard, the company evaluates the objective and potential use of a defense system; if a project crosses this boundary line, the company has indicated that it will not engage.

Alternatively, Microsoft – ranked within the top 15 of The World’s Most Reputable Companies (Reputation Institute, 2015-2019) – interprets its ethical responsibilities as a duty to develop human-centered AI.[9] Unlike Google’s evaluation process, which asks whether an AI project meets its ethical standards, Microsoft’s principles connote a corporate responsibility to shape AI technology standards to meet those principles. Founded in 1975, Microsoft has been a forerunner of advanced computing technologies for over 40 years. Underlying this sense of corporate responsibility is a long-cultivated motivation to pioneer – to step into the emerging technology space and create and shape the latest technology as a model for future technological expansion. For this reason, Microsoft is motivated to continue partnerships with the DOD in order to establish the “foundation for the development and deployment of AI-powered solutions that will put humans at the center.”[10]

Despite the differing motivations driving their corporate decision-making, Google and Microsoft, as gatekeepers of advanced AI technology, are instrumental in shaping the discussion around AI-enhanced defense systems. Google’s decision to disengage from certain DOD projects highlights the ethical scrutiny of militarized AI systems, while Microsoft strives to get ahead of ethical problems by developing human-centered AI for DOD applications. In the absence of explicit IHL regulations for AI in defense systems, each company’s involvement in DOD projects – or lack thereof – has the potential to establish development standards and influence emerging international law regarding militarized AI in warfare.


Kelsey is a recent graduate of the Fletcher School of Law and Diplomacy, where she concentrated in public international law and human security. Her research has focused on the nexus of technology, international law, corporate ethics, and accountability. Kelsey has served as a Policy Fellow with Accountability Counsel in Washington, DC, researching policy development and accountability. Since graduating, Kelsey has returned to Berry Appleman & Leiden LLP, where she manages legal casework for clients within the technology sphere. Kelsey has lived and worked in multiple countries, including Kazakhstan, South Korea, and the UK, and is a graduate of Texas A&M University, where she studied communication and the Russian language.


[1] David Meyer, “AI Power Will Lead to World Domination, Says Vladimir Putin,” Fortune, September 4, 2017. https://fortune.com/2017/09/04/ai-artificial-intelligence-putin-rule-world/.

[2] Elsa Kania, “The Race for AI,” Defense One, March 2018. https://www.defenseone.com/ideas/2018/04/pursuit-ai-more-arms-race/147579/. 

[3] Michael C. Horowitz, “The Algorithms of August.” Foreign Policy (blog), September 12, 2018. https://foreignpolicy.com/2018/09/12/will-the-united-states-lose-the-artificial-intelligence-arms-race/.

[4] Vasily Sychev, “The Threat of Killer Robots.” UNESCO Courier, June 25, 2018. https://en.unesco.org/courier/2018-3/threat-killer-robots.

[5]  Joint Enterprise Defense Infrastructure (JEDI) “is a tailored acquisition for commercial cloud infrastructure and platform services at all classification levels. It will be widely available to any organization in the DOD.” “Joint Enterprise Defense Infrastructure (JEDI),” U.S. Department of Defense, November 6, 2017. https://www.nextgov.com/media/gbc/docs/pdfs_edit/121217fk1ng.pdf.

[6] Catherine Shu, “Google Will Not Bid for the Pentagon’s $10B Cloud Computing Contract, Citing Its ‘AI Principles.’” TechCrunch (blog), October 9, 2018. http://social.techcrunch.com/2018/10/08/google-will-not-bid-for-the-pentagons-10b-cloud-computing-contract-citing-its-ai-principles/.

[7] Based on a review of reports released from 2015-2019. “Global RepTrak® 100,” Reputation Institute. Accessed August 21, 2019. https://www.reputationinstitute.com/global-reptrak-100.

[8] Project Maven is a US Department of Defense AI project that incorporated specific Google AI technology into military drone development to enhance imagery analysis and thereby improve drone strikes on the battlefield. Google management’s response was initially in favor of partnership, and a defense contract was signed. However, in April 2018, thousands of Google employees protested and signed a letter demanding Google cease its participation and partnership in the project. In June 2018, Google announced that it would not renew the Project Maven contract and would release a set of ethical AI principles in response to employee demands. Drew Harwell, “Google to Drop Pentagon AI Contract after Employee Objections to the ‘Business of War.’” Washington Post, June 1, 2018, sec. The Switch. https://www.washingtonpost.com/news/the-switch/wp/2018/06/01/google-to-drop-pentagon-ai-contract-after-employees-called-it-the-business-of-war/.

[9]  Based on a review of reports released from 2015-2019. “Global RepTrak® 100,” Reputation Institute. Accessed August 21, 2019. https://www.reputationinstitute.com/global-reptrak-100.

[10] “Executive Summary the Future Computed.” Accessed March 2, 2019. https://3er1viui9wo30pkxh1v2nh4w-wpengine.netdna-ssl.com/wp-content/uploads/2018/01/Executive-Summary_The-Future-Computed.pdf.
