How The Department Of Defense Approaches Ethical AI

Military and defense organizations using transformative technologies such as artificial intelligence and machine learning can realize tremendous gains and maintain advantages over increasingly capable adversaries and competitors. These technologies can send autonomous vehicles into terrain deemed too dangerous for humans, provide predictive analytics and maintenance to keep large fleets running smoothly and safely, and enable autonomous operations in difficult conditions. As the US Department of Defense (DoD) increasingly adopts AI in a wide variety of use cases, ranging from back-office functions to battlefield operations, there is a realization that despite the benefits AI can bring, these technologies also carry a risk of unintended consequences that could cause significant harm.

As a result, the DoD takes the topics of ethics, transparency, and ethics policy very seriously. A few years ago, the DoD created the Joint Artificial Intelligence Center, also referred to as the JAIC, to help figure out how best to move forward with this transformative technology. Earlier this year the DoD adopted a set of AI ethical principles that encompass five major areas: Responsible AI, Equitable AI, Traceable AI, Reliable AI, and Governable AI.

In a recent AI Today podcast, Alka Patel, Head of AI Ethics Policy at the Joint AI Center (JAIC), shared how the JAIC approaches ethical and transparent AI, and how the private and public sectors can work together on ethical and responsible AI. In this article she further shares her insights into AI ethics, why it is important for the DoD to address these topics now, and some of the challenges of implementing ethical AI.

What is the DoD Joint AI Center (JAIC), and why was it started?

Alka Patel: The Joint Artificial Intelligence Center was stood up in 2018 as the focal point of the DoD’s AI Strategy with the mission of transforming the Department through the adoption, integration and scaling of AI. It utilized pathfinder projects to build a knowledge pool on how to adopt and use AI within the DoD and is now transitioning to leverage those learnings for broad enablement of AI at scale across the DoD. The JAIC is advancing AI transformation to more effectively and efficiently meet mission requirements; building an AI infrastructure that lowers technical barriers across the Department; and leading the DoD AI Enterprise’s governance framework. It acts to create the conditions for thoughtful, responsible, human-centric AI adoption that supports and protects American service members, safeguards our citizens, partners with and defends our allies, and improves effectiveness, affordability and speed of DoD operations.

Why is it important for the DoD to be addressing the topics of ethics, transparency, and ethics policy?

Alka Patel: Strictly speaking to the self-interests of the DoD, addressing these topics positions us to establish global norms for the responsible design, development, and use of AI within the framework of our democratic values. It enables us to earn the trust of the American public, attract and retain a talented digital workforce, and along the way fortify our international partnerships with allies that share our values. All of this furthers the DoD's mission to strengthen national security and increase mission effectiveness. Of course, AI ethics isn't important just for the DoD. Every public or private industry sector and every organization that designs, develops, or uses AI should be aware that the ethical use of AI is not only a moral imperative but also imperative to operational sustainability. The DoD aims to take a leadership role in this regard.

Can you share the DoD’s Ethical Principles for Artificial Intelligence?

Alka Patel: On February 21, 2020, the DoD adopted the following AI Ethical Principles for the design, development, deployment and use of AI:

1. Responsible. DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.

2. Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.

3. Traceable. The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.

4. Reliable. The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.

5. Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

Why was it important for the DoD to come up with these ethical principles now?

Alka Patel: While the DoD AI Ethics Principles were adopted in February of this year, their adoption is just one step along a larger journey. It builds on an established culture grounded in ethics and responsibility, as well as existing ethical and legal frameworks. Furthermore, the principles support the DoD AI Strategy (part of the 2018 National Defense Strategy), which names leading in military ethics and safety as one of its core tenets. Soon after the release of the strategy, the Defense Innovation Board, an independent advisory committee, launched a 15-month effort to identify AI ethical principles for the DoD. The board consulted leading AI experts across commercial industry, government, and academia in a rigorous process of feedback and analysis, with multiple venues for public input and comment. The end result was a set of recommendations in a report issued in the fall of 2019, which became the foundation for the DoD's AI Ethics Principles adopted this year. Thanks in part to the leadership that embarked on this journey two years ago, we can now say that the U.S. Department of Defense is the first military in the world to adopt AI ethics principles, with the U.S. once again leading the way in the responsible development and application of emerging technologies.

How do regulations and worldwide laws impact the use of AI?

Alka Patel: AI technological development and use has thus far outpaced the development and implementation of AI regulations. Because of the ubiquity of AI, disparate jurisdictional regulations can create challenges for global business operating models, organizational compliance, and regulatory enforcement. This underscores the DoD's priority to establish and operationalize our Responsible AI principles and to establish these ethical principles as global norms through partnerships with our allies.

What are some of the challenges of implementing ethical AI?

Alka Patel: Responsible AI is more than just the responsible design, development and use of the technology.  It speaks more broadly to organizational operating structures and culture. Some of the challenges include:

• Establishing a mindset that recognizes that 'ethics is an enabler, not an inhibitor'. It is necessary to recognize that ethics is not an extra step or hurdle to overcome when adopting and scaling AI but is a mission-critical requirement for ensuring DoD objectives are met through AI.

• Increasing Responsible AI literacy across the organization. Everyone, not just the developers and technologists, should have a baseline understanding of what AI ethics is, why it is important, and what their role is in operationalizing the DoD AI Ethics Principles and supporting Responsible AI.

• Utilizing a systems- and risk-management-based approach to Responsible AI at the technical project level as well as at the enterprise-wide level. For example, while tools (e.g., bias detection, harms analysis, data cards) are necessary, the content, conversations, or decisions based on a tool's output cannot be considered in a vacuum (a minimal sketch of one such check appears after this list). These considerations must be understood in the context of the end-to-end process or within the construct of organizational processes and governance.

• Designing and developing AI systems for interoperability. Given varying organizational (even international) vocabulary, practices, and standards, or the lack thereof, how do we ensure that when we are procuring or sharing technologies, the fidelity of expected engineering practices (e.g., testing, safety, security) and the operationalization of ethics principles are maintained?
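
To make the tooling point above concrete, here is a minimal, hypothetical sketch of the kind of check a bias-detection tool automates. It is not a JAIC or DoD tool; the function, data, and threshold are invented for illustration.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Largest difference in positive-prediction rates across groups (0 = parity)."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical model outputs for two groups; a real review would use held-out data.
preds = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b"],
    "selected": [1,   0,   1,   1,   1,   1],
})

gap = demographic_parity_gap(preds, "group", "selected")
if gap > 0.2:  # illustrative threshold; acceptable gaps depend on mission context
    print(f"Flag for human review: parity gap = {gap:.2f}")
```

This is exactly the "not in a vacuum" point: the number itself decides nothing, but it triggers a human review step inside the broader governance process.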

How do you suggest helping folks gain Responsible AI literacy?

Alka Patel: Those of us who are in this space and thinking about this every day need to step back and realize that this isn't at the forefront for everyone…or at least not yet! We need to start with an awareness campaign focused on what Responsible AI is, why it's unique, and what the specific challenges are. The intent is not necessarily to create subject-matter experts but rather to create Responsible AI stewards who can understand, anticipate, and mitigate the relevant issues. Two specific examples that we can point to at the DoD include the following: 1) This summer we piloted a Responsible AI Champions program at the JAIC, where we took 15 cross-functional individuals through an experiential learning journey to understand the DoD AI Ethics Principles, identify tactics for operationalizing the Principles, and seed a network of Responsible AI ambassadors; and 2) the JAIC recently released a DoD AI Education Strategy. This strategy explicitly requires Responsible AI learning modules for every member of the workforce, customized in breadth and depth based on archetype.

What steps is the DoD taking to minimize unintended bias in AI capabilities?

Alka Patel: Minimizing unintended bias in AI capabilities is complex. The problem is not limited to eliminating biased data used as input during the design, development, and use of the AI system; it also includes bias creep in decision-making during the design, development, and use of the AI capability. As such, minimizing unintended bias requires monitoring across the entire AI product lifecycle (design, development, deployment, and use), looking for entry points along the way, utilizing tools that can help analyze, identify, and test for data bias, and having robust data governance processes, including checkpoints for corresponding reviews and assessments. All of these are points of emphasis at the DoD.
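
To illustrate the kind of lifecycle checkpoints described above, the sketch below models bias-review records at each stage of an AI capability. It is a hypothetical structure, not a DoD process; every field and dataset name is invented for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BiasReviewRecord:
    """One checkpoint in a data-governance process; all fields are illustrative."""
    stage: str                  # "design" | "development" | "deployment" | "use"
    dataset: str
    findings: list = field(default_factory=list)
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Hypothetical review trail for one dataset across lifecycle stages.
trail = [
    BiasReviewRecord("design", "fleet-maintenance-logs-v1",
                     findings=["older airframes under-represented"]),
    BiasReviewRecord("development", "fleet-maintenance-logs-v1"),
]

for record in trail:
    status = "; ".join(record.findings) if record.findings else "no issues recorded"
    print(f"[{record.stage}] {record.dataset}: {status}")
```

The design choice is that each stage appends a record rather than overwriting earlier ones, so entry points for bias remain visible across the whole lifecycle.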

How can the private and public sectors work together in their efforts around ethical and responsible AI?

Alka Patel: Private and public sector collaboration is necessary to advance Responsible AI because neither sector has all the answers, yet it is in our collective interest to get this right. We know Responsible AI is nascent in terms of research, standards, and policies (akin to cybersecurity 10 years ago). Because it is a developing field advancing at an unprecedented speed, cross-sector collaboration is crucial for coordination, sharing resources and talent, and producing innovative solutions. For example, the public sector can benefit from the research advances and commercial solutions of the private sector (as well as academia) in areas such as explainability or testing, evaluation, validation, and verification (T&E/VV). Conversely, the private sector can benefit from partnering directly with the end users and/or data sets in the public sector. Other benefits include, for example, collaboration on standards requirements, acquisition practices and requirements (e.g., for interoperability), and workforce education and training.

What AI technologies are you most looking forward to in the coming years?

Alka Patel: So I'll reveal my own bias and point to a specific effort at the JAIC/DoD called the Joint Common Foundation, or JCF. The intent is to deliver a cloud-based AI development platform that provides the necessary enterprise-wide infrastructure and governance to accelerate the development of emerging technologies. It aims to provide development, test, and runtime environments with the collaboration, tools, reusable assets, and data that military services need to build AI applications.

From the Responsible AI perspective, we will be able to integrate and embed a number of Responsible AI tools and processes (e.g., data cards, data catalogs, bias-detection tools, model cards, test harnesses) into the platform architecture. This not only decreases the barrier to adoption of tools and processes supporting Responsible AI, but also allows for traceability (e.g., documentation) and scalability while creating enterprise-wide muscle memory around Responsible AI.
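
As one concrete illustration of the artifacts mentioned above, a model card can be stored as structured data alongside each model, so traceability becomes a by-product of deployment rather than an afterthought. The schema below is purely a guess for illustration; the interview does not describe the JCF's actual artifacts or formats.

```python
# A minimal, hypothetical model card; every field name and value is illustrative.
model_card = {
    "model": "route-risk-classifier-v3",
    "intended_use": "advisory risk scoring for logistics routes; a human decides",
    "out_of_scope": ["fully autonomous action"],
    "training_data": "see data card: convoy-telemetry-v2",
    "evaluation": {"accuracy": 0.91, "parity_gap": 0.04},  # from the test harness
    "known_limitations": ["sparse data for night operations"],
    "reviewers": ["mission owner", "test & evaluation", "ethics policy"],
}

# A platform that requires and versions these cards makes audits straightforward:
for key, value in model_card.items():
    print(f"{key}: {value}")
```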
