New Guidelines Help Operationalize The Defense Department’s “Ethical Principles for AI”
In February 2020, the Department of Defense (DoD) formally adopted a set of ethical principles for artificial intelligence. Shortly thereafter, the Defense Innovation Unit (DIU) launched a strategic initiative to help stakeholders put those principles into practice. One outcome of that effort to date is a report titled "Responsible AI Guidelines in Practice". Producing the report involved operationalizing Responsible AI principles in two case studies. One case study covered a healthcare-focused project with Google, Jenoptik, and Enliticas as the technology provider partners; the other, "Countering Foreign Malign Influence", was a joint project between the DoD, the DIU, and Quantifind.
The project and the report
The Countering Foreign Malign Influence project aims to better support DoD analysts with open-source intelligence (OSINT). OSINT leverages analytics derived from commercially available information (CAI) and publicly available information (PAI) to identify, track, and counter international criminals and their networks, particularly those attempting to obfuscate their identities and activities. An important aspect of the project is using this open-source data to construct knowledge graphs that allow for more efficient and productive use of analysts' time. Crucially, these graphs can reveal relationships between individuals and organizations (or "entities") that would otherwise be difficult for human analysts to identify manually, due in part to the large and growing volume of open-source data, its dynamic and unstructured nature, and its complexity.
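To make the knowledge-graph idea concrete, here is a minimal sketch, with entirely hypothetical data and function names (the report does not describe Quantifind's actual implementation). It builds a simple co-occurrence graph of entities mentioned together in documents, which can surface indirect links no single document states:

```python
# Minimal sketch (hypothetical data and names): build a small knowledge
# graph of entity relationships from open-source documents, where an edge
# records that two entities were mentioned in the same document.
from collections import defaultdict
from itertools import combinations

def build_knowledge_graph(documents):
    """Map each entity to the set of entities it co-occurs with.

    `documents` is a list of entity-mention lists, e.g. the output of an
    upstream named-entity-recognition step (not shown here).
    """
    graph = defaultdict(set)
    for entities in documents:
        for a, b in combinations(sorted(set(entities)), 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

# Hypothetical extracted mentions from three separate documents.
docs = [
    ["Org A", "Person X"],
    ["Person X", "Org B"],
    ["Org A", "Org B", "Person Y"],
]
graph = build_knowledge_graph(docs)

# "Person X" is linked to both "Org A" and "Org B" even though no single
# document mentions all three together -- the kind of indirect relationship
# that is hard for an analyst to find manually at scale.
print(sorted(graph["Person X"]))  # ['Org A', 'Org B']
```

A production system would of course use far richer, typed relationships than simple co-occurrence; this only illustrates why graph structure makes hidden connections queryable.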
The RAI Guidelines include questions that should be addressed at the planning, development, and deployment phases of the AI lifecycle. They provide “step-by-step guidance for AI companies, DoD stakeholders, and program managers to ensure AI programs align with the DoD’s Ethical Principles for AI and ensure that fairness, accountability, and transparency are considered at each step in the development cycle.”
The outcomes
The project has served as a valuable source of research and documentation of real-world Responsible AI implementation practices. For example, the report notes the importance of weighing the performance gains from larger language models against the potential biases and performance irregularities such models can introduce. Another example is the importance of continually measuring performance at both the individual-model and end-to-end system levels.
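The last point can be illustrated with a small sketch, using stand-in functions and made-up data (none of this reflects Quantifind's actual pipeline): a component model can score perfectly in isolation while the end-to-end system that consumes its output degrades, which is why both levels need to be measured:

```python
# Illustrative sketch (hypothetical pipeline and data): score an extraction
# model on its own, then score the end-to-end system that consumes its
# output through a downstream alias-resolution step.

def extract_entity(text):
    # Stand-in for an entity-extraction model: take the text before ":".
    return text.split(":")[0].strip().lower()

def resolve_alias(entity, alias_table):
    # Stand-in for a downstream resolution step mapping variants to one name.
    return alias_table.get(entity, entity)

def accuracy(pairs):
    return sum(pred == gold for pred, gold in pairs) / len(pairs)

# (text, gold model output, gold end-to-end output) -- fabricated examples.
records = [
    ("ACME Corp: shipment", "acme corp", "acme"),
    ("ACME Corporation: memo", "acme corporation", "acme"),
    ("Globex: payment", "globex", "globex"),
]
aliases = {"acme corp": "acme"}  # missing the "acme corporation" variant

model_scores = [(extract_entity(t), g) for t, g, _ in records]
system_scores = [(resolve_alias(extract_entity(t), aliases), g)
                 for t, _, g in records]

# The extraction model is perfect (1.0), but the end-to-end system drops
# to 2/3 because the alias table misses one variant -- a failure invisible
# to model-level metrics alone.
print(accuracy(model_scores), accuracy(system_scores))
```

The design point is simply that each measurement level catches a different class of error: model-level metrics localize regressions, while system-level metrics reveal integration gaps between components.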
Another valuable outcome of the work was validation of the DIU's Responsible AI Guidelines, which mirrored many of the ethical-use principles already in practice at Quantifind. The report describes how Quantifind found that the "question-response" approach to planning gave public and private participants a mechanism to proactively communicate about processes, standards, and known problem areas, and "get them on the table". Integrating the DIU's RAI Guidelines into its process helped establish a two-way dialogue that benefits all stakeholders and creates a model for collaboration that can be replicated throughout the DoD.
Applying the RAI Guidelines to active programs and iterating on their content has generated key learnings, which are detailed in the report. The guidelines have proven to be a useful starting point for operationalizing the DoD's Ethical Principles for AI, and the DIU will continue collaborating with stakeholders to develop them further. The DIU is actively deploying the RAI Guidelines across a range of projects.