Trusted AI and Autonomous Systems

Ensuring effective and safe operations of artificial intelligence/machine learning-enabled aerospace systems requires ongoing monitoring of system state of health, along with verification and validation of end-to-end enterprise effectiveness.

U.S. aerospace companies are increasingly using intelligent agents, artificial intelligence (AI), and machine learning (ML) in their complex systems of systems, comprising hardware, software, networks, and human-machine interfaces. The aerospace and defense market for AI is already estimated at $2 billion and growing rapidly.

The Executive Order on Maintaining American Leadership in Artificial Intelligence, released by the White House in February 2019, emphasizes the need for trust in these complex systems.

Small abnormalities can spread unchecked in these intelligent, complex ecosystems, resulting in unforeseen downstream impacts. An autonomous system can change its operating environment, which changes inputs to the system, causing feedback loops that are difficult to track and manage. There are multiple scenarios where time-critical autonomous systems require improved operational assurance. 
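To make that feedback dynamic concrete, here is a minimal, purely illustrative sketch (the controller gain, environment coupling, and amplification factor are invented numbers, not drawn from any real system): an autonomous controller overcorrects, its action shifts the environment it later senses, and a small initial abnormality grows each cycle instead of damping out.

```python
# Minimal, illustrative sketch (invented gains, not a real system):
# an autonomous controller whose actions alter the environment it
# later senses, so a small abnormality compounds through feedback.

def control_action(reading: float) -> float:
    """Overcorrecting proportional controller (gain chosen for illustration)."""
    return -4.0 * reading

def simulate(initial_abnormality: float, steps: int = 10) -> list[float]:
    env_state = initial_abnormality
    history = [env_state]
    for _ in range(steps):
        action = control_action(env_state)            # system acts on its input...
        env_state = (env_state + 0.5 * action) * 1.4  # ...which changes the environment...
        history.append(env_state)                     # ...and feeds back as the next input
    return history

if __name__ == "__main__":
    for step, deviation in enumerate(simulate(0.01)):
        print(f"step {step:2d}: environment deviation = {deviation:+.4f}")
```

With these placeholder values, each cycle multiplies the deviation by roughly -1.4, so after ten cycles a 1 percent abnormality has grown nearly thirtyfold; this is exactly the kind of unchecked downstream propagation that operational assurance must catch.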

Ensuring effective and safe operations of AI/ML-enabled aerospace systems requires ongoing monitoring of system state of health, along with verification and validation of end-to-end enterprise effectiveness. These needs drive mission assurance (MA) for AI.

AI/ML techniques are also needed to handle data whose 5Vs (volume, velocity, variety, value, and veracity) increasingly outpace human capacity. To stay ahead of future threats, assured mission success requires continual system performance assessment that is agile enough to identify threats and abnormalities, anticipate anomalies, and take remedial action to ensure sustained, resilient operations. Space systems also require AI to counter adversarial intelligent actors. These needs drive AI for MA.
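A common building block for this kind of continual assessment is an online anomaly monitor over telemetry. The sketch below is a generic rolling z-score detector, not a method prescribed by the paper; the window size, threshold, and sample telemetry stream are arbitrary placeholder values.

```python
from collections import deque
from statistics import mean, stdev

class TelemetryMonitor:
    """Flag telemetry readings that deviate sharply from recent history.

    Illustrative rolling z-score detector; the window size and threshold
    are placeholder values for this sketch, not recommended settings.
    """

    def __init__(self, window: int = 20, threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def check(self, reading: float) -> bool:
        """Return True if the reading is anomalous vs. the rolling window."""
        is_anomaly = False
        if len(self.history) >= 2:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(reading - mu) / sigma > self.threshold:
                is_anomaly = True
        if not is_anomaly:
            self.history.append(reading)  # learn only from nominal data
        return is_anomaly

if __name__ == "__main__":
    monitor = TelemetryMonitor(window=10, threshold=3.0)
    stream = [10.0, 10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 25.0, 10.0]
    for t, value in enumerate(stream):
        if monitor.check(value):
            print(f"t={t}: anomaly detected (value={value}); trigger remediation")
```

In a deployed system, a flagged reading would feed a remediation pipeline (safing, failover, operator alert) rather than a print statement, and the detector itself would be one of many redundant assessment layers.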

To advance U.S. leadership in space, we need both AI for MA and MA for AI. For reference, check out the Center for Space Policy and Strategy paper on Assuring Operations of Autonomous Systems.

This story appears in the June 2019 issue of Getting It Right, Collaborating for Mission Success.
