Triaging Autonomous Drone Faults By Simultaneously Assuring Autonomy and Security

Our premise is that assuring autonomy has both operational and security components and that artificial intelligence (AI) based methods are appropriate for this task; however, making these AI techniques “explainable” can be non-trivial. Further, our position is that a reasonable solution to this problem requires a monitor (preferably at the ground station) that can assess the drone’s operational autonomy and its individual components. The monitor could be developed to review every decision made by the native operational autonomy. When a significant number of “bad decisions” (i.e., decisions that differ from the monitor’s) are made, the monitor can “lock out” the native autonomy and take over, establishing an explainable “safe state”. Similarly, the monitor can use data from mini anomaly detectors observing all of the drone’s major components and aggregate their feedback to determine whether the drone’s security has been compromised. Essentially, we propose an “Explainable AI Security” Monitor that not only simultaneously assures the operational autonomy and security of autonomous drone fleets but can also output the logic behind its decisions. This paper surveys the current autonomy, assurance, and security literature and points out both this gap (i.e., the lack of an Explainable AI Security Monitor) and the critical need to fill it.
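As a rough illustration of the proposed monitor, the sketch below assumes a sliding-window counter of disagreements between the native autonomy and the monitor, plus a simple quorum over per-component anomaly flags. All names and thresholds (ExplainableSecurityMonitor, DISAGREEMENT_LIMIT, ANOMALY_QUORUM) are hypothetical choices for this sketch, not values or an API from the paper.

```python
from collections import deque

# Illustrative thresholds; the paper does not specify concrete values.
DISAGREEMENT_LIMIT = 5   # "significant number" of bad decisions
WINDOW_SIZE = 20         # sliding window of recent reviewed decisions
ANOMALY_QUORUM = 2       # components flagged before declaring compromise


class ExplainableSecurityMonitor:
    """Ground-station monitor that shadows the drone's native autonomy."""

    def __init__(self):
        self.recent = deque(maxlen=WINDOW_SIZE)  # 1 = disagreement, 0 = agreement
        self.locked_out = False
        self.log = []  # human-readable trail supporting explainability

    def review_decision(self, native_action, monitor_action):
        """Compare the native autonomy's action with the monitor's own choice."""
        disagreed = native_action != monitor_action
        self.recent.append(1 if disagreed else 0)
        if disagreed:
            self.log.append(
                f"disagreement: native={native_action} monitor={monitor_action}"
            )
        if not self.locked_out and sum(self.recent) >= DISAGREEMENT_LIMIT:
            # Take over from the native autonomy: the explainable "safe state".
            self.locked_out = True
            self.log.append(
                f"lockout: {sum(self.recent)} bad decisions in last "
                f"{len(self.recent)} reviews"
            )
        return self.locked_out

    def aggregate_anomalies(self, component_flags):
        """Fuse per-component mini-anomaly-detector outputs into one verdict.

        component_flags: dict mapping component name -> bool (anomaly seen).
        """
        flagged = [name for name, anomalous in component_flags.items() if anomalous]
        compromised = len(flagged) >= ANOMALY_QUORUM
        if compromised:
            self.log.append(f"security alert: anomalies in {flagged}")
        return compromised

    def explain(self):
        """Output the logic that sourced the monitor's decisions."""
        return "\n".join(self.log)


# Example usage with hypothetical telemetry:
monitor = ExplainableSecurityMonitor()
for native, shadow in [("climb", "climb"), ("dive", "hold"), ("dive", "hold"),
                       ("left", "right"), ("dive", "hold"), ("dive", "hold")]:
    monitor.review_decision(native, shadow)
monitor.aggregate_anomalies({"gps": True, "imu": True, "radio": False})
print(monitor.explain())
```

Because every lockout and security alert is appended to a plain-text log, the monitor can replay the exact chain of observations behind any intervention, which is the explainability property the abstract calls for.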
