The advancement and adoption of Machine Learning (ML) algorithms constitute a crucial disruptive innovation. However, to benefit from these innovations within security- and safety-critical domains, we need to be able to evaluate the risks and benefits of the technologies used: in particular, we need to assure ML-based and autonomous systems.
The assurance of complex software-based systems often relies on a standards-based justification, but for autonomous systems it is difficult to rely solely on this approach given the lack of validated standards, policies, and guidance for such novel technologies. Likewise, other strategies such as “drive to safety” - using evidence developed from trials and experience to support claims of safety in deployment - are unlikely to be successful by themselves, especially once the impact of security threats is taken into account. This reinforces the need for innovation in assurance and for the development of an assurance methodology for autonomous systems.
Although forthcoming standards and guidelines will eventually have an important, yet indirect, role in helping us justify behaviours, we need further development of assurance frameworks to enable us to exploit disruptive technologies. We focus on directly investigating the desired behaviour (e.g., a safety property or reliability) of a system through an argument- or outcome-based approach that integrates disparate sources of evidence, whether from compliance, experience, or product analysis. We argue that building trust and trustworthiness through argument-based mechanisms, specifically the Claims, Arguments, and Evidence (CAE) framework, allows for the accelerated exploration of novel mechanisms that would advance the quality and assurance of disruptive technologies.
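To make the idea concrete, the core of a CAE structure is a tree in which claims are supported by arguments, and arguments rest on evidence and sub-claims. The following is a minimal, illustrative sketch only - the class and function names are our own invention for this example, not part of any CAE tooling - showing how disparate evidence (compliance, experience, product analysis) can be integrated under one top-level claim:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    """A concrete evidence item (audit report, trial data, analysis result)."""
    description: str

@dataclass
class Argument:
    """Explains why its evidence and sub-claims support the parent claim."""
    rationale: str
    evidence: List[Evidence] = field(default_factory=list)
    subclaims: List["Claim"] = field(default_factory=list)

@dataclass
class Claim:
    """A claim about a system property, e.g. a safety or reliability property."""
    statement: str
    arguments: List[Argument] = field(default_factory=list)

def is_supported(claim: Claim) -> bool:
    """A claim counts as supported here if at least one of its arguments
    rests on some evidence or sub-claims, and every sub-claim of that
    argument is itself supported (checked recursively)."""
    return any(
        (arg.evidence or arg.subclaims)
        and all(is_supported(c) for c in arg.subclaims)
        for arg in claim.arguments
    )

# Example: integrating compliance and trial evidence under one safety claim.
top = Claim(
    "The system is acceptably safe in deployment",
    arguments=[
        Argument(
            "Diverse evidence from compliance and operational experience",
            evidence=[Evidence("Standards compliance audit")],
            subclaims=[
                Claim(
                    "Trial experience supports the reliability target",
                    arguments=[Argument("Field data",
                                        evidence=[Evidence("Trial logs")])],
                ),
            ],
        )
    ],
)
print(is_supported(top))  # → True
```

In a real CAE assessment the support relation is, of course, a matter of expert judgement and structured argumentation, not a boolean check; the sketch only illustrates the claim/argument/evidence decomposition that the framework makes explicit.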
The TIGARS project describes some of our work on assuring autonomy for autonomous vehicles, and our Dstl project explores our work on assurance templates.
To discuss how we may be able to help, please contact us.
*AI refers to machines or computer systems with at least near-human general intelligence, able to take actions and make decisions autonomously, without human interaction. ML is a particular type of algorithm that allows machines to learn a desired behaviour without being explicitly programmed to do it.