Hazard contribution modes of machine learning components

Colin Smith, Ewen Denney, Ganesh Pai

Research output: Contribution to journal › Conference article › peer-review

Abstract

Among the essential steps toward developing and deploying safe systems with embedded learning-enabled components (LECs), i.e., software components that use machine learning (ML), are to analyze and understand the contribution of the constituent LECs to safety, and to assure that those contributions have been appropriately managed. This paper addresses both steps: first, it introduces the notion of hazard contribution modes (HCMs), a categorization of the ways in which the ML elements of LECs can contribute to hazardous system states; second, it describes how argumentation patterns can capture the reasoning used to assure HCM mitigation. Our framework is generic in the sense that the categories of HCMs developed i) admit different learning schemes, i.e., supervised, unsupervised, and reinforcement learning, and ii) do not depend on the type of system in which the LECs are embedded, i.e., they apply to both cyber and cyber-physical systems. One goal of this work is to serve as a starting point for systematizing LEC safety analysis, toward eventually automating it in a tool.

Original language: English
Pages (from-to): 14-22
Number of pages: 9
Journal: CEUR Workshop Proceedings
Volume: 2560
State: Published - 2020
Event: 2020 Workshop on Artificial Intelligence Safety, SafeAI 2020 - New York, United States
Duration: Feb 7 2020 → …
