Abstract
Among the essential steps towards developing and deploying safe systems with embedded learning-enabled components (LECs), i.e., software components that use machine learning (ML), are to analyze and understand the contribution of the constituent LECs to safety, and to assure that those contributions have been appropriately managed. This paper addresses both steps by, first, introducing the notion of hazard contribution modes (HCMs), a categorization of the ways in which the ML elements of LECs can contribute to hazardous system states, and, second, describing how argumentation patterns can capture the reasoning used to assure HCM mitigation. Our framework is generic in the sense that the categories of HCMs developed i) admit different learning schemes, i.e., supervised, unsupervised, and reinforcement learning, and ii) do not depend on the type of system in which the LECs are embedded, applying to both cyber and cyber-physical systems. One goal of this work is to serve as a starting point for systematizing LEC safety analysis towards eventually automating it in a tool.
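To make the notion concrete, the following is a minimal, hypothetical sketch of how an HCM catalog entry might be represented in code. The class `HazardContributionMode`, its fields, and the example mode are illustrative assumptions only; the abstract does not enumerate the paper's actual HCM taxonomy, and this sketch does not reproduce it.

```python
from dataclasses import dataclass
from enum import Enum


class LearningScheme(Enum):
    """The learning schemes the HCM categories are said to admit."""
    SUPERVISED = "supervised"
    UNSUPERVISED = "unsupervised"
    REINFORCEMENT = "reinforcement"


@dataclass(frozen=True)
class HazardContributionMode:
    """One way an LEC's ML element can contribute to a hazardous system state.

    The fields here are illustrative placeholders, not the paper's taxonomy.
    """
    identifier: str
    description: str
    schemes: tuple[LearningScheme, ...]   # schemes under which this HCM can arise
    mitigation_claim: str                 # top-level claim an assurance argument would support


# Hypothetical example entry; the paper's actual HCM categories may differ.
example_hcm = HazardContributionMode(
    identifier="HCM-EX-1",
    description="Learned model output deviates from the intended function "
                "on inputs outside the training distribution",
    schemes=(LearningScheme.SUPERVISED,),
    mitigation_claim="Out-of-distribution inputs are detected and handled safely",
)

if __name__ == "__main__":
    print(example_hcm.identifier, "applies to:",
          [s.value for s in example_hcm.schemes])
```

Representing each mode with an explicit mitigation claim mirrors the paper's pairing of HCM categories with assurance argumentation, and is one way a catalog like this could feed a future analysis tool.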
| Original language | English |
| --- | --- |
| Pages (from-to) | 14-22 |
| Number of pages | 9 |
| Journal | CEUR Workshop Proceedings |
| Volume | 2560 |
| State | Published - 2020 |
| Event | 2020 Workshop on Artificial Intelligence Safety, SafeAI 2020, New York, United States. Duration: Feb 7 2020 → … |