Automotive Safety and Machine Learning: Initial Results from a Study on How to Adapt the ISO 26262 Safety Standard

ML and ISO 26262 – What to do?

Machine learning enables many novel applications, and we want to use it in safety-critical contexts too. However, safety standards such as ISO 26262 are based on development best practices from the 1990s, long before the deep learning era. We interviewed two experts on functional safety to get their views on the way forward.

This short paper is the first peer-reviewed paper published from the SMILE project. It’s also the first paper by PhD student Jens Henriksson – many more to come, I believe. In SMILE, we’ve held regular workshops to discuss issues with machine learning in the ISO 26262 context. At one of these workshops, we decided to conduct interviews to capture the views of two experts in the field.

Standing on the shoulders of Salay et al.

Like many others, we want to know which parts of the standards for developing safety-critical systems contradict the nature of machine learning. With a proper understanding of this, we could work from two directions to realize safe systems with machine learning features: we could develop the learning components in ways that meet the standards, and we could adapt the standards to accommodate the nature of machine learning.

Our favorite study in this direction is Salay et al. (2017), in which the authors analyzed ISO 26262 from a machine learning perspective, in particular the 34 methods that apply at the software unit level. They concluded that seven methods need to be adapted, see the first two columns below.

Seven methods in ISO 26262-6 that need to be adapted according to Salay et al. (2017). The final column shows the recommended adaptations based on our interviews.

In our preliminary study, we interviewed two experts on functional safety in the automotive domain, and asked them to comment on the findings from Salay et al. (2017). To limit the scope of the study, we focused on the 27 methods that are highly recommended for ASIL D.

The two experts reported that most of these methods, such as “initialization of variables”, exist to increase the interpretability of the software unit – and that would certainly also apply to units that include machine learning. Similarly, methods like “fault injection test” and “resource usage test” are highly recommended and already applicable to machine learning. Regarding the seven methods that Salay et al. (2017) believe need adaptation, the two interviewees recommended the following:

1) Requirements are needed on the training and architecture design phases of machine learning. For example, a neural network is trained to create a mapping from an input to an output, but requirements are not needed at the level of individual neurons – instead, we need requirements on the network architecture and the approach to training.

2) Machine learning models are often sensitive to new input data. Understanding how sensitive they are to disturbances is critical: slightly altering the input vector should not result in a large change in the output (although this is common). Thus, fault injection testing is important for machine learning.

3) Test cases need to be carefully designed to verify the right things (including model sensitivity) and to ensure that functional expectations are met. Naïvely relying on random input is not sufficient to detect machine learning corner cases, as we know from ongoing research on adversarial attacks.
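To make points 2) and 3) concrete, here is a minimal sketch of sensitivity testing under input perturbations. Everything in it is illustrative and not from the paper: the “model” is a toy linear unit standing in for a trained network, and the names `random_sensitivity` and `directed_sensitivity` are hypothetical. The sketch contrasts random fault injection with a directed (adversarial-style) perturbation, which for a linear unit is simply `epsilon * sign(weights)` – showing why naïve random input tends to underestimate worst-case behavior.

```python
import numpy as np

# Toy stand-in for a trained model: a single linear unit.
# All names here are illustrative assumptions, not from the paper.
rng = np.random.default_rng(42)
weights = rng.normal(size=16)

def model(x):
    """Placeholder 'network': one linear unit."""
    return float(weights @ x)

def random_sensitivity(x, epsilon, n_trials=200):
    """Fault injection with random noise: largest output deviation observed."""
    baseline = model(x)
    return max(
        abs(model(x + rng.uniform(-epsilon, epsilon, x.shape)) - baseline)
        for _ in range(n_trials)
    )

def directed_sensitivity(x, epsilon):
    """Worst-case perturbation for a linear unit: epsilon * sign(weights)."""
    return abs(model(x + epsilon * np.sign(weights)) - model(x))

x = np.ones(16)
eps = 1e-2
print("random fault injection:", random_sensitivity(x, eps))
print("directed perturbation :", directed_sensitivity(x, eps))
```

For this toy model, the directed perturbation always bounds what random search finds, which is the essence of the interviewees’ point: random test input alone does not expose machine learning corner cases.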

We intend to conduct interviews with additional domain experts in the fall.

Implications for Research

  • Corroborates the findings by Salay et al. (2017)
  • Paves the way for additional empirical studies on machine learning safety and ISO 26262

Implications for ML Practitioners

  • Specify requirements on the network architecture and how training should be done
  • Use fault injection to test model sensitivity
  • Expect novel approaches to test case generation – random data is not sufficient

Jens Henriksson, Markus Borg, Cristofer Englund. Automotive Safety and Machine Learning: Initial Results from a Study on How to Adapt the ISO 26262 Safety Standard, In Proc. of the 1st Software Engineering for AI in Autonomous Systems, 2018. (link, preprint)


Machine learning (ML) applications generate a continuous stream of success stories from various domains. ML enables many novel applications, also in safety-critical contexts. However, functional safety standards such as ISO 26262 have not evolved to cover ML. We conduct an exploratory study of which parts of ISO 26262 represent the most critical gaps between safety engineering and ML development. While this paper reports only the first steps toward a larger research endeavor, we identify three adaptations that are critically needed to allow ISO 26262-compliant engineering, along with related suggestions on how to evolve the standard.