Learning-Based Self-Adaptive Assurance of Timing Properties in a Real-Time Embedded System

Learning to assure timing properties

Assuring that the performance requirements of real-time systems are met is no simple task. We propose using reinforcement learning to adaptively satisfy response time requirements in PLC systems, and we envision that the approach could be integrated into existing supervision control programs.

This paper is an important milestone – the first publication of the first PhD student I co-supervise: Mahshid Helali Moghadam. Mahshid already has a research background with peer-reviewed publications in cloud computing, adaptive systems, and machine learning – I’m sure to learn a lot from her over the coming years. Looking forward to this! Mahshid is funded through the TESTOMAT Project, employed by RISE in Västerås, and enrolled as a PhD student at Mälardalen University. In the first phase of the project, Mahshid has initiated a collaboration with the TESTOMAT industry partner in Västerås: Bombardier Transportation.

Mahshid’s initial plans address requirements on response times in PLC systems, more specifically robust temporal behavior in safety-critical systems implemented using function block diagrams. In this early work, we outline a self-adaptive, learning-based approach for response time assurance of a real-time control program. We formulate the control process as a Markov decision process and propose using reinforcement learning (Q-learning) for adaptive control of response time to meet the performance requirements. We argue that such adaptive performance assurance could be integrated into the supervision control program in real-time embedded systems. There are more publications to come on the topic – two papers have already been accepted at events co-located with ICSE in Gothenburg in May!
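
To give a flavor of what such a formulation might look like, here is a minimal sketch in Python of the timing control problem cast as a Markov decision process. The discretized timing states, the action set, and the reward values are illustrative assumptions on my part, not the formulation from the paper.

```python
from enum import IntEnum

class TimingState(IntEnum):
    """Hypothetical discretization of the monitored response time."""
    WELL_BELOW_DEADLINE = 0   # comfortable slack left before the deadline
    NEAR_DEADLINE = 1         # response time is approaching the deadline
    DEADLINE_MISSED = 2       # the response time requirement was violated

class Action(IntEnum):
    """Hypothetical knobs a supervision control program could adjust."""
    KEEP_CONFIG = 0      # leave the current configuration unchanged
    RAISE_PRIORITY = 1   # give the monitored task more scheduling priority
    SHED_LOAD = 2        # postpone optional work to free execution time

def observe_state(response_time_ms: float, deadline_ms: float) -> TimingState:
    """Map a measured response time onto the discrete state space."""
    slack = deadline_ms - response_time_ms
    if slack < 0:
        return TimingState.DEADLINE_MISSED
    if slack < 0.2 * deadline_ms:  # illustrative threshold
        return TimingState.NEAR_DEADLINE
    return TimingState.WELL_BELOW_DEADLINE

def reward(state: TimingState) -> float:
    """Reward satisfying the requirement, penalize violations."""
    return {TimingState.WELL_BELOW_DEADLINE: 1.0,
            TimingState.NEAR_DEADLINE: 0.0,
            TimingState.DEADLINE_MISSED: -10.0}[state]
```

In a real PLC setting, the actions would of course map to whatever knobs the supervision control program can actually adjust at runtime.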

Implications for Research

  • We propose formulating response time assurance in PLC systems as an adaptive control problem.
  • We formulate the control process as a Markov decision process and suggest using Q-learning for adaptive control (see the sketch after this list).
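
Building on the earlier sketch, the supervision program could close the loop with standard tabular Q-learning: at each control cycle it observes the new timing state, updates the Q-table from the observed reward, and selects the next action epsilon-greedily. This reuses the assumed TimingState, Action, observe_state, and reward definitions from the sketch above and is not the paper’s implementation.

```python
import random

ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration probability

# Q-table over the discrete states and actions sketched earlier.
Q = [[0.0 for _ in Action] for _ in TimingState]

def choose_action(state: TimingState) -> Action:
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(list(Action))
    return Action(max(range(len(Action)), key=lambda a: Q[state][a]))

def q_update(s: TimingState, a: Action, r: float, s_next: TimingState) -> None:
    """Standard tabular Q-learning update rule."""
    Q[s][a] += ALPHA * (r + GAMMA * max(Q[s_next]) - Q[s][a])

def control_cycle(prev_state: TimingState, prev_action: Action,
                  response_time_ms: float, deadline_ms: float):
    """One supervision cycle: learn from the new measurement, pick the next action."""
    new_state = observe_state(response_time_ms, deadline_ms)
    q_update(prev_state, prev_action, reward(new_state), new_state)
    return new_state, choose_action(new_state)
```
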
Mahshid Helali Moghadam, Mehrdad Saadatmand, Markus Borg, Markus Bohlin, and Björn Lisper. Learning-Based Self-Adaptive Assurance of Timing Properties in a Real-Time Embedded System, In Proc. of the 2nd International Workshop on Testing Extra-Functional Properties and Quality Characteristics of Software Systems, 2018. (link, preprint)

Abstract

Providing an adaptive runtime assurance technique to meet the performance requirements of a real-time system without the need for a precise model could be a challenge. Adaptive performance assurance based on monitoring the status of timing properties can bring more robustness to the underlying platform. At the same time, the results or the achieved policy of this adaptive procedure could be used as feedback to update the initial model and, consequently, to produce proper test cases. Reinforcement learning has been considered a promising adaptive technique for assuring the satisfaction of the performance properties of software-intensive systems in recent years. In this work-in-progress paper, we propose an adaptive runtime timing assurance procedure based on reinforcement learning to satisfy the performance requirements in terms of response time. The timing control problem is formulated as a Markov decision process and the details of applying the proposed learning-based timing assurance technique are described.