Kailin Tong; Selim Solmaz; Martin Horn
Abstract: A reliable automated driving system (ADS) needs to perform a minimal risk maneuver (MRM) when normal driving is disrupted, e.g., when its perception system fails or becomes unreliable. One way to achieve this is to employ a run-time monitoring device/functionality that supervises the status of the automated driving system and initiates an MRM when necessary. Unlike previous research on MRM planning or safe-stop planning, where a redundant planner runs in parallel, we approach this problem from a different direction. We propose a motion planning framework for MRM that extends the directed-graph map used under normal driving conditions. In our implementation, the monitoring device supervises sensor health and data quality and decides whether an MRM should be initiated. If an MRM is triggered, no additional planner is required; only one additional backup search graph for the MRM is used. Hence, planner redundancy is no longer necessary, and computational resources can potentially be freed. We evaluated our approach under normal driving conditions and under perception fault injections leading to an MRM. Simulations using the Autoware (architecture proposal) software stack indicate that the proposed framework meets the 30 ms deadline and improves the reliability of the ADS.
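The core idea, reusing one graph-search planner and merely swapping in a backup MRM search graph instead of running a redundant planner, can be illustrated with a minimal sketch. The lane graph, node names, and edge costs below are invented for illustration and are not taken from the paper:

```python
import heapq

def plan(graph, start, goals):
    """Dijkstra search over a directed lane graph.

    graph: node -> list of (successor, cost) edges
    goals: set of acceptable goal nodes
    Returns the cheapest path from start to any goal, or None.
    """
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node in goals:
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return path[::-1]
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for succ, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(succ, float("inf")):
                dist[succ], prev[succ] = nd, node
                heapq.heappush(pq, (nd, succ))
    return None

# Normal driving: search the base lane graph toward the route goal.
base_graph = {"A": [("B", 1.0)], "B": [("C", 1.0)]}

# MRM triggered: the same planner runs on the base graph extended with
# backup edges toward a hypothetical safe-stop node; no second planner.
mrm_graph = {"A": [("B", 1.0)], "B": [("C", 1.0), ("stop", 0.5)]}
```

The point of the sketch is that switching from normal driving to the MRM changes only the search graph and the goal set passed to the same planner, which is the redundancy-saving idea the abstract describes.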
Ludwig Kampel, Michael Wagner, Dimitris E. Simos, Mihai Nica, Dino Dodig, David Kaufmann, Franz Wotawa
Abstract: The advancement of automated and autonomous vehicles requires virtual verification and validation of automated driving functions in order to provide the necessary safety levels and to increase acceptance of such systems. The aim of our work is to investigate the feasibility of combinatorial testing fault localization (CT-FLA) in the domain of virtual driving function testing. We apply CT-FLA to screen parameter settings that lead to critical driving scenarios in a virtual verification and validation framework used for automated driving function testing. Our first results indicate that CT-FLA methods can help to identify parameter-value combinations leading to crash scenarios. Index Terms—Combinatorial testing, combinatorial fault localization, AEB, autonomous driving, test scenario generation
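In spirit, CT-FLA narrows the search to parameter-value combinations that co-occur only with failing tests. A much-simplified sketch of that screening step follows; the scenario parameters (ego_speed, fog) and the helper name are invented for illustration and do not come from the paper's framework:

```python
from itertools import combinations

def suspicious_combinations(tests, t=2):
    """Return t-way parameter-value combinations that appear only in failing tests.

    tests: list of (params_dict, passed_bool) pairs from scenario executions.
    A combination seen in any passing test is ruled out as a failure cause.
    """
    in_pass, in_fail = set(), set()
    for params, passed in tests:
        items = sorted(params.items())
        for combo in combinations(items, t):
            (in_pass if passed else in_fail).add(combo)
    return in_fail - in_pass

# Hypothetical AEB scenario executions: only the 50 km/h + fog pairing fails.
tests = [
    ({"ego_speed": 50, "fog": True}, False),
    ({"ego_speed": 50, "fog": False}, True),
    ({"ego_speed": 30, "fog": True}, True),
]
suspects = suspicious_combinations(tests, t=2)
```

Real CT-FLA methods work on covering arrays and refine candidates over multiple test generations; this sketch shows only the elimination idea behind pinpointing failure-inducing combinations.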
Ovidiu Vermesan, Cristina De Luca, Reiner John, Marcello Coppola, Björn Debaillie, Giulio Urlini
Abstract: The ethics of AI in industrial environments is a new field within applied ethics, with notable dynamics but no well-established issues and no standard overviews. It poses many more challenges than similar consumer and general business applications, and the digital transformation of industrial sectors has brought even more considerations into the ethical picture. This relates to integrating AI and autonomous learning machines based on neural networks, genetic algorithms, and agent architectures into manufacturing processes. This article presents the ethical challenges in industrial environments and the implications of developing, implementing, and deploying AI technologies and applications in industrial sectors in terms of complexity, energy demands, and environmental and climate changes. It also gives an overview of the ethical considerations concerning digitising industry and ways of addressing them, such as the potential impacts of AI on economic growth and productivity, the workforce, the digital divide, and alignment with trustworthiness, transparency, and fairness. Additionally, potential issues concerning the concentration of AI technology within only a few companies, human-machine relationships, and behavioural and operational misconduct involving AI are examined. Manufacturers, designers, owners, and operators of AI, as part of autonomy and autonomous industrial systems, can be held responsible if harm is caused. Therefore, the need for accountability is also addressed, particularly in relation to industrial applications with non-functional requirements such as safety, security, reliability, and maintainability, which support making AI-based technologies and applications auditable via an assessment either internally or by a third party. This requires new standards and certification schemes that allow AI systems to be assessed objectively for compliance, with results that are repeatable and reproducible.
This article is based on work, findings, and many discussions within the context of the AI4DI project.
Herbert Mühlburger, Franz Wotawa
Abstract: Worldwide, cyber-attacks constantly threaten the security of infrastructure relying on cyber-physical systems. Infrastructure companies use passive testing approaches such as anomaly-based intrusion detection systems to observe such systems and prevent attacks. However, the effectiveness of intrusion detection systems depends on the underlying models used for detecting attacks and on the observations, which may suffer from scarce data availability. Hence, we need research on a) passive testing methods for obtaining appropriate detection models and b) methods for analysing the impact of data scarcity in order to improve intrusion detection systems. In this paper, we contribute to these challenges. We build on former work on supervised intrusion detection of power grid substation SCADA network traffic, for which a real-world data set (the APG data set) is available. In contrast to previous work, we use a semi-supervised model with recurrent neural network architectures (i.e., LSTM autoencoders and sequence models). This model considers only samples of ordinary data traffic without attacks to learn an adequate detection model. We outline the underlying foundations of the machine learning approach used. Furthermore, we present and discuss the obtained experimental results and compare them with prior results on supervised machine learning approaches. The source code of this work is available at: https://github.com/muehlburger/semi-supervised-intrusion-detection-scada
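The semi-supervised principle the abstract describes, fit a model on attack-free traffic only and flag anything it reconstructs poorly, can be sketched without the paper's LSTM autoencoders. The sketch below substitutes a trivial per-feature z-score as the anomaly score; the function names and the 99th-percentile threshold are our own assumptions, not the paper's method:

```python
import numpy as np

def fit_normal_profile(X_normal):
    """Learn a profile from ordinary (attack-free) traffic features only.

    Returns per-feature mean/std and an anomaly-score threshold set at the
    99th percentile of scores seen on the normal training data.
    """
    mu = X_normal.mean(axis=0)
    sigma = X_normal.std(axis=0) + 1e-8  # avoid division by zero
    scores = np.abs((X_normal - mu) / sigma).mean(axis=1)
    threshold = np.percentile(scores, 99)
    return mu, sigma, threshold

def detect(X, mu, sigma, threshold):
    """Flag samples whose anomaly score exceeds the learned threshold."""
    scores = np.abs((X - mu) / sigma).mean(axis=1)
    return scores > threshold
```

In the paper's setting, the z-score is replaced by the reconstruction error of an LSTM autoencoder trained on normal SCADA traffic sequences, but the detection logic, a threshold on a score learned solely from attack-free data, follows the same pattern.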
Nicolas Gerlin, Endri Kaja, Monideep Bora, Keerthikumara Devarajegowda, Dominik Stoffel, Wolfgang Kunz, Wolfgang Ecker
Abstract: While semiconductors are becoming more efficient generation after generation, continuous technology scaling leads to numerous reliability issues due, among others, to variations in transistor characteristics, manufacturing defects, component wear-out, or interference from external and internal sources. Induced bit flips and stuck-at faults can lead to a system failure. Security-critical systems often use Physical Memory Protection (PMP) modules to enforce memory isolation. The standard loosely-coupled approach eases the implementation but creates overhead in area and performance, limiting the number of protected areas and their size. While delivering strong support against malicious software and induced faults, better performance would also benefit safety tasks by preventing the program from jumping into an undesired region and producing wrong outputs. We propose a novel model-driven approach to resolve these limitations by generating a tightly-coupled RISC-V PMP, which reduces the impact of run-time reconfiguration. We also discuss guidelines on configuring a PMP to minimize the overhead on performance and memory, and provide an area estimation for each possible PMP design instance. We formally verified a RISC-V core with a PMP and evaluated its performance with the Dhrystone benchmark. The presented architecture shows a performance gain of about 3x over the standard implementation. Furthermore, we observed that adding the PMP feature to a RISC-V SoC led to a negligible performance loss of less than 0.1% per thousand PMP reconfigurations.
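For readers unfamiliar with what a PMP reconfiguration involves, a small sketch of the standard RISC-V NAPOT encoding for a pmpaddr/pmpcfg register pair follows. This reflects the RISC-V privileged specification, not the paper's generated, tightly-coupled design; the helper names are our own:

```python
def napot_pmpaddr(base, size):
    """Encode a NAPOT pmpaddr value for a power-of-two region.

    base must be aligned to size; size must be a power of two >= 8 bytes.
    The region size is encoded in the number of trailing 1 bits:
    k trailing ones cover 2**(k + 3) bytes.
    """
    assert size >= 8 and size & (size - 1) == 0, "size must be a power of two >= 8"
    assert base % size == 0, "base must be aligned to size"
    return (base >> 2) | ((size >> 3) - 1)

# pmpcfg byte layout: R=bit0, W=bit1, X=bit2, A (address mode)=bits 3-4,
# L (lock)=bit 7; address mode 3 selects NAPOT.
PMP_NAPOT = 3 << 3

def pmpcfg_byte(r=False, w=False, x=False, lock=False):
    return (r << 0) | (w << 1) | (x << 2) | PMP_NAPOT | (lock << 7)

# Example: protect a hypothetical 4 KiB read/execute region at 0x8000_0000.
addr = napot_pmpaddr(0x8000_0000, 0x1000)
cfg = pmpcfg_byte(r=True, x=True)
```

Each run-time reconfiguration rewrites such CSR pairs, which is why the reconfiguration cost the paper minimizes matters for systems that update protected regions frequently.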