Ovidiu Vermesan, Cristina De Luca, Reiner John, Marcello Coppola, Björn Debaillie, Giulio Urlini
Abstract: The ethics of AI in industrial environments is a new field within applied ethics, with notable dynamics but no well-established issues and no standard overviews. It poses many more challenges than comparable consumer and general business applications, and the digital transformation of industrial sectors has brought even more considerations into the ethical picture. These relate to integrating AI and autonomous learning machines based on neural networks, genetic algorithms, and agent architectures into manufacturing processes. This article presents the ethical challenges in industrial environments and the implications of developing, implementing, and deploying AI technologies and applications in industrial sectors in terms of complexity, energy demands, and environmental and climate change. It also gives an overview of the ethical considerations concerning digitising industry and ways of addressing them, such as the potential impacts of AI on economic growth and productivity, the workforce, the digital divide, and alignment with trustworthiness, transparency, and fairness. Additionally, potential issues concerning the concentration of AI technology within only a few companies, human-machine relationships, and behavioural and operational misconduct involving AI are examined. Manufacturers, designers, owners, and operators of AI, as part of autonomy and autonomous industrial systems, can be held responsible if harm is caused. Therefore, the need for accountability is also addressed, particularly for industrial applications with non-functional requirements such as safety, security, reliability, and maintainability, which require AI-based technologies and applications to be auditable via an assessment, either internally or by a third party. This requires new standards and certification schemes that allow AI systems to be assessed objectively for compliance, with results that are repeatable and reproducible.
This article is based on work, findings, and many discussions within the context of the AI4DI project.
Herbert Mühlburger, Franz Wotawa
Abstract: Worldwide cyber-attacks constantly threaten the security of available infrastructure relying on cyber-physical systems. Infrastructure companies use passive testing approaches such as anomaly-based intrusion detection systems to observe such systems and prevent attacks. However, the effectiveness of intrusion detection systems depends on the underlying models used for detecting attacks and on observations that may suffer from scarce data availability. Hence, we need research on (a) passive testing methods for obtaining appropriate detection models and (b) methods for analysing the impact of data scarcity on intrusion detection systems. In this paper, we contribute to both challenges. We build on former work on supervised intrusion detection for power grid substation SCADA network traffic, for which a real-world data set (the APG data set) is available. In contrast to previous work, we use a semi-supervised model with recurrent neural network architectures (i.e., LSTM autoencoders and sequence models). This model considers only samples of ordinary, attack-free data traffic to learn an adequate detection model. We outline the underlying foundations of the machine learning approach used. Furthermore, we present and discuss the obtained experimental results and compare them with prior results for supervised machine learning approaches. The source code of this work is available at: https://github.com/muehlburger/semi-supervised-intrusion-detection-scada
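The semi-supervised scheme described in this abstract (train a reconstruction model on attack-free traffic only, then flag inputs whose reconstruction error exceeds a threshold calibrated on held-out normal data) can be sketched as follows. A rank-2 PCA stands in here for the paper's LSTM autoencoder, and the data are synthetic rather than the APG set:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_pca(X, k):
    """Fit a rank-k PCA reconstruction model (stand-in for an autoencoder)."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k]

def recon_error(X, mu, comps):
    Z = (X - mu) @ comps.T          # project into the principal subspace
    Xh = Z @ comps + mu             # reconstruct from the projection
    return np.linalg.norm(X - Xh, axis=1)

# "Normal" traffic features: samples near a 2-D subspace of R^8, plus noise.
normal = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 8))
normal += 0.01 * rng.normal(size=normal.shape)

# Semi-supervised step: fit on normal data only, calibrate the threshold
# on held-out normal samples (99th percentile of reconstruction errors).
mu, comps = fit_pca(normal[:400], k=2)
thresh = np.percentile(recon_error(normal[400:], mu, comps), 99)

# Simulated anomalies lie off the learned subspace and reconstruct poorly.
attacks = rng.normal(size=(50, 8))
flags = recon_error(attacks, mu, comps) > thresh
print(f"detected {flags.sum()}/50 simulated anomalies")
```

The key design choice is that the threshold is derived from normal data alone, so no labelled attack samples are needed at training time.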
Nicolas Gerlin, Endri Kaja, Monideep Bora, Keerthikumara Devarajegowda, Dominik Stoffel, Wolfgang Kunz, Wolfgang Ecker
Abstract: While semiconductors are becoming more efficient generation after generation, continuous technology scaling leads to numerous reliability issues due, amongst others, to variations in transistor characteristics, manufacturing defects, component wear-out, or interference from external and internal sources. Induced bit flips and stuck-at faults can lead to system failure. Security-critical systems often use Physical Memory Protection (PMP) modules to enforce memory isolation. The standard loosely-coupled approach eases the implementation but creates area and performance overhead, limiting the number of protected areas and their size. While such modules deliver strong protection against malicious software and induced faults, better performance would also benefit safety tasks, preventing the program from jumping into an undesired region and producing wrong outputs. We propose a novel model-driven approach to resolve these limitations by generating a tightly-coupled RISC-V PMP, which reduces the impact of run-time reconfiguration. We also discuss guidelines on configuring a PMP to minimize the overhead on performance and memory, and provide an area estimation for each possible PMP design instance. We formally verified a RISC-V core with a PMP and evaluated its performance with the Dhrystone benchmark. The presented architecture shows a performance gain of about 3x over the standard implementation. Furthermore, we observed that adding the PMP feature to a RISC-V SoC led to a negligible performance loss of less than 0.1% per thousand PMP reconfigurations.
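As background for the PMP configuration guidelines this abstract refers to, the sketch below encodes one locked NAPOT entry following the standard RISC-V privileged specification. The base address, region size, and permissions are hypothetical examples; the sketch illustrates only the standard register encoding, not the paper's generated tightly-coupled design:

```python
# PMP configuration bits, as defined in the RISC-V privileged specification.
PMP_R, PMP_W, PMP_X = 0x01, 0x02, 0x04   # read / write / execute permissions
PMP_A_NAPOT = 0x18                        # naturally aligned power-of-two mode
PMP_L = 0x80                              # lock the entry (binds M-mode too)

def pmp_napot_addr(base: int, size: int) -> int:
    """Encode a NAPOT region for a pmpaddr register: address bits
    [XLEN+1:2], with trailing ones selecting the region size."""
    assert size >= 8 and size & (size - 1) == 0, "size: power of two >= 8"
    assert base % size == 0, "base must be naturally aligned to its size"
    return (base >> 2) | ((size >> 3) - 1)

# Hypothetical layout: lock a 4 KiB read/execute code region at 0x8000_0000.
pmpaddr0 = pmp_napot_addr(0x8000_0000, 4096)
pmpcfg0 = PMP_L | PMP_A_NAPOT | PMP_R | PMP_X
print(f"pmpaddr0 = {pmpaddr0:#x}, pmpcfg0 = {pmpcfg0:#04x}")
# → pmpaddr0 = 0x200001ff, pmpcfg0 = 0x9d
```

Each such entry consumes one pmpaddr register and one byte of a pmpcfg register, which is why the number and granularity of protected regions are limited in the loosely-coupled scheme.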
Birgit Schlager, Thomas Goelles, Stefan Muckenhuber, Daniel Watzenig
Abstract: Lidar sensors play an essential role in the perception system of automated vehicles. Fault Detection, Isolation, Identification, and Recovery (FDIIR) systems are essential for increasing the reliability of lidar sensors. Knowing the influence of different faults on lidar data is the first crucial step towards fault detection for lidar sensors in automated vehicles. We investigate the influence of sensor cover contamination on the output data, i.e., on the lidar point cloud and the full waveform. Different contamination types were applied (dew, dirt, artificial dirt, foam, water, and oil), and their influence on the output data of the single-beam lidar RIEGL LD05-A20 and the automotive mechanically spinning lidar Ouster OS1-64 was evaluated. The LD05-A20 measurements show that dew, artificial dirt, and foam lead to unwanted reflections at the sensor cover. Dew, artificial dirt over the entire transmitter, and foam lead to severe faults, i.e., complete sensor blindness. The OS1-64 measurements also show that dew can lead to almost complete sensor blindness. The results look promising for further studies on fault detection and isolation, since the different contamination types lead to different symptom combinations.
Lina Marsso, Radu Mateescu, Lucie Muller, Wendelin Serwe
Abstract: We present two behavioral models of an autonomous vehicle and its interaction with the environment. Both models use the formal modeling language LNT provided by the CADP toolbox. This paper discusses the modeling choices and the challenges of our autonomous vehicle models, and also illustrates how formal validation tools can be applied to a single component or the overall vehicle.