
Periodic Reporting for period 2 - TUNE (Testing the Untestable: Model Testing of Complex Software-Intensive Systems)

Teaser

Software-intensive systems pervade modern society and industry. These systems often play critical roles from an economic, safety or security standpoint, thus making their dependability a crucial matter. Developing technologies to verify and validate that complex systems are...

Summary

Software-intensive systems pervade modern society and industry. These systems often play critical roles from an economic, safety or security standpoint, thus making their dependability a crucial matter. Developing technologies to verify and validate that complex systems are reliable, safe, and secure is therefore an essential societal and economic objective.

One key aspect is that the verification and validation (V&V) of software should be automated to scale up to real, complex systems and services. Such automation is truly challenging as it should be both effective at finding critical faults and economically viable.

This research applies the latest Artificial Intelligence developments (e.g., Machine Learning, Evolutionary Computing, Natural Language Processing) to enable cost-effective V&V automation. This endeavor covers all aspects of V&V, from early system requirements analysis to design verification, automated software testing, and run-time monitoring. It also addresses all aspects of dependability including reliability, safety, security, and compliance with regulations.

Work performed

All projects below were performed in collaboration with industry partners in the automotive, satellite and financial domains. Industrial case studies were used to validate our solutions. Most of the proposed solutions involve the application of machine learning, evolutionary computing, natural language processing, and model-driven engineering.

● Requirements Quality Assurance

Providing automated assistance for requirements quality assurance (RQA) can significantly reduce development costs, increase trustworthiness, and foster innovation by allowing companies to focus more of their (often scarce) resources on building new products. So far, we have focused on automating several complex and laborious RQA tasks. Our focus throughout has been on requirements stated in natural (human) language, motivated by the prevalent use of natural-language requirements in industry.

● Model-Based Testing of Software-Based Systems and Services

In the context of cyber-physical systems (e.g., automotive), we have developed automated testing solutions that leverage the artifacts commonly produced during software analysis and design: requirements specifications in natural language, domain models, and timed automata capturing the timing requirements of the system.

In the context of data processing systems, we have developed scalable and efficient automated testing solutions through the combination of (1) a methodology to model the input and output of the system and their relationships, and (2) a set of techniques for the automated generation of optimized test suites using model-based data mutation, meta-heuristic search and constraint solving.
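As a toy illustration of the model-based data mutation idea (the field model, constraints, and mutation operators below are invented for illustration and are not the project's actual methodology), one can derive boundary and invalid test inputs from a declarative model of the input fields:

```python
# Illustrative sketch of model-based data mutation.
# The input model and mutation rules are assumptions made for this example.

import copy

# A tiny "input model": field names with type and range constraints.
INPUT_MODEL = {
    "amount":   {"type": int, "min": 0, "max": 10_000},
    "currency": {"type": str, "allowed": ["EUR", "USD"]},
}

def mutate(record: dict) -> list[dict]:
    """Derive boundary and invalid variants of a valid record from the model."""
    variants = []
    for field, spec in INPUT_MODEL.items():
        if spec["type"] is int:
            # Boundary-value mutations: just outside and on the limits.
            for v in (spec["min"] - 1, spec["min"], spec["max"], spec["max"] + 1):
                m = copy.deepcopy(record)
                m[field] = v
                variants.append(m)
        elif spec["type"] is str:
            # Invalid-enum mutation: a value outside the allowed set.
            m = copy.deepcopy(record)
            m[field] = "???"
            variants.append(m)
    return variants

seed = {"amount": 500, "currency": "EUR"}
tests = mutate(seed)
print(len(tests))  # 4 numeric boundary variants + 1 invalid enum value
```

In a realistic setting, the generated variants would then be pruned and optimized via meta-heuristic search and constraint solving, as described above.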

In the context of cyber-physical systems, we have developed a technology to support the optimization of hardware-in-the-loop testing, which is usually the last stage before deployment and typically a very time-consuming and expensive activity.

● Testing and Analysis of Product Lines

We developed and validated a technique for the automated classification and prioritization of test cases in the context of product lines and requirements-driven testing. The technique relies on change impact analysis to identify obsolete and reusable test cases. To automatically prioritize test cases, the technique relies on a prediction model that computes a prioritization score based on multiple risk factors such as fault-proneness of requirements and requirements volatility.
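As a simplified illustration of such a prioritization score (the risk factors, weights, and linear combination below are assumptions made for this example; the actual technique relies on a learned prediction model), per-test risk factors can be combined and used to order the suite:

```python
# Hypothetical sketch of risk-based test case prioritization.
# Factor names and weights are illustrative assumptions, not the project's model.

from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    fault_proneness: float  # estimated fault-proneness of linked requirements (0..1)
    volatility: float       # how often the linked requirements changed (0..1)

def prioritization_score(tc: TestCase, w_fault: float = 0.6, w_vol: float = 0.4) -> float:
    """Combine risk factors into a single score; higher means test earlier."""
    return w_fault * tc.fault_proneness + w_vol * tc.volatility

def prioritize(test_cases: list[TestCase]) -> list[TestCase]:
    """Order the suite so the highest-risk test cases run first."""
    return sorted(test_cases, key=prioritization_score, reverse=True)

suite = [
    TestCase("login",  fault_proneness=0.9, volatility=0.2),
    TestCase("report", fault_proneness=0.3, volatility=0.8),
    TestCase("export", fault_proneness=0.1, volatility=0.1),
]
ordered = prioritize(suite)
print([tc.name for tc in ordered])  # highest-risk test cases first
```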

● Security Testing

The work on security testing led to the development of automated, black-box solutions for identifying the most frequent security risks according to OWASP, e.g., SQL injection and XML injection vulnerabilities. Our approach is, however, generalizable to most types of vulnerabilities.

● Model Testing

We have developed an environment for the co-simulation of software models (in UML) and function models in Simulink, which is a necessary platform for early design verification. In addition, we have started to develop a framework to perform trace checking of simulation results in order to verify the types of properties that are typically checked on input and output signals in cyber-physical systems.
Lastly, we have also developed and evaluated a technology for the automated testing and verification of Simulink models early in the model-in-the-loop development phase.
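As a minimal sketch of what trace checking of simulation results can look like (the bounded-response property, thresholds, and trace format below are illustrative assumptions, not the project's specification language), one can check an input/output property over a recorded signal trace:

```python
# Illustrative offline trace checker for a bounded-response property:
# whenever the input exceeds `threshold`, the output must drop below
# `limit` within `deadline` time steps. All names are assumptions.

def check_response(trace, threshold, limit, deadline):
    """trace: list of (input, output) samples, one per time step.
    Returns (True, None) if the property holds, else (False, step)."""
    for t, (u, _) in enumerate(trace):
        if u > threshold:
            window = trace[t:t + deadline + 1]
            if not any(y < limit for _, y in window):
                return False, t  # property violated at step t
    return True, None

# A satisfying trace: the output settles below the limit in time.
trace = [(0.0, 5.0), (2.5, 5.0), (2.5, 0.5), (0.0, 0.5)]
print(check_response(trace, threshold=2.0, limit=1.0, deadline=2))  # → (True, None)
```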

Final results

We intend to build on our modeling infrastructure with support for co-simulation to develop a comprehensive model testing framework for CPS function and design models. Our model testing framework will enable the automated specification of test oracles for continuous CPS behaviors, the analysis of models with uncertain and unknown behaviors, and the identification of high-risk CPS behaviors.

We seek to develop techniques for effective testing and safety analysis of AI-based systems used in self-driving systems (e.g., those containing deep neural network (DNN) components). In particular, we aim to develop automated testing techniques for DNNs based on different model testing strategies for CPS and to provide techniques to help with explaining and interpreting DNN behaviors.

Leveraging our modeling foundation for CPS and our suite of meta-heuristic search algorithms for CPS testing, we plan to create a simulation framework for systems based on the Internet of Things (e.g., a disaster management system) and to develop automated techniques for the online self-adaptation of such systems, improving their resilience and reliability.

Having a smooth transition from requirements to testing is of great importance. A major gap nevertheless remains between the two activities. In particular, little work exists on the derivation of system-level test specifications, also known as acceptance criteria. Defining acceptance criteria for a complex CPS is a time-consuming and error-prone activity, especially when the requirements evolve frequently. In the future, we would like to develop automated strategies for deriving acceptance criteria from requirements and ensuring that the derived criteria are feasible, up-to-date, and accurately targeted at the most important system scenarios.

Run-time verification is one of the most suitable techniques for verifying highly dynamic and pervasive cyber-physical systems, which execute in uncertain and variable environments. We plan to develop an innovative run-time verification approach that lifts verification to the model level, leading to run-time model verification. As the system executes and evolves over time, run-time model verification requires (1) the model to be kept alive, and (2) the requirements specifications against which the model is checked to adapt to the intrinsic evolution of the system and its environment. Our framework will address the challenge of dealing with incomplete and evolving models and specifications in the run-time verification process.

Website & more info

More info: http://www.erc-tune.eu.