Model-based Engineering for Mixed Criticality Systems

Author: Simon Barner (FORTISS)

Introduction

In the domain of computer architecture, the mixed-criticality integration challenge consists of providing mechanisms that allow integrating as many functions as possible onto a single system while ensuring the safe segregation of functions with different criticality levels. Dedicated architectures for mixed-criticality systems (MCS) are usually based on a small set of core services that can be used to instantiate systems (e.g., networked, virtualized multicore computers as researched in the European project DREAMS [1]) satisfying both the performance and segregation demands.

However, the mixed-criticality integration challenge also exists for the development process. Here, model-based engineering makes it possible to integrate numerous functions of different criticality levels onto a shared, complex hardware/software platform.

In addition to describing a model-based engineering approach for MCS, this article presents the toolchain currently being developed within the DREAMS project, which is based on these principles.

 

Meta-Models

A model-based development process for mixed-criticality systems relies on meta-models used to describe different aspects of mixed-criticality systems and the artifacts generated in the different steps of the process [2][3]. In the following, a number of viewpoints that cluster these meta-models are described.

Architecture Viewpoints

The architecture viewpoints contain meta-models used to describe structural aspects of the system.

Logical Viewpoint

The logical viewpoint contributes the logical component architecture meta-model. It can be used to define the logical and functional aspects of applications in terms of a component architecture composed of a set of hierarchical software components and their interrelationships modeled using directed channels. The information in a component architecture can be augmented using annotations (e.g., criticality levels, non-functional requirements of components, etc.) and external models that reference the component architecture.
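As an illustration, the following Python sketch models a minimal component architecture with criticality annotations and a directed channel. The class and field names are purely illustrative and do not reproduce the actual DREAMS meta-model:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """A (possibly hierarchical) software component of the logical architecture."""
    name: str
    criticality: str = "QM"          # annotation, e.g. a SIL level or 'QM'
    ports: list = field(default_factory=list)
    children: list = field(default_factory=list)  # hierarchical sub-components

@dataclass
class Channel:
    """Directed channel connecting an output port to an input port."""
    source: str   # "Component.port"
    target: str

# A two-component architecture with one directed channel
sensor = Component("SensorFusion", criticality="SIL3", ports=["out"])
ctrl = Component("Controller", criticality="SIL3", ports=["in"])
architecture = [sensor, ctrl]
channels = [Channel("SensorFusion.out", "Controller.in")]
```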

Technical Viewpoint

The technical viewpoint defines the platform architecture meta-model that provides an abstraction of the HW/SW platform. It is organized into two layers that describe the physical hardware architecture on the one hand, and the architecture of the virtualization layer (e.g., hypervisor partitions) on the other hand. It encodes the topology and the available resources of the platform, as well as non-functional properties of platform components in the form of annotations.
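A minimal sketch of such a two-layer platform description could look as follows (all names are hypothetical; the real meta-model is far richer):

```python
from dataclasses import dataclass, field

@dataclass
class Tile:
    """Physical execution unit (e.g., a processor core or tile)."""
    name: str

@dataclass
class Node:
    """Physical computing node with its hardware layer (tiles) and its
    virtualization layer (hypervisor partitions mapped onto tiles)."""
    name: str
    tiles: list = field(default_factory=list)
    partitions: dict = field(default_factory=dict)  # partition name -> tile name

node = Node(
    "Node1",
    tiles=[Tile("core0"), Tile("core1")],
    partitions={"P_safety": "core0", "P_nonsafety": "core1"},
)
```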

Deployment Viewpoint

The deployment viewpoint provides a deployment meta-model that captures mappings between the model elements of the logical and the technical architecture. Examples include the mapping of logical components to execution units, and ports of logical components to platform transceiver ports.

In addition, the deployment viewpoint contributes resource-utilization meta-models that can be used to describe the usage of resources in the deployed system: the virtual link meta-model supports mapping logical output ports and their connected logical channels to virtual links that represent the messages’ routes and quality-of-service attributes. Furthermore, the schedule meta-model can be used to represent hierarchical resource schedules.
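In their simplest form, a deployment and a virtual link can be sketched as plain mappings. All identifiers and QoS attributes below are hypothetical examples, not elements of the actual DREAMS models:

```python
# Deployment: logical components mapped to execution containers (partitions),
# and a logical output port mapped to a virtual link.
deployment = {
    "SensorFusion": "P_safety",
    "Controller":   "P_safety",
}

# A virtual link bundles the route and quality-of-service attributes
# of the messages sent via a logical channel.
virtual_links = {
    "VL1": {
        "source_port": "SensorFusion.out",
        "route": ["P_safety"],  # both endpoints in one partition: hypervisor IPC
        "qos": {"period_ms": 10, "max_frame_bytes": 64},
    }
}
```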

Temporal Viewpoint

The temporal viewpoint contributes a timing meta-model that can be used to specify additional constraints on the execution sequence as well as requirements on the temporal behavior of logical components. It is hence an essential input for the computation of the platform configuration.
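As a toy example of how such timing annotations feed into configuration computation, the sketch below attaches period and worst-case execution time (WCET) annotations to components and applies a simple single-core utilization bound. This is a deliberately simplified necessary condition, not the timing analysis used in DREAMS:

```python
# Hypothetical timing annotations per logical component (milliseconds)
timing = {
    "SensorFusion": {"period": 10.0, "wcet": 2.0, "deadline": 10.0},
    "Controller":   {"period": 10.0, "wcet": 3.0, "deadline": 10.0},
}

def utilization(tasks):
    """Total processor utilization of a set of periodic tasks: sum of WCET/period."""
    return sum(t["wcet"] / t["period"] for t in tasks.values())

# Necessary condition for schedulability on a single core: utilization <= 1
feasible = utilization(timing) <= 1.0
```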

Safety Viewpoint

The safety viewpoint provides meta-models used to check the safety consistency of mixed-criticality systems based on IEC 61508. The IEC 61508 and diagnostic techniques and measures meta-model is used to represent IEC 61508-based safety standards, safety integrity levels (SILs), and the diagnostic techniques and measures defined in the standard. The safety compliance meta-model is used to represent safety specifications of logical components, the physical platform architecture, and the virtualization layer. Finally, the safety partitioning restrictions meta-model is used to model the constraints to be met by a deployment in order to help ensure the correctness of the system from a safety point of view.

Variability Viewpoint

Finally, the variability viewpoint provides a meta-model that can be used to create separate specifications of variation points of a given (product) model.

Figure 1: Development Process for Mixed Criticality Systems

 

The model-based development process for mixed-criticality systems based on the meta-models described above is defined as a chain of transformations from the input models to the final artefacts (via a set of intermediate models and textual artefacts) to be deployed to the target platform (e.g., software images, configuration for devices and software services).

 

 

As can be seen in Figure 1, different stakeholders provide inputs to the process. These models are then passed to the analyses and optimizations defined in the process. If a valid deployment can be determined, deployable configurations are generated by the backend of the process. In the following, the major steps of this process, namely system architecture development, software development, safety management, offline resource allocation, and configuration generation, are described.

System Architecture Development

During system architecture development, the system architect uses modelling tools to create a model of the overall system that covers the following aspects:

  • Application architecture with annotations (e.g., criticality domains)
  • Desired temporal behavior
  • Platform architecture: structure of hardware platform (nodes, processors, networks, etc.) and software platform (e.g., hypervisors and partitions).
  • Safety specifications of logical and platform components

Software Development Process

The actual functionality of the different application subsystems is provided by software developers who implement software components to be integrated into the overall system, as pointed out below. On the one hand, the software developer can follow a fully model-driven software development approach and specify the functionality (and the desired temporal behavior) using models that refine the application architecture specified by the system architect. In this case, a code generator can be used to produce the source code of the application. On the other hand, the software developer can also implement components directly in a programming language supported by the integration platform. In this case, application subsystems are provided as black-box components with regard to the system model, which is still used to support the integration phase. In either case, the resulting source code is compiled and further transformed into deployable images supported by the software integration platform.

Safety Management

During safety management, the system design is verified by means of a set of consistency rules. In the following a number of examples (based on IEC 61508) will be given:

  • SIL claimed cannot be higher than the maximum allowable SIL: The SIL claimed for a Safety Compliant Item cannot be higher than the allowable SIL value calculated depending on the diagnostics used for the compliant item.
  • Safety certification standard supported by any 'compliant item' must be compliant with the system certification standard: All the components of a system must be compliant with the certification standard of the system. If any component is not compliant with this standard, the system will not be considered consistent. If a component is not compliant with the system's standard but supports a derived standard, the system will still be considered consistent, but a warning message will be shown.
  • Functional Safety Management (FSM) used in the development of any ‘compliant item’ must be compliant with the system FSM: The FSM for each component of a system must be compliant with the FSM for the system.
  • SIL level required for the application is provided by the platform: The platform where an application is deployed on must provide the level of integrity required by the application.
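Two of the rules above, namely the maximum allowable SIL and the platform integrity requirement, can be sketched as simple ordinal comparisons over SIL levels. This is an illustrative simplification of the actual IEC 61508-based rule checker:

```python
# Ordinal ranking of integrity levels ('QM' = no safety requirement)
SIL_ORDER = {"QM": 0, "SIL1": 1, "SIL2": 2, "SIL3": 3, "SIL4": 4}

def check_claimed_sil(claimed, max_allowable):
    """Rule: the SIL claimed for a compliant item cannot exceed the maximum
    allowable SIL derived from the diagnostic techniques and measures used."""
    return SIL_ORDER[claimed] <= SIL_ORDER[max_allowable]

def check_deployment(app_sil, platform_sil):
    """Rule: the platform an application is deployed on must provide at
    least the level of integrity required by the application."""
    return SIL_ORDER[platform_sil] >= SIL_ORDER[app_sil]

# Example checks
assert check_deployment("SIL3", "SIL3")       # platform matches requirement
assert not check_deployment("SIL3", "SIL1")   # platform too weak
assert not check_claimed_sil("SIL4", "SIL2")  # claim exceeds allowable SIL
```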

Offline Resource Allocation

The key property of mixed-criticality systems is the integration of different application subsystems of different criticality levels into a single target system based on both design-time and runtime methods. Figure 2 depicts the role and workflow of the system integrator who integrates the components provided by application developers into the overall system model designed by the system architect. This integration comprises:

  • Mapping of tasks to execution containers (i.e., partitions).
  • Computation of offline task and message schedules.
  • Configuration of online adaptation strategies (e.g., pre-computed static schedules for different system modes).

Offline resource-allocation algorithms designed for mixed-criticality systems (that, e.g., consider segregation constraints) support the system integrator in this complex task. The result of the offline integration step is stored in a resource utilization model that is derived, using a model transformation, from the output of the resource-allocation algorithm as well as the initial application and platform models.

This model refines platform-independent model artefacts from the application model (e.g., messages between components) into platform-specific constructs (e.g., the instantiation of specific communication channels depending on the mapping of the communication endpoints, such as inter-partition communication provided by the hypervisor, as well as on-chip and off-chip communication). Furthermore, this platform-specific model (PSM) contains detailed information about the deployed application (e.g., mapping, schedules, etc.).
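The task-to-partition mapping step can be sketched as a greedy first-fit allocation under a segregation constraint. This toy algorithm is a deliberately simplified stand-in for the actual resource-allocation algorithms used in DREAMS:

```python
def allocate(tasks, partitions):
    """Greedy first-fit mapping of tasks to partitions under the segregation
    constraint that each partition hosts only a single criticality level.

    tasks: {task_name: criticality}; partitions: list of partition names.
    Returns {task: partition} or None if no valid mapping exists."""
    hosted = {}   # partition -> criticality level it hosts so far
    mapping = {}
    for task, crit in tasks.items():
        for p in partitions:
            if hosted.get(p, crit) == crit:   # partition empty or same criticality
                hosted[p] = crit
                mapping[task] = p
                break
        else:
            return None                       # no partition admits this task
    return mapping

mapping = allocate(
    {"brake": "SIL3", "infotainment": "QM", "abs": "SIL3"},
    ["P0", "P1"],
)
# brake and abs share P0 (both SIL3); infotainment is segregated into P1
```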

The offline resource allocation and exploration step is illustrated in more detail in Figure 2, which presents the involved processing chain (colored ovals in the figure, from left to right). It provides two entry points that start at different levels of abstraction. The basic workflow involves the following steps:

Figure 2: Overview of Offline Resource Allocation and Exploration Process

 

  • Offline adaptation strategies for mixed-criticality systems (blue oval) serve as the entry point to the development process. These methods are used to compute a deployment of an application to an instance of the execution platform and also take into account the online resource-management strategies. This step uses models of the application subsystems and a platform model as input. The result of this process is a platform-specific model that contains information about the deployed application.
  • In the figure, the blue backward arrow (“analysis”) indicates that this step is an optimization process where different deployment alternatives are explored and that it provides analysis methods to rate the eligibility or quality of a particular solution (e.g., timing analysis).

Configuration Generation

The purpose of configuration generation is to create deployable configurations for a given mixed-criticality application and platform instance. The starting point of the generation process is the model of a (complete) system configuration, which consists of rather high-level information compared to the deployable device configuration files. Then, a configuration file generator takes this model as input and generates human-readable configuration files based on generation templates.

Generators may first generate an intermediate configuration model, before the actual generation of the configuration files. Certain vendor specific or very low level configuration parameters are not covered by the meta-model for mixed criticality systems presented above. Hence, they need to be provided separately, e.g., as fixed values in the templates, as the result of some algorithm or as user input to the generator. They may also be set through manual post-processing of the configuration files.
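Template-based generation can be sketched with a plain string template. The XML-like fragment below is hypothetical; real generators (e.g., for XtratuM or TTEthernet) use their own vendor-specific schemas and parameters:

```python
from string import Template

# Hypothetical template for one hypervisor partition entry; fixed values
# such as the console setting stand in for parameters not covered by the
# system meta-model.
PARTITION_TMPL = Template(
    '<Partition id="$pid" name="$name" console="Uart">\n'
    '  <PhysicalMemoryArea size="${size}MB"/>\n'
    '</Partition>'
)

# Fill the template from (model-derived) parameter values
config = PARTITION_TMPL.substitute(pid=0, name="P_safety", size=4)
```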

The generated configuration files are the inputs to an implementation-specific platform service configurator, which creates (binary) deployable configurations that can be downloaded to the device.

Product-Line Exploration Process

Based on the offline resource allocation process outlined above that handles the configuration of a single system, the following extended development process has been defined to consider entire product-line families.

  • In this case, the process starts with the definition of base models and variability specification (red oval in Figure 2). Here, a system model consisting of an application model and a platform model (see above) are used as base models. Additionally, the system designer provides a (separate) variability specification that defines which parts of the base model can be varied. Hence, the base model serves as template which is augmented with appropriate variation points.
  • Together, both models span an entire product-line family from which particular members can be selected. In the figure, this selection step is designated as variability binding (green oval), since for all variation points, a concrete choice is made. The result of this process is a system model that can be further processed using the basic workflow pointed out above.
  • The green and red backward arrows in the figure indicate that, also in this workflow, the eligibility and quality of the deployed system are rated. In case the selected solution does not satisfy all requirements, the following two options exist: First, a different variability binding is selected (indicated by the green backward arrow), i.e., a different member of the product-line family is used as input for the basic workflow. If this step is not successful, the designer changes the definition of the product line by modifying the base model and/or the variability specification (red arrow).
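The enumerate-bind-rate loop above can be sketched as follows. The variation points and the eligibility rule are invented for illustration and are not taken from a DREAMS variability model:

```python
from itertools import product

# Hypothetical variation points of a base model: each maps to its choices
variability = {
    "cores":   [2, 4],
    "network": ["TTEthernet", "CAN"],
}

def bindings(spec):
    """Enumerate all variability bindings, i.e., all product-line members."""
    keys = list(spec)
    for combo in product(*(spec[k] for k in keys)):
        yield dict(zip(keys, combo))

def eligible(member):
    """Toy rating rule: TTEthernet-based variants need at least 4 cores."""
    return not (member["network"] == "TTEthernet" and member["cores"] < 4)

# Keep only the bindings whose deployed system would pass the rating
valid_members = [m for m in bindings(variability) if eligible(m)]
```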

 

Toolchain

In order to apply the development process for mixed-criticality systems presented above in a development project, tool support is required. Obviously, such a toolchain must be tailored to the execution platform that is supposed to host the mixed-criticality application and to the targeted application domain. In the following, some information on the toolchain currently being developed within the European project DREAMS is presented.

  • AutoFOCUS3 (AF3) is a model-based development tool for distributed, reactive, embedded software systems. It uses models in all development phases including requirements analysis, design of the logical architecture, design of the platform architecture, implementation, and deployment. The development process is supported by analysis and automation tools, including a Design Space Exploration.
  • BVR (Base Variability Resolution) provides editors and transformation to model and exploit variability. It enables editing of variability models, manual or automatic selection of resolutions, and the realization and materialization of the selected resolutions.
  • RTaW-Timing: a cross-domain timing verification and optimization tool that supports describing timing chains over several resources and verifying end-to-end timing constraints.
  • The Safety Constraints and Rules Checker provides and checks a set of safety constraints to verify deployment allocations and a set of safety rules to verify safety properties of the system and subsystems (based on IEC 61508).
  • Scheduling tools for mixed criticality systems
  • TTEthernet: configuration generators and vendor tools
  • XTratuM: configuration generators and vendor tools

 

Figure 3: Model-based Development of Mixed Criticality Systems in AutoFOCUS3

 

References

[1]      «D1.2.1 – Architectural Style of DREAMS», DREAMS Consortium, 7/2014.

[2]      «D1.4.1 - Meta-models for Application and Platform», DREAMS Consortium, 3/2015.

[3]      «D1.6.1 - Meta-models for Platform-specific Modelling», DREAMS Consortium, 5/2016.