Timing Analysis

Author: Jörn Migge (RTaW)

Overview of Timing Analysis

The goal of a timing analysis is to make predictions about communication or task response delays when the communication or processing resources are shared. These predictions make it possible to verify at design time, i.e. before the actual system is built, whether the timing constraints will be met at run-time.

The predictions may either be absolute statements or of a statistical/probabilistic nature. The first kind of prediction, obtained from what is called worst-case timing analysis, consists in the computation of (bounds on) worst-case delays, which are never exceeded at run-time. If these bounds are consistent with the latency constraints, then it can be concluded that the timing constraints are always met at run-time.

The following figure shows schematically where the worst-case value, the computed bound and latency constraint are situated with respect to the probability distribution of a delay:

The x-axis represents the possible delay values and the y-axis the corresponding probability for the quantity of interest to experience that delay. Notice the following:


  • By definition, the probability of occurrence is zero beyond the “Worst-Case delay”, and the “Computed bound” is larger than (or equal to) the “Worst-Case delay”.
  • When there is no known method for computing the “Worst-Case delay”, bounds on the actual delays have to be derived, which is usually easier than obtaining the true “Worst-Case delay”.
  • The latency constraint is met if the “Worst-Case delay” is smaller than the “Latency constraint”, as is the case in the figure.
  • If the “Computed bound” is smaller than the “Latency constraint”, it can be concluded that the latency constraint is met.
  • On the other hand, if the “Computed bound” is larger than the “Latency constraint”, the latency constraint might nevertheless be met, but without knowledge of the “Worst-Case delay” or a tighter bound we cannot conclude (see the sketch below).
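As a minimal illustration of the last two points, the following Python sketch applies this decision logic to a computed bound and a latency constraint; the function name and the numerical values are illustrative assumptions, not part of any particular analysis tool.

    # Decision logic for a safe (upper) bound on the worst-case delay.
    # The function name and values are illustrative only.
    def check_latency(computed_bound_us, latency_constraint_us):
        if computed_bound_us <= latency_constraint_us:
            # True worst-case delay <= bound <= constraint: guaranteed to be met.
            return "latency constraint guaranteed to be met"
        # The bound exceeds the constraint: the true worst case may still be
        # below the constraint, but we cannot conclude without a tighter bound.
        return "inconclusive: a tighter bound or the true worst case is needed"

    print(check_latency(computed_bound_us=850.0, latency_constraint_us=1000.0))
    print(check_latency(computed_bound_us=1200.0, latency_constraint_us=1000.0))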


Simulations are often used to check functional behavior, but they can also be used as a timing analysis method. Typical outcomes of a simulation are histograms that record how frequently different delay values have been observed, and from which further statistics can be derived. These are the second kind of predictions mentioned above.
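As a minimal sketch of this kind of output, the following Python fragment builds such a histogram from a set of monitored delays; the delays below are drawn from an arbitrary distribution only as a placeholder, in practice they would come from the simulator.

    import numpy as np

    rng = np.random.default_rng(seed=42)
    # Placeholder for delays monitored during a simulation run (in microseconds).
    delays = rng.gamma(4.0, 50.0, size=100_000)    # shape=4, scale=50 us (placeholder)

    counts, bin_edges = np.histogram(delays, bins=50)
    frequencies = counts / counts.sum()            # empirical probability per bin
    for left, right, f in zip(bin_edges[:-1], bin_edges[1:], frequencies):
        if f > 0.05:                               # print only the most frequent bins
            print(f"[{left:6.1f}, {right:6.1f}) us : {f:.3f}")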

Other useful statistics are the quantiles [Na14], which are thresholds with a certain probability of being exceeded. Formally, a p-quantile of a random variable X is a value x_p such that

  • P[X ≤ x_p] = p   or, equivalently,   P[X > x_p] = 1 - p

In other words, it is a threshold x_p such that:

  • the probability for the delay to be smaller than or equal to x_p is at least p
  • the probability for the delay to be larger than x_p is at most 1 - p

For example, if we need the probability of exceeding a certain threshold to be lower than 10^-n, we consider the (1 - 10^-n)-quantile, which is illustrated in the following figure: the probability of exceeding that quantile is at most 10^-n.
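The following Python sketch estimates such a quantile empirically from simulated delays (again, the data below is only a placeholder for values monitored during a simulation):

    import numpy as np

    rng = np.random.default_rng(seed=42)
    delays = rng.gamma(4.0, 50.0, size=1_000_000)  # placeholder data (microseconds)

    n = 3
    p = 1.0 - 10.0 ** (-n)
    x_p = np.quantile(delays, p)                   # empirical (1 - 10^-3)-quantile

    # By construction, roughly a fraction 10^-3 of the observed delays exceed x_p.
    print(f"(1 - 10^-{n})-quantile: {x_p:.1f} us")
    print(f"observed exceedance frequency: {(delays > x_p).mean():.2e}")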

Notice however that a simulation cannot find worst-case delays and thus does not allow one to verify and guarantee firm timing constraints. For this purpose we need worst-case timing analysis, which either identifies the longest (i.e. worst-case) delays or derives upper bounds on the worst-case values, as explained before.

Two categories of basic scheduling policies exist for sharing communication or processing resources:

  • Time-triggered scheduling: resource contentions are avoided online through time division
  • Event-driven scheduling: resource contentions are resolved online by rules based, for example, on arrival order, priorities, credits, etc.

For the first category, timing analysis consists of rather simple algorithms, which are however useful to cross-check the scheduling configuration produced by a dedicated tool.
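A minimal example of such a cross-check, assuming the schedule is given as a list of (start, duration) transmission slots on a single resource with a given cycle length, is the following Python sketch (the data format is an illustrative assumption, not that of any specific tool):

    # Cross-check of a time-triggered schedule: no two slots may overlap and
    # all slots must fit within the scheduling cycle.
    def check_tt_schedule(slots, cycle_us):
        ordered = sorted(slots, key=lambda s: s[0])
        for (start, dur), (next_start, _) in zip(ordered, ordered[1:]):
            if start + dur > next_start:              # overlapping transmissions
                return False
        last_start, last_dur = ordered[-1]
        return last_start + last_dur <= cycle_us      # schedule fits into the cycle

    # Three TT slots in a 1000 us cycle.
    print(check_tt_schedule([(0, 100), (200, 150), (600, 300)], cycle_us=1000))  # True
    print(check_tt_schedule([(0, 100), (50, 150), (600, 300)], cycle_us=1000))   # False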

For the second category of scheduling policies, timing analysis is needed to verify offline whether all timing constraints will always be met online, in any possible execution scenario. Such an analysis is performed under the assumption of a given set of resource usage requests, formalized through contracts that limit their amount. In this context, timing analysis algorithms are usually rather complex and may rely on over-approximation to reduce complexity.
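As one classical example of such an analysis for an event-driven policy, the following Python sketch computes worst-case response times under preemptive fixed-priority scheduling with the standard fixed-point iteration R_i = C_i + sum over higher-priority tasks of ceil(R_i / T_j) * C_j; it is a textbook illustration, not the specific analysis used by any protocol discussed below.

    import math

    # Tasks are (period, WCET) pairs, listed in decreasing priority order,
    # with deadlines assumed equal to periods.
    def response_time(tasks, i):
        period_i, wcet_i = tasks[i]
        r = wcet_i
        while True:
            interference = sum(math.ceil(r / t_j) * c_j for t_j, c_j in tasks[:i])
            r_next = wcet_i + interference
            if r_next == r:
                return r            # fixed point reached: worst-case response time
            if r_next > period_i:
                return None         # bound exceeds the deadline (= period)
            r = r_next

    tasks = [(10, 2), (20, 4), (50, 10)]
    for i in range(len(tasks)):
        print(f"task {i}: worst-case response time = {response_time(tasks, i)}")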


Timing Analysis of Mixed Time-Triggered / Event-Triggered Systems

In order to better exploit the available resources in the context of different levels of criticality and/or timing constraints, it is possible to combine time-triggered and event-driven scheduling. An example is mixed-criticality traffic in switched Ethernet networks based on the SAE AS6802 standard.

Switched Ethernet is becoming a de-facto standard in industrial and embedded networks. Many of today’s applications benefit from Ethernet’s high bandwidth, large frame size, multicast and routing capabilities through IP, and the availability of the standard TCP/IP protocols. There are however many variants of switched Ethernet networks, just considering the MAC-level mechanisms in the stations and communication switches. An important technology in that landscape is TTEthernet, standardized as SAE AS6802 [AS6802], which allows the transmission of 1) purely time-triggered (TT) traffic, 2) sporadic, also called rate-constrained (RC), traffic and 3) best-effort (BE) traffic, which is simply a class for low-priority flows without timing and delivery guarantees.

In many applications, the most critical traffic will belong to the TT class, since TT transmission ensures the least jitter in reception and meets deadlines by construction. Indeed, the TT schedule is built so as to meet the timing constraints of the TT frames. Generating communication schedules involves non-trivial optimization algorithms, see for instance [Ta13], whose actual performance is difficult to assess given that the optimal solution cannot be known except on very small problems.

In contrast, RC traffic does not require building a global schedule, but it does require a timing analysis to verify that the system’s timing behavior will respect the memory and frame latency constraints (see [Gr04, Fr06, Bo11]). These analyses have come to a point where they are accurate, as evidenced in [Bo12], but this is at the expense of the complexity needed to account for fine-grained system characteristics such as task scheduling [BD12], frame scheduling at the end-system level [Bo14] and transmission offsets [Li11]. In Network Calculus, it is possible to derive a timing analysis for RC traffic in the presence of TT traffic by considering the processing and transmission times of TT frames as time periods during which the resource is unavailable for RC traffic, and adapting the network-calculus service curves accordingly (see [Bo16]). Such an analysis is implemented in RTaW-Pegase, the timing analysis tool for embedded communication architectures developed by RTaW in partnership with ONERA.
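As a minimal illustration of this kind of reasoning, the following Python sketch computes the classical network-calculus delay bound for a token-bucket arrival curve served by a rate-latency service curve; the way the TT schedule is reduced to a residual rate and a maximum blocking time below is a crude illustrative assumption, not the method of [Bo16] or of RTaW-Pegase.

    # Delay bound = latency + burst / residual_rate, valid when the long-term
    # arrival rate does not exceed the residual service rate left by TT traffic.
    def rc_delay_bound(burst_bits, rate_bps, link_rate_bps,
                       tt_busy_fraction, tt_max_blocking_s):
        residual_rate = link_rate_bps * (1.0 - tt_busy_fraction)  # rate left for RC
        latency = tt_max_blocking_s               # longest TT-induced blocking
        if residual_rate < rate_bps:
            raise ValueError("RC flow not schedulable under this residual service")
        return latency + burst_bits / residual_rate

    # One RC flow (1 Mbit/s, 12 kbit bursts) on a 100 Mbit/s link where TT frames
    # occupy 20% of the time and block the link for at most 150 us at once.
    print(f"{rc_delay_bound(12_000, 1e6, 100e6, 0.20, 150e-6) * 1e6:.1f} us")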


Mixed-criticality networks, with several classes of traffic, offer a lot of flexibility in terms of how the communication can be organized. It becomes however difficult for the system designer to know beforehand the impact of configuration choices on the communication latencies. Given the number of configuration parameters involved, we believe that design space exploration techniques guiding the designer would help to raise the level of abstraction and lead to a faster and more secure design process. Also required, in our view, is a better understanding of how to best integrate mixed-criticality traffic in complex networked embedded systems, as currently investigated in the DREAMS FP7 EU project [DR15].


References

[Na14] N. Navet, S. Louvart, J. Villanueva, S. Campoy-Martinez, J. Migge, “Timing verification of automotive communication architectures using quantile estimation“, Embedded Real-Time Software and Systems (ERTS 2014), Toulouse, France, February 5-7, 2014.

[AS6802] “Time-Triggered Ethernet”, SAE standard AS6802, 2011.

[BD12] M. Boyer, D. Doose, “Combining network calculus and scheduling theory to improve delay bound”, 20th Int. Conf. on Real-Time and Network Systems (RTNS2012), 2012.

[Bo11] M. Boyer, J. Migge, M. Fumey, “PEGASE, A Robust and Efficient Tool for Worst Case Network Traversal Time”, SAE 2011 AeroTech Congress & Exhibition, Toulouse, France, 2011.

[Bo12] M. Boyer, N. Navet, M. Fumey, “Experimental assessment of timing verification techniques for AFDX”, Embedded Real-Time Software and Systems (ERTS 2012), Toulouse, France, 2012.

[Bo14] M. Boyer, L. Santinelli, N. Navet, J. Migge, M. Fumey, “Integrating End-system Frame Scheduling for more Accurate AFDX Timing Analysis“, Embedded Real-Time Software and Systems (ERTS 2014), Toulouse, France, 2014.

[Bo16] M. Boyer, H. Daigmorte, N. Navet, J. Migge, “Performance impact of the interactions between time-triggered and rate-constrained transmissions in TTEthernet”, to appear at Embedded Real-Time Software and Systems (ERTS 2016), Toulouse, France, 2016.

[Fr06] F. Frances, C. Fraboul, J. Grieu, “Using Network Calculus to optimize AFDX network”, Proc. of the 3rd European Congress on Embedded Real Time Software (ERTS 2006), 2006.

[Li11] X. Li, J.-L. Scharbarg, C. Fraboul, F. Ridouard, “Existing offset assignments are near optimal for an industrial AFDX network”, Proc. of the 10th Int. Workshop on Real-Time Networks (RTN 2011), Porto, Portugal, 2011.

[Ta13] D. Tämas-Selicean, P. Pop, W. Steiner, “Synthesis of Communication Schedules for TTEthernet-Based Mixed-Criticality Systems”, 10th Int. Conf. on Hardware/Software Codesign and System Synthesis, 2013.

[DR15] Home page of the DREAMS (Distributed REal-time Architecture for Mixed Criticality Systems) FP7 EU project, http://www.uni-siegen.de/dreams/home/, retrieved 13/11/2015.