

SymTA/S: Project overview

SymTA/S (Symbolic Timing Analysis for Systems) is a system-level performance and timing analysis approach based on formal scheduling analysis techniques and symbolic simulation.


Contents:

• State of the Art
• Analysis Model
• Context-based Analysis
• Exploration
• Sensitivity Analysis


State of the Art

With increasing embedded system complexity, there is a trend towards heterogeneous, distributed architectures. Multiprocessor system-on-chip (MpSoC) designs use complex on-chip networks to integrate multiple programmable processor cores, specialized memories, and other intellectual property (IP) components on a single chip. MpSoCs have become the architecture of choice in industries such as network processing, consumer electronics, and automotive systems. Their heterogeneity inevitably increases with IP integration and component specialization, which designers use to optimize performance at low power consumption and competitive cost. Tomorrow's MpSoCs will be even more complex, and using IP library elements in a 'cut-and-paste' design style is the only way to reach the necessary design productivity.

Systems integration is becoming the major challenge in MpSoC design. Embedded software is increasingly important for reaching the required productivity and flexibility. The complex hardware and software component interactions create a serious risk of all kinds of performance pitfalls, including transient overloads, memory overflow, data loss, and missed deadlines. The International Technology Roadmap for Semiconductors, 2003 Edition, names system-level performance verification as one of the top three codesign issues.

Simulation is state of the art in MpSoC performance verification. Tools from many suppliers support cycle-accurate cosimulation of a complete hardware and software system. The cosimulation times are extensive, but developers can use the same simulation environment, simulation patterns, and benchmarks in both function and performance verification. Simulation-based performance verification, however, has conceptual disadvantages that become disabling as complexity increases.

MpSoC hardware and software component integration involves resource sharing that is based on operating systems and network protocols. Resource sharing results in a confusing variety of performance runtime dependencies. For example, the figure below shows a CPU subsystem executing three processes. Although the operating system activates P1, P2, and P3 strictly periodically (with periods T1, T2, and T3, respectively), the resulting execution sequence is complex and leads to output bursts.

As the figure shows, P1 can delay several executions of P3. After P1 completes, P3, with its input buffers filled, temporarily runs in burst mode with an execution frequency limited only by the available processor performance. This leads to a transient P3 output burst, which is modulated by P1's execution. This example does not even include data-dependent process execution times, which are typical for software systems, and it neglects operating system overhead; both effects further complicate the problem. Yet finding simulation patterns that lead to worst-case situations such as the one highlighted in this example is already challenging.
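
The effect can be reproduced with a very small scheduling model. The sketch below simulates three strictly periodically activated processes under fixed-priority preemptive scheduling (P1 highest, P3 lowest) in Python; all periods and execution times are illustrative assumptions, since the example above only names the symbolic periods T1, T2, and T3.

    # Illustrative discrete-time sketch (assumed numbers, not project data):
    # three processes activated strictly periodically, scheduled with fixed,
    # preemptive priorities (P1 highest, P3 lowest).
    tasks = {"P1": (100, 40), "P2": (30, 5), "P3": (10, 2)}   # name: (period, WCET)
    priority = ["P1", "P2", "P3"]                             # index 0 = highest

    remaining = {name: 0 for name in tasks}   # unfinished demand of the running activation
    pending = {name: 0 for name in tasks}     # queued activations (input buffer)
    p3_completions = []

    for t in range(200):
        # strictly periodic activation by the operating system
        for name, (period, wcet) in tasks.items():
            if t % period == 0:
                pending[name] += 1
        # dispatch the next queued activation once the previous one has finished
        for name, (period, wcet) in tasks.items():
            if remaining[name] == 0 and pending[name] > 0:
                pending[name] -= 1
                remaining[name] = wcet
        # run the highest-priority process with outstanding demand for one time unit
        for name in priority:
            if remaining[name] > 0:
                remaining[name] -= 1
                if name == "P3" and remaining[name] == 0:
                    p3_completions.append(t + 1)
                break

    print(p3_completions)
    # While P1 executes, P3 activations queue up; after P1 completes, P3 finishes
    # several activations almost back to back: the transient output burst.

Running the sketch prints P3 completion times that are spaced by the period of 10 while the processor is lightly loaded, but collapse to a spacing of 2 while the backlog accumulated during P1's execution drains.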

Network arbitration introduces additional performance dependencies. The next figure shows an example. The arrows indicate performance dependencies between the CPU and DSP subsystems that the system function does not reflect. These dependencies can turn component or subsystem best-case performance into system worst-case performance, a so-called scheduling anomaly. Recall the P3 bursts from the previous example and consider that P3's execution time can vary from one execution to the next. There are two critical execution scenarios, called corner cases: the minimum execution time for P3 corresponds to the maximum transient bus load, slowing down other components' communication, and vice versa.

[Figure: scheduling anomaly between the CPU and DSP subsystems (http://www.ida.ing.tu-bs.de/research/projects/symta-s/overview/anomaly.gif)]

The transient runtime effects shown in the previous examples lead to complex system-level corner cases. The designer must provide a simulation pattern that reaches each corner case during simulation. Essentially, if all corner cases satisfy the given performance constraints, then the system is guaranteed to satisfy its constraints under all possible operating conditions. However, such corner cases are extremely difficult to find and debug, and it is even more difficult to find simulation patterns that cover them all. Reusing function verification patterns is not sufficient because they do not cover the complex nonfunctional performance dependencies that resource sharing introduces. Reusing component and subsystem verification patterns is not sufficient because they do not consider the complex component and subsystem interactions.

The system integrator might be able to develop additional simulation patterns, but only for simple systems in which the component behavior is well understood. Manual corner-case identification and pattern selection is not practical for complex MpSoCs with layered software architectures, dynamic bus protocols, and operating systems. In short, simulation-based approaches to MpSoC performance verification are about to run out of steam and need to be complemented by formal techniques that systematically reveal and cover corner cases.

Real-time systems research has addressed scheduling analysis for processors and buses for decades, and many popular scheduling analysis techniques are available. Examples include rate-monotonic scheduling (RMS) with static priorities and earliest deadline first (EDF) with dynamic priorities; similar techniques are available for time-driven mechanisms such as TDMA or round robin. Some extensions have already found their way into commercial analysis tools, which are being established, e.g., in the automotive industry to analyze individual units that control the engine or parts of the electronic stability program.

The techniques rely on a simple yet powerful abstraction of task activation and communication. Instead of considering each event individually, as simulation does, formal scheduling analysis abstracts from individual events to event streams. The analysis requires only a few simple characteristics of event streams, such as an event period or a maximum jitter. From these parameters, the analysis systematically derives worst-case scheduling scenarios, and timing equations safely bound the worst-case process or communication response times.
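
As an illustration of such timing equations, the sketch below implements the classical response-time iteration for static-priority preemptive scheduling, with higher-priority interference described by the periodic-with-jitter event model mentioned above; the concrete task set in the usage example is an assumption for illustration only.

    import math

    def wcrt(c, higher_prio, horizon=10**6):
        """Worst-case response time of a task with worst-case execution time c,
        preempted by higher-priority tasks given as (period, jitter, wcet) tuples.
        Classic fixed-point iteration; assumes at most one pending activation of
        the analyzed task.  Returns None if no fixed point is found below 'horizon'."""
        r = c
        while r <= horizon:
            interference = sum(math.ceil((r + j) / t) * w for (t, j, w) in higher_prio)
            if c + interference == r:
                return r                     # fixed point reached: safe upper bound
            r = c + interference
        return None

    # Example: the low-priority process of the scenario above, with interferers
    # characterized by (period, jitter, WCET) = (100, 0, 40) and (30, 0, 5).
    print(wcrt(2, [(100, 0, 40), (30, 0, 5)]))   # -> 52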

It may come as a surprise that, up to now, only very few of these approaches have found their way into the SoC (system-on-chip) design community in the form of tools. Despite the known limitations of simulation, such as incomplete corner-case coverage and pattern generation, timed simulation is still the preferred means of performance verification in MpSoC design. Why, then, is the acceptance of formal analysis still so limited?

One of the key reasons is a mismatch between the scheduling models assumed in most formal analysis approaches and the heterogeneous world of MpSoC scheduling techniques and communication patterns, which results from a) different application characteristics and b) system optimization and integration, which is still at the beginning of the MpSoC development towards even more complex architectures. Therefore, a new, configurable analysis process is needed that can easily be adapted to such heterogeneous architectures. Two kinds of approaches can be identified: holistic approaches, which search for analysis techniques spanning several scheduling domains, and hierarchical approaches, which integrate local analysis with a global flow-based analysis, either using new models or building on existing models and analysis techniques.

Analysis Model

The core of SymTA/S is our recently developed technique to couple scheduling analysis algorithms using event streams. Event streams describe the possible I/O timing of tasks and are characterized by appropriate event models, such as periodic events with jitter or bursts, and sporadic events. At the system level, event streams connect the local analyses according to the system's application and communication structure. In contrast to previous work, SymTA/S explicitly supports the combination and integration of different kinds of analysis techniques known from real-time research. For this purpose, it is essential to translate between the often incompatible event stream models that result from the dissimilarity of the local techniques. Such an incompatibility appears, for instance, between an analysis technique assuming periodic events with jitter and an analysis technique requiring sporadic events. To realize such transitions, event model interfaces (EMIFs) and event adaptation functions (EAFs) are used, as sketched below.
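
A minimal sketch of one such transition, under the common parameterization of the two models (period T and jitter J for one, minimum inter-arrival time for the other); this illustrates the idea and is not the SymTA/S implementation.

    from dataclasses import dataclass

    @dataclass
    class PeriodicWithJitter:
        period: float    # T: nominal distance between events
        jitter: float    # J: maximum deviation from the nominal event time

    @dataclass
    class Sporadic:
        min_interarrival: float   # shortest possible distance between two events

    def emif_periodic_jitter_to_sporadic(src: PeriodicWithJitter) -> Sporadic:
        """Event model interface (EMIF): reinterpret a periodic-with-jitter stream
        for an analysis that only accepts sporadic activation.  Two consecutive
        events can move towards each other by at most the jitter, so T - J bounds
        their minimum distance.  If J >= T the stream is bursty and a plain
        sporadic model no longer fits; an event adaptation function (EAF), e.g. a
        traffic shaper, would be required, which this sketch does not cover."""
        if src.jitter >= src.period:
            raise ValueError("bursty stream: an event adaptation function (EAF) is needed")
        return Sporadic(min_interarrival=src.period - src.jitter)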

The compositional performance analysis methodology alternates local scheduling analysis and event model propagation during system-level analysis. It requires the possible timing of output events in order to determine the activation of the next scheduling component. The event models used in SymTA/S allow simple rules to be specified for obtaining output event models that can be described with the same set of parameters as the input event models. The parameters of the output event models are determined from the characteristics of the input models and the results of the response-time analysis, i.e., the maximum and minimum response times.
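
For the periodic-with-jitter model this amounts to a very small rule, sketched below: the period is passed through unchanged, while the output jitter grows by the width of the response-time interval. This is a sketch of the published propagation rule, not tool code.

    def output_event_model(period_in, jitter_in, r_min, r_max):
        """Output event model of a task, derived from its activating event model
        and its response-time results: scheduling preserves the period, and the
        jitter grows by the difference between maximum and minimum response time."""
        return period_in, jitter_in + (r_max - r_min)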

Initially, to generate activating event models for all tasks and thus to resolve possible cyclic scheduling dependencies, the external event models are propagated along all system paths until an initial activating event model is available for each task. This approach is safe because scheduling cannot change an event model's period and can only increase an event model's jitter. Since a smaller jitter interval is contained in a larger jitter interval, the minimum initial jitter assumption is safe.

After propagating the external event models, global system analysis can be performed. A global analysis step consists of two phases. In the first phase, local scheduling analysis is performed for each resource and output event models are calculated. In the second phase, all output event models are propagated. It is then checked whether the first phase has to be repeated because some activating event models are no longer up to date, meaning that a newly propagated output event model differs from the output event model that was propagated in the previous global analysis step. Analysis completes either when all event models are up to date after the propagation phase, or when an abort condition, e.g. the violation of a timing constraint, is reached. The analysis flow is shown in the next figure:

[Figure: SymTA/S system-level analysis flow (http://www.ida.ing.tu-bs.de/research/projects/symta-s/overview/analysis.gif)]
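
In code, the two-phase loop could look roughly as follows; the 'system' object and its methods are a hypothetical interface used only to make the control flow concrete, not the SymTA/S API.

    def global_analysis(system, max_iterations=1000):
        # initial propagation of the external event models along all system paths
        system.propagate_external_event_models()
        for _ in range(max_iterations):
            # phase 1: local scheduling analysis per resource, yielding response
            # times and output event models
            for resource in system.resources:
                resource.local_analysis()
            # phase 2: propagate all output event models to the activated tasks;
            # 'changed' is True if some activating model is no longer up to date
            changed = system.propagate_output_event_models()
            if system.constraints_violated():      # abort condition
                return "constraint violated"
            if not changed:                        # fixed point reached
                return "converged"
        return "no convergence within the iteration bound"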

 

Context-based Analysis

 

Exploration

 

Sensitivity Analysis

In a real-world design flow with tight time-to-market pressure, ever-changing requirements, and complex supply chains involving platform-based design, subsystem integration, and IP reuse, it cannot be expected that all performance data required for scheduling analysis is fully available up front. Instead, designers must work with incomplete specifications, early performance estimates, numbers asserted by suppliers in contracts but not yet proven, and so on. Additionally, designers must keep future modifications in mind, in particular late feature requests, product variants, and the next product generation.

Sensitivity analysis is a promising approach to dealing with these design uncertainties. It allows the system designer to keep track of the flexibility of the system and thus to quickly assess the system-level impact of changes in the performance properties of individual hardware and software components. For example, variations in the implementation of different application parts, functional extensions, or changes of timing at subsystem or system interfaces can turn a previously conforming system into one that violates performance constraints. These and many other variations are only too common in a realistic design flow.

Sensitivity analysis parameters may be any system parameter that can change during the design process, for example the execution demands of the tasks in the system, the operating frequencies of the hardware resources, the parameters of the activation models, or communication volumes. Since variations of local parameters have an impact on the entire system, a list of sensitivity metrics is required. Metrics can be any critical system property or constrained system parameter, for example task response times, global buffer sizes, the jitter generated at the output of a task, the utilization of a hardware component, the latency of a functional system path, or the deadline miss ratio in the case of soft and firm real-time systems.
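
One simple way to compute such a metric is a binary search over the parameter of interest, re-running the global analysis at each step. The sketch below determines the largest execution demand of a task for which all constraints still hold; the 'analyze' callback and its signature are assumptions for illustration, not part of the tool.

    def max_feasible_wcet(analyze, task, lower, upper, tolerance=0.001):
        """Largest WCET of 'task' for which the system still meets all constraints,
        found by binary search; 'analyze(task, wcet)' is assumed to run the global
        analysis with the modified WCET and return True if all constraints hold."""
        if not analyze(task, lower):
            return None                   # infeasible even at the lower bound
        if analyze(task, upper):
            return upper                  # the whole search range is feasible
        while upper - lower > tolerance:
            mid = (lower + upper) / 2
            if analyze(task, mid):
                lower = mid               # still feasible: push the parameter further
            else:
                upper = mid
        return lower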

