Organic Computing has recently emerged as a new challenge in computer engineering. As ubiquitous and embedded computing systems become increasingly powerful, development paradigms shift from implementing the technically possible to building robust and easily usable systems. Organic computing systems tackle this challenge by introducing self-adaptation, learning and self-configuration into future computer systems. Initiatives such as IBM's Autonomic Computing Initiative or Intel's Proactive Computing show that introducing self-configuration into complex computer systems is not only an academic endeavour.
This project focuses on heterogeneous networked embedded real-time systems. Our goal is to make future embedded systems more fault-tolerant and more flexible. To achieve this, we aim to implement current approaches to performance analysis on the embedded system itself. This way, the embedded system can analyse its timing properties online.
Combined with a framework for local data acquisition and global system optimization and adaptation, the embedded system can adapt to changes in its environment, such as component failures or software updates. In the Organic Computing community, this combination of data acquisition and system optimization and adaptation is often referred to as an Observer/Controller approach.
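To make the Observer/Controller idea concrete, the following minimal sketch shows the two roles in code. All names (`observe`, `control`, the task name and the deadline) are hypothetical illustrations, not part of the project's actual software:

```java
import java.util.HashMap;
import java.util.Map;

public class ObserverControllerSketch {
    // Observer role: acquire local data, here a measured task execution time in ms.
    static Map<String, Integer> observe() {
        Map<String, Integer> metrics = new HashMap<>();
        metrics.put("videoPreprocess", 12); // e.g. a measured execution time
        return metrics;
    }

    // Controller role: check a timing constraint and decide whether to adapt.
    static String control(Map<String, Integer> metrics, int deadlineMs) {
        int observed = metrics.get("videoPreprocess");
        return observed <= deadlineMs ? "keep-configuration" : "trigger-adaptation";
    }

    public static void main(String[] args) {
        System.out.println(control(observe(), 20)); // prints keep-configuration
    }
}
```

In a real system, the observer would feed a distributed online analysis rather than a local map, but the control loop keeps this basic observe/decide shape.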
We are hiring strong JAVA programmers for software development in an international team. The focus is on distributed algorithms as well as GUI programming.
Student projects (Master's thesis, Studienarbeit, Diplomarbeit) in the field of distributed algorithms or task monitoring on embedded systems can be arranged as well. In both cases, please contact Steffen Stein.
A full list of open student projects is available on the German site for this project.
Consider the example setup shown to the left. It contains four different computational units connected by a bus. Two applications are already mapped onto the architecture. A video application (solid chain) gathers data from a camera controlled by the microcontroller uC, performs pre-processing on the DSP and post-processing on the PPC core. The second application (dashed chain) reads data from a sensor, which is first processed on the ARM core and then forwarded to the PPC core, which, in turn, controls an actuator. Both applications have constrained end-to-end latencies.
Suppose the ARM processor as well as the DSP are not fully loaded with these applications, leaving room for integrating a third application into the system. We propose the integration of a second streaming application that runs on the ARM processor core and also uses the DSP for processing. In the figure above, the task chain is shown under the block diagram, the white arrows indicate the desired mapping of the tasks on the system architecture. This application also has a constrained end-to-end latency.
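The end-to-end latency constraint on such a chain can be illustrated by summing per-stage worst-case response times along the chain. The stage values and the deadline below are purely hypothetical placeholders, not analysis results for this architecture:

```java
public class ChainLatency {
    // End-to-end latency bound of a task chain: sum of the worst-case
    // response times (ms) of its stages, e.g. ARM task, bus transfer, DSP task.
    static int endToEndLatency(int[] stageWcrtMs) {
        int sum = 0;
        for (int t : stageWcrtMs) sum += t;
        return sum;
    }

    public static void main(String[] args) {
        int[] chain = {4, 2, 7}; // hypothetical WCRTs for ARM task, bus, DSP task
        int latency = endToEndLatency(chain);
        System.out.println(latency + " <= 15 ? " + (latency <= 15)); // 13 <= 15 ? true
    }
}
```

Real compositional performance analysis accounts for interference between chains on shared resources; this sketch only shows why adding a third chain affects the latency budgets of the existing ones.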
As of today, integrating such an application requires considerable engineering effort to guarantee that existing system properties are not violated. The organic approach outlined above, however, integrates acceptance and integration tests into the embedded system itself. Using online performance analysis techniques, the system can check whether the additional application can be accepted. An optimization/adaptation framework could also adjust system parameters (such as task mappings or priorities) to increase the acceptance rate.
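A very simple form of such an online acceptance test is a utilization-based admission check on each affected resource: admit the new task only if the resource remains schedulable afterwards. This is a sketch under a basic utilization-bound assumption, not the project's actual analysis method:

```java
public class AcceptanceTest {
    // Admit a new periodic task (wcet, period in ms) on a resource only if the
    // resulting total utilization does not exceed the full capacity of 1.0.
    static boolean accept(double currentUtilization, double wcetMs, double periodMs) {
        return currentUtilization + wcetMs / periodMs <= 1.0;
    }

    public static void main(String[] args) {
        System.out.println(accept(0.6, 2, 10)); // 0.6 + 0.2 = 0.8  -> true
        System.out.println(accept(0.6, 5, 10)); // 0.6 + 0.5 = 1.1  -> false
    }
}
```

An actual online analysis would additionally verify the end-to-end latency constraints of all chains, since a task can be schedulable locally while still breaking a global deadline.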
The figure to the left gives an overview of our architecture. The brown bar at the bottom depicts an embedded system as we know it today. The gray boxes atop it depict the observer/controller layer watching over the system. They both rely on and communicate with the distributed online analysis layer, which plays a key role in our approach. This layer receives information from multiple inputs, such as engineers (orange) or software updates (pink).
The observer block consists of a multitude of local system observers feeding data into the online analysis. This data can include current execution times of tasks, processor state, observed task communication, the system architecture and many more. This way, the online analysis will notice changes in the system's environment.
The controller block continuously checks whether the current system configuration, as analysed, complies with the system constraints (AC) and feeds this data into the system's optimization and adaptation framework (OPT). The OPT, in turn, uses the online analysis to derive optimized system setups, which can then be fed back into the embedded system itself.
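The division of labour between the constraint check (AC) and the optimizer (OPT) can be sketched as follows. Here OPT simply evaluates a set of candidate configurations against their analysed latencies and picks the best feasible one; the candidate values are invented for illustration:

```java
public class OptimizerSketch {
    // AC: constraint check -- does the analysed latency meet the deadline?
    static boolean meetsConstraint(int latencyMs, int deadlineMs) {
        return latencyMs <= deadlineMs;
    }

    // OPT: among candidate configurations (e.g. alternative task mappings or
    // priority assignments), pick the one with the smallest analysed latency
    // that still satisfies the constraint. Returns its index, or -1 if none fits.
    static int pickConfig(int[] candidateLatenciesMs, int deadlineMs) {
        int best = -1, bestLatency = Integer.MAX_VALUE;
        for (int i = 0; i < candidateLatenciesMs.length; i++) {
            if (meetsConstraint(candidateLatenciesMs[i], deadlineMs)
                    && candidateLatenciesMs[i] < bestLatency) {
                best = i;
                bestLatency = candidateLatenciesMs[i];
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Three hypothetical configurations; only the second meets the 15 ms deadline.
        System.out.println(pickConfig(new int[]{18, 12, 25}, 15)); // prints 1
    }
}
```

In the real framework, each candidate's latency would itself come from the distributed online analysis rather than from a precomputed array.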
For now, it is assumed that an engineer will annotate tasks to be integrated into the system with the key metrics needed for analysis, such as best- and worst-case execution times (BCET/WCET) and task dependencies. From this, a global application model can be derived for analysis. The same is assumed for updates to be integrated into the system.
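Such an engineer-supplied annotation might look like the following data structure. The field names and example values are hypothetical, chosen only to show what kind of metadata the analysis needs:

```java
import java.util.List;

public class TaskAnnotation {
    // Engineer-supplied metrics for a task to be integrated into the system.
    final String name;
    final int bcetMs;               // best-case execution time
    final int wcetMs;               // worst-case execution time
    final List<String> dependsOn;   // predecessor tasks in the application chain

    TaskAnnotation(String name, int bcetMs, int wcetMs, List<String> dependsOn) {
        this.name = name;
        this.bcetMs = bcetMs;
        this.wcetMs = wcetMs;
        this.dependsOn = dependsOn;
    }

    public static void main(String[] args) {
        TaskAnnotation t = new TaskAnnotation("dspFilter", 3, 7, List.of("armRead"));
        System.out.println(t.name + " wcet=" + t.wcetMs); // prints dspFilter wcet=7
    }
}
```

Collecting these annotations for all tasks of an application yields the global application model used by the online analysis.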
Future work will focus on minimising the amount of data to be supplied by an engineer in favour of automatically determining the needed data. This may include predicting execution times from binary code, as investigated in the wormhole project, or deducing task dependencies by observing communication.