Future computer architectures are likely to have tens to hundreds of processor cores, running different application classes in parallel. There will be “classical” independent processes, tightly coupled parallel applications, and stream applications, all of which have different service requirements. These applications compete for shared resources such as on-chip memories and the communication infrastructure.
In this project, we investigate techniques to combine these heterogeneous requirements and make such architectures predictable. Examples include partitioning of on-chip memories, quality of service for the network-on-chip, and mechanisms for flexible yet efficient data transfers.
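To illustrate the idea behind one of these techniques, the following is a minimal sketch of static partitioning of a shared on-chip scratchpad memory among application classes. It is not the project's actual mechanism; all names, sizes, and the partition table are assumptions made for illustration. The point it shows is that giving each class a fixed, private budget removes interference between classes, which is what makes timing predictable.

```c
/* Illustrative sketch only: a hypothetical static partitioning of a shared
 * on-chip scratchpad among application classes. All names and sizes are
 * assumptions, not part of the project's design. */
#include <stddef.h>
#include <stdio.h>

#define SCRATCHPAD_SIZE (256 * 1024)  /* assumed 256 KiB shared on-chip memory */

typedef enum {
    CLASS_INDEPENDENT, /* "classical" independent processes */
    CLASS_PARALLEL,    /* tightly coupled parallel applications */
    CLASS_STREAM,      /* stream applications */
    CLASS_COUNT
} app_class_t;

typedef struct {
    size_t offset; /* start of the partition within the scratchpad */
    size_t size;   /* fixed budget reserved for this class */
} partition_t;

/* A fixed partition table gives each class a private region, so one class
 * cannot evict or delay another's data: the basis for predictability. */
static const partition_t partitions[CLASS_COUNT] = {
    [CLASS_INDEPENDENT] = { .offset = 0,          .size = 64 * 1024 },
    [CLASS_PARALLEL]    = { .offset = 64 * 1024,  .size = 128 * 1024 },
    [CLASS_STREAM]      = { .offset = 192 * 1024, .size = 64 * 1024 },
};

/* Translate a class-local offset into a global scratchpad address,
 * rejecting accesses that would cross the partition boundary. */
static int scratchpad_addr(app_class_t cls, size_t local, size_t *global)
{
    if (cls >= CLASS_COUNT || local >= partitions[cls].size)
        return -1; /* out of budget: fail instead of interfering */
    *global = partitions[cls].offset + local;
    return 0;
}

int main(void)
{
    size_t addr;
    if (scratchpad_addr(CLASS_STREAM, 1024, &addr) == 0)
        printf("stream buffer maps to scratchpad offset %zu\n", addr);
    return 0;
}
```

A comparable principle applies to the network-on-chip: reserving bandwidth or virtual channels per traffic class bounds the interference that one application class can impose on another.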