STE||AR (pronounced as stellar) stands for “Systems Technologies, Emergent Parallelism, and Algorithms Research”. We chose this name for our group because the focus of our work has shifted over the last few months. We are a group of faculty, researchers, and students working at the Center for Computation and Technology (CCT) at Louisiana State University (LSU). Everything we do is centered around the ParalleX execution model and its implementation in our experimental runtime system HPX (High Performance ParalleX). We use HPX for a broad range of scientific applications, helping scientists and developers write code that scales and performs better than more conventional programming models such as MPI.
ParalleX is a new (and still experimental) parallel execution model aiming to overcome the limitations imposed by current hardware and by the way we write applications today. Our group focuses on two types of applications: those requiring excellent strong scaling, allowing for a dramatic reduction of execution time for fixed workloads, and those needing the highest levels of sustained performance through massive parallelism. These applications are presently unable (through conventional practices) to effectively exploit more than a relatively small number of cores in a multi-core system. More often than not, they will be unable to exploit the high-end computing systems likely to employ hundreds of millions of such cores by the end of this decade.
Critical bottlenecks to the effective use of new generation HPC systems include:
- Starvation: due to a lack of usable application parallelism and of means to manage it,
- Overhead: which must be reduced to permit strong scalability, improve efficiency, and enable dynamic resource management,
- Latency: from remote access across the system or to local memories,
- Contention: due to multicore chip I/O pins, memory banks, and system interconnects.
The ParalleX model has been devised to address these challenges by enabling a new computing dynamic through the application of message-driven computation in a global
address space context with lightweight synchronization.