Abstract
New high-performance computing system designs with steeply escalating processor and core counts, burgeoning heterogeneity and accelerators, and increasingly unpredictable memory access times call for one or more dramatically new programming paradigms. These new approaches must react and adapt quickly to unexpected contentions and delays, and they must provide the execution environment with sufficient intelligence and flexibility to rearrange the execution to improve resource utilization. The authors present an approach based on task parallelism that reveals the application's parallelism by expressing its algorithm as a task flow. This strategy decouples the algorithm from the data distribution and the underlying hardware, since it is expressed entirely as flows of data. This layering provides a clear separation of concerns among architecture, algorithm, and data distribution. Developers benefit from this separation because they can focus solely on the algorithmic level, free of the constraints of programming for current and future hardware.
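The idea the abstract describes, expressing an algorithm as a flow of data between tasks so that a runtime can schedule the work, can be illustrated with standard OpenMP task dependences. The sketch below is not the authors' runtime or API; the kernels `factor` and `update` and the tile layout are hypothetical, chosen only to show how declaring each task's reads and writes lets the execution environment reorder work around delays.

```c
#include <stdio.h>

#define N 4

/* Hypothetical stand-ins for real computational kernels. */
static void factor(double *blk)                  { *blk += 1.0; }
static void update(const double *f, double *blk) { *blk += *f;  }

int main(void) {
    double tiles[N] = {0.0};

    #pragma omp parallel
    #pragma omp single
    {
        /* The algorithm is expressed purely as a flow of data:
           each task declares what it reads (in) and writes (inout),
           and the runtime schedules ready tasks on whatever core is
           available, independent of how the data is laid out. */
        #pragma omp task depend(inout: tiles[0])
        factor(&tiles[0]);

        for (int i = 1; i < N; i++) {
            /* Each update reads the factored tile and writes its own;
               these tasks may run in any order once factor completes. */
            #pragma omp task depend(in: tiles[0]) depend(inout: tiles[i])
            update(&tiles[0], &tiles[i]);
        }
    }

    for (int i = 0; i < N; i++)
        printf("tiles[%d] = %g\n", i, tiles[i]);
    return 0;
}
```

Compiled with `cc -fopenmp`, the `update` tasks start only after the `factor` task that produces `tiles[0]` completes, but are otherwise free to run in any order and on any core, which is the separation between algorithm and execution that the abstract argues for.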
Original language | English
---|---
Article number | 6654146
Pages (from-to) | 36-45
Number of pages | 10
Journal | Computing in Science and Engineering
Volume | 15
Issue number | 6
DOIs | 
State | Published - Nov 2013
Externally published | Yes
Keywords
- Distributed programming
- HPC
- High-performance computing
- Programming paradigms
- Scheduling and task partitioning
- Scientific computing