Abstract
The mapping of computational needs onto execution resources is, by and large, a manual task, and users are frequently guided simply by intuition and past experience. We present a queueing-theory-based performance model for streaming data applications that takes steps toward a better understanding of resource-mapping decisions, thereby helping application developers make good mapping choices. The performance model (and associated cost model) is agnostic to the specific properties of the compute resource and application, characterizing each simply by its achievable data throughput. We illustrate the model with two applications: one drawn from computational biology and the other a classic machine learning problem.
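The full model is in the paper itself; as a rough intuition for the throughput-only characterization the abstract describes, the sketch below treats each stage of a streaming pipeline as an M/M/1 queue whose only parameter is its achievable throughput on the resource it is mapped to. The M/M/1 choice, the stage names, the rates, and the pipeline composition are all illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the authors' model): each (stage, resource) pairing is
# described only by its achievable data throughput, and a linear streaming
# pipeline is evaluated for sustainable throughput and mean latency.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    service_rate: float  # achievable throughput on the chosen resource (items/s)

def stage_latency(arrival_rate: float, stage: Stage) -> float:
    """Mean time an item spends in one M/M/1 stage (queueing + service)."""
    if arrival_rate >= stage.service_rate:
        return float("inf")  # stage is saturated; its queue grows without bound
    return 1.0 / (stage.service_rate - arrival_rate)

def pipeline_metrics(arrival_rate: float, stages: list[Stage]) -> dict:
    """Sustainable throughput (bottleneck stage) and mean end-to-end latency."""
    bottleneck = min(s.service_rate for s in stages)
    latency = sum(stage_latency(arrival_rate, s) for s in stages)
    return {"bottleneck_throughput": bottleneck, "mean_latency": latency}

if __name__ == "__main__":
    # Hypothetical mapping: decompression on CPU, alignment on GPU, filtering on FPGA.
    pipeline = [Stage("decompress", 1200.0), Stage("align", 800.0), Stage("filter", 2000.0)]
    print(pipeline_metrics(arrival_rate=600.0, stages=pipeline))
```

Comparing alternative mappings then reduces to swapping in the throughput each candidate resource can deliver for a stage and re-evaluating the pipeline metrics.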
Original language | English |
---|---|
Title of host publication | Proceedings of RSDHA 2021 |
Subtitle of host publication | Redefining Scalability for Diversely Heterogeneous Architectures, Held in conjunction with SC 2021: The International Conference for High Performance Computing, Networking, Storage and Analysis |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 17-26 |
Number of pages | 10 |
ISBN (Electronic) | 9781665458771 |
DOIs | |
State | Published - 2021 |
Event | 2021 IEEE/ACM Redefining Scalability for Diversely Heterogeneous Architectures, RSDHA 2021 - St. Louis, United States |
Duration | Nov 19 2021 → … |
Publication series
Name | Proceedings of RSDHA 2021: Redefining Scalability for Diversely Heterogeneous Architectures, Held in conjunction with SC 2021: The International Conference for High Performance Computing, Networking, Storage and Analysis |
---|---|
Conference
Conference | 2021 IEEE/ACM Redefining Scalability for Diversely Heterogeneous Architectures, RSDHA 2021 |
---|---|
Country/Territory | United States |
City | St. Louis |
Period | 11/19/21 → … |
Funding
This manuscript has been co-authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan. This research was supported in part by the following sources: National Science Foundation (NSF) under grant CNS-1763503, Defense Advanced Research Projects Agency (DARPA) Microsystems Technology Office (MTO) Domain-Specific System-on-Chip Program, and the US Department of Energy (DOE) Advanced Scientific Computing Research (ASCR) program.
Keywords
- data transformation
- FPGA
- GPU