Abstract
A novel parallel algorithm is presented for generating random scale-free networks using the preferential-attachment model. The algorithm, named cuPPA, is custom-designed for the single-instruction, multiple-data (SIMD) style of parallel processing supported by modern processors such as graphics processing units (GPUs). To the best of our knowledge, cuPPA is the first algorithm to exploit GPUs for generating scale-free networks using the preferential-attachment model, and it is also the fastest implementation available today. A detailed performance study is presented to understand the scalability and runtime characteristics of the cuPPA algorithm. In one of the best cases, when executed on an NVIDIA GeForce 1080 GPU, cuPPA generates a scale-free network of two billion edges in less than 3 seconds.
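To make the underlying model concrete, the following is a minimal sequential C++ sketch of the preferential-attachment (Barabási–Albert) process that cuPPA parallelizes. It illustrates only the model itself, not the paper's SIMD/GPU design; the function name, parameters, and seed-graph choice are illustrative assumptions rather than details taken from the paper.

```cpp
// Not the cuPPA algorithm: a minimal, sequential sketch of the
// preferential-attachment (Barabasi-Albert) process that cuPPA parallelizes.
// A vertex v is chosen as an endpoint with probability proportional to its
// current degree by sampling uniformly from a list in which v appears once
// per incident edge. The function name and parameters (n vertices, d edges
// per new vertex) are illustrative, not taken from the paper.
#include <cstdint>
#include <cstdio>
#include <random>
#include <utility>
#include <vector>

std::vector<std::pair<uint64_t, uint64_t>>
preferential_attachment(uint64_t n, uint64_t d, uint64_t seed = 1) {
    std::vector<std::pair<uint64_t, uint64_t>> edges;
    std::vector<uint64_t> endpoints;  // vertex v appears deg(v) times
    std::mt19937_64 rng(seed);

    // Seed graph: a clique on the first d + 1 vertices.
    for (uint64_t u = 0; u <= d; ++u) {
        for (uint64_t v = u + 1; v <= d; ++v) {
            edges.emplace_back(u, v);
            endpoints.push_back(u);
            endpoints.push_back(v);
        }
    }

    // Every subsequent vertex attaches d edges to degree-proportional targets.
    for (uint64_t u = d + 1; u < n; ++u) {
        for (uint64_t k = 0; k < d; ++k) {
            std::uniform_int_distribution<std::size_t> pick(0, endpoints.size() - 1);
            uint64_t v = endpoints[pick(rng)];  // degree-biased choice (may repeat)
            edges.emplace_back(u, v);
            endpoints.push_back(u);
            endpoints.push_back(v);
        }
    }
    return edges;
}

int main() {
    auto edges = preferential_attachment(100000, 4);
    std::printf("generated %zu edges\n", edges.size());
    return 0;
}
```

The loop above is inherently sequential, since each new vertex's attachment probabilities depend on the degrees produced by all earlier vertices; the paper's contribution is performing this generation in a SIMD-friendly way on GPUs, whereas the sketch is only a straightforward serial baseline.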
Original language | English |
---|---|
Title of host publication | Proceedings - 2017 IEEE International Conference on Big Data, Big Data 2017 |
Editors | Jian-Yun Nie, Zoran Obradovic, Toyotaro Suzumura, Rumi Ghosh, Raghunath Nambiar, Chonggang Wang, Hui Zang, Ricardo Baeza-Yates, Xiaohua Hu, Jeremy Kepner, Alfredo Cuzzocrea, Jian Tang, Masashi Toyoda |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 3302-3311 |
Number of pages | 10 |
ISBN (Electronic) | 9781538627143 |
DOIs | |
State | Published - Jul 1 2017 |
Event | 5th IEEE International Conference on Big Data, Big Data 2017 - Boston, United States. Duration: Dec 11 2017 → Dec 14 2017 |
Publication series
Name | Proceedings - 2017 IEEE International Conference on Big Data, Big Data 2017 |
---|---|
Volume | 2018-January |
Conference
Conference | 5th IEEE International Conference on Big Data, Big Data 2017 |
---|---|
Country/Territory | United States |
City | Boston |
Period | 12/11/17 → 12/14/17 |
Funding
This paper has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the U.S. Department of Energy. Accordingly, the United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
Keywords
- GPU
- Preferential-Attachment
- Random Networks
- Scale-Free Networks