BLASX: A High Performance Level-3 BLAS Library for Heterogeneous Multi-GPU Computing
Author(s): Wang, Linnan; Wu, Wei; Xu, Zenglin; Xiao, Jianxiong; Yang, Yi
To refer to this page use:
http://arks.princeton.edu/ark:/88435/pr1jc1t
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wang, Linnan | - |
dc.contributor.author | Wu, Wei | - |
dc.contributor.author | Xu, Zenglin | - |
dc.contributor.author | Xiao, Jianxiong | - |
dc.contributor.author | Yang, Yi | - |
dc.date.accessioned | 2021-10-08T19:48:49Z | - |
dc.date.available | 2021-10-08T19:48:49Z | - |
dc.date.issued | 2016 | en_US |
dc.identifier.citation | Wang, Linnan, Wei Wu, Zenglin Xu, Jianxiong Xiao, and Yi Yang. "BLASX: A High Performance Level-3 BLAS Library for Heterogeneous Multi-GPU Computing." In Proceedings of the 2016 International Conference on Supercomputing (2016): pp. 1-11. doi:10.1145/2925426.2926256 | en_US |
dc.identifier.uri | https://arxiv.org/pdf/1510.05041.pdf | - |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1jc1t | - |
dc.description.abstract | Basic Linear Algebra Subprograms (BLAS) are a set of low-level linear algebra kernels widely adopted by applications in deep learning and scientific computing. The massive and economical computing power brought forth by emerging GPU architectures drives interest in implementing the compute-intensive Level-3 BLAS on multi-GPU systems. In this paper, we investigate existing multi-GPU Level-3 BLAS implementations and show that 1) issues such as improper load balancing, inefficient communication, and insufficient GPU stream-level concurrency and data caching impede current implementations from fully harnessing heterogeneous computing resources; and 2) inter-GPU Peer-to-Peer (P2P) communication remains unexplored. We then present BLASX: a highly optimized multi-GPU Level-3 BLAS library. We adopt the concept of algorithms-by-tiles, treating a matrix tile as the basic data unit and operations on tiles as the basic tasks. Tasks are guided by a dynamic asynchronous runtime that is cache and locality aware. The communication cost under BLASX becomes trivial as it perfectly overlaps communication and computation across multiple streams during asynchronous task progression. BLASX also takes the current tile cache scheme one step further by proposing an innovative 2-level hierarchical tile cache that takes advantage of inter-GPU P2P communication. As a result, linear speedup is observable with BLASX under multi-GPU configurations, and extensive benchmarks demonstrate that BLASX consistently outperforms leading industrial and academic implementations such as cuBLAS-XT, SuperMatrix, and MAGMA. | en_US |
dc.format.extent | 1 - 11 | en_US |
dc.language.iso | en_US | en_US |
dc.relation.ispartof | Proceedings of the 2016 International Conference on Supercomputing | en_US |
dc.rights | Author's manuscript | en_US |
dc.title | BLASX: A High Performance Level-3 BLAS Library for Heterogeneous Multi-GPU Computing | en_US |
dc.type | Conference Article | en_US |
dc.identifier.doi | 10.1145/2925426.2926256 | - |
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US |
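
To make the stream-level overlap described in the abstract concrete, below is a minimal, hypothetical CUDA/cuBLAS sketch of pipelining tile transfers with tile GEMMs across streams. It is not BLASX source code: the tile size `T`, stream count `NSTREAMS`, and buffer names are illustrative assumptions, and the actual BLASX runtime layers dynamic scheduling, load balancing, and its two-level tile cache on top of this basic pattern.

```c
/* Sketch (not BLASX source): overlap host<->device tile copies with tile
 * GEMMs by round-robining tiles over several CUDA streams.              */
#include <cuda_runtime.h>
#include <cublas_v2.h>

#define T        1024          /* tile dimension (assumption)            */
#define NSTREAMS 4             /* streams used to pipeline copy/compute  */

int main(void) {
    const int ntiles = 8;      /* tiles processed by this GPU            */
    size_t bytes = (size_t)T * T * sizeof(double);
    double alpha = 1.0, beta = 0.0;

    cublasHandle_t handle;
    cublasCreate(&handle);

    cudaStream_t streams[NSTREAMS];
    double *hA[NSTREAMS], *hB[NSTREAMS], *hC[NSTREAMS];  /* pinned host tiles   */
    double *dA[NSTREAMS], *dB[NSTREAMS], *dC[NSTREAMS];  /* device tile buffers */

    for (int s = 0; s < NSTREAMS; ++s) {
        cudaStreamCreate(&streams[s]);
        cudaMallocHost((void**)&hA[s], bytes);  /* pinned memory enables async copies */
        cudaMallocHost((void**)&hB[s], bytes);
        cudaMallocHost((void**)&hC[s], bytes);
        cudaMalloc((void**)&dA[s], bytes);
        cudaMalloc((void**)&dB[s], bytes);
        cudaMalloc((void**)&dC[s], bytes);
    }

    for (int t = 0; t < ntiles; ++t) {
        int s = t % NSTREAMS;                   /* round-robin tiles over streams */

        /* Wait for this stream's previous tile before reusing its pinned buffers. */
        cudaStreamSynchronize(streams[s]);

        /* ... fill hA[s] and hB[s] with the current tile's data here ... */

        /* Stage the input tiles; these copies overlap with GEMMs queued
         * on the other streams.                                          */
        cudaMemcpyAsync(dA[s], hA[s], bytes, cudaMemcpyHostToDevice, streams[s]);
        cudaMemcpyAsync(dB[s], hB[s], bytes, cudaMemcpyHostToDevice, streams[s]);

        /* Issue the tile GEMM on the same stream so it runs after its copies. */
        cublasSetStream(handle, streams[s]);
        cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, T, T, T,
                    &alpha, dA[s], T, dB[s], T, &beta, dC[s], T);

        /* Retrieve the result tile asynchronously on the same stream.    */
        cudaMemcpyAsync(hC[s], dC[s], bytes, cudaMemcpyDeviceToHost, streams[s]);
    }
    cudaDeviceSynchronize();

    for (int s = 0; s < NSTREAMS; ++s) {
        cudaStreamDestroy(streams[s]);
        cudaFreeHost(hA[s]); cudaFreeHost(hB[s]); cudaFreeHost(hC[s]);
        cudaFree(dA[s]);     cudaFree(dB[s]);     cudaFree(dC[s]);
    }
    cublasDestroy(handle);
    return 0;
}
```

Pinned host buffers are what allow `cudaMemcpyAsync` to proceed asynchronously; placing a tile's copies and its GEMM on the same stream orders them correctly, while work for one tile overlaps with work for other tiles on the remaining streams.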
Files in This Item:
File | Description | Size | Format
---|---|---|---
BlasxMultiGpuComputing.pdf | | 3.52 MB | Adobe PDF
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.