BLASX: A High Performance Level-3 BLAS Library for Heterogeneous Multi-GPU Computing

Author(s): Wang, Linnan; Wu, Wei; Xu, Zenglin; Xiao, Jianxiong; Yang, Yi

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1jc1t
Abstract: Basic Linear Algebra Subprograms (BLAS) are a set of low-level linear algebra kernels widely adopted by applications in deep learning and scientific computing. The massive and economical computing power offered by emerging GPU architectures drives interest in implementing the compute-intensive level-3 BLAS on multi-GPU systems. In this paper, we investigate existing multi-GPU level-3 BLAS implementations and show that 1) issues such as improper load balancing, inefficient communication, and insufficient GPU stream-level concurrency and data caching prevent current implementations from fully harnessing heterogeneous computing resources; and 2) inter-GPU Peer-to-Peer (P2P) communication remains unexplored. We then present BLASX, a highly optimized multi-GPU level-3 BLAS. We adopt the concept of algorithms-by-tiles, treating a matrix tile as the basic data unit and an operation on tiles as the basic task. Tasks are guided by a dynamic asynchronous runtime that is cache and locality aware. The communication cost under BLASX becomes trivial because it fully overlaps communication and computation across multiple streams during asynchronous task progression. BLASX also takes the existing tile-cache scheme one step further by proposing a 2-level hierarchical tile cache that exploits inter-GPU P2P communication. As a result, BLASX exhibits linear speedup under multi-GPU configurations, and extensive benchmarks demonstrate that it consistently outperforms leading industrial and academic implementations such as cuBLAS-XT, SuperMatrix, and MAGMA.
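The stream-level overlap of communication and computation that the abstract describes can be illustrated with a minimal CUDA/cuBLAS sketch. This is not BLASX source code: the matrix size, tile size, stream count, and round-robin tile-to-stream assignment below are assumptions chosen for the example. Each output tile C(i,j) = A(i,:) * B(:,j) is treated as an independent task; issuing the tile's GEMM and its device-to-host copy on the same stream lets the transfer of one tile overlap with the computation of another.

/* overlap_demo.cu -- minimal sketch, assuming one GPU and NSTREAMS streams;
 * build (hypothetical file name): nvcc overlap_demo.cu -lcublas */
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

#define N        4096   /* matrix dimension (assumed for the example) */
#define TILE     1024   /* tile edge length (assumed, must divide N)  */
#define NSTREAMS 4      /* streams used for overlap (assumed)         */

int main(void) {
    const int nt = N / TILE;            /* tiles per matrix dimension */
    const size_t bytes = (size_t)N * N * sizeof(double);

    double *hA = (double *)malloc(bytes);
    double *hB = (double *)malloc(bytes);
    double *hC;                         /* pinned host memory is required for truly asynchronous copies */
    cudaMallocHost((void **)&hC, bytes);
    for (size_t i = 0; i < (size_t)N * N; ++i) { hA[i] = 1.0; hB[i] = 2.0; }

    double *dA, *dB, *dC;
    cudaMalloc((void **)&dA, bytes);
    cudaMalloc((void **)&dB, bytes);
    cudaMalloc((void **)&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    cudaStream_t streams[NSTREAMS];
    for (int s = 0; s < NSTREAMS; ++s) cudaStreamCreate(&streams[s]);

    const double alpha = 1.0, beta = 0.0;
    /* Each C tile is an independent task: C(i,j) = A(i,:) * B(:,j).
     * Tiles are dealt to streams round-robin so the GEMM of one tile
     * overlaps with the device-to-host copy of another. */
    for (int j = 0; j < nt; ++j) {
        for (int i = 0; i < nt; ++i) {
            cudaStream_t s = streams[(j * nt + i) % NSTREAMS];
            cublasSetStream(handle, s);
            /* Column-major layout: tile (i,j) begins at offset j*TILE*N + i*TILE. */
            const double *Ai  = dA + (size_t)i * TILE;                        /* row panel A(i,:)    */
            const double *Bj  = dB + (size_t)j * TILE * N;                    /* column panel B(:,j) */
            double       *Cij = dC + (size_t)j * TILE * N + (size_t)i * TILE;
            cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                        TILE, TILE, N, &alpha, Ai, N, Bj, N, &beta, Cij, N);
            /* Queue the tile's copy-back on the same stream: it runs after this
             * tile's GEMM but concurrently with work queued on other streams. */
            cudaMemcpy2DAsync(hC + (size_t)j * TILE * N + (size_t)i * TILE,
                              (size_t)N * sizeof(double),       /* host pitch          */
                              Cij, (size_t)N * sizeof(double),  /* device pitch        */
                              (size_t)TILE * sizeof(double),    /* tile width in bytes */
                              TILE, cudaMemcpyDeviceToHost, s);
        }
    }
    cudaDeviceSynchronize();
    printf("C[0] = %.1f (expected %.1f)\n", hC[0], 2.0 * N);

    for (int s = 0; s < NSTREAMS; ++s) cudaStreamDestroy(streams[s]);
    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    cudaFreeHost(hC); free(hA); free(hB);
    return 0;
}

A full multi-GPU runtime such as the one the paper describes would additionally dispatch tiles across devices, balance load dynamically, and cache tiles in the 2-level hierarchy using P2P transfers; this sketch shows only the single-GPU stream-overlap ingredient.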
Publication Date: 2016
Citation: Wang, Linnan, Wei Wu, Zenglin Xu, Jianxiong Xiao, and Yi Yang. "BLASX: A High Performance Level-3 BLAS Library for Heterogeneous Multi-GPU Computing." In Proceedings of the 2016 International Conference on Supercomputing (2016): pp. 1-11. doi:10.1145/2925426.2926256
DOI: 10.1145/2925426.2926256
Pages: 1-11
Type of Material: Conference Article
Journal/Proceeding Title: Proceedings of the 2016 International Conference on Supercomputing
Version: Author's manuscript



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.