SLAQ: quality-driven scheduling for distributed machine learning

Author(s): Zhang, Haoyu; Stafman, Logan; Or, Andrew; Freedman, Michael J.

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1d530
Abstract: Training machine learning (ML) models with large datasets can incur significant resource contention on shared clusters. This training typically involves many iterations that continually improve the quality of the model. Yet in exploratory settings, better models can be obtained faster by directing resources to the jobs with the most potential for improvement. We describe SLAQ, a cluster scheduling system for approximate ML training jobs that aims to maximize overall job quality. When allocating cluster resources, SLAQ explores the quality-runtime trade-offs across multiple jobs to maximize system-wide quality improvement. To do so, SLAQ leverages the iterative nature of ML training algorithms: it collects quality and resource usage information from concurrent jobs, then generates highly tailored quality-improvement predictions for future iterations. Experiments show that SLAQ achieves an average quality improvement of up to 73% and an average delay reduction of up to 44% on a large set of ML training jobs, compared to resource-fairness schedulers.
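
The scheduling loop the abstract outlines (measure each job's recent quality improvement, predict the marginal gain from additional resources, and allocate toward the largest predicted system-wide gain) can be illustrated with a short sketch. The following Python is a hypothetical illustration under stated assumptions, not the paper's system: the names Job, predicted_gain, and allocate, the use of loss deltas as the quality signal, and the diminishing-returns model are all invented for this example.

    # Minimal sketch of quality-driven allocation; NOT SLAQ's implementation.
    # Assumptions: a job's recent per-iteration loss deltas predict its next
    # iteration's quality gain, and gains diminish as a job receives more
    # resource units. Each unit then goes to the highest predicted gain.
    import heapq

    class Job:
        def __init__(self, name, loss_history):
            self.name = name
            self.loss_history = loss_history  # losses from recent iterations

        def predicted_gain(self, units):
            """Predicted loss reduction from one more resource unit."""
            if len(self.loss_history) < 2:
                return float("inf")  # no history yet: explore this job first
            delta = self.loss_history[-2] - self.loss_history[-1]
            # Hypothetical diminishing-returns model for added parallelism.
            return max(delta, 0.0) / (1 + units)

    def allocate(jobs, total_units):
        """Greedily assign units to maximize predicted system-wide gain."""
        alloc = {j.name: 0 for j in jobs}
        # Max-heap via negated gains; names break ties deterministically.
        heap = [(-j.predicted_gain(0), j.name, j) for j in jobs]
        heapq.heapify(heap)
        for _ in range(total_units):
            _, name, job = heapq.heappop(heap)
            alloc[name] += 1
            heapq.heappush(heap, (-job.predicted_gain(alloc[name]), name, job))
        return alloc

    jobs = [Job("job-a", [0.90, 0.70, 0.55]),    # still improving quickly
            Job("job-b", [0.40, 0.399, 0.398])]  # nearly converged
    print(allocate(jobs, total_units=8))  # units flow to the improving job

In this toy run, nearly all units go to job-a, whose loss is still falling quickly, rather than being split evenly as a fairness scheduler would; this is the quality-runtime trade-off the abstract describes.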
Publication Date: Sep-2017
Citation: Zhang, Haoyu, Logan Stafman, Andrew Or, and Michael J. Freedman. "SLAQ: Quality-Driven Scheduling for Distributed Machine Learning." In Symposium on Cloud Computing, 2017, pp. 390-404. doi:10.1145/3127479.3127490
DOI: 10.1145/3127479.3127490
Pages: 390 - 404
Type of Material: Conference Article
Journal/Proceeding Title: Symposium on Cloud Computing
Version: Final published version. This is an open access article.


