Starchart: Hardware and software optimization using recursive partitioning regression trees

Author(s): Jia, Wenhao; Shaw, Kelly A.; Martonosi, Margaret

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr11c4w
Full metadata record
dc.contributor.author: Jia, Wenhao
dc.contributor.author: Shaw, Kelly A.
dc.contributor.author: Martonosi, Margaret
dc.date.accessioned: 2021-10-08T19:50:35Z
dc.date.available: 2021-10-08T19:50:35Z
dc.date.issued: 2013
dc.identifier.citation: Jia, Wenhao, Kelly A. Shaw, and Margaret Martonosi. "Starchart: Hardware and software optimization using recursive partitioning regression trees." In Proceedings of the 22nd International Conference on Parallel Architectures and Compilation Techniques (2013). doi:10.1109/PACT.2013.6618822
dc.identifier.issn: 1089-795X
dc.identifier.uri: https://mrmgroup.cs.princeton.edu/papers/wjiaPACT13.pdf
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/pr11c4w
dc.description.abstract: Graphics processing units (GPUs) are in increasingly wide use, but significant hurdles lie in selecting the appropriate algorithms, runtime parameter settings, and hardware configurations to achieve power and performance goals with them. Exploring hardware and software choices requires time-consuming simulations or extensive real-system measurements. While some auto-tuning support has been proposed, it is often narrow in scope and heuristic in operation. This paper proposes and evaluates a statistical analysis technique, Starchart, that partitions the GPU hardware/software tuning space by automatically discerning important inflection points in design parameter values. Unlike prior methods, Starchart can identify the best parameter choices within different regions of the space. Our tool is efficient, evaluating at most 0.3% of the tuning space and often much less, and is robust enough to analyze highly variable real-system measurements, not just simulation. In one case study, we use it to automatically find platform-specific parameter settings that are 6.3× faster (for AMD) and 1.3× faster (for NVIDIA) than a single general setting. We also show how power-optimized parameter settings can save 47W (26% of total GPU power) with little performance loss. Overall, Starchart can serve as a foundation for a range of GPU compiler optimizations, auto-tuners, and programmer tools. Furthermore, because Starchart does not rely on specific GPU features, we expect it to be useful for broader CPU/GPU studies as well.
dc.language.iso: en_US
dc.relation.ispartof: Proceedings of the 22nd International Conference on Parallel Architectures and Compilation Techniques
dc.rights: Author's manuscript
dc.title: Starchart: Hardware and software optimization using recursive partitioning regression trees
dc.type: Conference Article
dc.identifier.doi: 10.1109/PACT.2013.6618822
pu.type.symplectic: http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding
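
As a rough illustration of the recursive partitioning regression trees the abstract describes, the idea is to repeatedly split a tuning parameter's range at the value that most reduces prediction error, so each leaf of the tree covers a region of the space with its own best setting. The following is a minimal one-parameter sketch in Python, not the paper's implementation; the sample data and all function names are illustrative:

```python
def sse(ys):
    """Sum of squared errors of ys around their mean (the CART split criterion)."""
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def build_tree(points, max_depth=3, min_leaf=2):
    """Recursively partition (parameter, runtime) samples.

    Internal nodes store a threshold on the parameter value; each leaf
    predicts the mean runtime of its region of the tuning space.
    """
    points = sorted(points)
    ys = [y for _, y in points]
    if max_depth == 0 or len(points) < 2 * min_leaf:
        return {"mean": sum(ys) / len(ys)}
    # Try every split position and keep the one with the lowest total error.
    best = None
    for i in range(min_leaf, len(points) - min_leaf + 1):
        cost = sse(ys[:i]) + sse(ys[i:])
        if best is None or cost < best[0]:
            best = (cost, i)
    if best is None or best[0] >= sse(ys):
        return {"mean": sum(ys) / len(ys)}  # no split improves the fit
    _, i = best
    thresh = (points[i - 1][0] + points[i][0]) / 2  # inflection point
    return {
        "thresh": thresh,
        "left": build_tree(points[:i], max_depth - 1, min_leaf),
        "right": build_tree(points[i:], max_depth - 1, min_leaf),
    }

def predict(tree, x):
    """Walk the tree to the leaf whose region contains parameter value x."""
    while "thresh" in tree:
        tree = tree["left"] if x < tree["thresh"] else tree["right"]
    return tree["mean"]

# Hypothetical samples: runtime drops sharply once the parameter exceeds 4,
# so the tree should discover a split near 4.5 and predict per-region means.
samples = [(1, 10), (2, 10), (3, 10), (4, 10), (5, 2), (6, 2), (7, 2), (8, 2)]
tree = build_tree(samples)
```

In the paper's setting the same principle applies over many hardware and software parameters at once, with each discovered threshold marking an inflection point in the design space.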

Files in This Item:
File: SoftwareOptRegressionTrees.pdf
Size: 403.6 kB
Format: Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.