
Foundations of empirical memory consistency testing

Author(s): Kirkham, Jake; Sorensen, Tyler; Tureci, Esin; Martonosi, Margaret

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr19c37
Full metadata record
dc.contributor.author: Kirkham, Jake
dc.contributor.author: Sorensen, Tyler
dc.contributor.author: Tureci, Esin
dc.contributor.author: Martonosi, Margaret
dc.date.accessioned: 2021-10-08T19:51:16Z
dc.date.available: 2021-10-08T19:51:16Z
dc.date.issued: 2020
dc.identifier.citation: Kirkham, Jake, Tyler Sorensen, Esin Tureci, and Margaret Martonosi. "Foundations of empirical memory consistency testing." Proceedings of the ACM on Programming Languages 4, no. OOPSLA (2020): 1-29. doi:10.1145/3428294
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/pr19c37
dc.description.abstract: Modern memory consistency models are complex, and it is difficult to reason about the relaxed behaviors that current systems allow. Programming languages, such as C and OpenCL, offer a memory model interface that developers can use to safely write concurrent applications. This abstraction provides functional portability across any platform that implements the interface, regardless of differences in the underlying systems. This powerful abstraction hinges on the ability of the system to correctly implement the interface. Many techniques for memory consistency model validation use empirical testing, which has been effective at uncovering undocumented behaviors and even finding bugs in trusted compilation schemes. Memory model testing consists of small concurrent unit tests called "litmus tests". In these tests, certain observations, including potential bugs, are exceedingly rare, as they may only be triggered by a precise interleaving of system steps in a complex processor, which is probabilistic in nature. Thus, each test must be run many times in order to provide a high level of confidence in its coverage. In this work, we rigorously investigate empirical memory model testing. In particular, we propose methodologies for navigating complex stressing routines and analyzing large numbers of testing observations. Using these insights, we can more efficiently tune stressing parameters, which can lead to higher-confidence results at a faster rate. We emphasize the need for such approaches by performing a meta-study of prior work, which reveals results with low reproducibility and inefficient use of testing time. Our investigation is presented alongside empirical data. We believe that OpenCL targeting GPUs is a pragmatic choice in this domain, as there exists a variety of different platforms to test, from large HPC servers to power-efficient edge devices. The tests presented in this work span three GPUs from three different vendors. We show that our methodologies are applicable across the GPUs, despite significant variance in the results. Concretely, our results show: lossless speedups of more than 5× in tuning using data peeking; a definition of portable stressing parameters which loses only 12% efficiency when generalized across our domain; and a priority order of litmus tests for tuning. We stress test a conformance test suite for the OpenCL 2.0 memory model and discover a bug in Intel's compiler. Our methods are evaluated on the other two GPUs using mutation testing. We end with recommendations for official memory model conformance tests.
dc.format.extent: 1 - 29
dc.language.iso: en_US
dc.relation.ispartof: Proceedings of the ACM on Programming Languages
dc.rights: Final published version. This is an open access article.
dc.title: Foundations of empirical memory consistency testing
dc.type: Conference Article
dc.identifier.doi: 10.1145/3428294
pu.type.symplectic: http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article

Files in This Item:
File: FoundationsMemoryTesting.pdf (1.29 MB, Adobe PDF)


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.