Abstract: An efficient optimizing compiler can perform many cascading rewrites in a single pass, using auxiliary data structures such as variable binding maps, delayed substitutions, and occurrence counts. Such optimizers often perform transformations according to relatively simple rewrite rules, but the subtle interactions between the data structures needed for efficiency make them tricky to write and trickier to prove correct. We present a system for semi-automatically deriving both an efficient program transformation and its correctness proof from a list of rewrite rules and specifications of the auxiliary data structures it requires. Dependent types ensure that the holes left behind by our system (for the user to fill in) are filled in correctly, allowing the user low-level control over the implementation without having to worry about getting it wrong. We implemented our system in Coq (though it could be implemented in other logics as well), and used it to write optimization passes that perform uncurrying, inlining, dead code elimination, and static evaluation of case expressions and record projections. The generated implementations are sometimes faster, and at most 40% slower, than hand-written counterparts on a small set of benchmarks; in some cases, they require significantly less code to write and prove correct.
Citation: John M. Li and Andrew W. Appel. 2021. Deriving Efficient Program Transformations from Rewrite Rules. Proc. ACM Program. Lang. 5, ICFP, Article 74 (August 2021), 29 pages. https://doi.org/10.1145/3473579
Keywords: compiler correctness, compiler optimization, metaprogramming, domain-specific languages, interactive theorem proving, shrink reduction
Published in: Proceedings of the ACM on Programming Languages
Version: Final published version. This is an open access article.
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.