DOI: 10.1145/3078659.3078664

Stencil Autotuning with Ordinal Regression: Extended Abstract

Published: 12 June 2017

ABSTRACT

The increasing performance of today's computer architectures comes with an unprecedented increase in hardware complexity. Unfortunately, this results in software that is difficult to tune and, consequently, in a gap between potential peak performance and actual performance. Automatic tuning is an emerging approach that assists the programmer in managing this complexity. State-of-the-art autotuners are limited, though: they either require long tuning times, e.g., due to iterative searches, or cannot tackle the complexity of the problem because of the limitations of the supervised machine learning (ML) methodologies they use. In particular, traditional ML autotuning approaches based on classification algorithms (such as neural networks and support vector machines) have difficulty capturing all the features of large search spaces. We propose a new way of performing automatic tuning based on structural learning: the tuning problem is formulated as the prediction of a ranking over code versions and solved using ordinal regression. We demonstrate its potential on a well-known autotuning problem: stencil computations. We compare state-of-the-art iterative compilation methods with our ordinal regression approach and analyze the quality of the obtained rankings in terms of the Kendall rank correlation coefficient.
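
To make the formulation concrete, below is a minimal sketch (not the authors' implementation) of how a ranking over code versions can be learned and evaluated. It assumes hypothetical feature vectors and runtimes for a handful of stencil versions, reduces the ranking problem to pairwise comparisons, fits a linear SVM from scikit-learn as a stand-in for the ordinal regression model, and scores the predicted ordering against the measured one with Kendall's tau from SciPy.

import itertools

import numpy as np
from scipy.stats import kendalltau
from sklearn.svm import LinearSVC

# Hypothetical training data: one feature vector per stencil code version
# (e.g., tile sizes and an unroll factor) and its measured runtime in seconds.
X = np.array([[16, 16, 1], [32, 8, 2], [64, 4, 2], [8, 32, 4]], dtype=float)
runtime = np.array([1.9, 1.2, 1.5, 2.4])

# Pairwise reduction: each pair of versions becomes one training example whose
# features are the difference of the two versions' feature vectors, labeled +1
# if the first version is faster and -1 otherwise.
pair_features, pair_labels = [], []
for i, j in itertools.combinations(range(len(X)), 2):
    pair_features.append(X[i] - X[j])
    pair_labels.append(1 if runtime[i] < runtime[j] else -1)

# A linear SVM trained on the pairwise data yields a weight vector whose dot
# product with a version's features serves as a higher-is-better ranking score.
rank_model = LinearSVC(C=1.0, max_iter=10000).fit(np.array(pair_features),
                                                  np.array(pair_labels))
predicted_score = X @ rank_model.coef_.ravel()

# Kendall's tau compares the predicted ordering with the measured ordering
# (runtimes are negated so that faster versions rank higher in both lists).
tau, _ = kendalltau(predicted_score, -runtime)
print("Kendall rank correlation of predicted vs. measured ranking: %.2f" % tau)

The pairwise reduction shown here is one standard way to cast ranking as a learning problem; the paper's actual feature set, learning machinery, and evaluation protocol may differ.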


Published in

SCOPES '17: Proceedings of the 20th International Workshop on Software and Compilers for Embedded Systems
June 2017
100 pages
ISBN: 9781450350396
DOI: 10.1145/3078659
Editor: Sander Stuijk

    Copyright © 2017 ACM


    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Qualifiers

    • short-paper
    • Research
    • Refereed limited

    Acceptance Rates

SCOPES '17 Paper Acceptance Rate: 6 of 9 submissions, 67%
Overall Acceptance Rate: 38 of 79 submissions, 48%
