DOI: 10.1145/1229428.1229479

Methods of inference and learning for performance modeling of parallel applications

Published: 14 March 2007

ABSTRACT

Increasing system and algorithmic complexity, combined with a growing number of tunable application parameters, poses significant challenges for analytical performance modeling. We propose a series of robust techniques to address these challenges. In particular, we apply statistical techniques such as clustering, association, and correlation analysis to better understand the application parameter space. We construct and compare two classes of effective predictive models: piecewise polynomial regression and artificial neural networks. We compare these techniques with theoretical analyses and experimental results. Overall, both regression and neural networks are accurate, with median error rates ranging from 2.2 to 10.5 percent. The comparable accuracy of these models suggests that differentiating features will arise from ease of use, transparency, and computational efficiency.
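To illustrate the regression side of this comparison, here is a minimal, self-contained sketch (not the authors' implementation) that fits a low-degree polynomial model to hypothetical runtime measurements and reports the median percentage error, the accuracy metric quoted in the abstract. The data points, function names, and polynomial degree are all assumptions for illustration only.

```python
# Illustrative sketch: fit a quadratic regression model to synthetic
# "runtime vs. processor count" data and report median percentage error.
# All data and helper names here are hypothetical.
import statistics

def fit_poly(xs, ys, degree=2):
    """Least-squares polynomial fit via the normal equations A^T A c = A^T y."""
    n = degree + 1
    A = [[x ** j for j in range(n)] for x in xs]
    AtA = [[sum(A[k][i] * A[k][j] for k in range(len(xs)))
            for j in range(n)] for i in range(n)]
    Aty = [sum(A[k][i] * ys[k] for k in range(len(xs))) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(AtA[r][col]))
        AtA[col], AtA[piv] = AtA[piv], AtA[col]
        Aty[col], Aty[piv] = Aty[piv], Aty[col]
        for r in range(col + 1, n):
            f = AtA[r][col] / AtA[col][col]
            for c in range(col, n):
                AtA[r][c] -= f * AtA[col][c]
            Aty[r] -= f * Aty[col]
    # Back substitution.
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (Aty[i] - sum(AtA[i][j] * coeffs[j]
                                  for j in range(i + 1, n))) / AtA[i][i]
    return coeffs

def predict(coeffs, x):
    return sum(c * x ** j for j, c in enumerate(coeffs))

def median_pct_error(coeffs, xs, ys):
    return statistics.median(abs(predict(coeffs, x) - y) / y * 100
                             for x, y in zip(xs, ys))

# Hypothetical training data: runtime (seconds) at several core counts.
procs = [1, 2, 4, 8, 16, 32]
runtime = [100.0, 52.0, 27.5, 15.0, 9.0, 6.5]
model = fit_poly(procs, runtime, degree=2)
print(f"median error: {median_pct_error(model, procs, runtime):.1f}%")
```

In practice, a piecewise model as described in the paper would fit separate polynomials over subregions of the parameter space rather than one global curve; this sketch shows only the basic fitting and error-metric machinery.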


Published in

PPoPP '07: Proceedings of the 12th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming
March 2007, 284 pages
ISBN: 9781595936028
DOI: 10.1145/1229428
Copyright © 2007 ACM

Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

PPoPP '07 paper acceptance rate: 22 of 65 submissions, 34%. Overall acceptance rate: 230 of 1,014 submissions, 23%.
