
Algebra and Geometry Combined Explains How the Mind Does Math

  • Conference paper
Parallel Processing and Applied Mathematics (PPAM 2013)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 8384)


Abstract

This paper updates my talk, Cache Blocking for Dense Linear Algebra Algorithms since 1985, given at PPAM 2011; see [11]. We again apply Dimension Theory to matrices in the Fortran and C programming languages. New Data Structures (NDS) for matrices are given. We use the GCD algorithm to transpose, in place, an \(n\) by \(m\) matrix \(A\) stored in standard column-major (CMO) layout. Algebra and Geometry are used to make this idea concrete and practical; this is the reason for the title of our paper: the GCD algorithm makes a picture of any matrix by converting it into a direct sum of square submatrices. The picture is Geometry and the GCD algorithm is Algebra. We also compare the in-place transposition algorithms GKK and TT. Finally, the use of negative integers gives new results about subtraction and about finding primitive roots, which also make a priori in-place transposition more efficient.
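
The "picture" referred to in the abstract can be illustrated by the classical geometric reading of Euclid's GCD algorithm: an \(n\) by \(m\) rectangle is peeled into squares, and the smallest squares that appear have order gcd(n, m). The short C sketch below is my own illustration of that picture, not the GKK or TT code discussed in the paper; the function name and the sample dimensions are assumptions made for the example.

    #include <stdio.h>

    /* Geometric form of Euclid's GCD algorithm: peel an n-by-m rectangle
     * into squares.  Each pass removes as many s-by-s squares as fit,
     * where s is the smaller side; the last squares have order gcd(n, m). */
    static int gcd_picture(int n, int m)
    {
        while (n > 0 && m > 0) {
            if (n >= m) {
                printf("%d square(s) of order %d\n", n / m, m);
                n %= m;
            } else {
                printf("%d square(s) of order %d\n", m / n, n);
                m %= n;
            }
        }
        return n + m;           /* one side is now 0; the other is the gcd */
    }

    int main(void)
    {
        int n = 6, m = 10;      /* illustrative matrix dimensions */
        printf("gcd(%d, %d) = %d\n", n, m, gcd_picture(n, m));
        return 0;
    }

For n = 6 and m = 10 this prints one square of order 6, one of order 4, and two of order 2; in the paper's setting these squares become square submatrices of \(A\), which is the direct-sum picture the abstract refers to.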


Notes

  1.

    nb is the order of a square submatrix \(A_{ij}\) of \(A\) that enters a core.
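
As a hedged illustration of what such an nb-by-nb block might look like in code, the sketch below copies one square submatrix \(A_{ij}\) out of a column-major (CMO) matrix with leading dimension lda into a contiguous nb*nb buffer. The function name, buffer layout, and sample values are my own assumptions, not the exact NDS block format of the paper.

    #include <stdio.h>

    /* Copy the nb-by-nb submatrix A(i:i+nb-1, j:j+nb-1) of a column-major
     * (CMO) matrix A with leading dimension lda into a contiguous buffer.
     * The contiguous nb*nb block is the unit a core would then work on. */
    static void pack_block(const double *A, int lda,
                           int i, int j, int nb, double *buf)
    {
        for (int jj = 0; jj < nb; ++jj)      /* walk the block column by column */
            for (int ii = 0; ii < nb; ++ii)
                buf[ii + jj * nb] = A[(i + ii) + (j + jj) * lda];
    }

    int main(void)
    {
        enum { M = 4, N = 6, NB = 2 };
        double A[M * N], buf[NB * NB];
        for (int k = 0; k < M * N; ++k)      /* fill A with recognisable values */
            A[k] = k;
        pack_block(A, M, 2, 4, NB, buf);     /* extract A(2:3, 4:5) */
        for (int k = 0; k < NB * NB; ++k)
            printf("%g ", buf[k]);           /* prints 18 19 22 23 */
        printf("\n");
        return 0;
    }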

References

  1. Buttari, A., Langou, J., Kurzak, J., Dongarra, J.: A class of parallel tiled linear algebra algorithms for multicore architectures. Parallel Comput. 35(1), 38–53 (2009)

  2. Gustavson, F.G.: Recursion leads to automatic variable blocking for dense linear-algebra algorithms. IBM J. Res. Dev. 41(6), 737–755 (1997)

  3. Gustavson, F.G.: New generalized data structures for matrices lead to a variety of high-performance algorithms. In: Boisvert, R.F., Tang, P.T.P. (eds.) Proceedings of the IFIP WG 2.5 Working Group on the Architecture of Scientific Software, Ottawa, Canada, 2–4 October 2000, pp. 211–234. Kluwer Academic Publishers, Boston

  4. Gustavson, F.G.: High performance linear algebra algorithms using new generalized data structures for matrices. IBM J. Res. Dev. 47(1), 31–55 (2003)

  5. Gustavson, F.G.: New generalized data structures for matrices lead to a variety of high performance dense linear algebra algorithms. In: Dongarra, J., Madsen, K., Waśniewski, J. (eds.) PARA 2004. LNCS, vol. 3732, pp. 11–20. Springer, Heidelberg (2006)

  6. Gustavson, F.G., Gunnels, J.A.: Method and structure for cache aware transposition via rectangular subsections. U.S. Patent US20060161607 A1, Application No. 11/035,953, filed 14 January 2005, published 20 July 2006

  7. Gustavson, F.G., Gunnels, J.A., Sexton, J.C.: Minimal data copy for dense linear algebra factorization. In: Kågström, B., Elmroth, E., Dongarra, J., Waśniewski, J. (eds.) PARA 2006. LNCS, vol. 4699, pp. 540–549. Springer, Heidelberg (2007)

  8. Gustavson, F.G., Swirszcz, T.: In-place transposition of rectangular matrices. In: Kågström, B., Elmroth, E., Dongarra, J., Waśniewski, J. (eds.) PARA 2006. LNCS, vol. 4699, pp. 560–569. Springer, Heidelberg (2007)

  9. Gustavson, F.G.: The relevance of new data structure approaches for dense linear algebra in the new multicore/manycore environments. IBM Research Report RC24599; also to appear in the PARA 2008 proceedings, 10 pp. (2008)

  10. Gustavson, F.G., Karlsson, L., Kågström, B.: Parallel and cache-efficient in-place matrix storage format conversion. ACM Trans. Math. Softw. 38(3), Article 17, 1–32 (2012)

  11. Gustavson, F.G.: Cache blocking for linear algebra algorithms. In: Wyrzykowski, R., Dongarra, J., Karczewski, K., Waśniewski, J. (eds.) PPAM 2011, Part I. LNCS, vol. 7203, pp. 122–132. Springer, Heidelberg (2012)

  12. Gustavson, F.G.: A subtraction algorithm based on adding C to both A and B. PowerPoint presentation, fg2935@gmail.com, 50 slides, 28 October 2013

  13. Gustavson, F.G., Walker, D.W.: Algorithms for in-place matrix transposition. In: Wyrzykowski, R., Dongarra, J., Karczewski, K., Waśniewski, J. (eds.) PPAM 2013, Part II. LNCS, vol. 8385, pp. 105–117. Springer, Heidelberg (2014)

  14. Karlsson, L.: Blocked in-place transposition with application to storage format conversion. Technical Report UMINF 09.01, Department of Computing Science, Umeå University, Umeå, Sweden, January 2009. ISSN 0348-0542

  15. Knuth, D.E.: The Art of Computer Programming, 3rd edn., vols. 1, 2 & 3. Addison-Wesley, Reading (1998)

  16. Kurzak, J., Buttari, A., Dongarra, J.: Solving systems of linear equations on the Cell processor using Cholesky factorization. IEEE Trans. Parallel Distrib. Syst. 19(9), 1175–1186 (2008)

  17. Kurzak, J., Dongarra, J.: Implementation of mixed precision in solving systems of linear equations on the Cell processor. Concurr. Comput.: Pract. Exper. 19(10), 1371–1385 (2007)

  18. Lagrange, J.L.: Lectures on Elementary Mathematics, 156 pp. Dover Publications, New York (2008)

  19. Tietze, H.: Three Dimensions-Higher Dimensions. In: Famous Problems of Mathematics, pp. 106–120. Graylock Press, Rochester (1965)

  20. Tretyakov, A.A., Tyrtyshnikov, E.E.: Optimal in-place transposition of rectangular matrices. J. Complex. 25, 377–384 (2009)


Author information

Correspondence to Fred G. Gustavson.


Copyright information

© 2014 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Gustavson, F.G. (2014). Algebra and Geometry Combined Explains How the Mind Does Math. In: Wyrzykowski, R., Dongarra, J., Karczewski, K., Waśniewski, J. (eds.) Parallel Processing and Applied Mathematics. PPAM 2013. Lecture Notes in Computer Science, vol. 8384. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-55224-3_1


  • DOI: https://doi.org/10.1007/978-3-642-55224-3_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-55223-6

  • Online ISBN: 978-3-642-55224-3

