Locality abstractions for parallel and distributed computing

  • Invited Talk 7
  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 907)

Abstract

Temporal and spatial locality are significant concerns in the design and implementation of any realistic parallel or distributed computing system. Temporal locality is concerned with relations among objects that share similar lifetimes and birth dates; spatial locality is concerned with relations among objects that share information. Exploiting temporal locality can lead to improved memory behavior; exploiting spatial locality can lead to improved communication behavior. Linguistic, compiler, and runtime support for locality issues is especially important for unstructured symbolic computations in which lifetimes and sharing properties of objects are not readily apparent.
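
As a minimal sketch (not drawn from the paper) of how temporal locality might be exploited, the following Python fragment groups objects by a logical birth date into per-phase arenas, so that objects with similar lifetimes can be reclaimed together in a single step; the ArenaAllocator class and its method names are illustrative only.

    # Toy sketch (hypothetical, not the paper's mechanism): temporal locality
    # modelled as per-phase arenas. Objects allocated during the same phase
    # share a birth date and are reclaimed together in one step.
    from collections import defaultdict

    class ArenaAllocator:
        def __init__(self):
            self.phase = 0                      # logical clock = birth date
            self.arenas = defaultdict(list)     # birth date -> objects

        def new_phase(self):
            """Advance the clock; later allocations share the new birth date."""
            self.phase += 1

        def alloc(self, obj):
            self.arenas[self.phase].append(obj)
            return obj

        def release_phase(self, phase):
            """Drop every object born in `phase` at once."""
            self.arenas.pop(phase, None)

    # Usage: temporaries built while one task runs share a phase and are
    # discarded together when the task completes.
    allocator = ArenaAllocator()
    allocator.new_phase()
    temps = [allocator.alloc({"i": i}) for i in range(1000)]
    total = sum(t["i"] for t in temps)
    allocator.release_phase(allocator.phase)
    print(total)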

Language abstractions for spatial and temporal locality include mechanisms for grouping related threads of control, allowing programs flexibility to map computations onto virtual processors, reusing dynamic contexts efficiently, and permitting asynchronous garbage collection across multiple processors. These abstractions give users and implementations significant latitude to exploit the inherent locality properties found in many dynamic parallel applications.
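
To make these abstractions concrete, here is a small sketch, in Python rather than the paper's own framework, of how a program might group related threads of control and map them onto virtual processors so that tasks sharing data stay on one worker whose dynamic context is reused; the VirtualProcessor class and spawn method are hypothetical names, not the paper's interface.

    # Toy sketch (hypothetical names, not the paper's interface): each
    # "virtual processor" is a single-worker pool whose thread context is
    # reused across tasks; related tasks that share data are mapped to the
    # same virtual processor so their communication stays local.
    from concurrent.futures import ThreadPoolExecutor

    class VirtualProcessor:
        def __init__(self, name):
            self.name = name
            self.pool = ThreadPoolExecutor(max_workers=1)

        def spawn(self, fn, *args):
            """Run fn(*args) on this virtual processor's (reused) worker thread."""
            return self.pool.submit(fn, *args)

    vps = [VirtualProcessor(f"vp{i}") for i in range(4)]

    counts = {}                       # structure shared by a group of tasks

    def tally(word):
        counts[word] = counts.get(word, 0) + 1
        return counts[word]

    group = vps[0]                    # program chooses the mapping: all tasks
    words = ["a", "b", "a", "a"]      # touching `counts` go to the same VP
    futures = [group.spawn(tally, w) for w in words]
    print([f.result() for f in futures], counts)

    for vp in vps:
        vp.pool.shutdown()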

We have investigated a number of these abstractions within a high-level language framework and within compilers targeted for such a framework. In this paper, we discuss several of these abstractions and justify their importance.

Editor information

Takayasu Ito, Akinori Yonezawa

Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Jagannathan, S. (1995). Locality abstractions for parallel and distributed computing. In: Ito, T., Yonezawa, A. (eds) Theory and Practice of Parallel Programming. TPPP 1994. Lecture Notes in Computer Science, vol 907. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0026577

  • DOI: https://doi.org/10.1007/BFb0026577

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-59172-6

  • Online ISBN: 978-3-540-49218-4

  • eBook Packages: Springer Book Archive
