
Distributed file caching in parallel architectures utilizing high speed networks

Conference paper. In: High-Performance Computing and Networking (HPCN-Europe 1994).

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 797).

Abstract

To obtain scalable services in distributed and parallel systems, critical parts of those services must themselves be distributed to avoid bottlenecks, reducing both server processing load and consumed network bandwidth. This work investigates distributing the file service through distributed block-level file caches, also called client caches. Fixed-size client caches with various cache coherency schemes are compared against a server cache and a variable-size client cache model, in order to identify useful cache distribution concepts.

Partially supported by Computing and Information Services Center, Graz University of Technology.
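To make the compared concepts concrete, the following is a minimal sketch of a fixed-size block-level client cache with LRU replacement, paired with a write-invalidate coherency scheme in which the server invalidates stale copies in other client caches on every write. This is an illustrative model only, not the paper's simulator; all class, method, and variable names here are invented for the example.

```python
from collections import OrderedDict


class ClientBlockCache:
    """Fixed-size block-level client cache with LRU eviction (sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block_id -> data, in LRU order

    def read(self, block_id, server):
        """Return (data, hit). On a miss, fetch the block from the server."""
        if block_id in self.blocks:              # cache hit
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id], True
        data = server.fetch(block_id)            # cache miss: go to server
        self._insert(block_id, data)
        return data, False

    def _insert(self, block_id, data):
        self.blocks[block_id] = data
        self.blocks.move_to_end(block_id)
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)      # evict least recently used

    def invalidate(self, block_id):
        """Drop a (possibly absent) block, e.g. after a remote write."""
        self.blocks.pop(block_id, None)


class Server:
    """Holds the authoritative copy of every block and knows all clients,
    so it can model write-invalidate coherency (sketch only)."""

    def __init__(self):
        self.store = {}     # block_id -> data
        self.clients = []   # all ClientBlockCache instances

    def fetch(self, block_id):
        return self.store.get(block_id, b"")

    def write(self, writer, block_id, data):
        self.store[block_id] = data
        # Write-invalidate: remove stale copies from every other cache.
        for client in self.clients:
            if client is not writer:
                client.invalidate(block_id)
        writer._insert(block_id, data)
```

Under this scheme a client's repeated reads are served locally, while a write by any client forces the others back to the server for the fresh block, trading invalidation traffic for consistency.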




Editor information

Wolfgang Gentzsch, Uwe Harms


Copyright information

© 1994 Springer-Verlag Berlin Heidelberg


Cite this paper

Klauser, A., Posch, R. (1994). Distributed file caching in parallel architectures utilizing high speed networks. In: Gentzsch, W., Harms, U. (eds) High-Performance Computing and Networking. HPCN-Europe 1994. Lecture Notes in Computer Science, vol 797. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-57981-8_91


  • DOI: https://doi.org/10.1007/3-540-57981-8_91

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-57981-6

  • Online ISBN: 978-3-540-48408-0

