Using the SimOS machine simulator to study complex computer systems

Published: 1 January 1997

References

  1. BEDICHEK, R. 1995. Talisman: Fast and accurate multicomputer simulation. In Proceedings of the 1995 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems (May), 14-24.
  2. BENNETT, J. E. AND FLYNN, M. J. 1995. Performance factors for superscalar processors. Tech. Rep. CSL-TR-95-661, Stanford University.
  3. BORG, A., KESSLER, R., LAZANA, G., AND WALL, D. 1989. Long address traces from RISC machines: Generation and analysis. Tech. Rep. 89/14, DEC Western Research Laboratory.
  4. BREWER, E., DELLAROCAS, C., COLBROOK, A., AND WEIHL, W. 1991. Proteus: A high-performance parallel-architecture simulator. Tech. Rep. MIT/LCS/TR-516, MIT.
  5. BUGNION, E., ANDERSON, J., MOWRY, T., ROSENBLUM, M., AND LAM, M. 1996. Compiler-directed page coloring for multiprocessors. In Proceedings of the Seventh International Conference on Architectural Support for Programming Languages and Operating Systems (Oct.), 244-257.
  6. BUNDE, M. Z., METCALFE, D., AND NOTTINGHAM, Z. Visual tools unlock peak performance. Web site, http://www.cray.com/PUBLIC/product-info/sw/PE/vistools.html.
  7. CHAPIN, J., HERROD, S. A., ROSENBLUM, M., AND GUPTA, A. 1995. Memory system performance of UNIX on CC-NUMA multiprocessors. In Proceedings of the 1995 Joint International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS '95/PERFORMANCE '95) (May), 1-13.
  8. CHAPIN, J., ROSENBLUM, M., DEVINE, S., LAHIRI, T., TEODOSIU, D., AND GUPTA, A. 1995. Hive: Fault containment for shared-memory multiprocessors. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles (Dec.), 12-25.
  9. CHEN, D.-K. 1995. MaxPar: An execution-driven simulator for studying parallel systems. Ph.D. Thesis, University of Illinois at Urbana-Champaign.
  10. CHEN, J. B., ENDO, Y., CHAN, K., MAZIERES, D., DIAS, A., SELTZER, M., AND SMITH, M. D. 1995. The measured performance of personal computer operating systems. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles (Dec.), 299-313.
  11. CMELIK, R. F. AND KEPPEL, D. 1994. Shade: A fast instruction set simulator for execution profiling. In Proceedings of the 1994 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, 128-137.
  12. DUBOIS, M., SKEPPSTEDT, J., RICCIULLI, L., RAMAMURTHY, K., AND STENSTROM, P. 1993. The detection and elimination of useless misses in multiprocessors. In Proceedings of the Twentieth International Symposium on Computer Architecture (May), 88-97.
  13. EUSTACE, A. AND SRIVASTAVA, A. 1995. ATOM: A flexible interface for building high-performance program analysis tools. In Proceedings of the Winter 1995 USENIX Technical Conference on UNIX and Advanced Computing Systems (Jan.).
  14. GOLDSCHMIDT, S. 1993. Simulation of multiprocessors: Accuracy and performance. Ph.D. Thesis, Stanford University.
  15. GRIMSRUD, K., ARCHIBALD, J., RIPLEY, M., FLANAGAN, K., AND NELSON, B. 1993. BACH: A hardware monitor for tracing microprocessor-based systems. Microprocess. Microsyst. 17, 443-459.
  16. INSIGNIA SOFTWARE. SoftPC product information. Web site, http://www.insignia.com.
  17. IRVIN, R. B. AND MILLER, B. P. 1996. Mapping performance data for high-level and data views of parallel program performance. In Proceedings of the International Conference on Supercomputing (May).
  18. KOTZ, D., TOH, S. B., AND RADHAKRISHNAN, S. 1994. A detailed simulation of the HP 97560 disk drive. Tech. Rep. PCS-TR94-20, Dartmouth College.
  19. KUSKIN, J., OFELT, D., HEINRICH, M., HEINLEIN, J., SIMONI, R., GHARACHORLOO, K., CHAPIN, J., NAKAHIRA, D., BAXTER, J., HOROWITZ, M., GUPTA, A., ROSENBLUM, M., AND HENNESSY, J. 1994. The Stanford FLASH multiprocessor. In Proceedings of the 21st International Symposium on Computer Architecture (April), 302-313.
  20. LEBECK, A. R. AND WOOD, D. A. 1995. Active memory: A new abstraction for memory-system simulation. In Proceedings of the 1995 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, 220-230.
  21. MAGNUSSON, P. AND WERNER, B. 1995. Efficient memory simulation in SimICS. In Proceedings of the 28th Annual Simulation Symposium (April).
  22. MARTONOSI, M., GUPTA, A., AND ANDERSON, T. E. 1992. MemSpy: Analyzing memory system bottlenecks in programs. In Proceedings of the 1992 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems (June), 1-12.
  23. MARTONOSI, M., OFELT, D., AND HEINRICH, M. 1996. Integrating performance monitoring and communication in parallel computers. In Proceedings of the 1996 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems (May), 138-147.
  24. MILLER, B. P., CALLAGHAN, M. D., CARGILLE, J. M., HOLLINGSWORTH, J. K., IRVIN, R. B., KARAVANIC, K. L., KUNCHITHAPADAM, K., AND NEWHALL, T. 1995. The Paradyn parallel performance measurement tools. IEEE Computer (Nov.), 37-46.
  25. NAYFEH, B., HAMMOND, L., AND OLUKOTUN, K. 1996. Evaluation of design alternatives for a multiprocessor microprocessor. In Proceedings of the 23rd International Symposium on Computer Architecture (May).
  26. OLUKOTUN, K., NAYFEH, B., AND HAMMOND, L. 1996. The case for a single-chip multiprocessor. In Proceedings of the Seventh International Conference on Architectural Support for Programming Languages and Operating Systems (Oct.), 2-11.
  27. OUSTERHOUT, J. K. 1994. Tcl and the Tk Toolkit. Addison-Wesley, Reading, MA.
  28. PERL, S. E. AND SITES, R. L. 1996. Studies of Windows NT performance using dynamic execution traces. In Proceedings of the Second Symposium on Operating System Design and Implementation, 169-184.
  29. REED, D. A., AYDT, R. A., MADHYASTHA, T. M., NOE, R. J., SHIELDS, K. A., AND SCHWARTZ, B. W. 1995. An overview of the Pablo performance analysis environment. Tech. Rep., University of Illinois at Urbana-Champaign.
  30. REINHARDT, S., HILL, M., LARUS, J., LEBECK, A., LEWIS, J., AND WOOD, D. 1993. The Wisconsin Wind Tunnel: Virtual prototyping of parallel computers. In Proceedings of the 1993 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems (May), 48-60.
  31. ROSENBLUM, M., BUGNION, E., HERROD, S. A., WITCHEL, E., AND GUPTA, A. 1995a. The impact of architectural trends on operating system performance. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles, 285-298.
  32. ROSENBLUM, M., CHAPIN, J., DEVINE, S., TEODOSIU, D., LAHIRI, T., AND GUPTA, A. 1995b. Hive: Fault containment for shared-memory multiprocessors. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles (Dec.), 12-25.
  33. ROSENBLUM, M., HERROD, S. A., WITCHEL, E., AND GUPTA, A. 1995. Complete computer system simulation: The SimOS approach. IEEE Parallel Distrib. Technol. (Winter), 34-43.
  34. SILICON GRAPHICS. gprof. IRIX 5.2 man page.
  35. SMITH, M. D. 1991. Tracing with pixie. Tech. Rep. CSL-TR-497, Stanford University.
  36. SUN MICROSYSTEMS. Wabi 2.2 product overview. Web site, http://www.sun.com/solaris/products/wabi.
  37. TORRELLAS, J., GUPTA, A., AND HENNESSY, J. 1992. Characterizing the cache performance and synchronization behavior of a multiprocessor operating system. In Proceedings of the Fifth International Conference on Architectural Support for Programming Languages and Operating Systems (Oct.), 162-174.
  38. VEENSTRA, J. 1993. MINT tutorial and user manual. Tech. Rep. 452 (May), University of Rochester.
  39. VERGHESE, B., DEVINE, S., GUPTA, A., AND ROSENBLUM, M. 1996. Operating system support for improving data locality on CC-NUMA compute servers. In Proceedings of the Seventh International Conference on Architectural Support for Programming Languages and Operating Systems (Oct.), 279-289.
  40. WILSON, K. M., OLUKOTUN, K., AND ROSENBLUM, M. 1996. Increasing cache port efficiency for dynamic superscalar microprocessors. In Proceedings of the 23rd International Symposium on Computer Architecture (June), 147-157.
  41. WILSON, R., FRENCH, R., WILSON, C., AMARASINGHE, S., ANDERSON, J., TJIANG, S., LIAO, S.-W., TSENG, C.-W., HALL, M., LAM, M., AND HENNESSY, J. 1994. SUIF: An infrastructure for research on parallelizing and optimizing compilers. ACM SIGPLAN Not. 29, 12 (Dec.).
  42. WITCHEL, E. AND ROSENBLUM, M. 1996. Embra: Fast and flexible machine simulation. In Proceedings of the 1996 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems (May), 68-79.

Reviews

                Maurice S. Elzas

SimOS is a computer simulation environment whose goal is to provide a platform for measuring and evaluating design alternatives of computer systems that have yet to be built. The environment is set up so that both the hardware and the software running on it can be simulated to the level of detail a particular study requires. In fact, the simulation environment allows users to implement a complete virtual machine at a wide range of levels of detail, giving explicit control over the tradeoff between simulation speed and simulation detail.

The paper is subdivided into seven main parts and an extensive reference list. Part 1, “Introduction,” presents the goals, general properties, and challenges of the SimOS project. Part 2 details the main aspects of the SimOS computer simulation environment. Part 3 highlights the qualities of SimOS for the simulation of complex workloads, the key scalability properties of the environment, and its advantages for the user. Part 4 is devoted to the important topic of data collection. Part 5 contains two case studies that give relevant information about the actual use of SimOS. Part 6 compares key aspects of SimOS to those of some other tools for studying the behavior of computer systems. Part 7 presents conclusions.

The paper is well written and gives a clear picture of the SimOS environment, its capabilities, the challenges faced during its development, the solutions that were adopted, and the practical uses of the tool. It will not, however, be easy reading for the casual (though interested) reader who has no experience in either computer architecture or its simulation. The achievement of the team that has made SimOS a practical tool is considerable.

Many simulation engines have been built with this purpose in mind, some dating back almost a quarter of a century, but none achieved the versatility and efficiency needed to make the simulation of complete computer systems (including the software running on them) an effective and affordable tool in computer design. In this sense, SimOS is most probably the first of its kind. It enables users to carry out the kind of work they would engage in when evaluating an actual hardware prototype of a computer system: measuring transfer rates of processor pipelines, monitoring cache behavior, examining the I/O handling properties of the (intended) operating system, and gauging the transaction handling rate of an application on top of all this. Of course, for each of these studies the environment must be run with models that represent the appropriate level of detail and are available under SimOS; but if an actual system were being measured, a similar approach would be required, with a separate experiment needed for most types of measurement.


• Published in

  ACM Transactions on Modeling and Computer Simulation, Volume 7, Issue 1
  Jan. 1997
  151 pages
  ISSN: 1049-3301
  EISSN: 1558-1195
  DOI: 10.1145/244804

                  Copyright © 1997 ACM

                  Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

                  Publisher

                  Association for Computing Machinery

                  New York, NY, United States

