- 1 H.E. Bal, Programming Distributed Systems, Prentice Hall Int'l, Hemel Hempstead, UK (1991).
- 2 H.E. Bal, M.F. Kaashoek, and A.S. Tanenbaum, "Orca: A Language for Parallel Programming of Distributed Systems," IEEE Trans. on Software Engineering 18(3), pp. 190-205 (March 1992).
- 3 A.S. Tanenbaum, M.F. Kaashoek, and H.E. Bal, "Parallel Programming using Shared Objects and Broadcasting," IEEE Computer 25(8), pp. 10-19 (Aug. 1992).
- 4 H.E. Bal, M.F. Kaashoek, A.S. Tanenbaum, and J. Jansen, "Replication Techniques for Speeding up Parallel Applications on Distributed Systems," Concurrency Practice & Experience 4(5), pp. 337-355 (Aug. 1992).
- 5 G. Fox, S. Hiranandani, K. Kennedy, C. Koelbel, U. Kremer, C.-W. Tseng, and M.-Y. Wu, "FORTRAN-D Language Specification," TR90-141, Rice University (Dec. 1990).
- 6 C. Koelbel, P. Mehrotra, and J. van Rosendale, "Supporting Shared Data Structures on Distributed Memory Architectures," Proc. 2nd Symposium on Principles and Practice of Parallel Programming, Seattle, WA, pp. 177-186 (March 1990).
- 7 M. Rosing, R. Schnabel, and R. Weaver, "The DINO Parallel Programming Language," Journal of Parallel and Distr. Computing 13(1), pp. 30-42 (Sept. 1991).
- 8 K. Li and P. Hudak, "Memory Coherence in Shared Virtual Memory Systems," ACM Trans. Comp. Syst. 7(4) (Nov. 1989).
- 9 S. Ahuja, N. Carriero, and D. Gelernter, "Linda and Friends," IEEE Computer 19(8), pp. 26-34 (Aug. 1986).
- 10 J.K. Bennett, J.B. Carter, and W. Zwaenepoel, "Munin: Distributed Shared Memory Based on Type-Specific Memory Coherence," Proc. 2nd Symposium on Principles and Practice of Parallel Programming, Seattle, WA, pp. 168-176 (March 1990).
- 11 E. Jul, H. Levy, N. Hutchinson, and A. Black, "Fine-Grained Mobility in the Emerald System," ACM Trans. Comp. Syst. 6(1), pp. 109-133 (Feb. 1988).
- 12 J.S. Chase, F.G. Amador, E.D. Lazowska, H.M. Levy, and R.J. Littlefield, "The Amber System: Parallel Programming on a Network of Multiprocessors," Proc. of the 12th ACM Symp. on Operating System Principles, Litchfield Park, AZ, pp. 147-158 (Dec. 1989).
- 13 D. Lenoski, J. Laudon, K. Gharachorloo, W.-D. Weber, A. Gupta, J. Hennessy, M. Horowitz, and M.S. Lam, "The Stanford Dash Multiprocessor," IEEE Computer 25(3), pp. 63-79 (March 1992).
- 14 D. Chaiken, C. Fields, K. Kurihara, and A. Agarwal, "Directory-Based Cache Coherence in Large-Scale Multiprocessors," IEEE Computer 23(6) (June 1990).
- 15 L. Lamport, "How to Make a Multiprocessor Computer that Correctly Executes Multiprocess Programs," IEEE Trans. on Computers C-28(9), pp. 690-691 (Sept. 1979).
- 16 L. Uljee and H.-J. Visscher, "C++/Orca," Master's thesis, Vrije Universiteit, Amsterdam (Sept. 1992).
- 17 A.S. Tanenbaum, Modern Operating Systems, Prentice-Hall, Englewood Cliffs, NJ (1992).
- 18 M.F. Kaashoek, "Group Communication in Distributed Computer Systems," Ph.D. thesis, Vrije Universiteit, Amsterdam (Dec. 1992).
- 19 H.T. Kung, "Gigabit Local Area Networks: a Systems Perspective," IEEE Communications Magazine 30(4), pp. 79-89 (April 1992).
- 20 E. Biagioni, E. Cooper, and R. Sansom, "Designing a Practical ATM LAN," IEEE Network 7(2), pp. 32-39 (March 1993).
- 21 T.E. Jeremiassen and S.J. Eggers, "Computing Per-Process Summary Side-Effect Information," Fifth Workshop on Languages and Compilers for Parallel Computing, New Haven, CT, pp. 115-122 (Aug. 1992).
- 22 J.-F. Jenq and S. Sahni, "All Pairs Shortest Paths on a Hypercube Multiprocessor," Proc. 1987 Int. Conf. Parallel Processing, St. Charles, IL, pp. 713-716 (Aug. 1987).
- 23 T. von Eicken, D.E. Culler, S.C. Goldstein, and K.E. Schauser, "Active Messages: a Mechanism for Integrated Communication and Computation," Proc. 19th Int. Symp. on Computer Architecture, Gold Coast, Australia, pp. 256-266 (May 1992).
- 24 A.H. Karp, "Programming for Parallelism," IEEE Computer 20(5), pp. 43-57 (May 1987).
- 25 B. Chapman, P. Mehrotra, and H. Zima, "User Defined Mappings in Vienna FORTRAN," ACM SIGPLAN Notices (Proc. Workshop on Languages, Compilers, and Run-Time Environments for Distributed Memory Multiprocessors) 28(1), pp. 72-75 (Jan. 1993).
- 26 S.K.S. Gupta, S.D. Kaushik, C.-H. Huang, J.R. Johnson, R.W. Johnson, and P. Sadayappan, "A Methodology for Generating Data Distributions to Optimize Communication," Proc. 4th IEEE Symp. on Parallel and Distributed Processing, pp. 436-441 (Dec. 1992).
- 27 M. Gupta and P. Banerjee, "Demonstration of Automatic Data Partitioning Techniques for Parallelizing Compilers on Multicomputers," IEEE Trans. on Parallel and Distributed Systems 3(2), pp. 179-193 (March 1992).
- 28 E.M. Paalvast, A.J. van Gemund, and H.J. Sips, "A Method for Parallel Program Generation with an Application to the Booster Language," Proc. 1990 ACM Int. Conf. on Supercomputing, Amsterdam (June 1990).
- 29 M.W. Hall, S. Hiranandani, K. Kennedy, and C.-W. Tseng, "Interprocedural Compilation of FORTRAN-D for MIMD Distributed-Memory Machines," Proc. Supercomputing '92, Minneapolis, pp. 522-534 (Nov. 1992).
- 30 A. Rogers and K. Pingali, "Process Decomposition Through Locality of Reference," ACM SIGPLAN Notices (Proc. SIGPLAN '89 Conf. on Progr. Lang. Design & Impl.), Portland, Oregon, 24(7), pp. 69-80 (July 1989).
- 31 M. Chen, Y. Choo, and J. Li, "Compiling Parallel Programs by Optimizing Performance," Journal of Supercomputing 1(2), pp. 171-207 (July 1988).
- 32 B.K. Totty and D.A. Reed, "Dynamic Object Management for Distributed Data Structures," Proc. Supercomputing '92, Minneapolis, pp. 692-701 (Nov. 1992).
- 33 N. Carriero, "The Implementation of Tuple Space Machines," Research Report 567 (Ph.D. dissertation), Yale University, New Haven, CT (Dec. 1987).
- 34 P. America, "POOL-T: A Parallel Object-Oriented Language," pp. 199-220 in Object-Oriented Concurrent Programming, ed. A. Yonezawa and M. Tokoro, M.I.T. Press, Cambridge, MA (1987).
- 35 B.N. Bershad and M.J. Zekauskas, "Midway: Shared Memory Parallel Programming with Entry Consistency for Distributed Memory Multiprocessors," CMU-CS-91-170, Carnegie Mellon University (Sept. 1991).
- 36 D. Mosberger, "Memory Consistency Models," ACM Operating Systems Review 27(1), pp. 18-26 (Jan. 1993).
- 37 M.J. Feeley and H.M. Levy, "Distributed Shared Memory with Versioned Objects," Proc. Conf. on Object-Oriented Programming Systems, Languages and Applications, pp. 247-262 (1992).