
Proposal and Evaluation of APIs for Utilizing Inter-Core Time Aggregation Scheduler

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 6253)

Abstract

This paper proposes and evaluates APIs for the Inter-Core Time Aggregation Scheduler (IAS). IAS is a kernel-level thread scheduler that enhances the performance of multi-threaded programs on multi-core processors. IAS combines time-multiplexing and space-multiplexing scheduling to utilize both the caches private to each processing core and the caches shared between processing cores.

We present the effect of the APIs in two respects. First, we show that the aggregation strength of IAS can be set effectively and easily based on the quantum time. Second, we show that the benefit of space-multiplexing can be obtained without setting the processor affinity of each thread, by grouping processing cores and running IAS per group. We implement IAS and its APIs by modifying a Linux kernel and present their effect on a commodity multi-core processor.
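To make the two aspects concrete, the following is a minimal user-space sketch in C of how such APIs could be driven. The /proc paths targeted by ias_write (/proc/ias/quantum_ms, /proc/ias/core_groups) are hypothetical names introduced here for illustration only and are not the interface defined in the paper; only sched_setaffinity, shown for contrast, is a standard Linux call.

    /*
     * Hypothetical sketch of driving IAS-style APIs from user space.
     * The /proc paths and their semantics are assumptions made for
     * illustration; they are not the interface defined in the paper.
     */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Write a value to a (hypothetical) IAS control file under /proc. */
    static int ias_write(const char *path, const char *value)
    {
        FILE *fp = fopen(path, "w");
        if (fp == NULL)
            return -1;
        fprintf(fp, "%s\n", value);
        return fclose(fp);
    }

    int main(void)
    {
        /* Aspect 1: set the aggregation strength via the quantum time (ms).
         * A larger quantum keeps sibling threads aggregated on the same
         * cores for longer, improving per-core cache reuse. */
        if (ias_write("/proc/ias/quantum_ms", "10") != 0)
            perror("set quantum (hypothetical /proc/ias/quantum_ms)");

        /* Aspect 2: group processing cores (here cores 0-1 and 2-3) and run
         * one IAS instance per group, so space-multiplexing is obtained
         * without setting the affinity of every individual thread. */
        if (ias_write("/proc/ias/core_groups", "0-1,2-3") != 0)
            perror("set core groups (hypothetical /proc/ias/core_groups)");

        /* For contrast: the conventional per-thread approach such an API
         * avoids, pinning the calling thread to core 0 with the standard
         * Linux sched_setaffinity() call. */
        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(0, &mask);
        if (sched_setaffinity(0, sizeof(mask), &mask) != 0)
            perror("sched_setaffinity");

        return EXIT_SUCCESS;
    }

The sketch mirrors the abstract's two claims: one knob (the quantum time) controls aggregation strength, and one group declaration replaces per-thread affinity calls.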






Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Yamada, S., Kusakabe, S. (2010). Proposal and Evaluation of APIs for Utilizing Inter-Core Time Aggregation Scheduler. In: Frachtenberg, E., Schwiegelshohn, U. (eds) Job Scheduling Strategies for Parallel Processing. JSSPP 2010. Lecture Notes in Computer Science, vol 6253. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-16505-4_11


  • DOI: https://doi.org/10.1007/978-3-642-16505-4_11

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-16504-7

  • Online ISBN: 978-3-642-16505-4

  • eBook Packages: Computer Science (R0)
