DOI: 10.1145/3213846.3229502
Short Paper

Managing concurrent testing of data race with ComRaDe

Published: 12 July 2018

ABSTRACT

With the growing number of concurrent programs, researchers have proposed a variety of tools with different implementation strategies to detect data races. However, confirming data races among the true and false positives reported by race detectors is an extremely time-consuming part of the evaluation process.

In this paper, we present ComRaDe, a management platform for concurrent testing of data races with three main functions: managing and filtering data races, running evaluation programs to select race detectors, and generating detection reports automatically. We integrated and compared three different race detectors on ComRaDe in terms of their race detection capability. The results demonstrate ComRaDe's potential to effectively identify the advantages and limitations of different race detectors, and to conveniently help researchers select and improve detectors.
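The abstract gives no implementation detail, but the run-and-compare workflow it describes can be illustrated with a short driver sketch. The Python code below is our own minimal illustration, not ComRaDe's actual implementation: the detector commands, the 'RACE <file>:<line>' log format, the benchmark path, and the curated race list are all placeholder assumptions.

    import subprocess
    from dataclasses import dataclass, field

    # Placeholder detector commands: the abstract does not say how ComRaDe
    # invokes each tool, so these names and flags are assumptions.
    DETECTORS = {
        "fasttrack":  ["rrrun", "-tool=FT2"],  # hypothetical invocation
        "detector_b": ["detect-b"],            # hypothetical invocation
        "detector_c": ["detect-c"],            # hypothetical invocation
    }

    @dataclass
    class Report:
        detector: str
        races: set = field(default_factory=set)  # e.g. "File.java:42" strings

    def run_detector(name: str, cmd: list, benchmark: str) -> Report:
        """Run one detector on a benchmark and collect the races it reports.

        Real output formats are tool-specific; here we assume each reported
        race appears on a line of the form 'RACE <file>:<line>'.
        """
        proc = subprocess.run(cmd + [benchmark], capture_output=True,
                              text=True, timeout=600)
        report = Report(name)
        for line in proc.stdout.splitlines():
            if line.startswith("RACE "):
                report.races.add(line.split(maxsplit=1)[1])
        return report

    def compare(reports: list, known_races: set) -> None:
        """Score each detector against a curated list of confirmed races."""
        for r in reports:
            tp = r.races & known_races      # confirmed (true positive) races
            fp = r.races - known_races      # unconfirmed reports
            missed = known_races - r.races  # known races the tool missed
            print(f"{r.detector}: {len(tp)} true, {len(fp)} false, "
                  f"{len(missed)} missed")

    if __name__ == "__main__":
        known = {"Account.java:42"}  # placeholder curated race list
        reports = [run_detector(name, cmd, "benchmarks/Account.jar")
                   for name, cmd in DETECTORS.items()]
        compare(reports, known)

In this sketch, the curated set known plays the role of the manage-and-filter module's output, run_detector stands in for the run-and-log module, and compare for the compare-and-generate module described in the paper.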


Published in

ISSTA 2018: Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis
July 2018, 379 pages
ISBN: 9781450356992
DOI: 10.1145/3213846
General Chair: Frank Tip
Program Chair: Eric Bodden

        Copyright © 2018 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery, New York, NY, United States

Acceptance Rates

Overall Acceptance Rate: 58 of 213 submissions, 27%
