ABSTRACT
Spidermine monitors the rate at which an application issues read requests and thereby detects bursts of disk reads. For each burst, it determines an address in the application code or a library at which to insert a breakpoint before the burst begins, and logs that breakpoint together with the data blocks the burst requires. When the application is subsequently run, Spidermine inserts a breakpoint at each logged address by temporarily replacing the instruction there. Spidermine is then invoked at each breakpoint and prefetches the corresponding data blocks into the page cache. Using breakpoints as prefetch triggers eliminates the need to monitor the program during execution to determine when to prefetch, and by operating entirely at the user level, Spidermine avoids interfering with other applications. Experiments on 11 benchmark applications show that Spidermine reduces application launch time by up to 54.1% and run-time data-loading time by up to 70.1% on a hard disk drive, and by up to 13.3% and 47.0%, respectively, on a solid-state drive.
Index Terms: Spidermine: Low Overhead User-Level Prefetching