Abstract
Cache sizes in embedded processors tend to grow as technology scales to smaller transistors and lower supply voltages, and larger caches demand more energy; accordingly, the cache's share of total processor energy consumption is increasing. Many schemes have been proposed to reduce cache energy consumption, but each previous scheme addresses only one side of the problem: dynamic cache energy alone, or static cache energy alone. In this paper, we propose a hybrid scheme that reduces dynamic and static cache energy simultaneously. The hybrid scheme adopts two existing techniques: the drowsy cache technique to reduce static cache energy, and the way-prediction technique to reduce dynamic cache energy. Additionally, we propose an early wakeup technique, based on the instruction PC, to reduce the penalty incurred by combining these two schemes. We focus on the level-1 data cache. Our experimental evaluation shows that the extra cycles caused by the drowsy cache scheme are reduced by 29.6% on average with our early wakeup scheme, while the ratio of drowsy cache lines remains over 87%. Total dynamic processor energy is reduced by 2.2% to 6.8%, and the energy-delay product over total dynamic processor energy is reduced by 3% on average versus a processor using the base cache scheme with no energy-reduction technique.
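To make the interaction concrete, here is a minimal sketch of the idea, not the paper's actual simulator: a toy cache model in which idle lines drop into a low-voltage drowsy state, and a PC-indexed table (standing in for the paper's early wakeup technique) reactivates the line an instruction touched on its previous execution before the access resolves. All names (`DrowsyCache`, `simulate`) and parameters (`DROWSY_AFTER`, `WAKEUP_PENALTY`) are illustrative assumptions.

```python
DROWSY_AFTER = 4      # cycles of inactivity before a line turns drowsy (assumed)
WAKEUP_PENALTY = 1    # extra cycles to reactivate a drowsy line (assumed)

class DrowsyCache:
    """Toy drowsy cache with an optional PC-indexed early-wakeup table."""
    def __init__(self, use_predictor, n_lines=8):
        self.use_predictor = use_predictor
        self.last_used = [0] * n_lines
        self.drowsy = [False] * n_lines
        self.pred = {}            # instruction PC -> line index it touched last
        self.extra_cycles = 0

    def tick(self, now):
        # Lines idle for DROWSY_AFTER or more cycles drop into drowsy mode.
        for i, t in enumerate(self.last_used):
            if now - t >= DROWSY_AFTER:
                self.drowsy[i] = True

    def access(self, pc, line, now):
        # Early wakeup: before the address resolves, reactivate the line this
        # PC touched on its previous execution.
        if self.use_predictor and pc in self.pred:
            self.drowsy[self.pred[pc]] = False
        if self.drowsy[line]:     # no (or wrong) prediction: pay the penalty
            self.extra_cycles += WAKEUP_PENALTY
            self.drowsy[line] = False
        self.last_used[line] = now
        self.pred[pc] = line

def simulate(use_predictor):
    cache = DrowsyCache(use_predictor)
    # A load at PC 0x400 re-reads line 3 every 5 cycles, so the line is
    # drowsy on every access after the first unless it is woken early.
    for now in range(0, 40, 5):
        cache.tick(now)
        cache.access(pc=0x400, line=3, now=now)
    return cache.extra_cycles

print(simulate(False), simulate(True))  # without vs. with early wakeup: 7 0
```

In this contrived trace the PC-indexed prediction hides the entire wakeup penalty; the paper's reported 29.6% average reduction reflects real workloads, where predictions sometimes miss.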
This work was supported by the Brain Korea 21 Project.
References
Inoue, K., Ishihara, T., Murakami, K.: Way-predicting set-associative cache for high performance and low energy consumption. In: Proceedings of the International Symposium on Low Power Electronics and Design, August 1999, pp. 273–275 (1999)
Powell, M.D., Agarwal, A., Vijaykumar, T.N., Falsafi, B., Roy, K.: Reducing set-associative cache energy via way-prediction and selective direct-mapping. In: Proceedings of the International Symposium on Microarchitecture, December 2001, pp. 54–65 (2001)
Powell, M., et al.: Gated-Vdd: A circuit technique to reduce leakage in deep-submicron cache memories. In: Proceedings of the International Symposium on Low Power Electronics and Design, pp. 90–95 (2000)
Kaxiras, S., Hu, Z., Martonosi, M.: Cache decay: Exploiting generational behavior to reduce leakage power. In: Proceedings of International Symposium on Computer Architecture, July 2001, pp. 240–251 (2001)
Flautner, K., Kim, N.S., Martin, S., Blaauw, D., Mudge, T.: Drowsy caches: Simple techniques for reducing leakage power. In: Proceedings of International Symposium on Computer Architecture, July 2002, pp. 148–157 (2002)
Kim, S., Vijaykrishnan, N., Irwin, M.J., John, L.K.: On load latency in low-power caches. In: Proceedings of the International Symposium on Low Power Electronics and Design, August 2003, pp. 258–261 (2003)
Brooks, D., Tiwari, V., Martonosi, M.: Wattch: A framework for architectural-level power analysis and optimizations. In: Proceedings of the 27th Annual International Symposium on Computer Architecture, June 2000, pp. 83–94 (2000)
Copyright information
© 2004 Springer-Verlag Berlin Heidelberg
Cite this paper
Shim, S., Kim, C.H., Kwak, J.W., Jhon, C.S. (2004). Hybrid Technique for Reducing Energy Consumption in High Performance Embedded Processor. In: Yang, L.T., Guo, M., Gao, G.R., Jha, N.K. (eds) Embedded and Ubiquitous Computing. EUC 2004. Lecture Notes in Computer Science, vol 3207. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-30121-9_8
DOI: https://doi.org/10.1007/978-3-540-30121-9_8
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-22906-3
Online ISBN: 978-3-540-30121-9
eBook Packages: Springer Book Archive