Can everybody be happy in the cloud? Delay, profit and energy-efficient scheduling for cloud services

https://doi.org/10.1016/j.jpdc.2016.05.013

Highlights

  • We propose new scheduling algorithms for cloud services.

  • We analyze the multidimensional tradeoff between provider profit maximization, users’ QoS and energy efficiency.

  • We use CloudSim to validate our results and compare them with other approaches.

  • We discuss why our proposed algorithms outperform existing approaches.

Abstract

The rapid development of cloud computing provides consumers and service providers with a wide range of opportunities and challenges. Considering the substantial infrastructure investments being made by cloud providers, reducing operating expenses (OPEX) while maximizing the profit of the provided services is of great importance. One way to achieve this is by maximizing the efficiency of resource utilization. However, profit maximization does not necessarily coincide with the improvement of a user's Quality of Service (QoS); users generating higher profit for the provider may be scheduled first, causing high delays to low-paying users. Further, the contradictory nature of users' and providers' needs also extends to the energy consumption problem, as the minimization of service delays could cause cloud resources to be constantly "on", leading to high energy consumption, high costs for providers and undue environmental impact. The objective of our work is to analyze this multi-dimensional trade-off. We first investigate the problem of efficient resource allocation for time-varying traffic, and propose a new algorithm, MinDelay, which aims at achieving the minimum service delay while taking into account the provider's profit. Then, we propose E-MinDelay, an energy-efficient approach for CPU-intensive tasks in cloud systems. Furthermore, we propose an improved version of the Energy Conscious Task Consolidation (ECTC) algorithm, which combines task consolidation and migration techniques with E-MinDelay. Our results demonstrate that energy consumption and the service delays that translate into profit loss can be simultaneously decreased using an efficient scheduling algorithm.

Introduction

During the past few years, the concept of cloud computing as a utility has been widely adopted in both academia and industry. A cloud infrastructure consists of an assortment of networked and virtualized physical resources, allowing dynamic provisioning of applications that are delivered as services to cloud users. In particular, cloud computing is based on the practical delivery of services over the Internet, offering on-demand services to end users, including data storage and online access to computing infrastructure and resources, under a pay-per-use model regardless of the location of the cloud users [29], [11]. Most of the best-known IT companies, such as Salesforce, Amazon, IBM, Google, Microsoft, and Akamai, have already introduced cloud services. Cloud computing is becoming a trend that IT companies are willing to follow in order to take advantage of the benefits of cloud services, which are highlighted by the enormously increased popularity of large-scale Internet services, such as social networking and e-commerce. New cloud computing technologies aim at better resource utilization, significantly reduced operation costs for application developers in the long run, and improved service quality for end users [6], [16].

In addition to time and cost minimization (or, alternatively, profit maximization), another major problem associated with cloud computing is that of energy consumption. With the rapid advance of cloud services, the establishment of large-scale data centers keeps growing, causing serious concerns regarding the high energy consumption and carbon dioxide emissions of such systems. In 2007, the IT industry was responsible for 2% of the world's total CO2 emissions, and the associated energy cost has risen so significantly in the past few years that it is estimated to have surpassed 10 billion dollars for data centers in the US alone, as the energy consumed by data centers represents 1%–2% of total US power consumption [23], [5].

Compute resources, and especially servers, are a major part of the problem due to their high operating and cooling energy costs (the other main factor responsible for energy consumption in data centers is storage; techniques for power reduction based on the disk-power factor and the storage-stack layer are presented in [5], but they are outside the scope of this work, which focuses on CPU-intensive service requests in the cloud). The reason for this high energy consumption is not only the quantity of compute resources but also their inefficient utilization. Therefore, the industry is shifting towards a green cloud computing paradigm, in order to reduce the electrical energy, carbon emissions and costs of data centers. One of the basic problems contributing to the increase in energy consumption is that the utilization of servers in data centers rarely reaches 100%. Most servers operate at a utilization rate lower than 50%, and this leads to extra expenses. Additionally, servers in idle mode consume about 70% of their peak power. Thus, it is imperative to keep more servers switched off or in a lower power mode, and to achieve better utilization of the servers that remain on.
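
To make the effect of idle power concrete, the short Python sketch below uses the widely adopted linear, utilization-based server power model (in the spirit of the power provisioning study by Fan et al. listed in the references). The 70% idle fraction reflects the figure quoted above, while the 250 W peak power and the two-server comparison are purely illustrative assumptions, not parameters from this paper.

```python
# Minimal sketch of a linear utilization-based server power model.
# The 70% idle fraction matches the figure quoted in the text above;
# the 250 W peak power is an illustrative assumption.

IDLE_FRACTION = 0.70   # idle servers draw ~70% of their peak power
P_PEAK = 250.0         # assumed peak power of one server, in watts


def server_power(utilization: float, p_peak: float = P_PEAK) -> float:
    """Power draw (W) of a single server at a CPU utilization in [0, 1]."""
    p_idle = IDLE_FRACTION * p_peak
    return p_idle + (p_peak - p_idle) * utilization


if __name__ == "__main__":
    # Spreading a workload over two half-loaded servers vs. consolidating it
    # on one fully loaded server and switching the other off.
    spread = 2 * server_power(0.5)
    consolidated = server_power(1.0)          # the second server draws 0 W
    print(f"two servers at 50% load: {spread:.0f} W")
    print(f"one server at 100% load: {consolidated:.0f} W")
```

Under this model, two half-loaded servers draw noticeably more power than a single fully loaded one, which is exactly the inefficiency that consolidation and switching servers off aim to remove.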

Our first goal in this work is to show how, with the use of an efficient scheduling algorithm, we can minimize service delays while taking the providers' profit into account in cloud computing. We then investigate the tradeoff between balancing providers' profit against service delays and decreasing energy consumption, through decisions on service-request allocation and migration.

The paper is organized as follows. Section 2 provides the necessary background and related work. Section 3 focuses on profit- and delay-based resource allocation in the cloud and introduces our MinDelay scheduling algorithm. Section 4 presents the proposed E-MinDelay algorithm, which specifically targets decreasing energy consumption and profit-related delays, and discusses the migration techniques that we use to implement the improved version of the ECTC algorithm. Section 5 presents the performance evaluation of our proposed algorithms against existing algorithms for cloud services, and Section 6 provides the conclusions.

Section snippets

Prior Work and Contribution

Efficient resource allocation and service scheduling are considered to be the key components of most emerging cloud computing environments. Depending on the point of view, i.e., user or provider, the goal of scheduling algorithms in cloud computing systems varies from maximizing resource utilization while minimizing service delays, to achieving maximum profit while minimizing energy consumption. The contradictory nature of the three aforementioned factors, i.e., time, profit, and energy

Profit-driven scheduling

In this section, we briefly introduce the two profit-driven service scheduling algorithms, i.e., MaxProfit and MaxUtil, which were proposed in  [17] and will be compared against our algorithm MinDelay. Then, we present and discuss MinDelay, the main goal of which is to achieve minimum service delays by making efficient assignments of services to available resources in the cloud system. All three algorithms share the same pivot in their attempt to distribute services to VMs, i.e., earning the
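
To illustrate the shared structure of such objective-driven schedulers, the hypothetical Python skeleton below scores every candidate VM for each incoming request and assigns the request to the best-scoring VM. The Vm and Request fields and the two scoring functions are illustrative placeholders, not the profit, utilization or delay formulas used by MaxProfit, MaxUtil or MinDelay in the paper.

```python
# Hypothetical skeleton of an objective-driven greedy scheduler: each request
# is scored against every candidate VM and placed on the VM with the best
# score. The scoring functions are illustrative placeholders only.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Vm:
    mips: float                  # processing capacity (million instructions/s)
    queued_length: float = 0.0   # total length (MI) of work already assigned

    def finish_time(self, length: float) -> float:
        return (self.queued_length + length) / self.mips


@dataclass
class Request:
    length: float                # task length in million instructions
    price: float                 # what the user pays for the service


def min_delay_score(req: Request, vm: Vm) -> float:
    # A MinDelay-flavoured objective: shorter expected completion time is better.
    return -vm.finish_time(req.length)


def max_profit_score(req: Request, vm: Vm) -> float:
    # A MaxProfit-flavoured objective: revenue minus a notional time-based cost.
    cost_per_second = 0.01
    return req.price - cost_per_second * vm.finish_time(req.length)


def schedule(requests: List[Request], vms: List[Vm],
             score: Callable[[Request, Vm], float]) -> List[int]:
    """Greedily place each request on the VM that maximizes the chosen score."""
    placement = []
    for req in requests:
        best = max(range(len(vms)), key=lambda i: score(req, vms[i]))
        vms[best].queued_length += req.length
        placement.append(best)
    return placement


if __name__ == "__main__":
    vms = [Vm(mips=1000), Vm(mips=2000)]
    reqs = [Request(length=4000, price=0.5), Request(length=1000, price=0.2)]
    print(schedule(reqs, vms, min_delay_score))   # e.g. [1, 0]
```

Swapping the scoring function is all that separates the three policies in this simplified view; the paper's algorithms differ in how those scores are actually defined and updated.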

Energy-driven scheduling

In this section, we first describe the system model, including the application and energy models that we chose in order to implement our approach. We then briefly discuss the migration mechanism and the Energy Conscious Task Consolidation (ECTC) algorithm, and we introduce our proposed E-MinDelay algorithm, which we use to build the improved version of the ECTC algorithm.
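
As a rough illustration of the consolidation idea behind ECTC-style scheduling, the Python sketch below places each new task on the host where the marginal energy it adds is smallest, which naturally favours hosts that are already active. The linear power model, the wattages and the capacity/demand units are illustrative assumptions that simplify the actual ECTC cost function, and migration is omitted.

```python
# Simplified sketch of energy-conscious task consolidation: place a task on
# the host whose marginal energy increase is smallest, favouring hosts that
# are already switched on. Power values are illustrative assumptions.

from dataclasses import dataclass
from typing import List

P_PEAK, P_IDLE = 250.0, 175.0      # assumed per-host power draw, in watts


@dataclass
class Host:
    capacity: float                 # normalized CPU capacity (1.0 = whole host)
    load: float = 0.0               # CPU share currently committed

    def power(self, load: float) -> float:
        # A host with zero load is assumed to be switched off (0 W).
        return 0.0 if load == 0 else P_IDLE + (P_PEAK - P_IDLE) * load


@dataclass
class Task:
    demand: float                   # CPU share required, e.g. 0.25
    duration: float                 # running time in seconds


def marginal_energy(host: Host, task: Task) -> float:
    """Extra energy (J) the host consumes over the task's lifetime."""
    return (host.power(host.load + task.demand) - host.power(host.load)) * task.duration


def place(task: Task, hosts: List[Host]) -> int:
    """Assign the task to the feasible host with the smallest marginal energy."""
    candidates = [i for i, h in enumerate(hosts) if h.load + task.demand <= h.capacity]
    best = min(candidates, key=lambda i: marginal_energy(hosts[i], task))
    hosts[best].load += task.demand
    return best


if __name__ == "__main__":
    hosts = [Host(capacity=1.0, load=0.5), Host(capacity=1.0)]   # host 0 is active
    print(place(Task(demand=0.25, duration=60.0), hosts))        # -> 0 (consolidate)
```

Waking an idle host charges its full idle power against the new task, so consolidation onto already-active hosts wins whenever capacity allows; the improved ECTC version discussed in the paper additionally uses migration, combined with E-MinDelay, to re-pack tasks as the load changes.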

Performance evaluation

In this section, we introduce our experimental setup and the simulation parameters chosen in order to conduct a number of different experiments to evaluate the performance of our proposed algorithms. We present the results of our proposed MinDelay algorithm and compare them with those obtained from the other two algorithms, i.e., MaxProfit and MaxUtil. All of our simulations in Section 5.1 have the goal of testing the system in situations where the number of VMs barely
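
For readers who want to reproduce this kind of experiment outside CloudSim (a Java toolkit), the small Python sketch below shows one hypothetical way to generate a time-varying request trace with piecewise-constant Poisson arrival rates. The rates and one-hour phases are illustrative assumptions, not the simulation parameters used in this paper.

```python
# Hypothetical generator of a time-varying request trace (piecewise-constant
# Poisson arrivals). Rates and phase lengths are illustrative assumptions,
# not the simulation parameters used in the paper.

import random


def arrival_times(phase_rates, phase_length=3600.0, seed=42):
    """Yield request arrival times; each phase has its own mean rate (req/s)."""
    rng = random.Random(seed)
    t = 0.0
    for phase, rate in enumerate(phase_rates):
        phase_end = (phase + 1) * phase_length
        while True:
            t += rng.expovariate(rate)
            if t >= phase_end:
                t = phase_end        # drop the overshoot and start the next phase
                break
            yield t


if __name__ == "__main__":
    # Low, peak and medium traffic phases of one hour each.
    trace = list(arrival_times([0.5, 2.0, 1.0]))
    print(f"{len(trace)} requests generated over 3 hours")
```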

Conclusions and future work

Scheduling for CPU-intensive tasks in the cloud is a multi-parametric problem, involving efficient resource allocation techniques, users' QoS requirements, provider profit and energy consumption. In our work, we focused on the efficiency of scheduling service requests from both the providers' and the users' side, with regard to delay, profit, and energy consumption. We proposed a scheduling algorithm (MinDelay) that accommodates service requests in a cloud system by taking into account both the

References (30)

  • C. Clark et al., Live migration of virtual machines.

  • X. Fan et al., Power provisioning for a warehouse-sized computer, SIGARCH Comput. Archit. News (2007).

  • A. Fox et al., Above the clouds: A Berkeley view of cloud computing, Tech. Rep. (2009).

  • C.-H. Hsu, S.-C. Chen, C.-C. Lee, H.-Y. Chang, K.-C. Lai, K.-C. Li, C. Rong, Energy-aware task consolidation technique...

  • P. Jain, Algorithms for task consolidation problem in a cloud computing environment, Int. J. Comput. Appl. (2013).

Georgia Koutsandria received her Diploma degree (5-year program) in Electronic and Computer Engineering from the Technical University of Crete, Chania, Greece, in 2012, and the M.S. degree in Electrical and Computer Engineering from the University of California, Davis, CA, USA, in 2014. She is currently pursuing the Ph.D. degree in the Department of Computer Science at the University of Rome "La Sapienza", Rome, Italy. Her research interests include wireless sensor networks, computer networks, and security of cyber-physical systems.

Emmanouil Skevakis received his Diploma degree in Electronic and Computer Engineering from the Technical University of Crete, Greece, in 2012. He has been pursuing the Ph.D. degree in the Department of Systems and Computer Engineering at Carleton University, ON, Canada, since 2013. His research interests are in the areas of communication networks and, more specifically, cloud computing architectures, anomaly detection, software-defined networks and network coding.

Amir A. Sayegh received his B.Sc. and M.Sc. in Electronics and Communications Engineering from Cairo University (1999, 2005), and his Ph.D. in Electrical and Computer Engineering from McMaster University (2008). He is currently with TELUS Communications Inc., where he works on technology strategy. His research interests lie in modeling the impact of the convergence of big data, cloud, devices and mobile on new technologies and business models.

Polychronis Koutsakis received his 5-year Diploma in Electrical Engineering from the University of Patras, Greece, and his Ph.D. degree in Electronic and Computer Engineering from the Technical University of Crete, Greece. From July 2006 until December 2008, he was an assistant professor (tenure-track) in the Electrical and Computer Engineering Department of McMaster University, Canada. In January 2009, he joined the School of Electronic and Computer Engineering of the Technical University of Crete as an assistant professor, and in April 2014 he received tenure as an associate professor in the School. In January 2016, he joined the School of Engineering and Information Technology of Murdoch University, Australia, as a Senior Lecturer.

He is a senior member of the IEEE. He was honored for two consecutive years (2012, 2013) as an exemplary editor of the IEEE Communications Society, for his work as an editor of the IEEE Communications Surveys and Tutorials journal.

His research interests focus on the design, modeling and performance evaluation of computer networks, as well as on Machine Learning techniques for big data analysis and their application to Computational Linguistics.

Dr. Koutsakis has authored more than 100 peer-reviewed papers in the aforementioned research areas and is the co-inventor of one US patent.
