Abstract

Cloud computing provides on-demand computing and storage services with high performance and high scalability. However, the rising energy consumption of cloud data centers has become a prominent problem. In this paper, we first introduce an energy-aware framework for task scheduling in virtual clusters. The framework consists of a task resource requirements prediction module, an energy estimate module, and a scheduler with a task buffer. Second, based on this framework, we propose a virtual machine power efficiency-aware greedy scheduling algorithm (VPEGS). As a heuristic algorithm, VPEGS estimates task energy by considering task resource demands, VM power efficiency, and server workload before scheduling tasks in a greedy manner. We simulated a heterogeneous VM cluster and conducted experiments to evaluate the effectiveness of VPEGS. Simulation results show that VPEGS effectively reduced total energy consumption by more than 20% without producing large scheduling overheads. Despite sharing a similar heuristic ideology with Min-Min and RASA, it outperformed them in energy saving by about 29% and 28%, respectively.

1. Introduction

Cloud computing has gained popularity because it satisfies the elastic demands of computing capability from both individual and enterprise users. Cloud platforms not only support a diversity of applications but also provide a virtualized environment for the applications to run in an efficient and low-cost manner [1]. As cloud computing becomes prevalent in the IT industry, the huge amount of electricity consumed by cloud data centers has also become a rising concern. According to previous statistics, there are over 5 million data centers globally [2], accounting for about 1.5% of global energy consumption [3]. This figure is likely to keep rising as our demands for computing continue to grow. Hence, in order to minimize the negative impact of energy waste and overconsumption, it is of great necessity to improve resource utilization and reduce the energy consumption of cloud data centers.

Applying energy-aware resource scheduling is an effective way to save energy. Cloud data centers are usually virtualized; thus, in an IaaS (Infrastructure-as-a-Service) cloud, the virtual machine (VM) is the basic unit of resource provisioning. After a user-defined job is submitted, it is first “sliced” into a number of tasks, and generally each task is assigned to one VM for execution. During the execution, the virtual resources allocated to the VM can be regarded as occupied by the task. The mapping from tasks to VMs is one-to-one. On the one hand, we do not consider a many-to-one mapping because resource competition often causes SLA (Service-Level Agreement) violations. On the other hand, one-to-many mapping can be avoided by a fine-grained job decomposition. Although the jobs or tasks may not carry any attributes initially, we can exploit available techniques to estimate their resource demands, including total instructions, amount of disk I/O, and the data throughput on the network. Moreover, to attain the goal of saving task execution energy, it is essential to consider servers’ power efficiency. Assigning tasks to high-performance servers may enhance the data center’s overall performance but at the same time can cause extra energy consumption (i.e., operational cost), because some servers of high processing speed may not be power-efficient. Hence, we argue that the power efficiency of servers and VMs should be regarded as an important metric in today’s energy-aware resource management.

Resource scheduling can be separated into two phases: task scheduling and VM scheduling. The first phase of mapping shown in Figure 1 represents task scheduling, which is the focus of this paper. Previous task scheduling algorithms (e.g., [4–6]) allocated tasks directly to physical servers. However, these algorithms are less feasible and effective now that virtualization is widely deployed on physical servers: the running environment for tasks is the virtual cluster. Besides, the majority of task scheduling algorithms use the strategy that a VM is dynamically created on a selected server only when new tasks arrive. This kind of strategy is useful for aggregating workload in order to avoid too many idle servers, but it lowers the system’s responsiveness because powering on a new VM takes time. An ideal target for task scheduling is to reduce system energy consumption with acceptable efficiency. Thus, in this paper, we propose to build a virtual cluster maintenance mechanism that combines VM “precreating” and “delayed shutdown.” Specifically, “precreating” means that a virtual machine can be started up on a server under relatively light workload before tasks arrive, while “delayed shutdown” allows a VM to stay alive for a certain period after it finishes its task. In a cloud environment, this mechanism can maintain a large-scale idle VM cluster and thus allows shorter task response times without incurring a large overhead cost. At the same time, this mechanism helps reduce migration operations, so it can be used to simplify VM consolidation (the second mapping phase in Figure 1) strategies such as [7, 8].

Supported by the VM “precreating” and “delayed shutdown” mechanism, we propose in this paper an energy-aware task scheduling framework for virtualized cloud environments. The framework consists of a task resource requirements prediction module, an energy estimate module, and a scheduler with a task buffer. The buffer works as an improvement on a simple FIFO queue of arriving tasks, and its size is designed to adapt to the task arrival rate. Receiving the output from the task resource requirements prediction module, the energy estimate module is responsible for estimating the energy consumption of executing each task. As the key part, the scheduler adopts a VM power efficiency-aware greedy scheduling algorithm (VPEGS) to schedule the tasks in the buffer heuristically. Experiments were conducted to evaluate the performance of VPEGS in a simulated heterogeneous virtual cluster. The results show that VPEGS reduced energy consumption by more than 20% on average and outperformed Min-Min [9], RASA [10], and Random-Mapping [11].

2. Related Work

Task scheduling has been proved to be an NP-hard problem [12]. Even with the mechanism of VM “precreating” and “delayed shutdown,” task scheduling in a heterogeneous cloud remains nontrivial. Heuristic scheduling algorithms such as Min-Min [9] and ant colony optimization [13, 14] are widely used in cloud task scheduling because they are quite efficient and sometimes able to approach optimal solutions [15]. Min-Min is a typical task scheduling algorithm oriented to heterogeneous infrastructures. Gutierrez-Garcia and Sim [11] compared 14 heuristic scheduling algorithms with respect to average task makespan; their results show that Min-Min and Max-Min [9] are the most effective among the algorithms using batch mode. Besides, Etminani and Naghibzadeh [16] showed that dynamically selecting Min-Min or Max-Min as the scheduler, according to the standard deviation of expected task execution times, can improve system performance. Priya and Subramani [10] propose a heuristic scheduling algorithm named RASA that consists of three phases: in the initialization phase the execution efficiency matrix is initialized, while in the second and third phases the scheduler finds the best-fit VM and returns its ID. The idea of RASA is to use Min-Min and Max-Min alternately to schedule arriving tasks. Uddin et al. [17] tested and analyzed the performance of RASA, TPPC, and PALB in CloudSim, considering power efficiency and cost as well as CO2 emissions; they concluded that TPPC is the most effective but neglected the detailed parameter settings of these algorithms.

Cloud servers are usually virtualized, so it is necessary to perform task scheduling in virtual clusters. Sampaio and Barbosa propose POFARE [4], a heuristic algorithm that considers both VM reliability and efficiency; it improves energy utilization (MFLOPS/joule) but pays no attention to server virtualization. Lakra and Yadav [18] conduct task scheduling by solving a multiobjective optimization via nondominated sorting after quantifying the QoS values of tasks and VMs. However, their approach has the drawbacks of not being energy-aware and of evaluating VM performance merely by MIPS (Million Instructions per Second). VM consolidation is another effective way to save energy, with the basic ideology that powering off idle servers reduces energy consumption. For example, HHCS [19], an energy-saving scheduling strategy, makes use of the advantages of two open-source schedulers (Condor and Haizea) to further increase the CPU utilization of physical servers. In addition, there are also implementations (e.g., [20–22]) based on setting thresholds and constraints; Ding et al. [23] adopt this method to perform resource provisioning at the VM level. Current technology allows dynamic VM migration, which helps balance workload between servers, but migration causes extra time and energy overheads. Hence, it is a better scheme to precreate and maintain a number of VMs on servers under light workload, so that these idle VMs can respond quickly when a new batch of tasks arrives.

3. Energy-Aware Task Scheduling Framework

3.1. Energy Estimate Module

In an IaaS cloud, virtualization makes physical resources “transparent” since applications run in VMs. To some extent, a virtual machine provides an independent runtime environment, and it is also the basic unit allocated to user applications. In the proposed framework, the energy estimate module predicts the expected energy consumption of each task on each available VM and sends the data to the scheduler. This estimation requires information on task resource demands and the power efficiency of each VM.

A job submitted to the cloud is first decomposed into several tasks; the decomposition principle can be data-based or function-based. In practice, the total number of instructions and the I/O data size can be estimated by analyzing the submitted code or by exploiting other existing techniques. Indeed, there are many ways to estimate the resource demands of a task: the methods mentioned in [24] can be applied to I/O-intensive tasks, while, according to [25], the resource demands of tasks belonging to the same job are usually similar. In this paper, we use four “static” attributes to profile a task: the number of instructions, the size of data through disk input/output, the size of data through network transmission, and the job_id indicating the job it was generated from. The values of these attributes remain unchanged regardless of the scheduler’s decisions. In contrast, “dynamic” attributes, including the execution time and energy consumption of a task, depend on the features of the VM that executes it.
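For illustration, the four static attributes can be grouped into a simple profile type. The following Java sketch uses field names of our own choosing; the paper itself does not prescribe an implementation:

// Static task profile: fixed once the job is decomposed and independent
// of any scheduling decision.
class Task {
    final long jobId;             // the job this task was decomposed from
    final double numInstructions; // total instructions (million instructions)
    final double diskIoSize;      // disk I/O volume (MB)
    final double netDataSize;     // network transfer volume (MB)

    Task(long jobId, double numInstructions, double diskIoSize, double netDataSize) {
        this.jobId = jobId;
        this.numInstructions = numInstructions;
        this.diskIoSize = diskIoSize;
        this.netDataSize = netDataSize;
    }
}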

A VM’s power features are directly related to those of its host. Following the definition of power efficiency, we define the power efficiency of a server in three aspects:

$$\eta_{\text{cpu}} = \frac{S_{\text{cpu}}}{P_{\text{cpu}}}, \qquad \eta_{\text{disk}} = \frac{S_{\text{disk}}}{P_{\text{disk}}}, \qquad \eta_{\text{net}} = \frac{S_{\text{net}}}{P_{\text{net}}}, \tag{1}$$

where $S_{\text{cpu}}$ denotes the processor performance, quantified in MIPS (Million Instructions per Second); $S_{\text{disk}}$ and $S_{\text{net}}$ represent the maximum disk I/O rate and the maximum network transmission rate, respectively, both measured in MB/s; and $P_{\text{cpu}}$, $P_{\text{disk}}$, and $P_{\text{net}}$ are the power consumption of the corresponding functional components. All these data can be sampled on physical servers (e.g., $P_{\text{cpu}}$ can be obtained by measuring CPU power). For example, a processor delivering 20,000 MIPS at 100 W has $\eta_{\text{cpu}} = 200$ million instructions per joule. It is worth noting that $\eta_{\text{net}}$ denotes the power efficiency of transporting data between servers, and its pairwise values are stored in a matrix; these values are exploited to calculate the energy cost of multitask communication. In order to shield the complexity of the network, we use a simple star topology in designing the way tasks communicate with each other, assuming that only tasks decomposed from the same job transfer data between one another. We select one of them as a “designated task,” and the other tasks follow the principle that they only send data to or receive data from the “designated task” (Figure 2).

There exists a difference between the power efficiency of a VM and that of its host server because of virtualization; for example, different types of hypervisors incur different degrees of degradation in VM performance. We use $\delta_{\text{cpu}}$, $\delta_{\text{disk}}$, and $\delta_{\text{net}}$ (each in $(0, 1]$) to represent the degradation of VM power efficiency in processing, disk I/O, and data transmission, respectively. Thus the power efficiency of a VM can be expressed as

$$\eta^{\text{vm}}_{\text{cpu}} = \delta_{\text{cpu}}\,\eta_{\text{cpu}}, \qquad \eta^{\text{vm}}_{\text{disk}} = \delta_{\text{disk}}\,\eta_{\text{disk}}, \qquad \eta^{\text{vm}}_{\text{net}} = \delta_{\text{net}}\,\eta_{\text{net}}. \tag{2}$$
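As an illustration of (1) and (2), a server’s component efficiencies and the derived VM efficiencies can be computed as below; this is a minimal Java sketch with field names of our own choosing:

// Power efficiency of server components, per (1): performance per watt.
class ServerPowerProfile {
    double cpuMips, diskRate, netRate;    // S_cpu (MIPS), S_disk and S_net (MB/s)
    double cpuPower, diskPower, netPower; // P_cpu, P_disk, P_net (watts)

    double cpuEfficiency()  { return cpuMips / cpuPower; }   // MIPS per watt
    double diskEfficiency() { return diskRate / diskPower; } // MB/s per watt
    double netEfficiency()  { return netRate / netPower; }   // MB/s per watt

    // VM-side efficiencies, per (2): server efficiencies scaled by the
    // virtualization degradation factors delta in (0, 1].
    double vmCpuEfficiency(double deltaCpu)   { return deltaCpu * cpuEfficiency(); }
    double vmDiskEfficiency(double deltaDisk) { return deltaDisk * diskEfficiency(); }
    double vmNetEfficiency(double deltaNet)   { return deltaNet * netEfficiency(); }
}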

As a summary, Table 1 lists the power features of VMs.

The dynamic power consumption of cloud data centers is mainly produced by the workload on each running server, while the resource demands of tasks are the major source that drives server workloads. In a cloud environment, the demands of tasks can be modeled by the task attributes mentioned above. However, it is very difficult to precisely predict the workload as a whole because a server has several components (e.g., CPU, memory, disk, and NIC) that keep producing static (idle) and dynamic power. A practical way is therefore to consider the workload of each component separately. We adopt this ideology and propose to calculate the power of computing, storage access, and communication separately. In addition, we take the load of the whole server into account and use it to model performance loss.

Let $P^{\text{vm}}_{\text{cpu}}$, $P^{\text{vm}}_{\text{disk}}$, and $P^{\text{vm}}_{\text{net}}$ denote the VM’s power consumption in processing, disk I/O, and network data transfer, respectively. We assume that VMs stay busy when executing the tasks assigned to them, so we regard $P^{\text{vm}}_{\text{cpu}}$, $P^{\text{vm}}_{\text{disk}}$, and $P^{\text{vm}}_{\text{net}}$ as constants during execution. Considering task resource demands, VM power features, and the workload on host servers, we can estimate the energy consumption of task $t$ run on VM $k$ via

$$E_{t,k} = (1+\lambda)\left(P^{\text{vm}}_{\text{cpu}} T_{\text{cpu}} + P^{\text{vm}}_{\text{disk}} T_{\text{disk}} + P^{\text{vm}}_{\text{net}} T_{\text{net}}\right), \tag{3}$$

where

$$T_{\text{cpu}} = \frac{N_{\text{ins}}}{S^{\text{vm}}_{\text{cpu}}}, \qquad T_{\text{disk}} = \frac{D_{\text{disk}}}{S^{\text{vm}}_{\text{disk}}}, \qquad T_{\text{net}} = \frac{D_{\text{net}}}{S^{\text{vm}}_{\text{net}}}, \tag{4}$$

in which $N_{\text{ins}}$ denotes the number of instructions, while $D_{\text{disk}}$ and $D_{\text{net}}$ represent the amount of disk data throughput and the amount of data transferred through the network, respectively. These task attributes can be estimated by existing techniques. $\lambda$ is the performance loss caused by high workload on the server: intuitively, the higher the load a server works under, the greater the value of $\lambda$. The correlation between the workload of the CPU and that of other components is quite complex, but it is common knowledge that the performance of the whole system tends to degrade when the CPU is under high load. As a simplification, we model $\lambda$ as

$$\lambda = \alpha \, u_{\text{cpu}}, \tag{5}$$

where $u_{\text{cpu}}$ represents the current CPU utilization of the host server and $\alpha$ is the high-load penalty factor ($0 < \alpha < 1$). With (3) and (4), we finally have

$$E_{t,k} = (1+\lambda)\left(\frac{N_{\text{ins}}}{\eta^{\text{vm}}_{\text{cpu}}} + \frac{D_{\text{disk}}}{\eta^{\text{vm}}_{\text{disk}}} + \frac{D_{\text{net}}}{\eta^{\text{vm}}_{\text{net}}}\right), \tag{6}$$

where $\eta^{\text{vm}}_{\text{cpu}}$, $\eta^{\text{vm}}_{\text{disk}}$, and $\eta^{\text{vm}}_{\text{net}}$ represent the power efficiency of the VM regarding instruction processing, disk I/O, and data transmission. From (6) we can see that assigning tasks to virtual machines with high power efficiency is of great significance in reducing energy consumption. Meanwhile, the workload on servers should also be considered, because high load leads to great performance degradation, which increases the energy required to finish a task.
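The estimate in (6) translates directly into code. The following Java sketch uses our own method name and signature; it assumes the VM efficiencies are expressed in million instructions per joule and MB per joule (equivalent to MIPS per watt and MB/s per watt), so that the result is in joules. Task is the profile type sketched above:

// Estimated energy (joules) of running task t on a VM, per (6):
// E = (1 + lambda) * (N_ins/eta_cpu + D_disk/eta_disk + D_net/eta_net),
// with lambda = alpha * u_cpu modeling high-load performance loss, per (5).
static double estimateTaskEnergy(Task t,
                                 double vmCpuEff,    // million instructions per joule
                                 double vmDiskEff,   // MB per joule
                                 double vmNetEff,    // MB per joule
                                 double hostCpuUtil, // current host CPU utilization in [0, 1]
                                 double alpha) {     // high-load penalty factor
    double lambda = alpha * hostCpuUtil;             // performance-loss term (5)
    return (1.0 + lambda) * (t.numInstructions / vmCpuEff
                           + t.diskIoSize / vmDiskEff
                           + t.netDataSize / vmNetEff);
}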

3.2. Task Buffer

There are two methods to determine the scheduling order: FIFO mode and buffer mode (or batch mode). In pure FIFO mode, all tasks are organized and scheduled sequentially according to arrival time; FIFO mode thus provides the best fairness but may fail to satisfy the QoS (Quality of Service) of some specific tasks. As an improvement, buffer mode buffers a certain number of tasks and schedules them according to some principle. Buffer mode is similar to a priority queue but is not global, which preserves the scheduler’s efficiency while enhancing its effectiveness. Algorithms that adopt buffer or batch mode include Min-Min, Max-Min [9], and RASA [10]. In practice, it is not easy to determine the buffer’s size: an oversized buffer causes low efficiency, while making it too small may reduce the chance of finding better scheduling solutions.

In this paper, a variable-sized task buffer is adopted on top of a global FIFO queue. In more detail, tasks at the head of the FIFO queue are put into the buffer, and then their minimum energy consumption (over the currently available VMs) is estimated. Tasks with lower predicted energy consumption are scheduled with higher priority. Assume that the arrival of cloud tasks is a Poisson process with intensity $\rho$; then the expected task arrival interval is $1/\rho$. Hence, it is feasible to set the buffer size to a rounded multiple of $\rho$:

$$\text{Size} = \text{round}(c \cdot \rho), \tag{7}$$

where $c$ is a system parameter that can be set empirically. Increasing the size of the buffer helps find better (more energy-saving) scheduling solutions when tasks arrive intensively, while a smaller buffer makes the scheduler more efficient when the arrival rate is relatively low.
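A direct realization of (7) is shown below. The paper does not state $c$ explicitly, but the settings reported in Section 5 (buffer size 15 at $\rho = 3$ and 30 at $\rho = 6$) are consistent with $c = 5$:

// Buffer size per (7): a rounded multiple of the task arrival intensity rho.
static int bufferSize(double c, double rho) {
    return (int) Math.round(c * rho);   // e.g., c = 5, rho = 3 gives 15
}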

3.3. Task Scheduling Framework

Now we briefly depict the entire energy-aware task scheduling framework. After being submitted to the cloud, users’ jobs are first decomposed into several tasks. These tasks are put into a FIFO queue, and those at the head are then transferred to the task buffer. The energy estimate module is in charge of estimating the energy consumption of each task in the buffer. After receiving the output from the energy estimate module, the scheduler finishes scheduling this batch of tasks; then the next batch is pushed into the buffer and the above process repeats. Figure 3 illustrates the whole energy-aware task scheduling framework.

The algorithm inside the scheduler is the key to making energy-saving task allocations. Thus we propose an energy-saving heuristic task scheduling algorithm, presented in the next section.

4. VM Power Efficiency-Aware Greedy Scheduling Algorithm (VPEGS)

With the expansion of cloud data centers and the increase of computing demands from users, it is of great significance to consider the heterogeneity of both infrastructures and task demands. Currently, many studies (e.g., [23, 26]) focus only on VM consolidation, because it is an effective way to reduce wasted energy by controlling the workload on servers. However, if heavy load is imposed on servers with low power efficiency, more energy is required to guarantee the QoS of tasks, a situation that service providers are unwilling to face.

A feasible and effective solution is to consider power efficiency in task scheduling. In a virtualized environment, colocated VMs can be regarded as having equal power efficiency, which can be calculated by applying (2). Thus, assuming that the infrastructure supports VM precreating and delayed shutdown, we propose a virtual machine power efficiency-aware greedy scheduling algorithm (VPEGS). The algorithm takes VM power efficiency and task demands into account and provides energy-saving task scheduling. We first list the parameters used in the algorithm with brief descriptions (Table 2).

VPEGS is heuristic and takes the estimated task execution energy as its evaluation function. We exploit (6) to estimate the execution energy $E_{t,k}$ of task $t$ on VM $k$, considering VM efficiency, the efficiency loss caused by virtualization, and the performance loss caused by high server workload. Since we adopt a task buffer, the process of scheduling is similar to Min-Min and RASA. In other words, the program attempts to search the buffer for a pair $(t^{*}, k^{*})$ that satisfies

$$E_{t^{*},k^{*}} = \min_{t \in \text{Buffer}} \; \min_{1 \le k \le m} E_{t,k}, \tag{8}$$

where $m$ is the number of VMs currently available. Then, in this round, the scheduler assigns task $t^{*}$ to VM $k^{*}$. The pseudocode of VPEGS is shown in Algorithm 1.

Input: Q (global FIFO task queue), V (set of available VMs), n (buffer size)
Output: Mapping
(1) Initialize Buffer
(2) Initialize min_energy = MAX_FLOAT
(3) while Q is not empty do
(4)   for i = 1 to n do
(5)     t = Q.dequeue()
(6)     add t into Buffer
(7)   end
(8)   while Buffer is not empty do
(9)     for each task t in Buffer do
(10)      for each VM k in V do
(11)        calculate task_energy(t, k)
(12)        if task_energy(t, k) < min_energy then
(13)          min_energy = task_energy(t, k)
(14)          selected_task = t
(15)          selected_VM = k
(16)        end if
(17)      end for
(18)    end for
(19)    assign selected_task to selected_VM
(20)    remove selected_task from Buffer
(21)    update the states of V and Buffer; reset min_energy = MAX_FLOAT
(22)  end while
(23) end while
(24) return Mapping

The task buffer is initialized first and is then filled by dequeuing tasks from the head of the global FIFO queue. After the buffer is filled (or the FIFO queue becomes empty), the scheduler computes $E_{t,k}$ for each buffered task on every available VM (line (11)); the time complexity of this step equals that of inspecting an $n \times m$ matrix. The minimum element is found, and the corresponding task ID and VM ID are recorded (lines (14)~(15)). Then the selected task is assigned to the selected VM. This process repeats until the buffer is empty, after which the next batch of tasks is sent into it.

We analyze the complexity of VPEGS as follows. Each assignment decision has to check the whole matrix, whose size is $n \times m$, so the complexity of assigning one task is $O(nm)$. Suppose the total number of arrived tasks is $N$; then the overall time complexity of the scheduling is $O(Nnm)$.
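For concreteness, the inner loop of Algorithm 1 (lines (8)~(22)) might look as follows in Java. The VmState abstraction and its estimateTaskEnergy hook are our own scaffolding around (6) and (8), not part of the paper:

import java.util.List;
import java.util.Map;

// Minimal VM abstraction assumed for this sketch.
interface VmState {
    boolean isAvailable();
    double estimateTaskEnergy(Task t);  // evaluates (6) for this VM and its host
    void assign(Task t);                // marks the VM busy with task t
}

// One batch of Algorithm 1: repeatedly pick the task-VM pair with the
// minimum estimated energy, per (8), until the buffer is empty.
static void scheduleBuffer(List<Task> buffer, List<VmState> vms,
                           Map<Task, VmState> mapping) {
    while (!buffer.isEmpty()) {
        Task selectedTask = null;
        VmState selectedVm = null;
        double minEnergy = Double.MAX_VALUE;   // reset every round
        for (Task t : buffer) {
            for (VmState vm : vms) {
                if (!vm.isAvailable()) continue;   // only idle VMs are candidates
                double e = vm.estimateTaskEnergy(t);
                if (e < minEnergy) {
                    minEnergy = e;
                    selectedTask = t;
                    selectedVm = vm;
                }
            }
        }
        if (selectedVm == null) break;   // no VM currently available
        selectedVm.assign(selectedTask);
        mapping.put(selectedTask, selectedVm);
        buffer.remove(selectedTask);
    }
}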

5. Algorithm Evaluation

5.1. Experimental Setup

We implemented VPEGS and evaluated it in a simulated environment. We also implemented Min-Min [9], RASA [10], and Random-Mapping [11] in order to compare their effectiveness. The algorithms and test programs were written in Java (JDK version 1.8.0_65). The simulation was run on a PC equipped with a dual-core Pentium CPU (2.10 GHz) and 4.0 GB memory.

For every task decomposed from a job, the experimental settings of its attributes are listed in Table 3; the unit of $N_{\text{ins}}$ is million instructions, while $D_{\text{disk}}$ and $D_{\text{net}}$ are measured in MB. In order to simulate the difference in power efficiency between heterogeneous physical servers, we set 5 types of servers, whose configurations are listed in Table 4. In the experiment, we suppose the power efficiency of data transfer is infinite (i.e., zero overhead) if two VMs are server-local; otherwise, it equals 50. Meanwhile, the task with the smallest task_id is always appointed as the “designated task” when multiple tasks belonging to the same job are active in the virtual cluster. Thus the elements of the matrix $\eta_{\text{net}}$ are defined as

$$\eta_{\text{net}}(i, j) = \begin{cases} \infty, & \text{if VM } i \text{ and VM } j \text{ are on the same server}, \\ 50, & \text{otherwise}. \end{cases} \tag{9}$$
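Under this setting the matrix elements need not be stored explicitly; they can be generated on demand, as in this small sketch (hostOf is an assumed VM-to-server lookup):

// Power efficiency of data transfer between VM i and VM j, per (9):
// "infinite" (zero energy cost) when colocated, otherwise 50.
static double netTransferEfficiency(int vmI, int vmJ, int[] hostOf) {
    return (hostOf[vmI] == hostOf[vmJ]) ? Double.POSITIVE_INFINITY : 50.0;
}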

5.2. Experimental Results

In the experiment we fixed the number of servers at 100, configured according to Table 4. The program randomly generated 250 to 300 VMs in the initialization phase. The high-load penalty factor $\alpha$ was set to 0.15. The intervals of task arrivals followed an exponential distribution with $\rho = 3$, and the buffer size was initially set to 15. After initialization, the test program used VPEGS, Min-Min, RASA, and Random-Mapping (RM) in turn as the scheduling strategy to run the simulation. Each test was repeated 30 times, and we took the average as the result. The comparison of total system energy consumption is shown in Figure 4.
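For reference, inter-arrival gaps for a Poisson process of intensity $\rho$ can be sampled by inverse-CDF transformation; the following Java sketch is our own illustration, not the paper’s test program:

import java.util.Random;

// Arrival instants of a Poisson process with intensity rho:
// inter-arrival gaps are i.i.d. exponential with mean 1/rho.
static double[] poissonArrivals(int n, double rho, Random rng) {
    double[] times = new double[n];
    double t = 0.0;
    for (int i = 0; i < n; i++) {
        t += -Math.log(1.0 - rng.nextDouble()) / rho;  // exponential gap
        times[i] = t;
    }
    return times;
}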

The results illustrate that VPEGS performed best among the four scheduling algorithms with respect to energy saving (Figure 4). Min-Min and RASA had similar performance, since the heuristic principles behind them are similar; VPEGS saved 29.1% and 28.6% energy compared to them on average. As for the reason, we argue that Min-Min and RASA can be energy-saving to some extent because shortening the overall execution time reduces the consumption attributable to server idle power. However, as power efficiency is not taken into account, assigning tasks to high-performance nodes may cause extra energy consumption. In contrast, VPEGS considers both the performance and the power features of VMs and exploits power efficiency as the prime metric. Specifically, Min-Min and RASA are more likely to utilize the servers with the greatest processing speed or throughput rate, whereas VPEGS prefers those with high power efficiency. In our experiment, a large number of high-performance servers actually had comparatively low power efficiency; as a result, VPEGS showed its advantage in energy saving. It is also noticeable that Random-Mapping (RM) seemed to be slightly more energy-saving than Min-Min and RASA. Essentially this is because RM assigns tasks evenly, so high workload is usually not imposed on servers with low power efficiency. On average, VPEGS outperformed Random-Mapping by about 23.0%.

We also see that, when the number of servers is fixed, it becomes harder to maintain energy-saving performance as the number of tasks increases (Figure 5). When virtual resources are sufficient to satisfy the tasks’ demands, using VPEGS can reduce total energy consumption by more than 20%. However, as the task arrival rate remained unchanged ($\rho = 3$), the workload of the whole cluster grew with the total number of tasks; in other words, the comparatively energy-efficient VMs were gradually used up. On the fixed-scale simulated data center with 100 heterogeneous servers, the performance of VPEGS degraded in our experiment when the number of tasks exceeded 90 (Figure 5).

We mentioned that the size of the task buffer may influence the performance of the scheduling algorithms. To verify this, we changed the task arrival rate $\rho$ to 6; correspondingly, the size of the buffer was adjusted to 30. We reinitialized the clusters and conducted the experiment again. Figures 6 and 7 show the results. In this case, compared with Min-Min, RASA, and RM, VPEGS saved on average 29.8%, 29.0%, and 23.3% energy, respectively. It is a little surprising that enlarging the task buffer did not have a big impact, but we also see that the performance of VPEGS in this case was slightly improved when the number of tasks exceeded 100.

Tasks may not achieve their earliest completion time under VPEGS, since energy consumption is the primary consideration. There is a kind of conflict, as mentioned in [27], between optimizing execution time and optimizing energy consumption: VPEGS, to some degree, sacrifices the efficiency of task execution to attain the goal of saving more energy, whereas Min-Min and RASA pay more attention to reducing total makespan and task execution time. Figure 8 shows the total task execution time of Min-Min, RASA, Random-Mapping, and VPEGS with the buffer size equal to 15. As expected, Min-Min and RASA are effective in shortening the overall execution time of all the tasks. The reason is simple: they take predicted task completion time as the heuristic. Besides, short tasks outnumbered long tasks in our experiment, so RASA yielded no better performance in reducing total execution time than pure Min-Min. We ran the test again after changing the task arrival rate and the buffer’s size (Figure 9). Comparing Figures 8 and 9, we can see that the change of buffer size did not affect the total task execution time of VPEGS, but the time of RASA was shortened when we reduced the number of tasks per batch (Figure 9). This is because, when the number of tasks in the buffer was reduced, RASA was more likely to make the same decisions as Min-Min.

We also carried out experiments to test the impact of the buffer’s size on scheduling overhead (Figure 10). Scheduling overhead represents the average time the scheduler takes to make a task assignment decision; in the experiment, we use the average time a task stays in the buffer to evaluate the scheduler’s efficiency.

Although Min-Min, RASA, and VPEGS theoretically have the same time complexity, Figure 10 shows that Min-Min incurs the smallest scheduling overhead among the three heuristic algorithms. The reason is that VPEGS spends extra time on estimating task energy, while RASA checks whether the number of available VMs is odd before assigning tasks. VPEGS is slightly more efficient than RASA since it does not check the odd-even property and only considers the current availability of VMs; in other words, under VPEGS tasks never wait for occupied VMs, even though such waiting sometimes helps shorten the makespan and execution time. On this point, we conclude that VPEGS, as a heuristic algorithm, suffers only small scheduling overheads when an appropriate task buffer size is adopted.

In summary, the experimental results illustrate that, as the scheduler of our proposed energy-aware scheduling framework, VPEGS is effective at scheduling tasks in an energy-saving manner. Compared with traditional scheduling algorithms that focus on optimizing overall makespan and task execution time, VPEGS takes into account task resource demands, VM power efficiency, and server workload. The target of algorithms like Min-Min and RASA is to shorten the makespan and total execution time of a batch of tasks. This is somewhat helpful for saving system energy when the differences between servers’ power efficiencies are small and server idle power dominates total energy consumption. However, with the fast expansion of data centers, cloud infrastructures may consist of hundreds of different types of servers. This heterogeneity makes it necessary to consider more factors, including server performance, power efficiency, and server workload. Aiming at reducing the energy consumption of heterogeneous clusters, VPEGS provides a highly feasible way to conduct energy-aware task scheduling. We list the main advantages of VPEGS as follows:

(i) Multiple factors that influence system energy consumption are considered. VPEGS conducts scheduling according to an estimate of task energy that takes into account task resource demands, VM power efficiency, server workload, and performance loss.

(ii) VPEGS realizes fine-grained resource provisioning and task scheduling at the level of virtual machine clusters supporting “precreating” and “delayed shutdown.”

(iii) VPEGS is highly feasible since it works without any training. Besides, the size of the task buffer is set adaptively to balance the scheduler’s performance and efficiency.

(iv) A greedy strategy is used to realize low-overhead task scheduling.

6. Conclusion and Future Work

Cloud computing is believed to have great potential in satisfying diverse computing demands from both individuals and enterprises, but at the same time the overconsumption of electricity by cloud data centers has become a serious concern. Considering the virtualized environment of cloud data centers, in this paper we propose an energy-aware task scheduling framework consisting of a task resource requirements prediction module, an energy estimate module, and a scheduler. Based on this framework, we propose a heuristic task scheduling algorithm named VPEGS, which takes into account task resource demands, server/VM power efficiency, and server workload. Oriented to heterogeneous cloud environments, the proposed algorithm needs no training and is able to schedule tasks in an energy-saving manner. VPEGS shares a similar heuristic ideology with Min-Min and RASA, but it prominently saves system energy at the cost of some efficiency in task execution. Experiments based on simulation were carried out to evaluate VPEGS. The results show that VPEGS reduced system energy consumption by over 20% compared with the strategy of Random-Mapping. It also outperformed Min-Min and RASA in energy saving by approximately 29% and 28%, respectively, without producing large scheduling overheads.

Future research will focus on effectively combining task scheduling and VM consolidation strategies in order to further enhance energy saving. Besides, we plan to investigate more deeply the factors and technologies (e.g., Dynamic Voltage and Frequency Scaling) that influence servers’ power efficiency.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

Thanks are due to the anonymous reviewers for their helpful comments and suggestions. This work is partially supported by the National Natural Science Foundation of China (Grant no. 61402183), Guangdong Natural Science Foundation (Grant no. S2012030006242), Guangdong Provincial Scientific and Technological Projects (Grants nos. 2016A010101007, 2016B090918021, 2014B010117001, 2014A010103022, 2014A010103008, 2013B090200021, and 2013B010202001), Guangzhou Science and Technology Projects (Grants nos. 201601010047 and 20150504050525159), and the Fundamental Research Funds for the Central Universities, SCUT (no. 2015ZZ0098).