Abstract

Limited battery life and poor computational resources of mobile terminals are challenging problems for present and future computation-intensive mobile applications. Wireless powered mobile edge computing is one solution, in which wireless energy transfer technology and cloud-server-like capabilities are brought to the edge of cellular networks. In wireless powered mobile edge computing systems, the mobile terminals charge their batteries through radio frequency signals and offload their applications to the nearby hybrid access point in the same time slot, in order to minimize their energy consumption and ensure uninterrupted connectivity with the hybrid access point. However, the smart division of an application into subtasks, as well as the intelligent partitioning of the time slot between harvesting energy and offloading data, is a complex problem. In this paper, we propose a novel deep-learning-based offloading and time allocation policy (DOTP) for training a deep neural network that divides the computation application into an optimal number of subtasks, decides which subtasks are offloaded and which are executed locally (offloading policy), and divides the time slot between data offloading and energy harvesting (time allocation policy). DOTP takes into account the current battery level, energy consumption, and time delay of the mobile terminal. A comprehensive cost function is formulated, which uses all the aforementioned metrics to calculate the cost for every candidate number of subtasks. We propose an algorithm that selects the optimal number of subtasks, partial offloading policy, and time allocation policy to generate a large dataset for training a deep neural network and hence avoid the huge computational overhead of partial offloading. Simulation results are compared with the benchmark schemes of total offloading, local execution, and partial offloading.
It is evident from the results that the proposed algorithm outperforms the other schemes in terms of battery life, time delay, and energy consumption, with 75% accuracy of the trained deep neural network. The achieved decrease in total energy consumption of the mobile terminal through DOTP is 45.74%, 36.69%, and 30.59% as compared to the total offloading, partial offloading, and local execution schemes, respectively.

1. Introduction

Due to the advancements in cellular technologies and applications of the Internet of things (IoT), our daily life activities, such as smart healthcare, smart homes, and smart driving, are becoming dependent on mobile terminals (MTs) [1, 2]. The aforementioned applications are computation-intensive and energy-hungry and require large storage and fast execution [3]. To meet these requirements, one attempt is the cloud computing system, which contains centralized servers with high processing power and storage capabilities [4, 5]. However, delay-sensitive applications such as online simulators, live streaming, and online games cannot be executed through cloud servers due to the inherent problem of high latency caused by the long distance between cloud servers and the MTs [6, 7]. The problems of high latency and centralized architecture can be overcome to a certain extent through fog nodes placed closer to the request generators in fog computing. However, there remain challenges of task offloading, resource distribution, and degraded performance of network links [8].

An alternative for delay-sensitive applications is mobile edge computing (MEC), which offers services similar to those of cloud computing in a distributed manner and can be regarded as cloud computing at the edge of cellular networks. In the MEC environment, an MT can offload its computational applications to a nearby mobile edge server (MES) with much lower delay than the cloud servers. The challenging problems of MTs, that is, limited computational power, limited storage, and high execution delays, are all tied to the battery life of the MT. Unfortunately, MTs have limited battery life, which leads to compromised performance [9]. To ensure the connectivity of an MT with a nearby base station (BS), the battery of the MT needs regular charging, which incurs additional hardware and physical cost. The technology of wireless energy transfer (WET) is explained in [10], in which the battery of an MT can be charged wirelessly through radio frequency (RF) signals. However, energy harvesting from RF signals transmitted for information exchange leads to security issues at the physical layer. Hence, either the deployment of multiple antennas at the MT with diversity techniques or a dedicated energy transmitter (ET) at the BS is preferred [11]. Recently, the incorporation of WET technology into MEC systems has become an active area of research. A model that utilizes the advantages of both WET technology and MEC is called wireless powered MEC (WP-MEC) [12]. Such a model can simultaneously improve the MT's battery life through energy harvesting and its computational capability through offloading.

There are two main types of computational offloading. In coarse-grained computational offloading, the whole application is either executed locally at the MT or offloaded to the remote MES [13]. Meanwhile, in fine-grained computational offloading [14], the whole application is first divided into a number of subtasks, which can then be offloaded according to different offloading policies. For example, if an application of 10 Gb is divided into 4 subtasks, then these 4 subtasks can be offloaded according to the offloading policies shown in Table 1.
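The policy enumeration described above can be sketched in Python; this is our own illustration (the function name is a hypothetical helper, not from the paper), showing that n subtasks admit 2^n binary offloading policies:

```python
from itertools import product

def offloading_policies(n):
    """Enumerate all binary offloading policies for n subtasks.

    Each policy is a tuple of 0/1 flags: 0 = execute the subtask locally,
    1 = offload it to the MES.
    """
    return list(product((0, 1), repeat=n))

# For the 4-subtask example there are 2^4 = 16 candidate policies.
policies = offloading_policies(4)
```

Enumerating all policies is exactly what makes exhaustive search expensive, which motivates the deep learning shortcut proposed later.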

The number of subtasks per application and the selection of the offloading policy mainly determine the energy consumption and time delay of the MT. Therefore, it is important to divide the application into an optimal number of subtasks and select an offloading policy with minimum cost.

In a WP-MEC system, an MT offloads its computational application to the hybrid access point (HAP) and harvests energy from the RF signals of the energy transmitter (ET). The MT harvests energy in one part of a time slot and offloads data in the remaining part of the same slot. Therefore, the intelligent division of a time slot into fractions for energy harvesting and data offloading, known as the time allocation policy, is a crucial point of consideration. The number of possible time allocation policies depends on the time resolution, Δt, and is equal to 1/Δt; decreasing Δt increases the number of time allocation policies. We can find the cost of all possible policies and then select the optimal one with minimum cost. Example time allocation policies are given in Table 2. Both the battery life and the computation power can be improved through the optimal selection of the number of subtasks per application and the time allocation policy for energy harvesting and data offloading. The time allocation policy is investigated through the Lyapunov optimization technique in [15] for a WP-MEC system. However, such numerical solutions suffer from huge computational overhead, which increases the complexity of the algorithm and the execution delay. To avoid this overhead in partial offloading, emerging artificial intelligence (AI) concepts can be used.
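The candidate time allocation policies can likewise be enumerated; the sketch below is our own illustration and assumes, as a modeling choice, that the harvesting fraction excludes the degenerate endpoints 0 and 1:

```python
def time_allocation_policies(delta=0.01):
    """Candidate harvesting fractions tau for one time slot.

    tau is the fraction of the slot spent harvesting energy; the remaining
    1 - tau of the slot is used for data offloading. The resolution delta
    controls how many candidates exist (finer delta -> more policies).
    """
    steps = round(1.0 / delta)
    return [k * delta for k in range(1, steps)]  # exclude tau = 0 and tau = 1

taus = time_allocation_policies(0.01)  # 99 candidate policies at delta = 0.01
```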

In this paper, we propose a deep learning approach to find the best time allocation policy and partial offloading policy with the optimal number of subtasks per application. Our proposed cost function depends on the time delay, energy consumption, and current battery level of the MT. Initially, if the battery level is high, our cost function focuses on the time delay of application execution; it gives more importance to the energy consumption when the battery is low. The considered WP-MEC system consists of an HAP (containing an ET, an MES, and an AP) and MTs. Due to its size limitation, the MT is considered to have one antenna that is used for both energy harvesting and computational offloading. In contrast, the larger size of the HAP allows separate antennas for energy transmission and data communication, as shown in Figure 1. The target of the proposed approach is to utilize WET and the partial offloading technique with an optimal number of subtasks per application to improve the battery life of the MT and make the execution of applications faster. The novelty and contributions of the proposed work are outlined as follows:

(i) We find the time allocation policy for energy harvesting and data offloading, the partial offloading policy, and the optimal number of subtasks per application simultaneously, through a deep learning approach in a WP-MEC system, to improve the battery life and the execution delay of the MT.

(ii) We formulate a cost function which depends on the energy consumption, time delay, and current battery level of the MT. The cost function has two main parts: an energy consumption part and a time delay part. A minimum cost value means that the energy consumption and time delay are minimal; moreover, the cost function updates itself through battery-level-dependent weighting coefficients, which decide the relative contribution of the time delay part and the energy consumption part.

(iii) Simulation results show that our proposed algorithm prolongs the battery life and ensures faster execution and minimum energy consumption.

The remainder of the paper is organized as follows: Section 2 provides the related work of the proposed technique. Section 3 presents the mathematical model and the proposed algorithm. In Section 4, the numerical simulations and results are discussed. Section 5 concludes the paper and suggests some future directions.

2. Related Work

For the improvement of the MT’s battery life and ensured connectivity with the HAP during partial offloading, the time slot is divided into two parts: (a) the time fraction in which the MT charges its battery through RF signals and (b) the time fraction in which the MT offloads data to the HAP. This division of the time slot, referred to as the time allocation policy, is discussed in [15, 16]. The joint problem of time allocation (for power transfer and data offloading) and offloading mode selection (for offloading or local execution of an application) is studied in [12], and a total computational offloading technique for all energy harvesting MTs is proposed. Based on computation rate, a maximization problem is studied in [17] to reduce the propagation loss that severely affects the harvested energy and computation performance of the MT. For the joint optimization of time allocation among MTs, CPU frequencies, transmitted energy at the AP, and the number of offloaded bits, a multiuser WP-MEC system is considered in [18]. The authors in [15] propose an algorithm for improving the computation rate by jointly considering the computational offloading and time allocation policies in a WP-MEC system with one MT. The enhancement of the computational performance of active MTs through user cooperation is investigated in [19], where the inactive MTs use their harvested energy to help the active ones. For the computational offloading policy, the authors in [19] consider frequency division multiple access (FDMA) to improve the computation rate. The maximum-minimum energy efficiency optimization problem (MMEP), with joint optimization of energy consumption, time slots for computational offloading and energy transfer, and transmitted power at the HAP in a WP-MEC system, is the focus of [20], where, by applying block coordinate descent (BCD) and fractional programming theory, the authors present algorithms with lower complexity.
In [21], the computational energy efficiency of the entire system is improved by jointly optimizing the MT’s transmitted power, CPU frequency, and transmission time in a WP-MEC system. In [22], the maximization of system energy efficiency by jointly considering optimal time allocation, local computing capacity, energy consumption, and application offloading is discussed. In [23], the authors consider a stochastic method for battery management and resource allocation decisions in a time slot and propose an algorithm derived from the Lyapunov optimization technique. In [24], the authors utilize Lyapunov optimization to evaluate the tradeoff between delay and energy efficiency of a multiuser WP-MEC system. All the abovementioned works mainly focus on improving the computation rate; however, the battery life of the MT is ignored, and the number of subtasks into which an application should be divided for partial offloading is assumed to be fixed.

In most studies on WP-MEC, the investigation of time allocation and resource allocation policies is based on numerical optimization techniques such as Lyapunov optimization and Lagrange multipliers. However, for the complex optimization problems of allocating resources in WP-MEC, these techniques do not guarantee global minima/maxima, and thus the performance of algorithms based on them degrades. Additionally, these approaches involve lengthy calculations and thus large computational overhead, which increases the computation delay and the energy consumption of MTs. The increased energy consumption degrades the battery life of MTs, and consequently the connectivity of MTs with the HAP becomes limited. An alternative solution is required for fast division of the application into subtasks and for optimal time allocation and resource allocation decisions with minimum energy consumption and improved battery life. This requirement motivates a deep learning approach, which decreases computational complexity by training on a precomputed dataset.

Little work has been done on optimal time allocation and offloading policies in WP-MEC systems using a deep learning approach. In [25], the authors investigate deep reinforcement learning for online offloading to reduce the computational complexity of offloading decisions and resource allocation in large-size networks, while ignoring the cost function. The main focus of [25] is the minimization of execution latency, while the battery life of the MT, energy consumption, and optimal division of the application are not taken into account. In [26, 27], the authors consider metrics such as subtask energy consumption, the MT’s remaining energy, computational load, and network conditions and propose a deep-learning-based offloading policy. However, their work ignores the energy harvesting capability of the HAP and considers only an MES at the edge of the cellular network. A comparison table of the work related to the deep-learning-based offloading and time allocation policy (DOTP) is given in Table 3.

From the study of related work on binary offloading policies in WP-MEC systems, we find two options for application execution. (1) If an application is executed locally at the MT, energy consumption is reduced; however, due to limited computational resources, the time delay is higher. (2) If all the subtasks of an application are executed remotely on the MES, the computational time delay is smaller, as the MES is assumed to have sufficient computational resources; however, offloading the data raises the energy consumption of the MT. The third option is the partial offloading policy, that is, dividing the application into a number of subtasks, some of which are executed locally while the rest are offloaded to the MES for remote execution. In this case, the decision to execute a subtask either locally on the MT or remotely on the MES depends on the cost calculated through the cost function. This paper presents a deep-learning-based approach to find the number of subtasks (optimal in terms of cost) into which the application should be divided and then applies the partial offloading policy. When a subtask is offloaded, the time allocation policy determines the time fractions for the MT’s energy harvesting and data offloading per time slot. The proposed cost function jointly considers the remaining battery level, time delay, and energy consumption of the MT in the cost calculation for an application. Simulation results show that the proposed DOTP technique performs better than the benchmark schemes in terms of battery life, time delay, and energy consumption.

3. System Model

In our WP-MEC system model, we consider an HAP communicating with a single MT. From the implementation point of view, the HAP is assumed to have two antennas used in a time division duplex (TDD) manner for energy transmission and data communication separately, in the same frequency band. Meanwhile, the MT is considered to have a single antenna; therefore, it follows the first-harvest-then-offload protocol through the same antenna on the downlink (DL) and uplink (UL), respectively. During remote execution, the MT first harvests energy for part of the time slot and then transmits its subtask for execution to the MES in the rest of the slot. Moreover, the HAP is assumed to have sufficient energy and processing resources, while the MT is considered to have limited computational resources and battery. Thus, the MT uses the energy transmitted by the HAP to prolong its battery life for offloading and local execution of subtasks. In partial offloading, some of the subtasks are offloaded to the MES for execution. We divide a time slot for offloading into two parts, called the time allocation policy: in the first part, the MT harvests energy, and in the second part it offloads its data to the MES. The downloading time is small compared with the transmitting time and is therefore assumed to be negligible. The time allocation policy is denoted by (τ_i, 1 − τ_i), where τ_i is the fraction of a single time slot for which the MT harvests energy from the HAP and 1 − τ_i is the fraction of the same slot in which the MT offloads its subtask to the MES. We introduce a binary variable a_i for subtask i, which defines the decision of local or remote execution: a_i = 0 indicates local execution of a subtask on the MT and a_i = 1 indicates remote execution at the MES. A detailed list of notations used in this paper is given in Table 4. Two models are considered, namely, the local execution model and the remote execution model.

3.1. Local Execution

For local execution of subtask i, that is, a_i = 0, the time delay of local execution, t_i^l, is calculated as

t_i^l = (C d_i) / f_i,

where f_i is the CPU frequency of the MT for subtask i, C is the number of CPU cycles used for the execution of one byte of data, and d_i is the input data of subtask i, executed locally at the MT. The energy consumption of the MT for a single subtask in local execution is formulated as

e_i^l = κ f_i^2 C d_i,

where κ, a dimensionless quantity, represents the effective switching capacitance factor that depends on the architecture of the chip [28]. The product κ f_i^3 represents the computation power of the MT. The total cost for subtask i executed locally at the MT is defined as the summation of the normalized energy consumption and the normalized time delay for subtask i and is given by

cost_i^l = w_E α_E e_i^l + w_T α_T t_i^l,

where α_E and α_T are the unit balancing coefficients defined, respectively, as

α_E = 1 / E_max,  α_T = 1 / T_max.

For the whole application, E_max and T_max are the maximum energy consumption and maximum time delay, respectively, having fixed values. w_E and w_T are the weighting coefficients that depend on the current level of the MT’s battery. These coefficients are used to give priority to either data offloading or energy harvesting of the MT. The cost function checks the current battery level and decides the time fraction for energy harvesting. Initially, when the battery level is high, the time fraction for energy harvesting is low and the time fraction for offloading is high. However, when the battery level decreases and falls below a threshold value, our cost function increases the time fraction for energy harvesting to its maximum value.
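The local execution model above can be sketched numerically as follows. This is our own illustration; the value of κ is an assumed placeholder (the paper's value is given in its parameter table), and C = 737.5 cycles/byte is taken from the simulation section:

```python
def local_metrics(d_bytes, f_hz, C=737.5, kappa=1e-27):
    """Local execution delay t_i^l and energy e_i^l for one subtask.

    d_bytes: input data of the subtask (bytes)
    f_hz:    MT CPU frequency (Hz)
    C:       CPU cycles per byte (from the simulation parameters)
    kappa:   effective switching capacitance factor (assumed placeholder)
    """
    t_local = C * d_bytes / f_hz              # t_i^l = C d_i / f_i
    e_local = kappa * f_hz**2 * C * d_bytes   # e_i^l = kappa f_i^2 C d_i
    return t_local, e_local

def local_cost(t_local, e_local, t_max, e_max, w_t, w_e):
    """Normalized, weighted local cost: w_E e/E_max + w_T t/T_max."""
    return w_e * e_local / e_max + w_t * t_local / t_max

t, e = local_metrics(d_bytes=1e6, f_hz=1e9)  # a 1 MB subtask at 1 GHz
```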

3.2. Remote Execution

For remote execution, that is, a_i = 1, the subtask is offloaded to the HAP to utilize the processing power of the MES. The processing power of the MES and the transmission power of the HAP are assumed to be sufficient. Consequently, the delay of downloading from the HAP to the MT is taken as zero and the total time is divided into two parts only, that is, T = t_i^h + t_i^UL [22]. The UL channel data rate from MT to HAP, r_i^UL, can be obtained using the Shannon-Hartley theorem as follows:

r_i^UL = B log2(1 + p_i h^UL / (N0 B)),

where p_i denotes the transmitted power of the MT for subtask i. For the UL, the noise spectral density is represented by N0 and B is the available bandwidth. The wireless channel between MT and HAP is considered to be an independent and identically distributed block fading channel. g represents the UL small-scale fading channel power gain. The channel power gain from MT to HAP is calculated as h^UL = g θ (d0/d)^ν, where θ is the path loss constant, d0 is the reference distance, ν is the path loss exponent, and d is the distance between MT and HAP [29]. In a similar way, the DL channel data rate from HAP to the MT, r_i^DL, can be formulated as

r_i^DL = B log2(1 + P_i^H h^DL / (N0 B)),

where P_i^H represents the transmitted power of the HAP for subtask i.
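As a numerical illustration of the rate model (our own sketch, not the paper's code): the default bandwidth, noise density, and path loss constant below are taken from the simulation section, while the path loss exponent and unit fading gain are assumed placeholders:

```python
import math

def uplink_rate(p_tx, d, B=0.5e6, n0_dbm_hz=-174.0,
                theta_db=-30.0, nu=3.0, d0=1.0, g=1.0):
    """Shannon-Hartley UL rate in bits/s.

    p_tx: MT transmit power (W); d: MT-HAP distance (m)
    B: bandwidth (Hz); n0_dbm_hz: noise spectral density (dBm/Hz)
    theta_db: path loss constant at d0; nu: path loss exponent (assumed)
    g: small-scale fading power gain (assumed = 1 for illustration)
    """
    n0 = 10 ** (n0_dbm_hz / 10) / 1000        # dBm/Hz -> W/Hz
    h = g * 10 ** (theta_db / 10) * (d0 / d) ** nu  # channel power gain
    snr = p_tx * h / (n0 * B)
    return B * math.log2(1 + snr)
```

As expected from the path loss model, the achievable rate falls as the MT moves away from the HAP.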

For calculating the time delay of remote execution, the UL time for data transmission, the DL time for data reception, and the execution time at the MES are considered. The UL time delay in transmitting the data of subtask i from MT to HAP, t_i^UL, is given as

t_i^UL = d_i / r_i^UL.

In a similar way, the DL time delay in receiving the data of subtask i from HAP at the MT, t_i^DL, is formulated as

t_i^DL = d_i' / r_i^DL,

where d_i' is the output data of subtask i and the input data of subtask i + 1. The execution time delay at the MES, t_i^exec, is calculated as

t_i^exec = C d_i / f_MES,

where f_MES is the CPU frequency of the MES. The offloading time delay for subtask i, t_i^off, is obtained as

t_i^off = t_i^UL + t_i^exec + t_i^DL.

The MES is considered to have high computation power due to the large number of CPUs in it. Thus, due to the high CPU frequency of the MES, the execution time delay, t_i^exec, may be ignored. Moreover, the output data of a subtask, d_i', is very small compared with the input data, d_i, and hence the DL time delay, t_i^DL, can be considered approximately zero. Consequently, the offloading time delay of subtask i can be approximated as

t_i^off ≈ t_i^UL.

The MT’s offloading energy consumption for subtask i, e_i^off, in the remote execution model is formulated as

e_i^off = p_i t_i^UL.

The total cost for subtask i, executed remotely at the MES, is given by

cost_i^off = w_E α_E e_i^off + w_T α_T t_i^off,

where α_E and α_T are the unit balancing coefficients, as in (3).
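The approximated remote execution model can be sketched in the same style as the local one (our own helper names; input data is expressed in bits to match the link rate, and the execution and DL delays are neglected as argued above):

```python
def remote_metrics(d_bits, p_tx, rate_ul):
    """Approximate offloading delay and energy for one subtask.

    d_bits:  input data of the subtask (bits)
    p_tx:    MT transmit power (W)
    rate_ul: UL data rate (bits/s)
    """
    t_ul = d_bits / rate_ul   # t_i^off ~= t_i^UL (exec and DL delays ignored)
    e_off = p_tx * t_ul       # e_i^off = p_i t_i^UL
    return t_ul, e_off

def remote_cost(t_ul, e_off, t_max, e_max, w_t, w_e):
    """Normalized, weighted remote cost: w_E e/E_max + w_T t/T_max."""
    return w_e * e_off / e_max + w_t * t_ul / t_max

t_off, e_off = remote_metrics(d_bits=1e6, p_tx=0.1, rate_ul=1e6)
```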

The total time delay of a subtask i, t_i, can be defined as

t_i = (1 − a_i) t_i^l + a_i (t_i^h + t_i^UL).

The cost of subtask i, either locally executed on the MT or remotely executed at the MES, is formulated as

cost_i = (1 − a_i) cost_i^l + a_i cost_i^off.

The total time delay for an application containing n subtasks, t_total, can be obtained as

t_total = Σ_{i=1}^{n} t_i.

In this paper, it is considered that the MT has a chargeable battery, which can provide and store energy for the consumption of the MT and can harvest energy from the HAP. The harvested energy of the MT is calculated as

E_i^h = μ P^H h^DL t_i^h,

where μ is a constant for energy harvesting and describes how efficiently the MT receives the transmitted energy of the HAP [6]. P^H is the transmitted power of the HAP for broadcasting RF signals to the MTs. h^DL represents the DL channel power gain. For simplicity, it is assumed that h^DL is equal to the UL channel power gain h^UL. For subtask i, the total energy consumption of the MT, e_i, is written as

e_i = (1 − a_i) e_i^l + a_i e_i^off.

The current available energy of the MT’s battery is denoted by B_c and satisfies the condition

e_i ≤ B_c.

The available energy of the MT’s battery is updated as follows:

B_next = B_c − e_i + a_i E_i^h,

where B_next is the energy of the MT’s battery for the coming time slot. The total energy consumption of the MT for a single application containing n subtasks, E_total, is defined as

E_total = Σ_{i=1}^{n} e_i.
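The linear harvesting model and the battery update can be sketched as follows (our own illustration; `slot` is an assumed slot-duration parameter, and capping at `b_max` is an added battery-capacity assumption not stated in the update rule above):

```python
def harvested_energy(p_hap, h_dl, tau, slot=1.0, mu=0.8):
    """Linear harvesting model: E^h = mu * P^H * h^DL * (tau * slot).

    tau is the fraction of the slot spent harvesting; mu = 0.8 is the
    harvesting efficiency from the simulation parameters.
    """
    return mu * p_hap * h_dl * tau * slot

def battery_update(b_current, e_consumed, e_harvested, b_max):
    """Battery for the coming slot: B_next = B_c - e_i + E^h (capped at b_max)."""
    return min(b_current - e_consumed + e_harvested, b_max)
```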

For the comparison of different techniques, a cost function is modeled as

cost_total = Σ_{i=1}^{n} cost_i,

where cost_total is the cost for a single application execution.

Initially, if the battery level is high, the energy consumption weight should be less than the weight of the time delay because, in that case, faster execution is more important. The energy consumption part of the cost function becomes more important when the battery of the MT is low. Therefore, we calculate these weighting coefficients using the current battery level of the MT as

w_E = 1 − β,  w_T = β,

where β represents the percent current level of the battery.
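The battery-dependent weighting rule can be sketched directly (a minimal illustration of the complementary weights, assuming the battery level is given as a fraction in [0, 1]):

```python
def weights(battery_fraction):
    """Battery-dependent weighting coefficients (w_E, w_T).

    A full battery (fraction near 1) prioritises the time delay term;
    a depleted battery shifts the weight onto energy consumption.
    """
    w_t = battery_fraction        # w_T = beta
    w_e = 1.0 - battery_fraction  # w_E = 1 - beta
    return w_e, w_t
```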

3.3. Proposed DOTP Algorithm

The number of possible values of the energy harvesting time is represented by K and is given as K = 1/Δt. In the proposed DOTP technique, described in Algorithm 1, we calculate the local and remote costs for all possible numbers of subtasks per application, all K possible values of τ, and all offloading policies. The complexity of the algorithm therefore grows with the product of these three search spaces. A is a matrix of all possible offloading policies. For subtask i, the local and remote costs are calculated using (3) and (13), respectively. With the optimal number of subtasks, n*, optimal time allocation policy, τ*, and optimal offloading policy, A*, the minimum cost is denoted by cost* in Algorithm 1, where τ_1 is the harvesting time for subtask 1, τ_2 is the harvesting time for subtask 2, and so on.

Input: application size D, distance d, CPU frequency f, transmitted power p, battery level B_c
Output: n*, A*, τ*, cost*
(1)for n = 1 to N do
(2) for each offloading policy A in the policy matrix do
(3)  divide the application into n subtasks
(4)  cost(n, A) ← 0
(5)  for i = 1 to n do
(6)   if a_i = 0 then
(7)    compute t_i^l using (1)
(8)    compute e_i^l using (2)
(9)    compute cost_i using (3)
(10)  else
(11)   cost_i ← ∞
(12)   for each τ in {Δt, 2Δt, …, 1 − Δt} do
(13)    compute t_i^off using (11)
(14)    compute e_i^off using (12)
(15)    compute E_i^h using (17)
(16)    update w_E and w_T using (20)
(17)    compute cost_i^off using (13)
(18)    if cost_i^off < cost_i then cost_i ← cost_i^off and τ_i ← τ
(19)   end for
(20)  end if
(21)  cost(n, A) ← cost(n, A) + cost_i
(22) end for
(23) if cost(n, A) is the minimum so far for n then save A and (τ_1, …, τ_n)
(24)end for
(25)end for
(26)Select n*, A*, and τ* giving the overall minimum cost*
(27)Save inputs: D, d, f, p, B_c
(28)Save outputs: n*, A*, τ*
(29)Repeat this for different sizes of applications
(30)Train the DNN
(31)Test the trained DNN
(32)Accuracy ← number of correct decisions/number of total decisions

Due to the huge computational overhead of finding n*, A*, τ*, and cost*, the service delay and energy consumption of MTs increase. The computation of these values is also explained through the flowchart in Figure 2. With the help of the cost function, we generate a training dataset and train a DNN to reduce the computational overhead, energy consumption, and service delay of MTs.
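The exhaustive search that generates the training labels can be sketched as follows (a toy illustration with a pluggable cost function standing in for the cost expressions of Section 3; the helper name and the tiny search ranges are our own choices):

```python
from itertools import product

def exhaustive_search(cost_fn, n_max=3, taus=(0.25, 0.5, 0.75)):
    """Try every (subtask count, offloading policy, per-subtask harvesting
    fraction) combination and return the cheapest one.

    cost_fn(n, policy, tau_vec) -> scalar cost; policy is a 0/1 tuple.
    The search space is sum over n of 2^n * K^n, which is what makes
    this brute force expensive and motivates the trained DNN.
    """
    best = None
    for n in range(1, n_max + 1):
        for policy in product((0, 1), repeat=n):
            for tau_vec in product(taus, repeat=n):
                c = cost_fn(n, policy, tau_vec)
                if best is None or c < best[0]:
                    best = (c, n, policy, tau_vec)
    return best

# Toy cost: penalise more subtasks and more offloaded subtasks.
best = exhaustive_search(lambda n, a, t: n + sum(a))
```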

The input layer of our DNN takes the size of application , number of subtasks per application , distance of MT from HAP , CPU frequency , transmitted power , and the available battery level of MT , while the output layer gives the optimal number of subtasks , partial offloading policy , and the optimal time for harvesting energy . For the two hidden layers and the output layer, the rectified linear unit (ReLU) and Softmax activation functions are used, respectively.

We calculate the minimum cost and store the corresponding inputs and outputs as a training sample. After obtaining a dataset of 5000 samples, the data are divided into a training set (70% of the dataset), a validation set (15%), and a test set (15%). The DNN is trained on the training set and its performance is then validated on the validation set. Once the DNN is trained, its accuracy is checked on the test set. The results of our DOTP algorithm are compared with the existing benchmark schemes, that is, local execution, total offloading, and partial offloading.
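The 70/15/15 split can be sketched with a shuffled index split (our own helper; the fixed seed is an added assumption for reproducibility):

```python
import random

def split_dataset(data, seed=0):
    """Shuffle and split into 70% train, 15% validation, 15% test."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    n = len(data)
    a, b = int(0.7 * n), int(0.85 * n)
    train = [data[i] for i in idx[:a]]
    val = [data[i] for i in idx[a:b]]
    test = [data[i] for i in idx[b:]]
    return train, val, test

train, val, test = split_dataset(list(range(5000)))
```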

4. Simulation and Results

The simulation environment was MATLAB (R2019a), running on an Intel Core i7 processor with a clock speed of 3.4 GHz. An application can be divided into more than 10 subtasks; however, for simulation purposes, we assume that an application is divided into a maximum of 10 subtasks. For the time allocation policy, the time resolution is assumed to be 0.01. Practically, the distance for WET is up to 12 m; however, to check the mobility effect of the MT in a more comprehensive way, the maximum distance of the MT from the HAP is assumed to be 200 m, and the reference distance is assumed to be 1 m. The experimentation is performed on randomly generated data to reflect the stochastic nature of the problem and obtain more realistic results. The randomly generated inputs for our graphs are the CPU frequency of the MT, the transmitted power of the MT, the distance of the MT from the HAP, and the application size. Advanced smartphones have heterogeneous frequency capability and use different frequencies in different scenarios [30]; thus, the CPU frequency of the MT is taken randomly from a uniform distribution over a GHz range. Similarly, the transmitted power of a smartphone is variable and changes as the distance of the MT from the HAP varies; this stochastic nature is incorporated by drawing the transmitted power from a uniform distribution in Watts. Different applications may have different sizes, so the application size is randomly selected from a uniform distribution in Gbits. To incorporate the mobility effect of the MT, the distance of the MT from the HAP is selected randomly from a uniform distribution in the range of 1-200 m.

The CPU cycles of the MT to execute one byte of data, C, are taken as 737.5 cycles/byte; each application is divided into n subtasks, where n lies between 1 and 10. The values of the effective switching capacitance factor, κ, and the path loss exponent, ν, are listed in Table 5; the MT’s efficiency of energy harvesting, μ, is taken as 0.8; the path loss constant, θ, has the value of −30 dB; and the reference distance of the MT from the HAP, d0, is 1 m. The available bandwidth, B, and noise spectral density, N0, are 0.5 MHz and −174 dBm/Hz, respectively. The trained DNN has 32 neurons in the input layer (one for the MT’s currently available battery level, one for the application size, ten for the MT’s CPU frequencies, ten for the MT’s transmitted powers, and ten for the distances of the MT from the HAP). The simulation parameters are given in Table 5. The number of subtasks into which an application is divided is taken from 1 to 10; therefore, 10 neurons in the output layer are specified for the number of subtasks per application. In the partial offloading policy considered in this paper, a maximum of 9 subtasks out of 10 can be offloaded to the remote MES; thus, there are 9 neurons for the optimal time allocation policy. One neuron at the output layer stores the optimal partial offloading policy. Thus, the total number of neurons in the output layer of the DNN is 20. There are two hidden layers with 100 neurons in each. The achieved accuracy of the trained DNN is 75%. The detailed architecture of the DNN is shown in Figure 3.
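The described architecture (32 inputs, two ReLU hidden layers of 100 neurons, 20 outputs) can be sketched as a forward pass in pure Python. This is an untrained, randomly initialized illustration, not the paper's trained network, and for simplicity it applies softmax across all 20 outputs even though the real output layer mixes several decision heads:

```python
import math
import random

random.seed(0)

def relu(v):
    return [max(0.0, x) for x in v]

def softmax(v):
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def dense(W, b, x):
    # One fully connected layer: y = W x + b
    return [sum(w * xi for w, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def layer(n_out, n_in):
    # Random small weights, zero biases (illustrative initialization)
    W = [[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    return W, [0.0] * n_out

W1, b1 = layer(100, 32)   # hidden layer 1 (ReLU)
W2, b2 = layer(100, 100)  # hidden layer 2 (ReLU)
W3, b3 = layer(20, 100)   # output layer (softmax)

def forward(x):
    h1 = relu(dense(W1, b1, x))
    h2 = relu(dense(W2, b2, h1))
    return softmax(dense(W3, b3, h2))

y = forward([random.gauss(0, 1) for _ in range(32)])
```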

The benchmark techniques considered for comparison of simulation results in this paper are described as follows:(i)Total offloading technique [18]: in total offloading, no policy for offloading decisions is considered and the whole application is offloaded to the MES.(ii)Local execution [19]: in local execution, the whole application is executed locally on MT and there is no offloading to MES.(iii)Partial offloading technique [14]: in partial offloading, the application is divided into a number of subtasks. An offloading policy is made for the decision of subtasks to be executed either locally or remotely.

All the above techniques find the offloading policy that has minimum cost and consider time delay and energy consumption for a fixed number of subtasks per application. They also ignore the battery life of MT and optimal time allocation policy for harvesting. Our proposed DOTP considers the optimal number of subtasks per application and the optimal offloading policy. Furthermore, the DOTP also considers the current battery level of MT for improvement through harvesting energy from HAP by utilizing an optimal time allocation policy.

In Figure 4, the remaining energy of the MT is plotted against various application sizes. The battery of the MT in the benchmark schemes dissipates faster than in the proposed DOTP due to their high energy consumption. The battery level in the benchmark schemes approaches negative values (practically, the battery of an MT cannot be negative; in the simulation results, negative values denote the shutdown state of the MT), which results in the shutdown of the MT and consequent disconnection from the HAP. The total remaining energy for DOTP increases after a task size of 12 Gb because the energy harvesting time fraction of the MT depends on the remaining battery of the MT. As the battery of the MT decreases below a threshold level, the MT increases the time for harvesting energy and decreases the time for data offloading by changing the weighting coefficients' values described in the mathematical model. In contrast, there is no intelligent division of the time slot between energy harvesting and data offloading in the benchmark schemes; therefore, their remaining energy continues to decrease after a task size of 12 Gb. Thus, in the DOTP scheme, a reserve battery level always remains available in the MT, ensuring that the MT is neither disconnected from the HAP nor driven to negative battery values.

Figure 5 plots the total energy consumption against application size for all the mentioned techniques. DOTP achieves the minimum energy consumption because its cost function accounts for the different energy consumption terms (ignored in the benchmark schemes) and the offloading policy is then selected for the optimal number of subtasks. The distinctive behavior of the DOTP curve, first increasing and then decreasing once a threshold is reached, arises because the amount of energy harvested becomes greater than the energy consumed. As the MT starts to prioritize energy harvesting, the total energy consumption decreases at a task size of 12 GB, since data offloading then occupies a comparatively smaller fraction of the time slot. In contrast, the energy consumption of the other schemes increases continuously owing to the absence of a smart time allocation policy; to the best of our knowledge, such a policy is missing from all the considered benchmark techniques.

Figure 6 compares the time delay of DOTP with that of the benchmark schemes for different application sizes. The better performance of the proposed DOTP is evident from the smaller time delays it achieves, which result from selecting the optimal number of subtasks per application, a quantity held constant in the benchmark schemes. Total offloading attains the minimum time delay, but its energy consumption is the highest, as depicted in Figure 5. The steeper rise in the execution time of DOTP occurs when the battery level of the MT is low: the MT then spends most of the time slot harvesting energy to replenish its battery, and only a small portion offloading data. Since the MT harvests energy for the greater part of a time slot, offloading the application takes more time.

In Figure 7, the cost of the proposed DOTP is compared with those of the three benchmark schemes. The figure shows that DOTP has the lowest cost of all the techniques. This minimum cost results from selecting the optimal number of subtasks per application and from including energy consumption, time delay, and energy harvesting in the cost function. The weighting coefficients in the cost function depend on the remaining energy of the MT and give priority to either data offloading or energy harvesting accordingly. In contrast, the benchmark schemes assume a fixed number of subtasks per application and focus on either energy consumption or time delay, but not both simultaneously. The minimum cost of DOTP demonstrates its better performance for computational offloading in terms of energy consumption, time delay, and battery life.
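The battery-dependent weighting can be made concrete with a hedged sketch. The coefficient names and the linear weighting rule below are our illustration, not the paper's exact formulation: the weights trade energy consumption off against time delay and shift toward penalizing energy use (and rewarding harvesting) as the MT's remaining battery drops.

```python
def weighted_cost(energy_consumed, time_delay, energy_harvested,
                  battery_level, battery_capacity):
    """Toy weighted cost: lower is better."""
    # Battery fraction clamped to [0, 1]; a low fraction raises the energy weight.
    frac = max(0.0, min(1.0, battery_level / battery_capacity))
    w_energy = 1.0 - frac      # low battery -> penalise energy use more
    w_delay = frac             # high battery -> penalise delay more
    # Harvested energy reduces the cost, scaled by the same energy weight.
    return (w_energy * energy_consumed + w_delay * time_delay
            - w_energy * energy_harvested)
```

With a half-full battery, energy and delay are weighted equally; as the battery empties, a policy that harvests more and consumes less becomes strictly cheaper, which is the prioritization mechanism described above.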

5. Conclusions

In this paper, the battery life of the MT, the time allocation policy for energy harvesting and data offloading, and the offloading policy based on the optimal number of subtasks are investigated for a single MT in a WP-MEC system. The application to be offloaded is divided into the optimal number of subtasks, and the offloading policy then decides, for each subtask, between remote and local execution. A detailed cost function is formulated that jointly considers time delay and energy consumption for the offloading policy. In addition, the cost function includes the current battery level of the MT and thus prolongs its battery life. A deep learning approach is used to find, simultaneously, the optimal number of subtasks per application, the partial offloading policy, and the time allocation policy for energy harvesting. The DOTP algorithm provides a time allocation policy that results in extended battery life as compared to the benchmark schemes. To achieve these objectives, a DNN is trained on a dataset pregenerated through the mathematical model, which avoids the energy consumption and time delay incurred by traditional optimization methods and makes decisions faster. The numerical results show that our proposed DOTP delivers extended battery life, lower energy consumption, and lower time delay for application execution in comparison with the benchmark techniques. In the future, the scenario can be extended to multiple MTs and HAPs, as well as to vehicular networks. In addition, a reinforcement learning approach can be adopted, and routing strategies can be investigated for handover management during mobility from one HAP to another.

Data Availability

The data used in this study are randomly generated through our proposed mathematical model and Algorithm 1. Using the simulation parameters given in Table 5, the required data can be generated as explained in Section 3.3.

Conflicts of Interest

The authors declare that they have no conflicts of interest.