Multi-user multi-objective computation offloading for medical image diagnosis

PeerJ Computer Science

Introduction

Medical imaging examinations are currently required for over 70% of clinical diagnostic activities in hospitals (Jayashree & Bhuvaneswaran, 1970; Maglogiannis et al., 2017; Teng, Kong & Wang, 2019). Meanwhile, medical data is growing extremely fast with the advancement of information technology in smart medicine. Medical data was projected to reach a total of 40 trillion GB by 2020, with 85–90% of it coming from medical imaging, which exacerbates the burden of doctors’ imaging diagnosis work. Doctors hope to use intelligent image diagnostic models (IIDM) to accelerate image diagnosis (Zivkovic et al., 2022), but the configuration of hospital personal computers is not powerful enough to meet this demand.

A new hospital service architecture known as the medical imaging cloud has emerged (Lakshmi et al., 2021). It sends medical images to the IIDM in the central cloud for processing by utilizing cloud computing, big data, the Internet of Things, digital imaging technology, and Internet technology. Although diagnostic results become available quickly, open issues remain. For instance, a deep learning-based IIDM involves a large number of parameters, which leads to a significant increase in computation. In addition, the extremely long-distance image transfer between the central cloud and the hospital takes up a huge amount of network bandwidth, resulting in large delay, energy consumption and communication overhead.

To overcome the above shortcomings, several researchers have recently found that computation offloading (Shakarami, Shahidinejad & Ghobaei-Arani, 2020, 2021; Xu et al., 2020; Tong et al., 2020; uz Zaman et al., 2021, 2022b) is a promising technology for resolving this dilemma. Computation offloading, one of the key technologies of edge computing (uz Zaman et al., 2022a), refers to the technology by which resource-constrained terminal devices (TDs) offload part or all of their computing tasks to edge servers for execution. It comprises an offloading strategy and resource allocation (Li et al., 2020c); this article focuses on the former. Specifically, the diagnosis tasks of the TDs are first uploaded and then processed on the edge servers. Finally, the corresponding terminal device receives the results (Zhang et al., 2019b).

Many researchers have conducted extensive research on computation offloading. However, the existing offloading works suffer from the following major issues: (1) They assume that the edge server provides resources to the terminal devices free of charge. In a real communication and computing environment, however, the cost of computing resources and wireless communication is unavoidable; doctors cannot enjoy edge server services for free but have to pay a fee. (2) They cannot effectively integrate real environmental concerns into their decision-making. Although performing diagnostic tasks on edge nodes decreases latency, it might increase energy consumption and payment. Therefore, it is important to propose an offloading model that trades off latency, energy, and cost. (3) They assume that users hold a risk-neutral attitude. However, it has recently been argued that users are risk-aware when using an edge server’s resources, especially in resource-constrained environments (Vamvakas, Tsiropoulou & Papavassiliou, 2019a). Specifically, users can be classified as aggressive or conservative according to their behavioral characteristics. Aggressive users exhibit risk-seeking behavior: they want to offload diagnosis tasks to edge servers to avoid using resources on terminal devices, even though edge servers may not be able to provide data processing services for all users. Conservative users exhibit risk-averse behavior: they prefer to process diagnosis tasks on terminal devices, because the computing resources of the edge server may be overused when multiple users access it simultaneously.

To get closer to the real communication and computing environment, we therefore consider multiple factors when making the offloading decision of each user in the medical image cloud scenario. These factors include the user’s risk awareness and a set of objectives: delay, energy consumption and payment. Our ultimate optimization goal is to maximize the prospect theoretic utility of each user. To achieve this goal, we propose a multi-user multi-objective computation offloading method for medical image diagnosis. First, we design the user’s utility function based on the Prospect Theory principle by combining the multiple objectives of delay, energy consumption, and payment, which models the risk-aware behavior of each user during diagnosis task offloading. Second, to maximize the utility, the computation offloading problem is first expressed as a distributed optimization problem and then transformed into a non-cooperative game among the users. Third, we prove that this game has Nash equilibrium (NE) points based on the exact potential game and propose a low-complexity computation offloading algorithm based on best response dynamics (BRD-CO) to reach an NE point. Finally, we conduct detailed simulation experiments. The results show that the BRD-CO algorithm guarantees each user a higher prospect theoretic utility and a faster convergence speed when compared with four benchmarks and four heuristic algorithms.

Accordingly, this article proposes a computation offloading method guided by the following two questions:

  • The possibility of designing a more realistic optimization goal function based on user risk awareness and multiple objectives.

  • The possibility of further improving the convergence speed of the offloading algorithm in a distributed manner.

Based on the system model constructed, the framework designed and the experimental results, the main contributions of this work are summarized as follows.

  • We develop a more specific and detailed computation offloading model using a formal method. It more clearly reflects the execution process of the user’s diagnosis task on the edge server and on the terminal devices, respectively.

  • We achieve a more realistic optimization goal. The multi-user, multi-objective computation offloading method is closer to the real world: it not only reflects the risk attitude of each user but also trades off delay, energy consumption and payment.

  • We design a distributed offloading algorithm with a faster convergence speed. Each user competes for computation and wireless communication resources during the execution of a diagnostic task, so the computation offloading problem is treated as a distributed optimization problem. We propose an optimal computation offloading algorithm based on best response dynamics, which requires only a few iterations to converge to a Nash equilibrium point.

  • We have achieved a higher prospect theoretic utility. We implement the proposed BRD-CO algorithm and conduct detailed studies. The experimental results show that the proposed algorithm has statistical superiority and provides a higher prospect theoretic utility.

The rest of this article is organized as follows. “Related work” presents the related work. “Computation offloading system model” illustrates the computation offloading model and discusses the delay, energy consumption and payment under different offloading modes. “Multi-user multi-objective computation offloading for medical image diagnosis” introduces a multi-user multi-objective computation offloading for medical image diagnosis. “Numerical results” designs the simulation experiment and presents the numerical results. Finally, a summary of our work and future plan is presented in “Conclusions”.

Related work

Massive medical image data is becoming more challenging to process and manipulate with the advancement of medical informatization (Zhang et al., 2017b). As a way of managing and processing big data, cloud computing plays an important role (El-Seoud et al., 2017; Rahman, Khalil & Yi, 2019). Zhang et al. (2020) proposed a normal distribution splitting-based method for processing large amounts of medical data in parallel. Alternatively, the parallel computing and data distribution functions of related systems can be used, such as the MapReduce and Hadoop models (Khezr & Navimipour, 2017; Mo, 2019; Duan, Edwards & Dwivedi, 2019). Based on Hadoop, MapReduce and Spark, researchers have used machine learning to predict and analyze the future complications of diabetic patients, which improves processing speed (Vineetha & Nandhana, 2022). In the cloud-computing-based medical imaging cloud framework, however, the distance between the central cloud and the hospital is so great that transmission consumes a large amount of bandwidth and causes huge latency.

Recently, computation offloading has received more and more attention as one of the most promising solutions to this issue, and various offloading strategies have been proposed (Mao, Zhang & Letaief, 2016; Zhang et al., 2017a, 2019a; Guo, Li & Guan, 2019; Li et al., 2019b, 2020b, 2020a; Messous et al., 2019; Meng et al., 2019; Mitsis, Tsiropoulou & Papavassiliou, 2020; Zhu et al., 2020a, 2020b; Alioua et al., 2020; Tang & Wong, 2022; Wang et al., 2021; Chen & Liu, 2021). The differences between these computation offloading methods are shown in Table 1. There are currently only a few studies on accelerating the processing of medical image data through computation offloading; most work focuses on areas such as the Internet of Vehicles, unmanned aerial vehicles, etc. Based on the optimization goal, these strategies can be divided into four categories: reducing delay, reducing energy consumption (EC), balancing delay with energy consumption, and maximizing utility.

Table 1:
Comparison of different computation offloading models.

| References | Utilized technique | Performance metrics | Evaluation tools | Case study | Advantages | Disadvantages |
|---|---|---|---|---|---|---|
| Meng et al. (2019) | Markov decision process; learning approach | Delay | Simulation (NA) | Machine translation | Single-server single-user and multi-server multi-user settings; dynamic instantaneous rate estimation | High complexity; only the delay is considered |
| Mao, Zhang & Letaief (2016) | Lyapunov optimization | Delay | Simulation (NA) | Mobile apps | Decisions depend only on the current system state | Only the delay is considered |
| Zhu et al. (2020a) | Deep reinforcement learning | Delay | Simulation (Python) | Internet of Vehicles | Multi-agent; distributed offloading decision making | Only the delay is considered |
| Tang & Wong (2022) | Deep reinforcement learning | Delay | Simulation (NA) | Mobile apps | Distributed offloading decision making | Only the delay is considered |
| Wang et al. (2021) | Deep reinforcement learning; convex optimization | Energy consumption | Simulation (Python) | Unmanned aerial vehicle | Fast acquisition of UAV trajectory; low-complexity matching algorithm | Only the energy consumption is considered |
| Zhang et al. (2019a) | Lyapunov optimization | Energy consumption | Simulation (MATLAB) | Mobile apps | Ensures high network stability | Only the energy consumption is considered |
| Chen & Liu (2021) | Deep reinforcement learning | Energy consumption | Simulation (NA) | Augmented reality | Multi-agent | Only the energy consumption is considered |
| Li et al. (2020b) | Convex approximation | Energy consumption | Simulation (NA) | Unmanned aerial vehicle | On-demand offloading service in emergency scenarios | Only the energy consumption is considered |
| Zhu et al. (2020b) | Convex optimization | Delay; energy consumption | Simulation (NA) | Unmanned aerial vehicle | Multi-server multi-user; cooperative offloading algorithm | Considerations should be more comprehensive |
| Guo, Li & Guan (2019) | Lyapunov optimization | Delay; energy consumption | Simulation (MATLAB & C++) | Internet of Things | Dynamic computation requests | Considerations should be more comprehensive |
| Zhang et al. (2017a) | Artificial fish swarm algorithm | Delay; energy consumption | Simulation (NA) | Small cell networks | Multi-server multi-user; fronthaul and backhaul links jointly considered | Considerations should be more comprehensive |
| Li et al. (2019b) | Nonlinear programming; queueing theory | Delay; energy consumption | Simulation (MATLAB) | Internet of Things | Trade-off between delay and energy consumption | Considerations should be more comprehensive |
| Messous et al. (2019) | Non-cooperative game | Utility | Simulation (NA) | Unmanned aerial vehicle | Distributed offloading decision making | No dynamic selection of the weighting parameters in the utility function |
| Alioua et al. (2020) | Sequential game | Utility | Simulation (C++) | Unmanned aerial vehicle | Cooperative offloading mechanism | Comprehensive network parameters are not considered |
| Li et al. (2020a) | 0–1 programming | Utility | Simulation (NA) | Internet of Vehicles | Vehicle mobility; offloads tasks simultaneously through multicast | Energy consumption is not considered |
| Mitsis, Tsiropoulou & Papavassiliou (2020) | Non-cooperative game; prospect theory | Utility | Simulation (Python) | Unmanned aerial vehicle | Risk awareness of users | Energy consumption and payment are not considered |

DOI: 10.7717/peerj-cs.1239/table-1

Mao, Zhang & Letaief (2016), Meng et al. (2019), Zhu et al. (2020a) and Tang & Wong (2022) propose offloading strategies to reduce delay. For instance, Mao, Zhang & Letaief (2016) proposed a dynamic offloading method based on Lyapunov optimization, considering execution latency and task failure, which can decrease the task time by 64%. However, these offloading strategies are designed only to minimize the overall delay, without considering the potential energy consumption.

Zhang et al. (2019a), Li et al. (2020b), Wang et al. (2021) and Chen & Liu (2021) propose offloading strategies to reduce energy consumption. For instance, Wang et al. (2021) proposed a trajectory control algorithm based on convex optimization and deep reinforcement learning by jointly considering the motion trajectory, user association, and resource allocation of UAVs. Similar to the previous optimization goal, these offloading strategies are only effective in reducing the overall energy consumption of the task. However, in some systems, users prefer to achieve a balance between delay and energy consumption.

Zhang et al. (2017a), Guo, Li & Guan (2019), Li et al. (2019b) and Zhu et al. (2020b) propose offloading strategies to balance delay and energy consumption. For instance, Zhang et al. (2017a) introduced predation behavior, swarm behavior and following behavior into the artificial fish swarm algorithm, which saves 30% of energy consumption. While these offloading strategies achieve a trade-off between latency and energy consumption, they may not be applicable to all systems, because each system has different performance requirements, not all of which are latency and energy consumption.

Messous et al. (2019), Mitsis, Tsiropoulou & Papavassiliou (2020), Alioua et al. (2020) and Li et al. (2020a) propose offloading strategies to maximize utility. For instance, Messous et al. (2019) used a game-theoretical approach to reach a balance among energy consumption, delay and payment. Similarly, Alioua et al. (2020) also proposed a sequential game-based computation offloading strategy. Li et al. (2020a) proposed an algorithm that jointly optimizes delay and payment for task offloading. Mitsis, Tsiropoulou & Papavassiliou (2020) proposed a computation offloading scheme based on resource pricing and the user’s risk awareness. The above-mentioned offloading strategies have attracted wide attention because they can design different utility functions for different scenarios and create an appropriate offloading strategy to meet the needs of users.

In view of prior works, few studies jointly consider risk awareness, delay, energy consumption, and payment. Most of them focus on two or three aspects and assume risk-neutral behavior of the users during task offloading. To simulate resource consumption in the real-world environment, in this article we propose a computation offloading model for maximizing the prospect theoretic utility of each user, which jointly provides: (1) a clearer formal description of the computation offloading model; (2) more realistic optimization goals; (3) a distributed offloading algorithm with a faster convergence speed; and (4) higher prospect theoretic utility. We also conduct experiments to evaluate the BRD-CO algorithm under various parameters.

Computation offloading system model

The scenario in this article is a medical image diagnosis in a medical image cloud. In this section, we construct a computation offloading system model and introduce three offloading modes.

Notation description

For readability, Table 2 summarizes the notation used in this article.

Table 2:
Notations.
| Notation | Description |
|---|---|
| $TD$ | The finite set of terminal devices |
| $td_i$ | The device used by the $i$-th user |
| $B$ | The finite set of diagnosis tasks |
| $b_{td_i}$ | The computation task of $td_i$ |
| $\mu$ | The finite set of offloading proportions |
| $\mu_{td_i}$ | The offloading proportion of $td_i$ |
| $\phi$ | Task computational complexity |
| $\xi$ | The expected profit |
| $PU_0$ | The user's anticipated profit |
| $PU_{td_i}$ | The user's actual profit |
| $k_{td_i}$ | The loss aversion coefficient |
| $\alpha_{td_i}, \beta_{td_i}$ | The risk attitude coefficients |
| $\lambda_1$ | Delay weight |
| $\lambda_2$ | Energy consumption weight |
| $\lambda_3$ | Payment weight |
| $\omega$ | Payment factor |
| $Pr$ | The failure probability of the edge server |
| $\gamma$ | Gain attitude |
| $\delta$ | Loss attitude |
| $para^{L}$ | The finite set of local computing parameters |
| $F^{L}$ | The finite set of computational capabilities |
| $f_{td_i}^{L}$ | The computational capability of $td_i$ |
| $\chi^{L}$ | The finite set of energy coefficients |
| $\chi_{td_i}^{L}$ | The energy consumed per CPU cycle by $td_i$ |
| $T^{L\_ct}$ | The finite set of local computation delays |
| $t_{td_i}^{L\_ct}$ | The computation delay required by $td_i$ to process $b_{td_i}$ locally |
| $E^{L\_ce}$ | The finite set of local computation energy consumption |
| $e_{td_i}^{L\_ce}$ | The computation energy consumption required by $td_i$ to process $b_{td_i}$ locally |
| $para^{S}$ | The finite set of full offloading parameters |
| $f^{S}$ | The computational capability of the edge server |
| $\chi^{S}$ | The energy coefficient of the edge server |
| $P^{S\_ct}$ | The finite set of computation delay pricing |
| $p_{td_i}^{S\_ct}$ | The computation delay pricing of $td_i$ |
| $TP^{S}$ | The finite set of transmission powers |
| $tp_{td_i}^{S}$ | The transmission power between $td_i$ and the edge server |
| $TR^{S}$ | The finite set of transmission rates |
| $tr_{td_i}^{S}$ | The transmission rate between $td_i$ and the edge server |
| $T^{S}$ | The finite set of total delays on the edge server |
| $t_{td_i}^{S}$ | The total delay required by $td_i$ to process $b_{td_i}$ on the edge server |
| $T^{S\_ct}$ | The finite set of computation delays on the edge server |
| $t_{td_i}^{S\_ct}$ | The computation delay required by $td_i$ to process $b_{td_i}$ on the edge server |
| $T^{S\_tt}$ | The finite set of transmission delays on the edge server |
| $t_{td_i}^{S\_tt}$ | The transmission delay required by $td_i$ to process $b_{td_i}$ on the edge server |
| $E^{S}$ | The finite set of total energy consumption on the edge server |
| $e_{td_i}^{S}$ | The total energy consumption required by $td_i$ to process $b_{td_i}$ on the edge server |
| $E^{S\_ce}$ | The finite set of computation energy consumption on the edge server |
| $e_{td_i}^{S\_ce}$ | The computation energy consumption required by $td_i$ to process $b_{td_i}$ on the edge server |
| $E^{S\_te}$ | The finite set of transmission energy consumption on the edge server |
| $e_{td_i}^{S\_te}$ | The transmission energy consumption required by $td_i$ to process $b_{td_i}$ on the edge server |
| $C^{S\_ct}$ | The finite set of payments |
| $c_{td_i}^{S\_ct}$ | The computation delay cost required by $td_i$ to process $b_{td_i}$ on the edge server |
| $para^{PO}$ | The finite set of partial offloading parameters |
| $T^{PO}$ | The finite set of total delays in partial offloading |
| $t_{td_i}^{PO}$ | The delay required by $td_i$ to process $b_{td_i}$ in partial offloading |
| $E^{PO}$ | The finite set of total energy consumption in partial offloading |
| $e_{td_i}^{PO}$ | The energy consumption required by $td_i$ to process $b_{td_i}$ in partial offloading |
| $C^{PO}$ | The finite set of payments in partial offloading |
| $c_{td_i}^{PO\_ct}$ | The computation delay cost required by $td_i$ to process $b_{td_i}$ in partial offloading |
DOI: 10.7717/peerj-cs.1239/table-2

System model description

In the concerned scenario, we consider a medical image cloud that includes an edge server and multiple terminal devices. The edge server provides storage and computing services for users, solving the problem of limited resources for terminal devices. The terminal devices are used by doctors, which include desktops, laptops, tablets and super beans. Each terminal device is equipped with computing resources for processing diagnosis tasks.

As the computing resources of terminal devices are limited, they cannot meet the needs of massive medical image diagnosis tasks. Therefore, users will offload part or all of their diagnosis tasks to the edge server, and such behavior carries a risk-averse or risk-seeking attitude. The edge server typically charges a payment for sharing its resources. In addition, terminal devices and the edge server incur delay and energy consumption when performing diagnosis tasks. To clearly explain the offloading process of the image diagnosis task in the medical image cloud, it is formalized as follows:

Definition 1. Computation offloading system model.

The computation offloading system model $COSH = (TD, B, \mu, \phi, \xi, para^{L}, para^{S}, para^{PO})$ is an eight-tuple, the details of which are shown in the Supplemental Information.
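For concreteness, the eight-tuple can be mirrored by a simple container type. The sketch below is illustrative only: the field names follow the definition above, while the internal structure of each component (detailed in the Supplemental Information) is kept as generic types of our choosing.

```python
from dataclasses import dataclass
from typing import Any, Dict, List

@dataclass
class COSH:
    """Minimal container mirroring COSH = (TD, B, mu, phi, xi, paraL, paraS, paraPO).

    The concrete structure of each component follows the Supplemental Information;
    the generic types used here are placeholders for illustration.
    """
    TD: List[str]          # finite set of terminal devices
    B: Dict[str, float]    # diagnosis task size (bits) per terminal device
    mu: Dict[str, float]   # offloading proportion per terminal device, in [0, 1]
    phi: float             # task computational complexity (CPU cycles per bit)
    xi: float              # expected profit coefficient
    paraL: Dict[str, Any]  # local computing parameters (f^L, chi^L, ...)
    paraS: Dict[str, Any]  # full offloading parameters (f^S, chi^S, pricing, ...)
    paraPO: Dict[str, Any] # partial offloading parameters
```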

Offloading modes

Each terminal device has one or more medical image diagnosis tasks to perform. As shown in Fig. 1, there are three offloading modes for the diagnosis tasks: local computing, full offloading and partial offloading. Each offloading mode can be viewed as a three-stage process comprising sending, processing and feedback steps. First, part or all of the diagnosis task is sent from $td_i$ to the edge server. Second, the offloaded diagnosis task is processed on the edge server. Third, the processed results are fed back to $td_i$. The computation of the three objectives in the different modes is described as follows.

Figure 1: Three offloading modes for diagnosis tasks: local computing, full offloading and partial offloading.

Local computing

In the local computing mode, as illustrated in Fig. 1, users execute the diagnosis task btdi [bits] only using the computing resources of terminal devices, where the offloading proportion μtdi=0. For terminal device tdi, the local computation delay ttdiL_ct [s] of processing btdi can be given by

$$t_{td_i}^{L\_ct} = \frac{b_{td_i}\,\phi}{f_{td_i}^{L}} \tag{1}$$

Besides the required computation delay, each diagnosis task also consumes some computation energy. Therefore, the local computation energy consumption etdiL_ce [J] required by tdi to process btdi can be given by

$$e_{td_i}^{L\_ce} = \chi_{td_i}^{L}\, b_{td_i}\, \phi \tag{2}$$
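As a minimal illustration of Eqs. (1)–(2), the sketch below evaluates the local delay and energy for a single task, using the nominal parameter values later listed in Table 4 ($b = 10^{7}$ bits, $\phi$ = 1,000 cycles/bit, $f^{L} = 6\times 10^{7}$ cycles/s, $\chi^{L} = 4\times 10^{-9}$ J/cycle); the function names are ours and not part of the original model.

```python
def local_delay(b_bits: float, phi: float, f_local: float) -> float:
    # Eq. (1): t^{L_ct} = b * phi / f^L
    return b_bits * phi / f_local

def local_energy(b_bits: float, phi: float, chi_local: float) -> float:
    # Eq. (2): e^{L_ce} = chi^L * b * phi
    return chi_local * b_bits * phi

# Example with the nominal values from Table 4
b, phi = 10e6, 1_000.0
print(local_delay(b, phi, f_local=6e7))      # about 166.7 s of local computation
print(local_energy(b, phi, chi_local=4e-9))  # about 40 J of local energy
```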

Full offloading

In the full offloading mode, as illustrated in Fig. 2, the diagnosis task has to be performed completely on the edge server, where the offloading proportion $\mu_{td_i} = 1$. Therefore, the transmission delay $t_{td_i}^{S\_tt}$ [s] required by $td_i$ to process $b_{td_i}$ on the edge server via the uplink channel can be given by

Figure 2: The framework of the multi-user multi-objective computation offloading method for medical image diagnosis task.

$$t_{td_i}^{S\_tt} = \frac{b_{td_i}}{tr_{td_i}^{S}} \tag{3}$$

The transmission energy consumption etdiS_te [J] required by tdi to process btdi on the edge server can be given by

$$e_{td_i}^{S\_te} = tp_{td_i}^{S}\, t_{td_i}^{S\_tt} = \frac{tp_{td_i}^{S}\, b_{td_i}}{tr_{td_i}^{S}} \tag{4}$$

Next, the edge server will use some of the computing resources to perform btdi. Therefore, the computation delay ttdiS_ct [s] required by tdi to process btdi on the edge server can be given by

$$t_{td_i}^{S\_ct} = \frac{b_{td_i}\,\phi}{f^{S}} \tag{5}$$

Meanwhile, computation energy consumption is also generated. Therefore, the computation energy consumption etdiS_ce [J] required by tdi to process btdi on the edge server can be given by

$$e_{td_i}^{S\_ce} = \chi^{S}\, b_{td_i}\, \phi \tag{6}$$

After the diagnosis task is completed, the results are sent back to the terminal devices via the downlink channel. However, like many studies (Xian, Lu & Li, 2007; Wang et al., 2017; Cui et al., 2017; Rudenko et al., 1998), we ignore the downlink transmission delay because the results are much smaller than the original image data.

In summary, the total delay ttdiS [s] required by tdi to process btdi on the edge server can be given by

$$t_{td_i}^{S} = t_{td_i}^{S\_ct} + t_{td_i}^{S\_tt} \tag{7}$$

The total energy consumption etdiS [J] required by tdi to process btdi on the edge server can be given by

$$e_{td_i}^{S} = e_{td_i}^{S\_ce} + e_{td_i}^{S\_te} \tag{8}$$

We assume that the user has to pay a fee to the edge server based on the computation delay pricing $p_{td_i}^{S\_ct}$ [$/s] (see “Prospect Theoretic Utility”) and the computation delay $t_{td_i}^{S\_ct}$. Therefore, the payment $c_{td_i}^{S\_ct}$ [$] required by $td_i$ to process $b_{td_i}$ on the edge server can be given by

$$c_{td_i}^{S\_ct} = p_{td_i}^{S\_ct}\, t_{td_i}^{S\_ct} = \frac{p_{td_i}^{S\_ct}\, b_{td_i}\, \phi}{f^{S}} \tag{9}$$

To simplify the model, we assume that the configuration and transmission settings are the same for each terminal device in this article (i.e., for all $i \in \{1, 2, \ldots, n\}$: $f_{td_i}^{L} = f_{td}^{L}$, $\chi_{td_i}^{L} = \chi_{td}^{L}$, $tp_{td_i}^{S} = tp_{td}^{S}$, and $tr_{td_i}^{S} = tr_{td}^{S}$).
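A corresponding sketch for the full offloading case, Eqs. (3)–(9): it returns the total delay, total energy and payment, computing the transmission and computation components along the way. All parameters are passed explicitly and the function name is illustrative.

```python
def full_offload_costs(b_bits: float, phi: float, tr_up: float, tp_up: float,
                       f_server: float, chi_server: float, price_ct: float):
    """Delay, energy and payment when the whole task runs on the edge server."""
    t_tt = b_bits / tr_up              # Eq. (3): uplink transmission delay
    e_te = tp_up * t_tt                # Eq. (4): transmission energy
    t_ct = b_bits * phi / f_server     # Eq. (5): server computation delay
    e_ce = chi_server * b_bits * phi   # Eq. (6): server computation energy
    t_total = t_ct + t_tt              # Eq. (7): total delay (downlink ignored)
    e_total = e_ce + e_te              # Eq. (8): total energy
    payment = price_ct * t_ct          # Eq. (9): payment for server CPU time
    return t_total, e_total, payment
```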

Partial offloading

In the partial offloading mode, as illustrated in Fig. 3, the diagnosis task is divided into two subtasks, where the offloading proportion $\mu_{td_i} \in (0, 1)$. Subtask $\mu_{td_i} b_{td_i}$ is performed on the edge server while subtask $(1-\mu_{td_i}) b_{td_i}$ is executed on $td_i$. Therefore, the total delay $t_{td_i}^{PO}$, total energy consumption $e_{td_i}^{PO}$ and payment $c_{td_i}^{PO\_ct}$ required by $td_i$ to process $b_{td_i}$ in partial offloading can be given by

Figure 3: The user’s prospect theoretic utility.

$$t_{td_i}^{PO} = \max\{(1-\mu_{td_i})\, t_{td_i}^{L\_ct},\; \mu_{td_i}\, t_{td_i}^{S}\}, \qquad e_{td_i}^{PO} = (1-\mu_{td_i})\, e_{td_i}^{L\_ce} + \mu_{td_i}\, e_{td_i}^{S}, \qquad c_{td_i}^{PO\_ct} = \mu_{td_i}\, c_{td_i}^{S\_ct} \tag{10}$$

It can be clearly seen that, when the computation delay pricing $p_{td_i}^{S\_ct}$ is constant, the larger the offloaded subtask $\mu_{td_i} b_{td_i}$, the more the user will pay.
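Building on the two previous sketches, the partial offloading costs of Eq. (10) combine the local and edge components according to the offloading proportion $\mu$; the helper name is again ours.

```python
def partial_offload_costs(mu: float, t_local: float, e_local: float,
                          t_server: float, e_server: float, payment_server: float):
    """Eq. (10): delay, energy and payment when a fraction mu of the task is offloaded."""
    t_po = max((1 - mu) * t_local, mu * t_server)   # local and edge parts run in parallel
    e_po = (1 - mu) * e_local + mu * e_server       # energy adds up across both sides
    c_po = mu * payment_server                      # only the offloaded share is billed
    return t_po, e_po, c_po
```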

Multi-user multi-objective computation offloading for medical image diagnosis

In this section, we propose a multi-user multi-objective computation offloading for medical image diagnosis. First, the proposed framework of this system model is presented in detail. We then construct the user’s prospect theoretic utility function, formulate the offloading problem as a non-cooperative game among users, and design an algorithm to solve the problem.

Proposed framework overview

In the above computation offloading system model, users are not always risk-neutral when deciding where to process diagnosis tasks. The reason is that different offloading modes yield different profits for the users. Faced with possible future gains and losses, users can hardly maintain a completely neutral attitude, but will exhibit different risk attitudes depending on the situation.

Therefore, considering the delay, energy consumption, payment and the user’s risk-aware behavioral characteristics involved in completing the diagnosis tasks, we propose a multi-user multi-objective computation offloading method for medical image diagnosis. The proposed framework consists of two parts, as shown in Fig. 2. (1) Problem formulation: a computation offloading problem involving delay, energy consumption, payment and risk-aware behavioral characteristics, whose optimization goal is to maximize the user’s utility based on Prospect Theory. This problem is regarded as a distributed optimization (DO) problem and then formulated as a non-cooperative game (NCG) among users. (2) Problem solution: the complete proof of the existence of the NE is provided via the exact potential game (EPG). Then, we propose a low-complexity computation offloading algorithm based on best response dynamics (BRD-CO) to determine the optimal amount of offloaded data $\mu_{td_i} b_{td_i}$ in a distributed manner for each user.

Problem formulation

The computation offloading problem involving risk awareness and multiple objectives is regarded as a distributed optimization problem whose optimization goal is to maximize the user’s prospect theoretic utility. It is then formulated as a non-cooperative game among users and solved via the exact potential game.

Prospect theoretic utility

When users make an offloading decision, we analyze their risk-aware behavior using prospect theory (PT). Prospect theory was first proposed in 1979 as a revision of expected utility theory that accounts for the “Allais Paradox” (Kahneman & Tversky, 1988). The theory combines psychology and behavioral science and explicitly states that humans exhibit “loss aversion” when making decisions. When faced with gains, users exhibit a risk-averse attitude; when faced with losses, users exhibit a risk-seeking attitude and are more sensitive to losses than to gains (Wu & Gonzalez, 1996).

Specifically, prospect theory simplifies the outcomes by establishing appropriate reference points and performing a preliminary analysis of the various outcomes during the editing phase. Then, the decision with the highest PTU is selected in the evaluation phase by evaluating the results of the previous phase through a value function (VF) and a weight function (WF).

Each user offloads part or all of the diagnosis task to the edge server, as shown in Fig. 3. We then calculate the value of the edge server in different states based on the VF. When all users have offloaded, we calculate the failure probability of the edge server. Then, the probability is modified to a weight according to the WF. Finally, we obtain the user’s PTU by multiplying the corresponding value and weight. The specific definitions of VF and WF are as follows.

Value function

The value function mainly reflects the subjective value of users, following the principle of PT (Vamvakas, Tsiropoulou & Papavassiliou, 2019b), which can be given by

$$v(PU_{td_i}) = \begin{cases} (PU_{td_i} - PU_0)^{\alpha_{td_i}}, & \text{if } PU_{td_i} - PU_0 \ge 0 \\ -k_{td_i}\,(PU_0 - PU_{td_i})^{\beta_{td_i}}, & \text{if } PU_{td_i} - PU_0 < 0 \end{cases} \tag{11}$$

Inspired by Tram, Tham & Niyato (2014), Zhou, Tham & Motani (2017) and Li et al. (2019a), $PU_0 = \xi \log(1+b_{td_i})\,\lambda_3$ denotes the reference point, expressing the user’s anticipated profit from fully processing diagnosis task $b_{td_i}$ at $td_i$. $PU_{td_i}$ represents the user’s actual profit after offloading part or all of the diagnosis task $b_{td_i}$ to the edge server, and is given by (12) below. $\alpha_{td_i}$ and $\beta_{td_i}$ represent the risk attitude coefficients, with $0 \le \alpha_{td_i}, \beta_{td_i} \le 1$. As $\alpha_{td_i}$ and $\beta_{td_i}$ increase, the risk taken by the user becomes greater; when $\alpha_{td_i} = \beta_{td_i} = 1$, the user is risk-neutral. $k_{td_i}$ is the loss aversion coefficient, and $k_{td_i} > 1$ indicates that users are more strongly affected by losses than by gains. Moreover, the user can adjust $\alpha_{td_i}$, $\beta_{td_i}$ and $k_{td_i}$ in different environments. For simplicity, we assume $\alpha_{td_i} = \beta_{td_i}$ in this article.
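The following is a minimal sketch of the value function in Eq. (11) under the assumption $\alpha_{td_i} = \beta_{td_i}$ made above; the default values follow Table 4 ($\alpha$ = 0.2, $k$ = 1.2) and the function name is ours.

```python
def pt_value(pu_actual: float, pu_ref: float, alpha: float = 0.2, k: float = 1.2) -> float:
    """Prospect-theoretic value of an outcome relative to the reference point, Eq. (11).

    Gains (pu_actual >= pu_ref) are raised to alpha (diminishing sensitivity);
    losses use the same exponent (alpha = beta assumed), are scaled by the
    loss-aversion coefficient k, and are negated.
    """
    diff = pu_actual - pu_ref
    if diff >= 0:
        return diff ** alpha
    return -k * (-diff) ** alpha
```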

$\lambda_1$, $\lambda_2$ and $\lambda_3$ denote the multi-objective weight coefficients, i.e., the delay weight, energy consumption weight and payment weight. We map these different measures into the same dimension, where $0 \le \lambda_1, \lambda_2, \lambda_3 \le 1$. For delay-sensitive tasks, $\lambda_1$ is larger than $\lambda_2$ and $\lambda_3$. For energy-sensitive and payment-sensitive tasks, $\lambda_2$ and $\lambda_3$ are relatively large, respectively.

$$PU_{td_i}(\mu_{td_i} b_{td_i}) = \begin{cases} \xi \log(1+b_{td_i})\,\lambda_3, & \text{if } \mu_{td_i} = 0 \\ (t_{td_i}^{L\_ct} - t_{td_i}^{PO})\lambda_1 + (e_{td_i}^{L\_ce} - e_{td_i}^{PO})\lambda_2 + \big(\xi\log(1+b_{td_i}) - c_{td_i}^{PO\_ct}\big)\lambda_3, & \text{if } \mu_{td_i} \neq 0 \text{ and the edge server survives} \\ (e_{td_i}^{L\_ce} - e_{td_i}^{PO})\lambda_2 + \big((1-\mu_{td_i})\,\xi\log(1+b_{td_i}) - c_{td_i}^{PO\_ct}\big)\lambda_3, & \text{if } \mu_{td_i} \neq 0 \text{ and the edge server fails} \end{cases} \tag{12}$$

Given the weak computing capacity of a terminal device, it cannot meet the computing needs of massive medical images. As a common resource, the edge server can provide services for all users. Every user can enjoy edge server services, but the computing resources of the edge server are limited, and there will be serious negative effects when resource use exceeds this limit. We mainly divide this into two situations.

Situation 1: the edge server survives. There may be some signal interference or channel congestion, resulting in reduced transmission efficiency, but the edge server remains capable of processing the diagnosis tasks of the terminal devices.

Situation 2: the edge server fails. Under excessive competition for computing resources on the edge server, terminal devices will no longer be able to enjoy services once the edge server shuts down.

The first branch of (12) denotes the actual profit of the user when performing tasks entirely on the terminal device. The second branch of (12) corresponds to situation 1, where the user’s actual profit depends primarily on the delay, energy consumption and payment after executing all the images. Quite the opposite, the third branch of (12) corresponds to situation 2, where the user’s actual profit is determined by the energy consumption and payment, because the edge server is shut down and the user can no longer obtain a delay gain from processing the images.

Therefore, the living state of the edge server directly affects the user’s actual profits. In situation 1, the value function of the user should be determined by the first branch of (11) and the second branch of (12), which is defined as

$$v_{surv}(PU_{td_i}(\mu_{td_i} b_{td_i})) = (PU_{td_i} - PU_0)^{\alpha_{td_i}} = \Big((t_{td_i}^{L\_ct} - t_{td_i}^{PO})\lambda_1 + \mu_{td_i}\big((e_{td_i}^{L\_ce} - e_{td_i}^{S})\lambda_2 - p_{td_i}^{S\_ct}\, t_{td_i}^{S\_ct}\,\lambda_3\big)\Big)^{\alpha_{td_i}} \tag{13}$$

In situation 2, the value function of the user is determined by the second branch of (11) and the third branch of (12), which is defined as

$$v_{fail}(PU_{td_i}(\mu_{td_i} b_{td_i})) = -k_{td_i}\,(PU_0 - PU_{td_i})^{\alpha_{td_i}} = -k_{td_i}\,\mu_{td_i}^{\alpha_{td_i}}\Big(\big(\xi\log(1+b_{td_i}) + p_{td_i}^{S\_ct}\, t_{td_i}^{S\_ct}\big)\lambda_3 - (e_{td_i}^{L\_ce} - e_{td_i}^{S})\lambda_2\Big)^{\alpha_{td_i}} \tag{14}$$

According to the mathematical characteristics of the value function, (13) must be positive and (14) must be negative. Thus, we can determine the bounds of the computation delay pricing $p_{td_i}^{S\_ct}$ imposed by the edge server on the user, which can be given by

$$\begin{cases} p_{td_i}^{S\_ct} > 0 \\ (t_{td_i}^{L\_ct} - t_{td_i}^{PO})\lambda_1 + \mu_{td_i}\big((e_{td_i}^{L\_ce} - e_{td_i}^{S})\lambda_2 - p_{td_i}^{S\_ct}\, t_{td_i}^{S\_ct}\,\lambda_3\big) > 0 \\ \big(\xi\log(1+b_{td_i}) + p_{td_i}^{S\_ct}\, t_{td_i}^{S\_ct}\big)\lambda_3 - (e_{td_i}^{L\_ce} - e_{td_i}^{S})\lambda_2 > 0 \end{cases} \;\Longrightarrow\; 0 < p_{td_i}^{S\_ct} < \frac{(t_{td_i}^{L\_ct} - t_{td_i}^{PO})\lambda_1 + (e_{td_i}^{L\_ce} - e_{td_i}^{S})\lambda_2}{\lambda_3\, t_{td_i}^{S\_ct}} \tag{15}$$

For simplicity, let $\varphi = \frac{(t_{td_i}^{L\_ct} - t_{td_i}^{PO})\lambda_1 + (e_{td_i}^{L\_ce} - e_{td_i}^{S})\lambda_2}{\lambda_3\, t_{td_i}^{S\_ct}}$, so that $p_{td_i}^{S\_ct} = \omega\varphi$, where the payment factor $\omega \in (0, 1)$.
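For illustration, the pricing rule can be evaluated directly from the bound above. The sketch below assumes the delay and energy quantities have already been computed with the earlier helpers; the weight defaults are the simulation values from Table 4, and the function name is ours.

```python
def delay_pricing(omega: float, t_local_ct: float, t_po: float, e_local_ce: float,
                  e_server: float, t_server_ct: float,
                  lam1: float = 1.0, lam2: float = 0.001, lam3: float = 0.1) -> float:
    """Eq. (15) bound: p^{S_ct} = omega * varphi, with omega in (0, 1)."""
    varphi = ((t_local_ct - t_po) * lam1 + (e_local_ce - e_server) * lam2) / (lam3 * t_server_ct)
    return omega * varphi
```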

Weight function

The weight function reflects the degree of perception of probability. Users have different risk behaviors towards different failure probabilities of the edge server during diagnosis task offloading (i.e., gains and losses). The failure probability of the edge server is directly related to the size of the processed data: the larger the offloaded amount, the higher the computing demand on the edge server, which leads to a greater failure probability. Inspired by Mitsis, Tsiropoulou & Papavassiliou (2020), the failure probability of the edge server Pr can be given by

$$Pr(\mu_{td_i} b_{td_i}) = \left(-1 + \frac{2}{1 + e^{-\varsigma \sum_{i=1}^{n} \phi\,\mu_{td_i} b_{td_i}}}\right)^{2} \tag{16}$$
where $\mu_{td_i} b_{td_i}$ represents the image data offloaded by the user to the edge server. $\varsigma > 0$ is a positive constant calibrating the sigmoidal curve based on the computing capabilities of the edge server (Mitsis, Tsiropoulou & Papavassiliou, 2020). The failure probability of the edge server, $0 \le Pr \le 1$, is a continuous, strictly increasing, convex, and twice differentiable function (Mitsis, Tsiropoulou & Papavassiliou, 2020).

Following the principle of PT, we convert the probability function P into the weight function π(P), which is defined as

$$\pi(P) = \begin{cases} \dfrac{P^{\gamma}}{\big(P^{\gamma} + (1-P)^{\gamma}\big)^{1/\gamma}}, & \text{if } PU_{td_i} - PU_0 \ge 0 \\[2ex] \dfrac{P^{\delta}}{\big(P^{\delta} + (1-P)^{\delta}\big)^{1/\delta}}, & \text{if } PU_{td_i} - PU_0 < 0 \end{cases} \tag{17}$$

The parameters $\gamma, \delta < 1$ denote the risk attitude of the user towards gains and losses when making an offloading decision. $\pi(P)$ is an increasing function of $P$. When $P$ is a small probability approaching 0, users show a risk-seeking attitude (i.e., $\pi(P) > P$). For events with medium and high probability, users show a risk-averse attitude (i.e., $P > \pi(P)$). In other words, low-probability events tend to be overweighted, and the converse holds (Monderer & Shapley, 1996).
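A sketch of Eqs. (16)–(17) as reconstructed above: the sigmoid-based failure probability of the edge server and the probability weighting function. The exact sigmoid form is our reading of the original, and the parameter names ($\varsigma$, the exponent) mirror the symbols in the text.

```python
import math

def failure_probability(offloaded_bits, phi: float, varsigma: float) -> float:
    """Eq. (16): failure probability of the edge server given the offloaded load of all users."""
    load = varsigma * phi * sum(offloaded_bits)
    # 0 when nothing is offloaded, approaches 1 as the total load grows
    return (-1.0 + 2.0 / (1.0 + math.exp(-load))) ** 2

def probability_weight(p: float, exponent: float) -> float:
    """Eq. (17): probability weighting; use gamma for gains and delta for losses."""
    return p ** exponent / (p ** exponent + (1.0 - p) ** exponent) ** (1.0 / exponent)
```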

Combining (11)–(17) and following the principle of PT, the user’s prospect theoretic utility function comprises the value function and the weight function, and is defined as follows

$$E(PU_{td_i}(\mu_{td_i} b_{td_i})) = v_{surv}(PU_{td_i}(\mu_{td_i} b_{td_i}))\,\pi_{surv}(1 - Pr) + v_{fail}(PU_{td_i}(\mu_{td_i} b_{td_i}))\,\pi_{fail}(Pr) \tag{18}$$
where $\pi_{surv}(1 - Pr)$ denotes the weight when the edge server survives (i.e., gains) and $\pi_{fail}(Pr)$ denotes the weight when the edge server fails (i.e., losses). They are defined as follows

$$\pi_{surv}(1 - Pr) = \frac{(1-Pr)^{\gamma}}{\big((1-Pr)^{\gamma} + Pr^{\gamma}\big)^{1/\gamma}}, \quad \text{if } PU_{td_i} - PU_0 \ge 0 \tag{19}$$

$$\pi_{fail}(Pr) = \frac{Pr^{\delta}}{\big(Pr^{\delta} + (1-Pr)^{\delta}\big)^{1/\delta}}, \quad \text{if } PU_{td_i} - PU_0 < 0 \tag{20}$$
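Putting the pieces together, Eq. (18) weights the survival and failure values by the transformed probabilities of Eqs. (19)–(20). The sketch below takes the two values and the failure probability as inputs; the default exponents are the Table 4 values ($\gamma$ = 0.61, $\delta$ = 0.69).

```python
def prospect_utility(v_surv: float, v_fail: float, pr_fail: float,
                     gamma: float = 0.61, delta: float = 0.69) -> float:
    """Eq. (18): prospect theoretic utility = value x transformed probability."""
    def weight(p: float, exponent: float) -> float:           # Eq. (17) weighting
        return p ** exponent / (p ** exponent + (1.0 - p) ** exponent) ** (1.0 / exponent)
    w_surv = weight(1.0 - pr_fail, gamma)                      # Eq. (19): weight of survival (gain)
    w_fail = weight(pr_fail, delta)                            # Eq. (20): weight of failure (loss)
    return v_surv * w_surv + v_fail * w_fail
```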

Problem model

To maximize the prospect theoretic utility of each user, the computation offloading problem for medical image diagnosis task involving risk awareness and multi-objective (i.e., delay, energy consumption and payment) is, therefore, represented as a distributed optimization problem as follows

$$\max_{\mu_{td_i}} E\big(PU_{td_i}(\mu_{td_i} b_{td_i}, \mu_{td_{-i}} b_{td_{-i}})\big) \quad \text{s.t. } 0 \le \mu_{td_i} \le 1 \tag{21}$$
where $\mu_{td_{-i}} b_{td_{-i}}$ is the amount of image data offloaded by the rest of the terminal devices except for terminal device $td_i$. The above problem is defined as a non-cooperative game among users as follows

$$G_{dop} = \big[TD,\; OS_{td_i},\; E\big(PU_{td_i}(\mu_{td_i} b_{td_i}, \mu_{td_{-i}} b_{td_{-i}})\big)\big] \tag{22}$$
where $TD$ is the finite set of user terminal devices, $OS_{td_i}$ is the offloading strategy space of $td_i$, and $E(PU_{td_i}(\mu_{td_i} b_{td_i}, \mu_{td_{-i}} b_{td_{-i}}))$ reflects the prospect theoretic utility of user $i$. The solution of $G_{dop}$ is the user’s optimal computation offloading strategy $\mu_{td_i}^{*} b_{td_i}$, meaning that the PTU is greatest when the amount of offloaded data is $\mu_{td_i}^{*} b_{td_i}$.

Definition 2. Nash equilibrium.

An image data offloading vector $\mu^{*} b = \{\mu_{td_1}^{*} b_{td_1}, \mu_{td_2}^{*} b_{td_2}, \ldots, \mu_{td_n}^{*} b_{td_n}\}$ in the strategy space $\mu_{td_i} b_{td_i} \in OS_{td_i} = [0, b_{td_i}]$ is a Nash equilibrium point if for every user $i$ the following condition holds true

$$E\big(PU_{td_i}(\mu_{td_i}^{*} b_{td_i}, \mu_{td_{-i}}^{*} b_{td_{-i}})\big) \ge E\big(PU_{td_i}(\mu_{td_i} b_{td_i}, \mu_{td_{-i}}^{*} b_{td_{-i}})\big) \quad \text{for all } \mu_{td_i} b_{td_i} \in OS_{td_i} \tag{23}$$

The meaning of the Nash equilibrium point is that no player (a user in our problem) can further increase its payoff (the user’s prospect theoretic utility in our problem) by unilaterally changing its strategy (the computation offloading strategy in our problem) while the other players’ strategies remain unchanged.

Problem solution

In this section, we first prove the existence of NE points for the NCG by EPG. Then, a computation offloading algorithm based on best response dynamics is proposed to solve the problem. Finally, we discuss the time complexity of the proposed algorithm.

The existence of NE point

To prove that $G_{dop}$ has at least one NE point, which serves as a solution that maximizes the distributed optimization problem, the exact potential game is adopted. The main reason for this design is that not every NCG has an NE point and reaches algorithmic convergence, whereas an exact potential game with a finite strategy set converges to at least one NE point regardless of the starting point.

Definition 3. Exact potential game. The game $G_{dop} = \big[TD, OS_{td_i}, E(PU_{td_i}(\mu_{td_i} b_{td_i}, \mu_{td_{-i}} b_{td_{-i}}))\big]$ is an EPG if there is an exact potential function $\Phi$ that for all $td_i \in TD$ satisfies the following condition

$$\Phi\big(\mu_{td_i}' b_{td_i}, \mu_{td_{-i}} b_{td_{-i}}\big) - \Phi\big(\mu_{td_i} b_{td_i}, \mu_{td_{-i}} b_{td_{-i}}\big) = E\big(PU_{td_i}(\mu_{td_i}' b_{td_i}, \mu_{td_{-i}} b_{td_{-i}})\big) - E\big(PU_{td_i}(\mu_{td_i} b_{td_i}, \mu_{td_{-i}} b_{td_{-i}})\big) \quad \text{for all } \mu_{td_i} b_{td_i},\, \mu_{td_i}' b_{td_i} \in OS_{td_i},\; \mu_{td_{-i}} b_{td_{-i}} \in OS_{td_{-i}} \tag{24}$$

Theorem 1. The game $G_{dop} = \big[TD, OS_{td_i}, E(PU_{td_i}(\mu_{td_i} b_{td_i}, \mu_{td_{-i}} b_{td_{-i}}))\big]$ is an exact potential game and has at least one Nash equilibrium point $\mu^{*} b = \{\mu_{td_1}^{*} b_{td_1}, \mu_{td_2}^{*} b_{td_2}, \ldots, \mu_{td_n}^{*} b_{td_n}\}$. (Due to space limitations, the proof of Theorem 1 is given in the Supplemental Information.)

Computation offloading algorithm based on best response dynamics

As proven above, $G_{dop}$ belongs to the class of EPGs and has at least one NE point. In an exact potential game, an NE point can always be reached after a finite number of iterations, which is known as the finite improvement property (Yang et al., 2020). Therefore, best response dynamics is adopted to determine each user’s optimal computation offloading strategy $\mu_{td_i}^{*} b_{td_i}$ (i.e., to converge to an NE point) in a distributed manner through a finite number of iterations, given that the computation offloading strategies of the other users are fixed (Topkis, 1998; Milgrom & Roberts, 1990), as follows

$$BR\big(\mu_{td_i} b_{td_i}, \mu_{td_{-i}} b_{td_{-i}}\big) = \mu_{td_i}^{*} b_{td_i} = \arg\max_{\mu_{td_i} b_{td_i} \in OS_{td_i}} E\big(PU_{td_i}(\mu_{td_i} b_{td_i}, \mu_{td_{-i}} b_{td_{-i}})\big) \tag{25}$$

Based on the above discussion, we propose a low-complexity computation offloading algorithm based on best response dynamics (BRD-CO) to determine each user’s computation offloading strategy; its pseudo-code is given in Table 3. The BRD-CO algorithm comprises three parts. The first part, lines 1–4, is the initial definition of parameters, including the number of iterations, user i’s computation offloading strategy and the convergence flag of the algorithm. The second part, lines 9–14, calculates user i’s prospect theoretic utility. In each iteration, we first calculate the delay, energy consumption and payment in the three offloading modes via (1)–(10). Second, using (13) and (14), user i’s value in the survival and failure states of the edge server is obtained. Then, we calculate the failure probability of the edge server via (16) and convert the probability into weights via (19) and (20). Finally, using (18), we obtain user i’s prospect theoretic utility. The third part, lines 16–20, determines user i’s optimal offloading strategy. We calculate user i’s offloading strategy via (25) in each iteration; if the strategies of two consecutive iterations are the same, that strategy is taken as the optimal offloading strategy for user i.

Table 3:
BRD-CO algorithm.
Algorithm 1: BRD-CO algorithm
Input: all parameters of the COSH model, $b_{td_i}$, $\phi$, $\mu_{td_i}$, $\xi$, $f_{td_i}^{L}$, $\chi_{td_i}^{L}$, $f^{S}$, $\chi^{S}$, $tp_{td_i}^{S}$, $tr_{td_i}^{S}$, $\lambda_1$, $\lambda_2$, $\lambda_3$, $i \in n$
Output: optimal computation offloading strategy $\mu_{td_i}^{*} b_{td_i}$
1. // Initialization of parameters
2. $ite \leftarrow 0$ // iteration counter
3. $(\mu_{td_i} b_{td_i})^{ite} \leftarrow 0$ // the initial amount of image data offloaded by user i
4. convergence $\leftarrow$ false // whether the algorithm has converged
5. While convergence == false do
6.    $ite \leftarrow ite + 1$
7.   While $i < n$ do
8.     // compute the prospect theoretic utility
9.     compute delay, energy consumption and payment via (1)–(10)
10.     compute $v_{surv}(PU_{td_i}(\mu_{td_i} b_{td_i}))^{ite-1}$ via (13)
11.     compute $v_{fail}(PU_{td_i}(\mu_{td_i} b_{td_i}))^{ite-1}$ via (14)
12.     obtain $Pr(\mu_{td_i} b_{td_i})^{ite-1}$ via (16)
13.     convert (16) into the weights $\pi_{surv}(1-Pr)^{ite-1}$ and $\pi_{fail}(Pr)^{ite-1}$ via (19) and (20)
14.     compute $E(PU_{td_i}(\mu_{td_i} b_{td_i}))^{ite-1}$ via (18)
15.     // determine the optimal strategy
16.     user i determines $(\mu_{td_i} b_{td_i})^{ite}$ based on $(\mu_{td_i} b_{td_i})^{ite-1}$ via (25)
17.   End while
18.   If $(\mu_{td_i} b_{td_i})^{ite} == (\mu_{td_i} b_{td_i})^{ite-1}$ then
19.     convergence true
20.   End if
21. End while
DOI: 10.7717/peerj-cs.1239/table-3
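For concreteness, the following is a compact Python sketch of the BRD-CO loop in Algorithm 1. It assumes a user-supplied helper `utility_fn(i, profile)` that evaluates Eq. (18) for user i under an offloading profile built from the quantities above; the grid search over the offloading proportion and the convergence tolerance are implementation choices of ours, not part of the original pseudo-code.

```python
import numpy as np

def brd_co(n_users, utility_fn, grid=np.linspace(0.0, 1.0, 101), tol=1e-6, max_ite=100):
    """Best response dynamics: each user in turn picks the offloading proportion that
    maximizes its prospect theoretic utility given the other users' current choices
    (Eq. (25)), until no user changes its strategy (a Nash equilibrium of the EPG)."""
    mu = np.zeros(n_users)                      # initial offloading proportions (line 3)
    for ite in range(1, max_ite + 1):           # outer loop (lines 5-6)
        mu_prev = mu.copy()
        for i in range(n_users):                # loop over users (line 7)
            best_val, best_mu = -np.inf, mu[i]
            for m in grid:                      # best response of user i (line 16 / Eq. (25))
                trial = mu.copy()
                trial[i] = m
                val = utility_fn(i, trial)      # Eq. (18) for user i under profile `trial`
                if val > best_val:
                    best_val, best_mu = val, m
            mu[i] = best_mu
        if np.max(np.abs(mu - mu_prev)) < tol:  # convergence check (lines 18-19)
            return mu, ite
    return mu, max_ite
```

Under the exact potential game structure established above, each best-response update does not decrease the potential, so the loop terminates at an NE point after a finite number of iterations.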

Time complexity of BRD-CO algorithm

In this section, the time complexity of the proposed BRD-CO algorithm is discussed. From the above pseudo-code, it can be seen that the solution process of the BRD-CO algorithm is iterative, and its time complexity is mainly determined by three factors: the number of iterations, the number of users, and the complexity of the utility function. Specifically, assume that the number of iterations required by Algorithm 1 is $ite$, the complexity of (25) is $F$, and the number of users is $n$. In each iteration, formula (25) is calculated for all users. In addition, the difference between the current and previous offloading amounts is compared: if the difference is within the error range, the convergence state is set to true; otherwise, it is set to false. Therefore, the time complexity of the BRD-CO algorithm is $O(F \cdot ite \cdot n)$.

Numerical results

In this section, we introduce the simulation setting used to build COSH. We then analyze the influence of the parameters from five aspects, discuss the convergence of the algorithm, and finally compare the proposed BRD-CO algorithm with four benchmarks and four heuristic methods.

Simulation setup

To evaluate the parameter influence and convergence of the algorithm, we use PyCharm as the development tool for Python IDE. The performed simulations were executed on an Intel® Xeon (R) Silver 4114 CPU @ 2.20 GHz × 40 with 128 GB RAM.

The contents of the simulation are as follows. Suppose that 25 users simultaneously offload part of their diagnosis tasks to the edge server. The data used in the experiment came from the dynamic panoramic PET data set of Henan Provincial People’s Hospital. Each user’s prospect theoretic utility is calculated via (18), and the BRD-CO algorithm is used to explore the optimal offloading strategy for each user so as to maximize their PTU. Inspired by Apostolopoulos, Tsiropoulou & Papavassiliou (2020), the main parameters are given in Table 4.

Table 4:
The value for simulation parameters.
| Parameter | Value |
|---|---|
| $b_{td_i}$ | $10\times 10^{6} \pm 10^{6}$ bits |
| $\chi_{td_i}^{L}$ | $4\times 10^{-9} \pm 10^{-9}$ J/CPU-cycle |
| $\phi$ | 1,000 CPU-cycles/bit |
| $\omega$ | 0.5 |
| $k_{td_i}$ | 1.2 |
| $\alpha_{td_i}$ | 0.2 |
| $\xi$ | 5 $/bit |
| $\chi^{S}$ | $4\times 10^{-8}$ J/CPU-cycle |
| $f^{S}$ | $6\times 10^{10}$ CPU-cycles/s |
| $f_{td_i}^{L}$ | $6\times 10^{7} \pm 10^{7}$ CPU-cycles/s |
| $tp_{td_i}^{S}$ | $10^{-9}$ dBm |
| $tr_{td_i}^{S}$ | 0.1 bits/s |
| $\lambda_1$ | 1 |
| $\lambda_2$ | 0.001 |
| $\lambda_3$ | 0.1 |
| $\gamma$ | 0.61 |
| $\delta$ | 0.69 |
DOI: 10.7717/peerj-cs.1239/table-4

Parameter influence analysis

In “Impact of Computing Delay Pricing”, the number of users (denoted by ‘N’) is set to 25 and the parameter values are as indicated in Table 4. In the rest of the analysis, however, a wide range of computing delay pricing $p_{td_i}^{S\_ct}$, numbers of users, multi-objective weight coefficients ($\lambda_1$, $\lambda_2$ and $\lambda_3$) and prospect theoretic parameters (risk attitude $\alpha_{td_i}$, gain attitude $\gamma$, loss aversion $k_{td_i}$ and loss attitude $\delta$) are considered.

Impact of computing delay pricing

In this section, we discuss the impact of the computing delay pricing $p_{td_i}^{S\_ct} = \omega\varphi$ imposed by the edge server on the user’s computation offloading strategy, where the payment factor $\omega$ ranges from 0.001 to 0.9. The simulation results are shown in Fig. 4. It can be seen that as $\omega$ increases, the average prospect theoretic utility of users gradually increases to its maximum and then slowly decreases (Fig. 4A). Specifically, when $\omega$ is small (i.e., $\omega$ = 0.001), the edge server charges a lower $p_{td_i}^{S\_ct}$ and users offload a large amount of image data (Fig. 4B), which results in lower delay (Fig. 4C), lower payment (Fig. 4E), higher energy consumption (Fig. 4D) and lower prospect theoretic utility (Fig. 4A). However, the edge server then bears huge computing pressure, which causes a higher Pr (Fig. 4F). When $\omega$ further increases, the edge server charges a higher $p_{td_i}^{S\_ct}$; users become unwilling to use the computing resources of the edge server, the delay and payment gradually increase, the energy consumption and Pr decrease, and the prospect theoretic utility becomes lower again. Therefore, we need to balance the computing delay pricing to maintain a high-quality user experience. In addition, Figs. 4G–4I indicate the joint distribution between average utility and average delay, average energy consumption and average cost, respectively. It is clear from the regression lines that the average utility shows a decreasing, increasing and decreasing trend with the increase of the three, respectively.

Figure 4: The relationship between computing delay pricing and user’s average prospect theoretic utility, average offloaded image data amount, average delay, average energy consumption, average payment and failure probability of the edge server.

Impact of the number of users

In this section, we discuss the impact of the number of users on the user’s computation offloading strategy, where the number of users ranges from 1 to 100. The simulation results are shown in Fig. 5. When N is small (i.e., N = 1, 2, 5), Pr is very low (Fig. 5F) because the computing resources of the edge server far exceed the needs of users. Users are inclined to offload a large amount of image data to the edge server (Fig. 5B), resulting in lower delay (Fig. 5C), higher payment (Fig. 5E), higher energy consumption (Fig. 5D), and higher prospect theoretic utility (Fig. 5A). As N further increases, the edge server comes under more and more computing pressure. Users tend to offload a small amount of image data to the edge server while processing the remaining images on the local device, which results in higher delay, lower payment, lower energy consumption, and lower prospect theoretic utility. Similarly, Figs. 5G–5I indicate the joint distribution between average utility and average delay, average energy consumption and average cost, respectively. It is clear from the regression lines that the average utility shows a decreasing, increasing and increasing trend with the increase of the three, respectively.

Figure 5: The relationship between the number of users and user’s average prospect theoretic utility, average offloaded image data amount, average delay, average energy consumption, average payment and failure probability of the edge server.

The relationship between computing delay pricing and number of users

In this section, we discuss the relationship between computation delay pricing ptdiS_ct=ωφ and the number of users N. The simulation results are shown in Fig. 6.

Figure 6: The relationship between computation delay pricing and number of users.

In particular, in Figs. 6A–6C, the x-axis shows the payment factor $\omega$, and the y-axis shows the average PTU under N = 25, 55 and 95, respectively. It can be noted that the average PTU of users gradually increases to its maximum with the increase of $\omega$ and then slowly decreases. Moreover, the payment factor $\omega$ at which the maximum PTU is reached differs for different numbers of users. To explore the relationship between the two, we performed the following experiments. Figure 6D shows the computation delay pricing $p_{td_i}^{S\_ct} = \omega\varphi$ corresponding to the maximum utility for different numbers of users. When the number of users is small (i.e., 1, 2), the computing pressure on the edge server is very small, so it imposes a smaller payment factor $\omega$, i.e., a lower computation delay pricing $p_{td_i}^{S\_ct}$. As the number of users further increases, the edge server comes under more and more computing pressure. To reduce the possibility of failure, the edge server controls the number of users by increasing $p_{td_i}^{S\_ct}$. Beyond N = 85, neither the convergence speed nor the convergence result changes.

Impact of the multi-objective weight coefficients

In this section, we discuss the impact of the multi-objective weight coefficients on the user’s computation offloaded strategy, where multi-objective weight coefficients λ1>λ2>λ3. The simulation results are shown in Fig. 7.

Figure 7: The relationship between multi-objective weights and user’s average offloaded image data amount and failure probability of the edge server.

In the subfigure of Figs. 7A and 7B, the x-axis shows the delay weight λ1, and the y-axis shows the user’s average offloaded image data amount and failure probability of the edge server in each λ1, where λ2=0.1, λ3=0.01. As the λ1 increases from 0.5 to 1, users are inclined to offload an enormous amount of image data (Fig. 7A). The reason is that the scenario set in this article is a delay-sensitive task, and users choose a larger λ1 rather than smaller λ1. This will bring tremendous pressure to the edge server and increase the failure probability (Fig. 7B).

In the subfigures of Figs. 7C and 7D, the x-axis shows the energy consumption weight $\lambda_2$, and the y-axis shows the user’s average offloaded image data amount and the failure probability of the edge server for different $\lambda_2$, where $\lambda_1 = 1$, $\lambda_3 = 0.01$. As $\lambda_2$ increases from 0.001 to 0.1, fewer images are offloaded (Fig. 7C), which reduces the failure probability (Fig. 7D). The reason is that users are insensitive to energy consumption for delay-sensitive tasks.

In the subfigures of Figs. 7E and 7F, the x-axis shows the payment weight $\lambda_3$, and the y-axis shows the user’s average offloaded image data amount and the failure probability of the edge server for different $\lambda_3$, where $\lambda_1 = 1$, $\lambda_2 = 0.1$. For the same reason, users are also relatively insensitive to payments. As $\lambda_3$ increases from 0.001 to 0.1, fewer images are offloaded (Fig. 7E) and the failure probability of the edge server is also reduced (Fig. 7F).

Impact of the prospect theoretic parameters

In this section, we discuss the impact of the prospect theoretic parameters, including risk attitude αtdi, gain attitude γ, loss aversion ktdi and loss attitude δ on the user’s offloading strategy. The simulation results are shown in Fig. 8.

Figure 8: The relationship between prospect theoretic parameters and user’s average offloaded image data amount and failure probability of the edge server.

In the subfigures of Figs. 8A and 8B, the x-axis shows $\alpha_{td_i}$, and the y-axis shows the user’s average offloaded image data amount and the failure probability of the edge server for each $\alpha_{td_i}$. As the risk attitude $\alpha_{td_i}$ increases from 0 to 1, users are inclined to offload a larger amount of image data (Fig. 8A), because they will choose larger gains rather than smaller gains. As $\alpha_{td_i}$ increases, the failure probability of the edge server also increases (Fig. 8B). For the same reason, as the gain attitude $\gamma$ increases from 0.1 to 1, users offload a larger average amount of image data (Fig. 8C) and the failure probability of the edge server becomes larger (Fig. 8D).

In the subfigures of Figs. 8E and 8F, the x-axis shows $k_{td_i}$, and the y-axis shows the user’s average offloaded image data amount and the failure probability of the edge server for each $k_{td_i}$. As the loss aversion $k_{td_i}$ increases from 0.1 to 2, users offload fewer images to the edge server (Fig. 8E), because $k_{td_i}$ and the user’s loss aversion are positively correlated. This leads to a decrease in the failure probability of the edge server (Fig. 8F). For the same reason, as the loss attitude $\delta$ increases from 0 to 1, the user offloads a lower average amount of image data (Fig. 8G) and the failure probability of the edge server becomes larger (Fig. 8H).

Convergence analysis

We evaluate the convergence of the BRD-CO algorithm. The simulation results are shown in Fig. 9. In the subfigure, the x-axis shows the number of iterations, and the y-axis shows each user’s offloaded image data amount, PTU, delay, energy consumption, payment and failure probability of the edge server.

Figure 9: The relationship between the number of iterations and each user’s offloaded image data amount, prospect theoretic utility, delay, energy consumption, payment and failure probability of the edge serve.

There are two lines above and below the boxes, which represent the maximum and minimum offloaded amounts. The red solid line is the median of the amount of offloaded image data, representing the average level, and the green dotted line represents the average level of offloaded image data after using the BRD-CO algorithm. Figures 9A–9E show the convergence of each user’s offloaded image data amount, prospect theoretic utility, delay, energy consumption and payment after the images are offloaded to the edge server. The results show that the BRD-CO algorithm converges quickly, within a few iterations.

Figure 10 shows the user’s average prospect theoretic utility, average delay, average energy consumption and average payment as the iteration number increases. Specifically, each user offloads a large amount of image data to the edge server instead of the terminal device when ite = 1 (Figs. 9A and 10A). This behavior is highly likely to trigger overuse of computing resources, which makes Pr increase dramatically (Fig. 9F). Therefore, the edge server will curb this occurrence by charging users more, which corresponds to higher energy consumption (Fig. 10C) and higher payment (Fig. 10D). Notably, the algorithm reaches the lowest delay (Fig. 10B) at this point because of the high computing power of the edge server, which performs the massive diagnosis tasks.

Figure 10: The relationship between iteration number and user’s average prospect theoretic utility, average delay, average energy consumption and average payment.

Considering the failure probability of the edge server, as ite increases from 2 to 5, the BRD-CO algorithm converges quickly: the offloaded image data amount (Fig. 9A), the prospect theoretic utility (Fig. 10A), the energy consumption (Fig. 10C) and the payment (Fig. 10D) decrease rapidly, while the delay increases rapidly (Fig. 10B). As ite further increases from 5 to 14, neither the convergence speed nor the convergence result changes.
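For reference, the convergence behavior described above corresponds to the generic best-response-dynamics pattern sketched below. This is a minimal illustration, not the paper's BRD-CO implementation: the best-response computation is abstracted into a user-supplied callable, and all names and parameter values are placeholders.

```python
import numpy as np

def brd_offloading(best_response, n_users, d_max, tol=1e-3, max_iter=50):
    """Generic best-response-dynamics loop (illustrative placeholder for BRD-CO).

    best_response(i, d) -> new offloaded data amount for user i, given the
                           current offloading profile d of all users.
    """
    d = np.full(n_users, float(d_max))        # start from full offloading (ite = 1 in Fig. 9A)
    for ite in range(1, max_iter + 1):
        d_prev = d.copy()
        for i in range(n_users):              # each user updates its strategy in turn
            d[i] = best_response(i, d)
        if np.max(np.abs(d - d_prev)) < tol:  # no user wants to deviate: an NE point
            return d, ite
    return d, max_iter

# Toy example: a user's best response shrinks as the other users offload more.
toy_br = lambda i, d: max(0.0, 5.0 - 0.1 * (d.sum() - d[i]))
profile, iterations = brd_offloading(toy_br, n_users=10, d_max=5.0)
```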

Method comparison

In this section, to evaluate the proposed BRD-CO method, we compare it with the following four benchmarks and four heuristic methods.

(1) Local computing (denoted by L. Comp.): all tasks are executed on the user terminal without offloading.

(2) Full offloading (denoted by F. Offl.): all tasks are executed on the edge server.

(3) Random offloading (denoted by R. Offl.): each task is randomly offloaded to the user terminal or edge server.

(4) Greedy offloading (denoted by G. Offl.): find the offloading location for each task by selecting the currently optimal placement at each step (a minimal sketch of this baseline is given after this list).

(5) Particle swarm optimization-based offloading (denoted by PSO. Offl.) (Yuan et al., 2022): PSO simulates the foraging behavior of a flock of birds, using collaboration and information sharing among individuals in the flock to find the best decision to determine the offloading position of each task.

(6) Differential evolution-based offloading (denoted by DE. Offl.) (Hussain & Beg, 2021): DE simulates biological evolution; through repeated iterations, individuals adapted to the environment are retained, and the offloading position of each task is thereby determined.

(7) Simulated annealing-based offloading (denoted by SA. Offl.) (Li, 2021): the SA algorithm exploits the similarity between the annealing process of solids in statistical physics and general combinatorial optimization problems to find the execution position of each task.

(8) Ant colony optimization-based offloading (denoted by ACO. Offl.) (Lin, Pankaj & Wang, 2018): each ant in the ACO algorithm uses pheromones to search simultaneously and independently at multiple points in the problem space, eventually finding the offloading position for each task.
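As an illustration of how such baselines can be realized, the following sketch implements the greedy placement rule referenced in benchmark (4); the utility callable is a stand-in for the paper's prospect theoretic utility, and the identifiers are hypothetical.

```python
def greedy_offloading(tasks, utility):
    """Illustrative greedy baseline (G. Offl.): place each task where it currently
    yields the higher utility, given the placements made so far.

    tasks   : iterable of task descriptors (e.g., image data amounts)
    utility : utility(task, location, placements) -> float, with location in
              {"local", "edge"}; a placeholder for the paper's PTU.
    """
    placements = []
    for task in tasks:
        u_local = utility(task, "local", placements)
        u_edge = utility(task, "edge", placements)
        placements.append("edge" if u_edge > u_local else "local")
    return placements
```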

Since the optimization goal of this article is to maximize the user's prospect theoretic utility, we choose the average utility as the performance metric. To eliminate the stochasticity introduced by the heuristic algorithms, we conduct 50 runs and use the mean and standard deviation of the PTU value as the final result. Tables 5 and 6 in the Supplemental Information show the results of the various heuristic algorithms for different payment factors ω and different numbers of users, respectively.

In Fig. 11A, as the payment factor ω increases, the proposed BRD-CO algorithm always maintains a higher average utility than the benchmark methods. When ω is small (i.e., ω = 0.001), most methods achieve their maximum average utility, and our method attains the largest value. As ω further increases from 0.001 to 0.9, the average utility of the proposed BRD-CO algorithm decreases by 1.14%, while those of the benchmark methods (except L. Comp.) decrease by at least 16.4%. This implies that as the computing delay price p_{td_i}^{S_ct} = ωφ increases, the average utility of the proposed algorithm decreases less dramatically than that of the other methods. The reason is that when the payment factor ω is larger, users are inclined to offload fewer images, so the average utility decreases slowly.


Figure 11: Performance under different payment factors.

In Fig. 11B, the proposed BRD-CO algorithm always achieves a higher average utility than the benchmark methods, especially when the number of users is small. As the number of users increases, the average utility of each algorithm (except L. Comp.) decreases and gradually converges. The reason is that a larger number of users increases the potential pressure on the edge server; when the number of users is large enough, each user can offload very little data. In this article, a marginal decrease of less than 0.005 in average utility is regarded as convergence. From Fig. 11B, we find that only the BRD-CO algorithm converges when the number of users is 45. G. Offl., R. Offl., F. Offl., PSO. Offl., DE. Offl., SA. Offl., and ACO. Offl. converge at 50, 85, 55, 85, 85, 75, and 100 users, respectively (L. Comp. does not offload and is excluded). When BRD-CO reaches convergence, its average utility is 0.138, which is greater than that of G. Offl. (0.115), L. Comp. (0), R. Offl. (−2.052), F. Offl. (−2.523), PSO. Offl. (−2.500), DE. Offl. (−2.551), SA. Offl. (−2.39) and ACO. Offl. (−2.265). Therefore, compared with the other methods, the proposed BRD-CO algorithm has a faster convergence speed and a higher average utility.

Statistical testing is an effective way to evaluate the performance of algorithms. In this article, the Wilcoxon rank sum test (Derrac et al., 2011) is adopted as a non-parametric statistical test that returns a P-value indicating the significance of the difference between two algorithms. An algorithm is considered statistically different when the P-value is less than 0.05. The P-values obtained from formula (21) under different payment factors and different numbers of users are shown in Table 7 of the Supplemental Information. Comparing BRD-CO with the other eight algorithms, only one of the 16 P-values exceeds 0.05, which reflects the statistical superiority of BRD-CO.
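In practice, such a test can be computed with SciPy's rank-sum implementation, as in the minimal sketch below; the two arrays stand in for the 50 PTU values collected per algorithm and are generated randomly here only for illustration.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
ptu_brd_co = rng.normal(0.14, 0.01, 50)   # placeholder: 50 PTU values from BRD-CO runs
ptu_other  = rng.normal(0.10, 0.02, 50)   # placeholder: 50 PTU values from a competitor

stat, p_value = ranksums(ptu_brd_co, ptu_other)
print(f"p = {p_value:.4f}")               # p < 0.05 => statistically significant difference
```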

Conclusions

In this article, we propose a multi-user multi-objective computation offloading method for medical image diagnosis, which can play a significant role in the medical image cloud. Prior computation offloading strategies ignored the payment required to perform computation tasks and the user's risk awareness. To reflect the real communication and computing environment, we consider a more realistic multi-objective optimization. Specifically, to maximize the prospect theoretic utility of each user by jointly considering delay, energy consumption, payment and the user's risk awareness, we design a low-complexity BRD-CO algorithm. The algorithm quickly converges to the NE point and obtains an optimal computation offloading strategy for each user in a distributed manner. The parameter influence on the BRD-CO algorithm is analyzed from five aspects. The simulation results show that, compared with four benchmarks and four heuristic algorithms, the proposed BRD-CO algorithm guarantees a higher prospect theoretic utility for users and a faster convergence speed. The benefit is especially significant when the diagnosis tasks are delay-sensitive and the resources of terminal devices are limited.

It is worth noting that the medical tasks studied in this article are coarse-grained, whereas some medical tasks can be more fine-grained. Therefore, our future research will focus on task dependencies. For diagnostic tasks based on a radiomics model, the association relationships between sub-modules (i.e., subtasks) within the model have a large impact on the task offloading problem. We therefore intend to use a recurrent neural network to model the dependencies between subtasks, and to learn a high-quality task offloading strategy by continuous 'trial and error' through deep reinforcement learning.

Supplemental Information

Simulation code for computation offloading algorithm based on the Best Response Dynamics.

DOI: 10.7717/peerj-cs.1239/supp-1