
Split Computing and Early Exiting for Deep Learning Applications: Survey and Research Challenges

Published: 03 December 2022


Abstract

Mobile devices such as smartphones and autonomous vehicles increasingly rely on deep neural networks (DNNs) to execute complex inference tasks such as image classification and speech recognition, among others. However, continuously executing the entire DNN on mobile devices can quickly deplete their battery. Although task offloading to cloud/edge servers may decrease the mobile device’s computational burden, erratic patterns in channel quality, network, and edge server load can lead to a significant delay in task execution. Recently, approaches based on split computing (SC) have been proposed, where the DNN is split into a head and a tail model, executed respectively on the mobile device and on the edge server. Ultimately, this may reduce bandwidth usage as well as energy consumption. Another approach, called early exiting (EE), trains models to embed multiple “exits” earlier in the architecture, each providing increasingly higher target accuracy. Therefore, the tradeoff between accuracy and delay can be tuned according to the current conditions or application demands. In this article, we provide a comprehensive survey of the state of the art in SC and EE strategies by presenting a comparison of the most relevant approaches. We conclude the article by providing a set of compelling research challenges.


1 INTRODUCTION

The field of deep learning (DL) has evolved at an impressive pace over the last few years [68], with new breakthroughs continuously appearing in domains such as computer vision (CV), natural language processing (NLP), digital signal processing (DSP), and wireless networking [56, 118], among others—we refer to [110] for a comprehensive survey on DL. For example, today’s state-of-the-art deep neural networks (DNNs) can classify thousands of images with unprecedented accuracy [51], while bleeding-edge advances in deep reinforcement learning (DRL) have been shown to provide near-human capabilities in a multitude of complex optimization tasks, from playing dozens of Atari video games [99] to winning games of Go against top-tier players [127].

As DL-based classifiers improve their predictive accuracy, mobile applications such as speech recognition in smartphones [20, 45], real-time unmanned navigation [105], and drone-based surveillance [129, 170] are increasingly using DNNs to perform complex inference tasks. However, state-of-the-art DNN models present computational requirements that cannot be satisfied by the majority of the mobile devices available today. In fact, many state-of-the-art DNN models for difficult tasks—such as computer vision and natural language processing—are extremely complex. For instance, the EfficientDet [139] family offers the best performance for object detection tasks. While EfficientDet-D7 achieves a mean average precision (mAP) of 52.2%, it involves 52M parameters and takes seconds to execute even on embedded devices such as the GPU-equipped NVIDIA Jetson Nano and the Raspberry Pi. Notably, the execution of such complex models significantly increases energy consumption. While lightweight models specifically designed for mobile devices exist [122, 138], the reduced computational burden usually comes at the expense of model accuracy. For example, compared to ResNet-152 [43], the networks MnasNet [138] and MobileNetV2 [122] present up to 6.4% accuracy loss on the ImageNet dataset. YOLO-Lite [116] achieves a frame rate of 22 frames per second on some embedded devices but has a mAP of 12.36% on the COCO dataset [83]. To achieve 33.8% mAP on the COCO dataset, even the simplest model in the EfficientDet family, EfficientDet-D0, requires 3 times more FLOPs (2.5B) than SSD-MobileNetV2 [122] (0.8B FLOPs). While SSD-MobileNetV2 is a lower-performance DNN specifically designed for mobile platforms and can process up to 6 fps, its mAP on the COCO dataset is 20%, and keeping the model running on a mobile device significantly increases power consumption. On the other hand, due to excessive end-to-end latency, cloud-based approaches are hardly applicable to most of the latency-constrained applications in which mobile devices usually operate. Most of the techniques we overview in this survey can be applied to both mobile-device-to-edge-server and edge-server-to-cloud offloading. For the sake of clarity, we primarily refer to the former when explaining the frameworks.

Recently, edge computing (EC) approaches [10, 88] have attempted to address the “latency vs. computation” conundrum by completely offloading the DNN execution to servers located very close to the mobile device, i.e., at the “edge” of the network. However, canonical EC does not consider that the quality of wireless links—although providing high throughput on average—can suddenly fluctuate due to the presence of erratic noise and interference patterns, which may impair performance in latency-bound applications. For example, mobility and impaired propagation have been shown to decrease throughput even in high-bandwidth wireless links [89, 169], while many Internet of Things (IoT) systems are based on communication technologies such as Long Range (LoRa) [121], which has a maximum data rate of 37.5 Kbps due to duty cycle limitations [1].

The severe offloading limitations of some mobile devices, coupled with the instability of the wireless channel (e.g., in UAV networks [36]), imply that the amount of data offloaded to the edge should be decreased, while at the same time keeping the model accuracy as close as possible to the original. For this reason, split computing (SC) [60] and early exiting (EE) strategies [140] have been proposed to provide an intermediate option between EC and local computing. The key intuition behind SC and EE is similar to the one behind model pruning [38, 44, 74, 160] and knowledge distillation [46, 61, 98]: since modern DNNs are heavily over-parameterized [165, 166], their accuracy can be preserved even with a substantial reduction in the number of weights and filters, thus representing the input with fewer parameters. Specifically, SC divides a larger DNN into head and tail models, which are respectively executed by the mobile device and the edge server. EE, on the other hand, introduces “subbranches” into the early layers of DNN models, so that the full computation of the model can be halted—and a prediction result provided—if the classifiers in the current subbranches have sufficient confidence on the specific model input.

Motivation and Novel Contributions. The proliferation of DL-based mobile applications in the IoT and 5G landscapes implies that techniques such as SC and EE are not simply “nice-to-have” features, but will become fundamental computational components in the years to come. Although a significant amount of research work has been done in SC and EE, to the best of our knowledge, a comprehensive survey of the state of the art has not been conducted yet. Moreover, there are still a series of research challenges that need to be addressed to take SC and EE to the next level. For this reason, this article makes the following novel contributions:

  • We summarize SC and EE studies with respect to approaches, tasks, and models. We first provide an overview of local, edge, split computing, and early-exit models in Section 2 by highlighting similarities and differences among them.

  • We then discuss and compare the various approaches to SC and EE in Sections 4 and 5 by highlighting their training strategies and applications. Since code availability is fundamental for replicability/reproducibility [34], we provide for each work its corresponding code repository, if available, so that interested readers can reproduce and learn from existing studies.

  • We conclude the article by discussing in Section 6 a compelling agenda of research challenges in SC and EE, hoping to spur further contributions in these exciting and timely fields.


2 OVERVIEW OF LOCAL, EDGE, SPLIT COMPUTING, AND EARLY-EXIT MODELS

In this section, we provide an overview of local, edge, split computing, and early-exit models, which are the main computational paradigms that will be discussed in the article. Figure 1 provides a graphical overview of the approaches.


Fig. 1. Overview of (a) local, (b) edge, (c) split computing, and (d) early exiting: image classification as an example.

All these techniques operate on a DNN model \( \mathcal {M}(\cdot) \) whose task is to produce the inference output \( \mathbf {y} \) from an input \( \mathbf {x} \). Typically, \( \mathbf {x} \) is a high-dimensional variable, whereas the output \( \mathbf {y} \) has significantly lower dimensionality [143]. Split computing and early-exit approaches are contextualized in a setting where the system is composed of a mobile device and an edge server interconnected via a wireless channel. The overall goal of the system is to produce the inference output \( \mathbf {y} \) from the input \( \mathbf {x} \) acquired by the mobile device, by means of the DNN \( \mathbf {y} = \mathcal {M}(\mathbf {x}) \) under—possibly time varying—constraints on:

Resources: (1) the computational capacity (roughly expressed as the number of operations per second) \( C_{\rm md} \) and \( C_{\rm es} \) of the mobile device and edge server, respectively, and (2) the capacity \( \phi \), in bits per second, of the wireless channel connecting the mobile device to the edge server.

Performance: (1) the absolute or average value of the time from the generation of \( \mathbf {x} \) to the availability of \( \mathbf {y} \), and (2) the degradation of the “quality” of the output \( \mathbf {y} \).

Split, edge, local, and early-exiting strategies strive to find suitable operating points with respect to accuracy, end-to-end delay, and energy consumption, which are inevitably influenced by the characteristics of the underlying system. It is generally assumed that the computing and energy capacities of the mobile device are smaller than those of the edge server. As a consequence, if part of the workload is allocated to the mobile device, then the execution time increases, while the battery lifetime decreases. However, as explained later, the workload executed by the mobile device may result in a reduced amount of data to be transferred over the wireless channel, possibly compensating for the larger execution time and leading to smaller end-to-end delays.

2.1 Local and Edge Computing

We start with an overview of local and edge computing. In local computing (LC), the function \( \mathcal {M}(\mathbf {x}) \) is entirely executed by the mobile device. This approach eliminates the need to transfer data over the wireless channel. However, the complexity of the best-performing DNNs most likely exceeds the computing capacity and energy budget available at the mobile device. Usually, simpler models \( \hat{\mathcal {M}}(\mathbf {x}) \) are used, such as MobileNet [122] and MnasNet [138], which often exhibit degraded accuracy. Besides designing lightweight neural models executable on mobile devices, the most widely used techniques to reduce model complexity are knowledge distillation [46] and model pruning/quantization [55, 73], described in Section 3.2. Some of these techniques are also leveraged in SC studies to introduce bottlenecks without sacrificing model accuracy, as will be described in the following sections.

In EC, the input \( \mathbf {x} \) is transferred to the edge server, which then executes the original model \( \mathcal {M}(\mathbf {x}) \). In this approach, which preserves full accuracy, the mobile device is not allocated any computing workload, but the full input \( \mathbf {x} \) needs to be transferred to the edge server. This may lead to an excessive end-to-end delay in degraded channel conditions and to the erasure of the task in extreme conditions. A possible approach to reduce the load imposed on the wireless channel, and thus also the transmission delay and erasure probability, is to compress the input \( \mathbf {x} \). We define, then, the encoder and decoder models \( \mathbf {z} = F(\mathbf {x}) \) and \( \hat{\mathbf {x}} = G(\mathbf {z}) \), which are executed at the mobile device and edge server, respectively. The distance \( d(\mathbf {x},\hat{\mathbf {x}}) \) defines the performance of the encoding-decoding process \( \hat{\mathbf {x}} = G(F(\mathbf {x})) \), a metric that is separate from, but may influence, the accuracy loss of \( \mathcal {M}(\hat{\mathbf {x}}) \) with respect to \( \mathcal {M}(\mathbf {x}) \), that is, of the model executed with the reconstructed input with respect to the model executed with the original input. Clearly, the encoding/decoding functions increase the computing load at both the mobile device and the edge server. A broad range of compression approaches exists, ranging from low-complexity traditional compression (e.g., JPEG compression for images in EC [101]) to neural compression models [4, 5, 162]. We remark that while compressed input data, e.g., JPEG objects, can reduce the data transfer time in EC, those representations are designed to allow the accurate reconstruction of the input signal. Therefore, these approaches may (1) decrease privacy, as a “reconstructable” representation is transferred to the edge server [147], and (2) result in a larger amount of data to be transmitted over the channel compared to representations specifically designed for the computing task, as in bottleneck-based SC, explained in the following sections.
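To make the encoder/decoder formulation above concrete, the following sketch emulates EC with conventional JPEG compression: the device-side function plays the role of \( F(\cdot) \), and the server-side function plays the role of \( G(\cdot) \) followed by \( \mathcal {M}(\cdot) \). The JPEG quality setting, the input file name, and the use of a pretrained ResNet-50 from torchvision (version 0.13 or later weights API) are illustrative assumptions, not a setup taken from the cited works.

```python
# A minimal sketch of edge computing with conventional input compression:
# the mobile device JPEG-encodes the input x (encoder F), and the edge server
# decodes it (decoder G) and runs the full model M on the reconstruction x_hat.
import io

import torch
from PIL import Image
from torchvision import models, transforms

def device_encode(image: Image.Image, quality: int = 50) -> bytes:
    """z = F(x): lossy JPEG compression executed on the mobile device."""
    buffer = io.BytesIO()
    image.save(buffer, format="JPEG", quality=quality)
    return buffer.getvalue()

def server_decode_and_infer(payload: bytes, model, preprocess) -> torch.Tensor:
    """x_hat = G(z), then y_hat = M(x_hat), executed on the edge server."""
    x_hat = Image.open(io.BytesIO(payload)).convert("RGB")
    with torch.no_grad():
        return model(preprocess(x_hat).unsqueeze(0))

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

image = Image.open("sample.jpg").convert("RGB")   # hypothetical input image
payload = device_encode(image, quality=50)        # transferred over the channel
print(f"transferred bytes: {len(payload)}")
logits = server_decode_and_infer(payload, model, preprocess)
```

Note that lowering the JPEG quality reduces the transferred payload but increases the distortion \( d(\mathbf {x},\hat{\mathbf {x}}) \), which in turn may degrade the accuracy of \( \mathcal {M}(\hat{\mathbf {x}}) \).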

2.2 Split Computing and Early Exiting

SC aims at achieving two goals: (1) distributing the computing load across the mobile device and the edge server and (2) establishing a task-oriented compression to reduce data transfer delays. We consider a neural model \( \mathcal {M}(\cdot) \) with L layers, and define \( \mathbf {z}_{\ell } \) as the output of the \( \ell \)th layer. Early implementations of SC select a layer \( \ell \) and divide the model \( \mathcal {M}(\cdot) \) to define the head and tail submodels \( \mathbf {z}_{\ell }{=}\mathcal {M}_{H}(\mathbf {x}) \) and \( \mathbf {\hat{y}}{=}\mathcal {M}_{T}(\mathbf {z}_{\ell }) \), executed at the mobile device and edge server, respectively. In early instances of SC, the architecture and weights of the head and tail models are exactly the same as the first \( \ell \) layers and last \( L-\ell \) layers of \( \mathcal {M}(\cdot) \). This simple approach preserves accuracy but allocates part of the execution of \( \mathcal {M}(\cdot) \) to the mobile device, whose computing power is expected to be smaller than that of the edge server, so that the total execution time may be larger. The transmission time of \( \mathbf {z}_{\ell } \) may be larger or smaller than that of transmitting the input \( \mathbf {x} \), depending on the size of the tensor \( \mathbf {z}_{\ell } \). However, we note that in most relevant applications the size of \( \mathbf {z}_{\ell } \) becomes smaller than that of \( \mathbf {x} \) only in later layers, which would allocate most of the computing load to the mobile device. More recent SC frameworks introduce the notion of bottleneck to achieve in-model compression toward the global task [90]. As formally described in the next section, a bottleneck is a compression point at one layer in the model, which can be realized by reducing the number of nodes of the target layer and/or by quantizing its output. We note that since SC realizes a task-oriented compression, it guarantees a higher degree of privacy compared to EC. In fact, the transferred representation may lack the information needed to fully reconstruct the original input data.

Another approach to enable mobile computing is referred to as EE. The core idea is to create models with multiple “exits” across the model, where each exit can produce the model output. Then, the first exit providing a target confidence on the output is selected. This approach tunes the computational complexity, determined by the exit point, to the sample or to system conditions. Formally, we can define a sequence of models \( \mathcal {M}_i \) and \( \mathcal {B}_i, i= 1,\ldots ,N \). Model \( \mathcal {M}_i \) takes as input \( \mathbf {z}_{i-1} \) (the output of model \( \mathcal {M}_{i-1} \)) and outputs \( \mathbf {z}_i \), where we set \( \mathbf {z}_{0}=\mathbf {x} \). The branch models \( \mathcal {B}_i \) take as input \( \mathbf {z}_i \) and produce the estimate of the desired output \( \mathbf {y}_i \). Thus, the concatenation of \( \mathcal {M}_1,\ldots ,\mathcal {M}_N \) results in an output analogous to that of the original model. Intuitively, the larger the number of models used to produce the output \( \mathbf {y}_i \), the better the accuracy. Thus, while SC optimizes intermediate representations to preserve information toward the final task (e.g., classification) for the whole dataset, early-exit models take a “per sample” control perspective. Each sample will be sequentially analyzed by concatenations of \( \mathcal {M}_i \) and \( \mathcal {B}_i \) sections until a predefined confidence level is reached. The hope is that a portion of the samples will require a smaller number of sections compared to executing the whole sequence.
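As a concrete illustration of this per-sample control loop, the following sketch runs an input through a sequence of sections \( \mathcal {M}_i \) and branch classifiers \( \mathcal {B}_i \) and stops at the first branch whose softmax confidence exceeds a threshold. The section and branch modules and the 0.8 threshold are placeholders for illustration, not a specific published architecture.

```python
# A minimal sketch of early-exit inference: sections[i] plays the role of M_{i+1}
# and branches[i] the role of B_{i+1}; inference stops at the first branch whose
# maximum softmax probability exceeds the (illustrative) confidence threshold.
import torch
import torch.nn.functional as F

def early_exit_inference(x, sections, branches, threshold=0.8):
    """Return (predicted class, index of the exit that produced it); assumes batch size 1."""
    z = x
    last = len(sections) - 1
    for i, (section, branch) in enumerate(zip(sections, branches)):
        z = section(z)                          # z_i = M_i(z_{i-1})
        probs = F.softmax(branch(z), dim=-1)    # output of branch B_i
        confidence, prediction = probs.max(dim=-1)
        if confidence.item() >= threshold or i == last:
            return prediction, i                # exit here; later sections are skipped
```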


3 BACKGROUND OF DEEP LEARNING FOR MOBILE APPLICATIONS

In this section, we provide an overview of recent approaches to reduce the computational complexity of DNN models for resource-constrained mobile devices. These approaches can be categorized into two main classes: (1) approaches that attempt to directly design lightweight models and (2) model compression.

3.1 Lightweight Models

From a conceptual perspective, the design of small deep learning models is one of the simplest ways to reduce inference cost. However, there is a tradeoff between model complexity and model accuracy, which makes this approach practically challenging when aiming at high model performance. The MobileNet series [47, 48, 122] is among the most popular families of lightweight models for computer vision tasks; Howard et al. [48] describe the first version, MobileNetV1. By using a pair of depth-wise and point-wise convolution layers in place of standard convolution layers, the design drastically reduces model size, and thus computing load. Following this study, Sandler et al. [122] proposed MobileNetV2, which achieves improved accuracy. The design is based on MobileNetV1 [48] and uses the bottleneck residual block, a resource-efficient block with inverted residuals and linear bottlenecks. Howard et al. [47] present MobileNetV3, which further improves the model accuracy and is designed by a hardware-aware neural architecture search [138] with NetAdapt [161]. The largest variant of MobileNetV3, MobileNetV3-Large 1.0, achieves accuracy comparable to ResNet-34 [43] on the ImageNet dataset, while reducing the number of model parameters by about 75%.
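The building block behind this reduction can be sketched as follows: a depthwise 3x3 convolution (one filter per input channel) followed by a pointwise 1x1 convolution replaces a standard 3x3 convolution. The channel sizes and input resolution below are illustrative; the block mirrors the MobileNetV1 design described above but is not taken verbatim from its reference implementation.

```python
# A minimal sketch of the depthwise-separable convolution used by MobileNetV1:
# a depthwise 3x3 convolution (groups == channels) followed by a pointwise 1x1
# convolution, in place of a standard 3x3 convolution.
import torch
from torch import nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn1, self.bn2 = nn.BatchNorm2d(in_ch), nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        return self.relu(self.bn2(self.pointwise(x)))

# A standard 3x3 conv mapping 64 to 128 channels uses 3*3*64*128 = 73,728 weights;
# the separable version uses 3*3*64 + 64*128 = 8,768, roughly an 8.4x reduction.
block = DepthwiseSeparableConv(64, 128)
out = block(torch.randn(1, 64, 56, 56))
```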

While many lightweight neural networks are manually designed, there are also studies on automating the neural architecture search (NAS) [173]. For instance, Zoph et al. [174] design a novel search space through experiments with the CIFAR-10 dataset [63], which is then scaled to larger, higher-resolution image datasets such as the ImageNet dataset [120], to design their proposed model: NASNet. Leveraging the concept of NAS, some studies design lightweight models in a platform-aware fashion. Dong et al. [23] propose the Device-aware Progressive Search for Pareto-optimal Neural Architectures (DPP-Net) framework, which optimizes the network design with respect to two types of objectives: device-related (e.g., inference latency and memory usage) and device-agnostic (e.g., accuracy and model size). Similarly, Tan et al. [138] propose an automated mobile neural architecture search (MNAS) method and design the MnasNet models by optimizing both model accuracy and inference time.

3.2 Model Compression

A different approach to produce small DNN models is to “compress” a large model. Model pruning and quantization [38, 39, 55, 79] are the dominant model compression approaches. The former removes parameters from the model, while the latter uses fewer bits to represent them. In both these approaches, a large model is trained first and then compressed, rather than directly designing a lightweight model followed by training. Jacob et al. [55] empirically show that their quantization technique leads to an improved tradeoff between inference time and accuracy on MobileNet [48] for image classification tasks on Qualcomm Snapdragon 835 and 821 compared to the original, float-only MobileNet. For what concerns model pruning, Li et al. [75] and Liu et al. [86] demonstrate that it is difficult for model pruning itself to accelerate inference while achieving strong performance guarantees on general-purpose hardware due to the unstructured sparsity of the pruned model and/or kernels in layers.
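As a minimal illustration of these two compression families, the sketch below applies PyTorch's built-in unstructured magnitude pruning and dynamic INT8 quantization to a toy model. The two-layer network and the 30% pruning ratio are arbitrary choices for the example; the cited works use their own, more elaborate pipelines.

```python
# A minimal sketch of model pruning and quantization with PyTorch utilities:
# unstructured magnitude pruning zeroes out the smallest weights, and dynamic
# INT8 quantization represents the remaining linear weights with 8-bit integers.
import torch
from torch import nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# Pruning: remove the 30% of weights with the smallest L1 magnitude in each linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")   # make the sparsity permanent

# Quantization: replace float32 linear weights with INT8 representations at inference time.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```

As noted above, the unstructured sparsity produced by pruning does not by itself accelerate inference on general-purpose hardware, which is why quantization is often the more practical option for latency reduction.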

Knowledge distillation [8, 46] is another popular model compression method. While model pruning and quantization make trained models smaller, the concept of knowledge distillation is to provide outputs extracted from a trained model (called the “teacher”) as informative signals for training a smaller model (called the “student”), in order to improve the accuracy of a predesigned small model. Thus, the goal of the process is to distill the knowledge of a trained teacher model into a smaller student model, boosting the accuracy of the student without increasing its complexity. For instance, Ba and Caruana [3] propose a method to train small neural networks by mimicking the detailed behavior of larger models. The experimental results show that models trained by this mimic learning method achieve performance close to that of deeper neural networks on some phoneme recognition and image recognition tasks. The formulation of some knowledge distillation methods will be described in Section 4.4.


4 SPLIT COMPUTING: A SURVEY

This section discusses the existing state of the art in SC. Figure 2 illustrates the existing SC approaches, which can be categorized into approaches (1) without DNN modification or (2) with bottleneck injection. We first present SC approaches without DNN modification in Section 4.1. We then discuss the motivations behind the introduction of SC with bottlenecks in Section 4.2, and the corresponding approaches are discussed in detail in Section 4.3. Since the latter require specific training procedures, we devote Section 4.4 to their discussion.


Fig. 2. Two different SC approaches.

4.1 Split Computing without DNN Modification

In this class of approaches, the architecture and weights of the head \( \mathcal {M}_{H}(\cdot) \) and tail \( \mathcal {M}_T(\cdot) \) models are exactly the same as the first \( \ell \) layers and last \( L-\ell \) layers of \( \mathcal {M}(\cdot) \). To the best of our knowledge, Kang et al. [60] proposed the first SC approach (called “Neurosurgeon”), which searches for the best partitioning layer in a DNN model to minimize the total (end-to-end) latency or energy consumption. Formally, the inference time in SC is the sum of the processing time on the mobile device, the communication delay between the mobile device and the edge server, and the processing time on the edge server.
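The partition-point search can be sketched as a simple enumeration over candidate split layers, as below. The per-layer latency and output-size numbers are made-up placeholders rather than profiles from the Neurosurgeon paper, which builds per-layer latency and energy prediction models instead of using fixed measurements.

```python
# A minimal sketch of a Neurosurgeon-style partition-point search: given per-layer
# execution-time estimates on the mobile device and edge server, per-layer output
# sizes, and the current uplink rate, pick the split that minimizes total latency.
def best_split(mobile_ms, edge_ms, out_bytes, input_bytes, uplink_bps):
    """Return (split index l, latency in ms); l = 0 is pure edge computing,
    l = len(mobile_ms) is pure local computing."""
    num_layers = len(mobile_ms)
    best = (None, float("inf"))
    for l in range(num_layers + 1):
        head = sum(mobile_ms[:l])                            # executed on the device
        tail = sum(edge_ms[l:])                              # executed on the server
        payload = input_bytes if l == 0 else out_bytes[l - 1]
        transfer = 0.0 if l == num_layers else payload * 8 / uplink_bps * 1e3
        total = head + transfer + tail
        if total < best[1]:
            best = (l, total)
    return best

mobile_ms = [12.0, 25.0, 30.0, 8.0]          # hypothetical per-layer times on the device
edge_ms = [1.5, 3.0, 3.5, 1.0]               # hypothetical per-layer times on the server
out_bytes = [600_000, 300_000, 50_000, 4_000]
split, latency = best_split(mobile_ms, edge_ms, out_bytes,
                            input_bytes=150_000, uplink_bps=10e6)
print(f"best split after layer {split}: {latency:.1f} ms end-to-end")
```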

Interestingly, their experimental results show that the best partitioning (splitting) layers in terms of energy consumption and total latency for most of the considered models are either their input or output layers. In other words, deploying the whole model on either the mobile device or the edge server (i.e., local computing or EC) would be the best option for such DNN models. Following the work by Kang et al. [60], the research community explored various SC approaches mainly focused on CV tasks such as image classification. Table 1 summarizes the studies on SC without architectural modifications.

Table 1. Studies on SC without Architectural Modifications

Work | Task(s) | Dataset(s) | Model(s) | Metrics | Code
Kang et al. [60] | Image classification, Speech recognition, Part-of-speech tagging, Named entity recognition, Word chunking | N/A (No task-specific metrics) | AlexNet [64], VGG-19 [128], DeepFace [137], LeNet-5 [69], Kaldi [111], SENNA [17] | D, E, L |
Li et al. [76] | Image classification | N/A (No task-specific metrics) | AlexNet [64] | C, D |
Jeong et al. [58] | Image classification | N/A (No task-specific metrics) | GoogLeNet [135], AgeNet [72], GenderNet [72] | D, L |
Li et al. [73] | Image classification | ImageNet [120] | AlexNet [64], VGG-16 [128], ResNet-18 [43], GoogLeNet [135] | A, D, L |
Choi and Bajić [13] | Object detection | VOC 2007 [28] | YOLO9000 [115] | A, C, D, L |
Eshratifar et al. [25] | Image classification, Speech recognition | N/A (No task-specific metrics) | AlexNet [64], OverFeat [125], NiN [80], VGG-16 [128], ResNet-50 [43] | D, E, L |
Zeng et al. [168] | Image classification | CIFAR-10 [63] | AlexNet [64] | A, D, L |
Cohen et al. [16] | Image classification, Object detection | ImageNet (2012) [120], COCO 2017 [83] | VGG-16 [128], ResNet-50 [43], YOLOv3 [116] | A, D |
Pagliari et al. [106] | Natural language inference, Reading comprehension, Sentiment analysis | N/A (No task-specific metrics) | RNNs | E, L |
Itahara et al. [53] | Image classification | CIFAR-10 [63] | VGG-16 [128] | A, D |

  • A: Model accuracy, C: Model complexity, D: Transferred data size, E: Energy consumption, L: Latency, T: Training cost.

Jeong et al. [58] used this partial offloading approach as a privacy-preserving form of computation offloading that blinds the edge server to the original data captured by the client. Leveraging neural network quantization techniques, Li et al. [73] discussed the best splitting points in DNN models to minimize inference latency and showed that the quantized DNN models did not degrade accuracy compared to the (pre-quantized) original models. Choi and Bajić [13] proposed a feature compression strategy for object detection models that introduces a quantization/video-coding-based compressor to the intermediate features in YOLO9000 [115].

Eshratifar et al. [25] propose JointDNN for collaborative computation between the mobile device and the cloud and demonstrate that using either local computing only or cloud computing only is not an optimal solution in terms of inference time and energy consumption. Different from [60], they consider not only discriminative deep learning models (e.g., classifiers) but also generative deep learning models and autoencoders as benchmark models in their experimental evaluation. Cohen et al. [16] introduce a technique to code the output of the head portion of a split DNN at a wide range of bit rates and demonstrate its performance for image classification and object detection tasks. Pagliari et al. [106] discuss collaborative inference for simple recurrent neural networks, and their proposed scheme automatically selects the best inference device for each input in terms of total latency or end-device energy. Itahara et al. [53] use dropout layers [133] to emulate a packet loss scenario rather than for the sake of compression and discuss the robustness of VGG-based models [128] for split computing.

While only a few studies in Table 1 heuristically choose splitting points [13, 16], most of the other studies [25, 58, 60, 73, 76, 106, 168] in Table 1 analyze various types of cost (e.g., computational load and energy consumption on the mobile device, communication cost, and/or privacy risk) for partitioning the DNN models at each of their splitting points. Based on this analysis, performance profiles of the split DNN models are derived to inform the selection of the splitting point. Concerning metrics, many of the studies in Table 1 do not discuss task-specific performance metrics such as accuracy. This is in part because the proposed approaches do not modify the input or intermediate representations in the models (i.e., the final prediction does not change). On the other hand, Choi and Bajić [13], Cohen et al. [16], and Li et al. [73] introduce lossy compression techniques at intermediate stages in DNN models, which may affect the final prediction results. Thus, discussing the tradeoff between compression rate and task-specific performance metrics is essential for such studies. As shown in the table, this tradeoff is discussed only for CV tasks, and many of the models considered in such studies have weak performance compared with state-of-the-art models and a complexity within reach of modern mobile devices. Specific to image classification tasks, most of the models considered in the studies listed in Table 1 are more complex than lightweight baseline models such as MobileNetV2 [122] and MnasNet [138], while achieving comparable or lower accuracy. Thus, in future work, more accurate models should be considered to discuss the performance tradeoff and further motivate SC approaches.

4.2 The Need for Bottleneck Injection

While Kang et al. [60] empirically show that executing the whole model on either the mobile device or the edge server would be best in terms of total inference time and energy consumption for most of their considered DNN models, their proposed approach finds the best partitioning layers inside some of the considered CV models (convolutional neural networks (CNNs)) to minimize the total inference time. A few trends can be observed from their experimental results: (1) the communication delay to transfer data from the mobile device to the edge server is a key component in SC to reduce the total inference time; (2) all the neural models they considered for NLP tasks are relatively small (consisting of only a few layers), which potentially resulted in finding that the output layer is the best partition point (i.e., local computing); and (3) similarly, not only the DNN models they considered (except VGG [128]) but also the size of the input data to the models (see Table 2) are relatively small, which gives an advantage to EC (fully offloading computation). In other words, complex CV tasks that require large (high-resolution) images for models to achieve high accuracy, such as those based on the ImageNet and COCO datasets, are essential for SC studies to meaningfully discuss the tradeoff between accuracy and the execution metrics to be minimized (e.g., total latency, energy consumption). The key issue is that straightforward SC approaches like that of Kang et al. [60] rely on the existence of natural bottlenecks—that is, intermediate layers whose output \( \mathbf {z}_{\ell } \) is a tensor smaller than the input—inside the model. Without such natural bottlenecks, straightforward splitting approaches fail to improve performance in most settings [6, 35].

Table 2. Statistics of Image Classification Datasets in SC Studies

 | MNIST | CIFAR-10 | CIFAR-100 | ImageNet (2012)
# labeled train/dev (test) samples | 60k/10k | 50k/10k | 50k/10k | 1,281k/50k
# object categories | 10 | 10 | 100 | 1,000
Input tensor size | \( 1 \times 32 \times 32 \) | \( 3 \times 32 \times 32 \) | \( 3 \times 32 \times 32 \) | \( 3 \times 224 \times 224 \)*
JPEG data size [KB/sample] | 0.9657 | 1.790 | 1.793 | 44.77

  • \( ^* \)A standard (resized) input tensor size for DNN models.

Some models, such as AlexNet [64], VGG [128], and DenseNet [51], possess such layers [90]. However, recent DNN models such as ResNet [43], Inception-v3 [136], Faster R-CNN [117], and Mask R-CNN [42] do not have natural bottlenecks in their early layers; that is, splitting the model would result in compression only when assigning a large portion of the workload to the mobile device. As discussed earlier, reducing the communication delay is key to minimizing the total inference time in SC. For these reasons, introducing artificial bottlenecks into DNN models by modifying their architecture is a recent trend that has been attracting attention from the research community. Since the main role of the resulting head model (encoder) in SC is to compress intermediate features rather than to complete inference, it usually consists of only a few layers. Also, the encoders in SC to be executed on constrained mobile devices are often much smaller (e.g., 10K parameters in the encoder of a ResNet-based SC model [94]) than lightweight models such as MobileNetV2 [122] (3.5M parameters) and MnasNet [138] (4.4M parameters). Thus, even if the model accuracy is degraded to, or only comparable with, that of such small models, SC models are still beneficial in terms of computational burden and energy consumption at the mobile device.

4.3 Split Computing with Bottleneck Injection

This class of models can be described as composed of three sections: \( \mathcal {M}_{E} \), \( \mathcal {M}_{D} \), and \( \mathcal {M}_{T} \). We define \( \mathbf {z}_{\ell }|\mathbf {x} \) as the output of the \( \ell \)th layer of the original model given the input \( \mathbf {x} \). The concatenation of the \( \mathcal {M}_{E} \) and \( \mathcal {M}_{D} \) models is designed to produce a possibly noisy version \( \hat{\mathbf {z}}_{\ell }|\mathbf {x} \) of \( \mathbf {z}_{\ell }|\mathbf {x} \), which is taken as input by \( \mathcal {M}_{T} \) to produce the output \( \hat{\mathbf {y}} \), on which the accuracy degradation with respect to \( \mathbf {y} \) is measured. The models \( \mathcal {M}_{E} \) and \( \mathcal {M}_{D} \) function as a specialized encoder and decoder in the form \( \hat{\mathbf {z}}_{\ell }=\mathcal {M}_{D}(\mathcal {M}_E(\mathbf {x})) \), where \( \mathcal {M}_{E}(\mathbf {x}) \) produces the latent variable \( \mathbf {z} \). In other words, the first two sections of the modified model transform the input \( \mathbf {x} \) into an approximation of the output of the \( \ell \)th layer via the intermediate representation \( \mathbf {z} \), thus functioning as encoder/decoder functions. The model is split after the first section; that is, \( \mathcal {M}_{E} \) is the head model, and the concatenation of \( \mathcal {M}_{D} \) and \( \mathcal {M}_{T} \) is the tail model. Then, the tensor \( \mathbf {z} \) is transmitted over the channel. The objective of the architecture is to minimize the size of \( \mathbf {z} \) to reduce the communication time, while also minimizing the complexity of \( \mathcal {M}_E \) (that is, the part of the model executed at the—weaker—mobile device) and the discrepancy between \( \mathbf {y} \) and \( \hat{\mathbf {y}} \). The layer between \( \mathcal {M}_E \) and \( \mathcal {M}_D \) is the injected bottleneck.
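A minimal sketch of this structure is given below: a small encoder \( \mathcal {M}_E \) runs on the device, a decoder \( \mathcal {M}_D \) restores the tensor shape expected by the unmodified tail \( \mathcal {M}_T \) on the server, and only the narrow tensor \( \mathbf {z} \) crosses the channel. The channel counts, the split point after ResNet-50's first residual stage, and the layer choices are illustrative assumptions; published bottleneck designs differ in depth, placement, and quantization.

```python
# A minimal sketch of bottleneck injection into a ResNet-50 classifier:
# M_E (device) compresses the input into a narrow tensor z, M_D (server) expands
# z back to the shape expected by the unmodified tail M_T, and only z is transmitted.
import torch
from torch import nn
from torchvision import models

class Encoder(nn.Module):          # M_E: executed on the mobile device
    def __init__(self, bottleneck_ch: int = 12):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, bottleneck_ch, kernel_size=3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.layers(x)      # z: the injected bottleneck tensor

class Decoder(nn.Module):          # M_D: executed on the edge server
    def __init__(self, bottleneck_ch: int = 12, out_ch: int = 256):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(bottleneck_ch, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, out_ch, kernel_size=3, padding=1),
        )

    def forward(self, z):
        return self.layers(z)      # approximates z_l (output of ResNet-50's first stage)

resnet = models.resnet50()
tail = nn.Sequential(resnet.layer2, resnet.layer3, resnet.layer4,  # M_T (unchanged)
                     resnet.avgpool, nn.Flatten(), resnet.fc)
encoder, decoder = Encoder(), Decoder()

x = torch.randn(1, 3, 224, 224)
z = encoder(x)                     # transmitted over the wireless channel
y_hat = tail(decoder(z))
print(z.shape, y_hat.shape)        # torch.Size([1, 12, 56, 56]) torch.Size([1, 1000])
```

The bottleneck tensor here has 12 x 56 x 56 elements, far fewer than the 3 x 224 x 224 input, which is precisely the in-model compression the architecture is designed to achieve.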

Table 3 summarizes SC studies with bottleneck-injected strategies. To the best of our knowledge, the papers in [26] and [90] were the first to propose altering existing DNN architectures to design relatively small bottlenecks at early layers in DNN models, instead of introducing compression techniques (e.g., quantization, autoencoder) to the models, so that communication delay (cost) and total inference time can be further reduced. Following these studies, Hu and Krishnamachari [49] introduce bottlenecks to MobileNetV2 [122] (modified for CIFAR datasets) in a similar way for SC and discuss end-to-end performance evaluation. Choi et al. [14] combine multiple compression techniques such as quantization and tiling besides convolution/deconvolution layers and design a feature compression approach for object detectors. Similar to the concept of bottleneck injection, Shao and Zhang [126] find that over-compression of intermediate features and inaccurate communication between computing devices can be tolerated unless the prediction performance of the models is significantly degraded by them. Also, Jankowski et al. [57] propose introducing a reconstruction-based bottleneck to DNN models, which is similar to the concept of BottleNet [26]. A comprehensive discussion on the delay/complexity/accuracy tradeoff can be found in [91, 164].

Table 3. Studies on SC with Bottleneck Injection Strategies

Work | Task(s) | Dataset(s) | Base Model(s) | Training | Metrics | Code
Eshratifar et al. [26] | Image classification | miniImageNet [130] | ResNet-50 [43], VGG-16 [128] | CE-based | A, D, L |
Matsubara et al. [90, 91] | Image classification | Caltech 101 [29], ImageNet [120] | DenseNet-169 [51], DenseNet-201 [51], ResNet-152 [43], Inception-v3 [136] | HND, KD, CE-based | A, C, D, L, T | Link
Hu and Krishnamachari [49] | Image classification | CIFAR-10/100 [63] | MobileNetV2 [122] | CE-based | A, D, L |
Choi et al. [14] | Object detection | COCO 2014 [83] | YOLOv3 [116] | Reconstruct. | A, D |
Shao and Zhang [126] | Image classification | CIFAR-100 [63] | ResNet-50 [43], VGG-16 [128] | CE-based (Multi-stage) | A, C, D |
Jankowski et al. [57] | Image classification | CIFAR-100 [63] | VGG-16 [128] | CE + \( \mathcal {L}_{2} \) (Multi-stage) | A, C, D |
Matsubara et al. [93, 94] | Object detection, Keypoint detection | COCO 2017 [83] | Faster R-CNN [117], Mask R-CNN [42], Keypoint R-CNN [42] | HND, GHND | A, C, D, L | Link
Yao et al. [164] | Image classification, Speech recognition | ImageNet [120], LibriSpeech [107] | ResNet-50 [43], Deep Speech [40] | Reconstruct. + KD | A, D, E, L, T | Link*
Assine et al. [2] | Object detection | COCO [83] | EfficientDet [139] | GHND-based | A, C, D | Link
Sbai et al. [124] | Image classification | Subset of ImageNet [120] (700 out of 1,000 classes) | MobileNetV1 [48], VGG-16 [128] | Reconstruct. + KD | A, C, D |
Lee et al. [70] | Object detection | COCO [83] | YOLOv5 [144] | CE-based | A, C, D, L |
Matsubara et al. [92] | Image classification | ImageNet [120] | DenseNet-169 [51], DenseNet-201 [51], ResNet-152 [43] | Reconst., HND, GHND, CE/KD (Multi-stage) | A, C, D, E, L | Link
Matsubara et al. [96, 97] | Image classification, Object detection, Semantic segmentation | ImageNet [120], COCO [83], PASCAL VOC [27] | ResNet-50 [43], ResNet-101 [43], RegNetY-6.4GF [113], Hybrid ViT [134], RetinaNet [82], Faster R-CNN [117], DeepLabv3 [11] | GHND, CE/KD+Rate (Multi-stage) | A, C, D, L | Link (2021), Link (2022)

  • A: Model accuracy, C: Model complexity, D: Transferred data size, E: Energy consumption, L: Latency, T: Training cost. \( ^* \)The repository is incomplete and lacks instructions to reproduce the reported results for vision and speech datasets.

These studies are all focused on image classification. Other CV tasks present further challenges. For instance, state-of-the-art object detectors such as R-CNN models offer a narrower range of layers into which bottlenecks can be introduced, because their architecture has multiple forward paths that feed the outputs of intermediate layers to the feature pyramid network (FPN) [81]. The head network distillation training approach—discussed later in this section—was used by Matsubara and Levorato [94] to address some of these challenges and reduce the amount of data transmitted over the channel by 94% while limiting the mAP degradation to 1 point. Assine et al. [2] introduce bottlenecks to the EfficientDet-D2 [139] object detector and apply a training method based on generalized head network distillation [94] and mutual learning [159] to the modified model. Following the studies on SC for resource-constrained edge computing systems [90, 91, 164], Sbai et al. [124] introduce autoencoders into small classifiers and train them on a subset of the ImageNet dataset in a similar manner. These studies discuss the tradeoff between accuracy and memory size on mobile devices, considering communication constraints based on 3G and LoRa technologies [121]. Similar to [2, 93, 94], Lee et al. [70] design a lightweight encoder for an object detector on the mobile device, followed by a module that amplifies the compressed features and by the object detector itself, both executed on the edge server. Matsubara et al. [92] empirically show that bottleneck-injected models can be further improved by elaborating the methods used to train the models. The resulting models outperform models with autoencoder-based feature compression (e.g., Figure 5) in terms of the tradeoff between model accuracy and transferred data size.

Matsubara et al. [97] propose a supervised compression method for resource-constrained edge computing systems, which adapts ideas from knowledge distillation and neural image compression [4, 5]. Their student model (namely, Entropic Student) contains a lightweight encoder with a learnable prior, which quantizes and entropy-codes latent representations under a prior probability model to efficiently reduce the size of the data to be offloaded to the edge server. By adjusting a balancing weight in the loss function during training, the tradeoff between data size (rate) and model accuracy (distortion) can be controlled. The performance of the Entropic Student model was demonstrated for three large-scale downstream supervised tasks: image classification (ImageNet), object detection (COCO), and semantic segmentation (COCO, PASCAL VOC). Notably, the representation produced by a single trained encoder of the Entropic Student model can serve multiple downstream tasks. Following this study, Matsubara et al. [96] further investigate the approach and empirically show that it generalizes to other reference models (e.g., ResNet-101 [43], RegNetY-6.4GF [113], Hybrid ViT [134]). Through experiments, the study also points out that while simply introducing such bottleneck layers at later layers in a model can improve the conventional rate-distortion (R-D) tradeoff, doing so assigns most of the computational load to the weak mobile device.

In contrast to SC studies without bottlenecks in Table 1, many of the studies on bottleneck injection strategies in Table 3 are published with code that would help the research communities replicate/reproduce the experimental results and build on existing studies.

4.4 SC with Bottlenecks: Training Methodologies

Given that recent SC studies with bottleneck injection strategies result in varying degrees of accuracy loss compared to the original models (i.e., those without injected bottlenecks), various training methodologies are used and/or proposed in such studies. Some of the training methods are designed specifically for architectures with injected bottlenecks. We now summarize the differences between the various training methodologies used in recent SC studies.

We recall that \( \mathbf {x} \) and \( \mathbf {y} \) are an input (e.g., an RGB image) and the corresponding label (e.g., a one-hot vector), respectively. Given an input \( \mathbf {x} \), a DNN model \( \mathcal {M} \) returns its output \( \mathbf {\hat{y}} = \mathcal {M}(\mathbf {x}) \), such as class probabilities in a classification task. Each of the L layers of model \( \mathcal {M} \) can be either a low-level layer (e.g., convolution [69], batch normalization [52], ReLU [100]) or a high-level layer (e.g., a residual block in ResNet [43] or a dense block in DenseNet [51]), which is composed of multiple low-level layers. \( \mathcal {M}(\mathbf {x}) \) is a sequence of the L layer functions \( \mathrm{f}_{j} \), where the jth layer transforms \( \mathbf {z}_{j-1} \), the output of the previous \( (j-1) \)th layer: (1) \( \begin{equation} \mathbf {z}_{j} = \left\lbrace \begin{array}{ll} \mathbf {x} & j = 0, \\ \mathrm{f}_j(\mathbf {z}_{j-1}, \mathbf {\theta }_j) & 1 \le j \lt L, \\ \mathrm{f}_L(\mathbf {z}_{L-1}, \mathbf {\theta }_L) = \mathcal {M}(\mathbf {x}) = \mathbf {\hat{y}} & j = L, \end{array} \right. \end{equation} \) where \( \mathbf {\theta }_{j} \) denotes the jth layer’s hyperparameters and parameters to be optimized during training.

Cross-entropy-based Training

To optimize the parameters of a DNN model, we first need to define a loss function, and then update the parameters by minimizing the loss value with an optimizer such as stochastic gradient descent or Adam [62] during training. In image classification, the standard method is to train a DNN model \( \mathcal {M} \) in an end-to-end manner using the cross-entropy loss, as done in many of the studies [26, 49, 91] in Table 3. For simplicity, here we focus on the categorical cross-entropy and assume \( c \equiv \mathbf {y} \) is the correct class index for a model input \( \mathbf {x} \). Given a pair of \( \mathbf {x} \) and c, we obtain the model output \( \mathbf {\hat{y}} = \mathcal {M}(\mathbf {x}) \), and the (categorical) cross-entropy loss is then defined as (2) \( \begin{equation} \mathcal {L}_\text{CE}(\mathbf {\hat{y}}, c) = -\log \left(\frac{\exp \left(\hat{\mathbf {y}}_{c} \right)}{\sum _{j \in \mathcal {C}} \exp \left(\hat{\mathbf {y}}_j \right)} \right)\!, \end{equation} \) where \( \hat{\mathbf {y}}_{j} \) is the class score (logit) for the class index j, and \( \mathcal {C} \) is the set of considered classes (\( c \in \mathcal {C} \)).
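As a quick numerical sanity check of Equation (2), the snippet below computes the categorical cross-entropy for a single sample both from the softmax definition and with PyTorch's built-in loss; the logits and class index are arbitrary.

```python
# A minimal numerical check of Equation (2): the same cross-entropy value is
# obtained from the explicit softmax/log formula and from F.cross_entropy.
import torch
import torch.nn.functional as F

y_hat = torch.tensor([[2.0, 0.5, -1.0]])   # model output (logits) for one sample
c = torch.tensor([0])                      # correct class index

manual = -torch.log(torch.exp(y_hat[0, c]) / torch.exp(y_hat[0]).sum())
builtin = F.cross_entropy(y_hat, c)
print(manual.item(), builtin.item())       # both print the same value
```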

As shown in Equation (2), the loss used in cross-entropy-based training methods is a function of the final output \( \mathbf {\hat{y}} \) only, and thus is not specifically designed for SC frameworks. While Eshratifar et al. [26], Hu and Krishnamachari [49], Lee et al. [70], and Shao and Zhang [126] use cross-entropy to train bottleneck-injected DNN models in an end-to-end manner (Figure 3), Matsubara et al. [91] empirically show that these methods cause a larger accuracy loss on complex tasks such as ImageNet classification [120] compared to more advanced techniques, including knowledge distillation.


Fig. 3. Cross-entropy-based training for bottleneck-injected DNN.

Knowledge Distillation

Complex DNN models are usually trained to learn parameters for discriminating between a large number of classes (e.g., 1,000 in the ImageNet dataset) and are often overparameterized. Knowledge distillation (KD) [3, 46, 78] is a training scheme that addresses this problem by training a DNN model (called the “student”) using additional signals from a pretrained DNN model (called the “teacher” and often larger than the student). In standard cross-entropy-based training—that is, training with “hard targets” (e.g., one-hot vectors)—the trained model still assigns (small) probabilities to all of the incorrect classes; the relative magnitudes of these probabilities reveal how large models tend to generalize, and this is the information that KD transfers to the student.

As illustrated in Figure 4, by distilling the knowledge from a pretrained complex model (teacher), a student model can generalize better and avoid overfitting to the training dataset, using the outputs of the teacher model as “soft targets” in addition to the hard targets [46]. The distillation loss is (3) \( \begin{equation} \mathcal {L}_\text{KD}(\hat{\mathbf {y}}^\text{S}, \hat{\mathbf {y}}^\text{T}, \mathbf {y}) = \alpha \mathcal {L}_\text{task}(\hat{\mathbf {y}}^\text{S}, \mathbf {y}) + (1 - \alpha) \tau ^2 \mathrm{KL} \left(\mathrm{q}(\hat{\mathbf {y}}^\text{S}), \mathrm{p}(\hat{\mathbf {y}}^\text{T}) \right)\!, \end{equation} \) where \( \alpha \) is a balancing factor (hyperparameter) between the hard-target (left term) and soft-target (right term) losses, and \( \tau \) is another hyperparameter, called temperature, that softens the outputs of the teacher and student models in Equation (4). \( \mathcal {L}_\text{task} \) is a task-specific loss function; in image classification it is the cross-entropy loss, i.e., \( \mathcal {L}_\text{task} = \mathcal {L}_\text{CE} \). \( \mathrm{KL} \) is the Kullback-Leibler divergence function, where \( \mathrm{q}(\hat{\mathbf {y}}^\text{S}) = [\mathrm{q}_{1}(\hat{\mathbf {y}}^\text{S}), \ldots , \mathrm{q}_{|\mathcal {C}|}(\hat{\mathbf {y}}^\text{S})] \) and \( \mathrm{p}(\hat{\mathbf {y}}^\text{T}) = [\mathrm{p}_{1}(\hat{\mathbf {y}}^\text{T}), \ldots , \mathrm{p}_{|\mathcal {C}|}(\hat{\mathbf {y}}^\text{T})] \) are the softened probability distributions of the student and teacher models for an input \( \mathbf {x} \): (4) \( \begin{equation} \mathrm{q}_{k}(\hat{\mathbf {y}}^\text{S}) = \frac{\exp \left(\frac{\hat{\mathbf {y}}^\text{S}_{k}}{\tau } \right)}{\sum _{j \in \mathcal {C}} \exp \left(\frac{\hat{\mathbf {y}}^\text{S}_{j}}{\tau } \right)}, ~~\mathrm{p}_{k}(\hat{\mathbf {y}}^\text{T}) = \frac{\exp \left(\frac{\hat{\mathbf {y}}^\text{T}_{k}}{\tau } \right)}{\sum _{j \in \mathcal {C}} \exp \left(\frac{\hat{\mathbf {y}}^\text{T}_{j}}{\tau } \right)}. \end{equation} \)
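A minimal sketch of Equations (3) and (4) is shown below as a PyTorch loss function; the values of \( \alpha \) and \( \tau \) and the random logits are illustrative assumptions.

```python
# A minimal sketch of the knowledge distillation loss in Equations (3)-(4):
# a weighted sum of the hard-target cross-entropy and the temperature-scaled
# KL divergence between student and teacher outputs.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, alpha=0.5, tau=4.0):
    hard = F.cross_entropy(student_logits, targets)                  # L_task
    soft = F.kl_div(F.log_softmax(student_logits / tau, dim=-1),     # q(y_hat^S)
                    F.softmax(teacher_logits / tau, dim=-1),         # p(y_hat^T)
                    reduction="batchmean")
    return alpha * hard + (1.0 - alpha) * (tau ** 2) * soft

student_logits = torch.randn(8, 1000, requires_grad=True)
teacher_logits = torch.randn(8, 1000)                                # from the frozen teacher
targets = torch.randint(0, 1000, (8,))
loss = kd_loss(student_logits, teacher_logits, targets)
loss.backward()
```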


Fig. 4. Knowledge distillation for bottleneck-injected DNN (student), using a pretrained model as teacher.

Using the ImageNet dataset, Matsubara et al. [91] empirically show that all the considered bottleneck-injected student models trained with their teacher models (the original models without injected bottlenecks) consistently outperform those trained without the teacher models. This result matches a widely known trend in knowledge distillation reported by Ba and Caruana [3]. However, similar to cross-entropy, knowledge distillation is still not aware of the bottlenecks introduced into the DNN models and may result in significant accuracy loss, as suggested by Matsubara et al. [91].

Reconstruction-based Training

As illustrated in Figure 5, Choi et al. [14], Jankowski et al. [57], Sbai et al. [124], and Yao et al. [164] inject AE models into existing DNN models and train the injected components by minimizing the reconstruction error. First, an intermediate layer in a DNN model (say, its jth layer) is manually chosen, and the output of the jth layer \( \mathbf {z}_{j} \) is fed to the encoder \( \mathrm{f}_\text{enc} \), whose role is to compress \( \mathbf {z}_{j} \). The encoder’s output \( \mathbf {z}_\text{enc} \) is a compressed representation, i.e., the bottleneck, to be transferred to the edge server, and the following decoder \( \mathrm{f}_\text{dec} \) decompresses the compressed representation and returns \( \mathbf {z}_\text{dec} \). As the decoder is designed to reconstruct \( \mathbf {z}_{j} \), its output \( \mathbf {z}_\text{dec} \) shares the same dimensionality as \( \mathbf {z}_{j} \). The injected AEs are then trained by minimizing the following reconstruction loss: (5) \( \begin{eqnarray} \mathcal {L}_\text{Recon.}\left(\mathbf {z}_{j} \right) &=& \Vert \mathbf {z}_{j} - \mathrm{f}_\text{dec}(\mathrm{f}_\text{enc}(\mathbf {z}_{j}; \mathbf {\theta }_\text{enc}); \mathbf {\theta }_\text{dec}) + \epsilon \Vert _{n}^{m} \nonumber \\ &=& \Vert \mathbf {z}_{j} - \mathbf {z}_\text{dec} + \epsilon \Vert _{n}^{m}, \end{eqnarray} \) where \( \Vert \mathbf {z}\Vert _n^m \) denotes the \( m^\text{th} \) power of the n-norm of \( \mathbf {z} \), and \( \epsilon \) is an optional regularization constant. For example, Choi et al. [14] set \( m = 1 \), \( n = 2 \), and \( \epsilon = 10^{-6} \), while Jankowski et al. [57] use \( m = n = 1 \) and \( \epsilon = 0 \). Inspired by the idea of knowledge distillation [46], Yao et al. [164] also consider squared errors between intermediate feature maps from models with and without bottlenecks as additional loss terms, similar to generalized head network distillation [94], described later. While Yao et al. [164] show a high compression rate with a small accuracy loss by injecting encoder-decoder architectures into existing DNN models, such strategies [14, 57, 124, 164] increase the overall computational complexity. If the encoder and decoder consist of \( L_\text{enc} \) and \( L_\text{dec} \) layers, respectively, then the total number of layers in the altered DNN model is \( L + L_\text{enc} + L_\text{dec} \).
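The sketch below illustrates this procedure with a toy autoencoder injected at an intermediate layer and trained with the reconstruction loss of Equation (5) (using \( m=1 \), \( n=2 \), and no regularization constant). The channel counts and the stand-in tensor for the frozen backbone's output are illustrative assumptions.

```python
# A minimal sketch of reconstruction-based training of an injected autoencoder:
# only the encoder/decoder parameters are updated so that the decoded tensor
# z_dec matches the frozen backbone's intermediate output z_j.
import torch
from torch import nn

class InjectedAE(nn.Module):
    def __init__(self, ch: int = 256, bottleneck_ch: int = 16):
        super().__init__()
        self.encoder = nn.Conv2d(ch, bottleneck_ch, kernel_size=1)   # f_enc
        self.decoder = nn.Conv2d(bottleneck_ch, ch, kernel_size=1)   # f_dec

    def forward(self, z_j):
        z_enc = self.encoder(z_j)          # compressed tensor sent over the channel
        return self.decoder(z_enc)         # z_dec, same shape as z_j

autoencoder = InjectedAE()
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

z_j = torch.randn(8, 256, 56, 56)          # stand-in for the frozen head's output
z_dec = autoencoder(z_j)
loss = torch.norm(z_j - z_dec, p=2)        # ||z_j - z_dec||_2 (m = 1, n = 2)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```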


Fig. 5. Reconstruction-based training to compress intermediate output (here \( \mathbf {z}_{2} \) ) in DNN by Autoencoder (AE) (yellow).

Head Network Distillation

The training methods described above focus on either end-to-end training or encoder-decoder training. The former often requires hard targets such as one-hot vectors and a higher training cost, while the latter can focus on the injected components (encoder and decoder) during training, at the price of additional components (layers) that increase the complexity of the DNN model. To reduce both training cost and model complexity while preserving accuracy, Matsubara et al. [90] propose head network distillation (HND), which distills the head portion of the DNN—the portion containing the bottleneck—leveraging pretrained DNN models. Figure 6 illustrates this approach.


Fig. 6. Head network distillation for bottleneck-injected DNN (student), using a pretrained model as teacher. The student model’s tail portion is copied from that of its teacher model with respect to the architecture and pretrained parameters.

The original pretrained DNN (consisting of L layers) is used as a starting point, and the architecture of its head portion is simplified. As only the teacher’s head portion is altered, the tail portion of the student model is identical to that of the teacher model with respect to the architecture, and the same pretrained parameters can be maintained. Thus, head network distillation requires only the first layers of the teacher and student models in the training session, as the student head model \( \mathrm{f}_\text{head}^\text{S} \) is trained to mimic the behavior of the teacher’s head model \( \mathrm{f}_\text{head}^\text{T} \) given an input \( \mathbf {x} \): (6) \( \begin{equation} \mathcal {L}_\text{HND}(\mathbf {x}) = \big \Vert \mathrm{f}_\text{head}^\text{S}\left(\mathbf {x}; \mathbf {\theta }_\text{head}^\text{S}\right) - \mathrm{f}_\text{head}^\text{T}\left(\mathbf {x}; \mathbf {\theta }_\text{head}^\text{T}\right) \big \Vert ^2 , \end{equation} \) where \( \mathrm{f}_\text{head}^\text{S} \) and \( \mathrm{f}_\text{head}^\text{T} \) are the sequences of the first \( L_\text{head}^\text{S} \) and \( L_\text{head}^\text{T} \) layers of the student and teacher models (\( L_\text{head}^\text{S} \ll L^\text{S} \) and \( L_\text{head}^\text{T} \ll L \)), respectively.
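The following sketch illustrates Equation (6): the student's bottleneck-injected head is trained to regress the output of the frozen teacher head, while the tail is simply copied from the teacher. The ResNet-50 teacher, the layer boundary after its first residual stage, and the student head layers are illustrative assumptions, not the exact architectures used in [90].

```python
# A minimal sketch of head network distillation: only the student head (with an
# injected bottleneck) is trained, using the frozen teacher head's output as the
# regression target; the tail is reused from the teacher unchanged.
import torch
from torch import nn
from torchvision import models

teacher = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
teacher_head = nn.Sequential(teacher.conv1, teacher.bn1, teacher.relu,
                             teacher.maxpool, teacher.layer1)          # f_head^T (frozen)

student_head = nn.Sequential(                                          # f_head^S with bottleneck
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(inplace=True),
    nn.Conv2d(64, 12, kernel_size=3, stride=2, padding=1),             # injected bottleneck
    nn.Conv2d(12, 256, kernel_size=3, padding=1),
)

optimizer = torch.optim.Adam(student_head.parameters(), lr=1e-3)
x = torch.randn(8, 3, 224, 224)                                        # a batch of training images
with torch.no_grad():
    target = teacher_head(x)                                           # teacher head output
loss = nn.functional.mse_loss(student_head(x), target)                 # Equation (6)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```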

Experimental results on the ImageNet (ILSVRC 2012) dataset show that, given a bottleneck-injected model, the head network distillation method consistently outperforms cross-entropy-based training [26, 49, 126] and knowledge distillation in terms of not only training cost but also the accuracy of the trained model. This method is extended by Matsubara and Levorato [94], where the generalized head network distillation (GHND) technique is proposed for complex object detection tasks and models. We note that these tasks require finer feature maps mimicking those at intermediate layers in the original pretrained object detectors. The loss function in this approach is (7) \( \begin{equation} \mathcal {L}_\text{GHND}(\mathbf {x}) = \sum _{j \in \mathcal {J}} \lambda _{j} \cdot \mathcal {L}_{j}\left(\mathbf {x}, \mathrm{f}_{1-L_j^\text{S}}^\text{S}, \mathrm{f}_{1-L_j^\text{T}}^\text{T}\right)\!, \end{equation} \) where j is the loss index, \( \lambda _{j} \) is a scale factor (hyperparameter) associated with loss \( \mathcal {L}_{j} \), and \( \mathrm{f}_{1-L_j^\text{S}}^\text{S} \) and \( \mathrm{f}_{1-L_j^\text{T}}^\text{T} \) are the sequences of the first \( L_j^\text{S} \) and \( L_j^\text{T} \) layers in the student and teacher models (functions of the input data \( \mathbf {x} \)), respectively. The total loss is then a linear combination of \( |\mathcal {J}| \) weighted losses. Following Equation (7), the previously proposed head network distillation technique [90] can be seen as a special case of GHND. GHND significantly improves the object detection performance of bottleneck-injected R-CNN models on the COCO 2017 dataset while achieving a high compression rate.


5 EARLY EXITING: A SURVEY

This section presents a survey of the state of the art in EE strategies. We first provide a compendium of work focused on CV and NLP applications in Sections 5.2 and 5.3, respectively. Section 5.4 summarizes training methodologies used in the EE studies.

5.1 Rationale behind EE

The core idea of EE, first proposed by Teerapittayanon et al. [140], is to circumvent the need to make DNN models smaller by introducing early exits in the DNN, where execution is terminated at the first exit achieving the desired confidence on the input sample. The rationale is that some samples in test datasets (and in real-world problems) are easy for a given DNN model to classify, while others are not. EE thus ends the inference process after fewer transforms (layers) for such easy samples, reducing the overall inference time and computation cost.

Figure 7 illustrates an example of early classifiers (subbranches) introduced in a DNN model. In this example, the second early classifier has sufficient confidence in its output (class probability of 0.85 out of 1.0) to terminate the inference for the input sample, so that the following layers are not executed. Note that all the exits up to the terminating one are executed; that is, they add to the computational complexity up to that point. Thus, the classifiers added to the DNN model need to be simple; that is, they need to have far fewer layers than the portion of the network they allow to skip. Otherwise, the overall inference cost will increase on average rather than decrease. Teerapittayanon et al. [141] also apply this idea to mobile-edge-cloud computing systems: the smallest neural model is allocated to the mobile device, and if that model’s confidence for the input is not large enough, the intermediate output is forwarded to the edge server, where inference continues using a mid-sized neural model with another exit. If the output still does not reach the target confidence, the intermediate layer’s output is forwarded to the cloud, which executes the largest neural model. EE strategies have been widely investigated in the literature, as summarized in Table 4.
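A minimal sketch of this mobile-edge-cloud cascade is shown below: each tier runs its model section and local exit, and the intermediate representation is forwarded to the next tier only when the exit's confidence falls below a threshold. The three tiers, the maximum-softmax-probability confidence measure, and the 0.8 threshold are illustrative assumptions.

```python
# A minimal sketch of cascaded early exiting across device, edge, and cloud:
# inference stops at the first tier whose local exit is confident enough, and the
# intermediate tensor is forwarded to the next tier otherwise.
import torch
import torch.nn.functional as F

def cascaded_inference(x, tiers, threshold=0.8):
    """tiers: list of (tier_name, model_section, exit_head), ordered device -> edge -> cloud;
    returns (predicted class, name of the tier that produced it). Assumes batch size 1."""
    z = x
    for i, (tier_name, section, exit_head) in enumerate(tiers):
        z = section(z)                          # intermediate representation at this tier
        probs = F.softmax(exit_head(z), dim=-1)
        confidence, prediction = probs.max(dim=-1)
        if confidence.item() >= threshold or i == len(tiers) - 1:
            return prediction, tier_name        # no further forwarding needed
```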

Fig. 7. Illustration of two early exits (green) introduced to a DNN.

Table 4. Studies on Early Exiting Strategies

Work | Task(s) | Dataset(s) | Base Model(s) | Metrics | Code
Teerapittayanon et al. [140] | Image classification | MNIST [69], CIFAR-10 [63] | LeNet-5 [69], AlexNet [64], ResNet [43] | A, L | Link
Teerapittayanon et al. [141] | Image classification* | Multi-camera multi-object detection [119] | Distributed DNNs | A, D | Link
Lo et al. [87] | Image classification | CIFAR-10/100 [63] | NiN [80], ResNet [43], WRN [167] | A, C | –
Neshatpour et al. [102] | Image classification | ImageNet [120] | AlexNet [64] | A, C, L | –
Zeng et al. [168] | Image classification | CIFAR-10 [63] | AlexNet [64] | A, D, L | –
Wang et al. [148] | Image classification | CIFAR-10/100 [63] | ResNet [43] | A, C | –
Li et al. [77] | Image classification | CIFAR-10/100 [63], ImageNet (2012) [120] | MSDNet [50] | A, C | Link
Phuong and Lampert [108] | Image classification | CIFAR-100 [63], ImageNet (2012) [120] | MSDNet [50] | A | Link
Elbayad et al. [24] | Machine translation | IWSLT’14 De-En, WMT’14 En-Fr | Transformer [145] | A, C | –
Wang et al. [150] | Image classification | CIFAR-100 [63], ImageNet (2012) [120] | ResNet [43], DenseNet [51] | A, C, E | –
Yang et al. [158] | Image classification | CIFAR-10/100 [63], ImageNet [120] | RANet | A, C | Link
Soldaini and Moschitti [132] | Text ranking | WikiQA [163], TREC QA [149], ASNQ [32], GPD | RoBERTa [85] | A, C | Link
Liu et al. [84] | Text classification | ChnSentiCorp, Book review [112], Shopping review, Weibo, THUCNews, Ag.News, Amz.F, DBpedia, Yahoo, Yelp.F, Yelp.P [171] | BERT [21] | A, C, T | Link
Xin et al. [156] | GLUE [146] | SST-2 [131], MRPC [22], QQP [54], MNLI [152], QNLI [114], RTE [7, 18, 33, 37] | BERT [21], RoBERTa [85] | A, C | Link
Xing et al. [157] | Quality enhancement | RAISE [19] | Dynamic DNN | A, C | Link
Laskaridis et al. [67] | Image classification | CIFAR-100 [63], ImageNet (2012) [120] | ResNet-56 [43], ResNet-50 [43], Inception-v3 [136] | A, E, L | –
Xin et al. [155] | Text ranking | MS MARCO [104], ASNQ [32] | BERT [21] | A, L | Link
Zhou et al. [172] | GLUE [146] | CoLA [151], SST-2 [131], MRPC [22], STS-B [9], QQP [54], MNLI [152], QNLI [114], WNLI [71], RTE [7, 18, 33, 37] | BERT [21], ALBERT [66] | A, C, L, T | Link
Matsubara and Levorato [94] | Keypoint detection | COCO 2017 [83] | Keypoint R-CNN [42] | A, D, L | Link
Garg and Moschitti [31] | Text ranking, Question answering | WikiQA [163], ASNQ [32], SQuAD 1.1 [114] | BERT [21], RoBERTa [85], ELECTRA [15] | A, L | Link
Wołczyk et al. [153] | Image classification | CIFAR-10/100 [63], Tiny ImageNet | ResNet-56 [43], MobileNet [48], WideResNet [167], VGG-16BN [128] | A, L | Link
Chiang et al. [12] | Image classification | CIFAR-100 [63] | VGG-11 [128], VGG-13 [128], VGG-16 [128], VGG-19 [128] | A, L | –
Pomponi et al. [109] | Image classification | SVHN [103], CIFAR-10/100 [63] | AlexNet [64], VGG-11 [128], ResNet-20 [43] | A | Link

A: Model accuracy, C: Model complexity, D: Transferred data size, E: Energy consumption, L: Latency, T: Training cost. *The authors extract annotated objects from the original dataset for multi-camera object detection and use the extracted images for an image classification task.

As shown in Tables 1 and 3, most of the studies on SC focus on computer vision. For EE, instead, Table 4 shows a more even balance between studies targeting computer vision and NLP applications, with structural/conceptual differences between the two domains. Moreover, the EE studies for computer vision mostly build on CNNs (e.g., AlexNet [64] and ResNet [43]), whereas those for NLP mostly build on Transformer-based models (e.g., BERT [21]). For these reasons, we categorize the EE papers by task domain in Sections 5.2 and 5.3.

5.2 EE for CV Applications

Similar to the SC studies discussed in Section 4, the research community has mainly focused on EE approaches applied to CV tasks.

Design Approaches

Wang et al. [150] propose a unified Dual Dynamic Inference framework that introduces two features to a DNN model: Input-Adaptive Dynamic Inference (IADI) and Resource-Adaptive Dynamic Inference (RADI). IADI dynamically determines which sub-networks to execute for cost-efficient inference, while RADI leverages the concept of EE to offer “anytime classification.” Using the concept of EE, Lo et al. [87] propose two different methods: (1) authentic operation and (2) dynamic network sizing. The first determines whether the model input is transferred to the edge server, and the second dynamically adjusts the number of layers used as an auxiliary neural model deployed on the mobile device for efficient usage of communication channels in EC systems. Neshatpour et al. [102] decompose a DNN’s inference pipeline into multiple stages and introduce EE (termination) points for energy-efficient inference.

Training Approaches

Wang et al. [148] focus on training methods for DNNs with early exits and observe that prior EE approaches suffer from the burden of manually tuning the balancing weights of early-exit losses to find a good tradeoff between computational complexity and overall accuracy. To address this problem, the authors propose a strategy that dynamically adjusts the loss weights for the ResNet models they consider. Li et al. [77] and Phuong and Lampert [108] introduce multiple early exits to DNN models and apply knowledge distillation to each of the early exits as students, using the final classifiers as teacher models. As in other studies, the resulting multi-exit DNNs terminate inference for “easy” samples at early sub-classifiers based on confidence thresholds defined beforehand.

Inference Approaches

Yang et al. [158] leverage EE strategies for multi-scale inputs and propose an approach to classify “easy” samples with smaller neural models. Different from prior studies, their approach scales up the input image (i.e., uses a higher-resolution image as input) depending on the classification difficulty of the sample. Laskaridis et al. [67] design a distributed inference system that employs synergistic device-cloud computation for collaborative inference, including an EE strategy (referred to as progressive inference in their work). Xing et al. [157] apply EE strategies to quality enhancement tasks and propose a resource-efficient blind quality enhancement approach for compressed images. By identifying “easy” samples, they dynamically process input samples with or without early exits. Zeng et al. [168] combine EE and SC approaches and propose a framework named Boomerang, which is designed to automate end-to-end DNN inference planning for IoT scenarios; they introduce multiple early exits in AlexNet [64], and their framework profiles the model to decide its partition (splitting) point.

In addition to introducing and training bottleneck points for object detectors, Matsubara and Levorato [94] introduce a neural filter in an early stage of the head-distilled Keypoint R-CNN model. Similar to EE frameworks, the filter identifies pictures without objects of interest and triggers termination of the execution before the output of the bottleneck is forwarded. Wołczyk et al. [153] propose Zero Time Waste, a method in which each early exit reuses predictions returned by its predecessors. The method adds direct connections between early exits and combines the outputs of the previous early exits like an ensemble model. Through experiments with multiple image classification datasets and model architectures, they demonstrate that their method improves the tradeoff between accuracy and inference time compared to other early-exit methods. Extending the idea of BranchyNet [140], Chiang et al. [12] formulate the early-exit (branch) placement problem, propose a dynamic programming algorithm to address it, and discuss the tradeoff between model accuracy and inference time. Pomponi et al. [109] introduce multiple early exits to a classifier and train the entire multi-exit model jointly; using multiple base models, they discuss various early-exit stopping criteria. Many studies on EE for CV tasks publish their source code to ensure replicability of their work.

5.3 EE for NLP Applications

Interestingly, EE approaches have been widely studied not only in CV tasks—the main application of SC—but also in NLP tasks. Recent studies introduce subbranches (early exits) to transformer-based models such as BERT [21]. While these transformer-based models achieve state-of-the-art performance in NLP tasks, they have an extremely large number of parameters; e.g., BERT [21] has up to 355 million parameters, whereas the largest image classification model used in SC studies (Tables 1 and 3), ResNet-152, has 60 million parameters.

Elbayad et al. [24] develop an EE technique for NLP, targeting transformer sequence-to-sequence models [145] in machine translation tasks. The decoder networks in the considered transformer models can be trained with either an aligned or a mixed training method. The former optimizes all classifiers in the decoder network simultaneously; however, when a different classifier (exit) is chosen for each token (e.g., word) at test time, some of the hidden states from previous time steps may be missing, and the input states to the following decoder layers become misaligned (mismatched). The latter method addresses this issue: several paths of random exits are sampled, at which the model is assumed to have exited, and hidden states from different decoder depths of previous time steps are fed to the following layers, thereby reducing the mismatch.

For different tasks, Soldaini and Moschitti [132], Xin et al. [156], and Liu et al. [84] propose EE frameworks based on BERT [21] and RoBERTa [85], which share almost the same network architecture. Focusing on text ranking, specifically answer sentence selection tasks with question-answering datasets, Soldaini and Moschitti [132] add classification layers to intermediate stages of RoBERTa to build sequential (neural) rerankers [95] as early exits, and propose the Cascade Transformer models. Focusing on powerful transformer models for industrial scenarios, Liu et al. [84] discuss the effectiveness of BERT models with early classifiers on 12 NLP datasets (6 English and 6 Chinese). Similar to the studies by Li et al. [77] and Phuong and Lampert [108], Liu et al. [84] leverage knowledge distillation [46] to train the early classifiers, treating the final classifier of the BERT model as the teacher and the introduced early classifiers as students. Xin et al. [156] target general language understanding evaluation (GLUE) tasks [146] and introduce early exits after each of the 12 transformer blocks in BERT and RoBERTa models.

While the Cascade Transformer [132] discards a fixed portion of candidates (samples) given a query in answer sentence selection tasks, Xin et al. [155] use a score-based EE strategy for a BERT architecture for text ranking tasks. Zhou et al. [172] introduce early classifiers to BERT and ALBERT [66] models and discuss adversarial robustness using the ALBERT models with and without the early exits. Using an adversarial attack method [59], the authors feed perturbed input data (called adversarial examples [65]) to their trained models and show how robust their models are against the attack compared to those without early classifiers. Garg and Moschitti [31] propose an approach to filter out questions in answer sentence selection and question-answering tasks. Leveraging the concept of knowledge distillation, they train a question filter model (student), whose input is a query, to mimic the top-1 candidate score of the answer model (teacher), whose input is the query paired with the list of candidate answers. When the trained question filter deems a query answerable by the answer model, the subsequent inference pipeline is executed; otherwise, the question filter terminates the inference process for the query (i.e., an early exit) to save overall inference cost.

Most of the studies on EE for NLP tasks in Table 4 are published with source code to ensure replicable results. Notably, this application domain benefits from a well-established open-source framework, Huggingface’s Transformers [154], which provides state-of-the-art (pretrained) Transformer models, including the BERT, RoBERTa, and ALBERT models used in the above studies.
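As an illustration of how such frameworks expose the intermediate representations on which early classifiers can be attached, the hedged sketch below retrieves per-layer hidden states from a pretrained BERT with Huggingface’s Transformers; the model name, the [CLS] pooling, and the linear exit heads are our assumptions, not the exact architectures of the studies above.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

inputs = tokenizer("An example sentence for early exiting.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.hidden_states is a tuple: the embedding output plus one tensor per transformer block.
# A (hypothetical) early classifier for exit i could be a small linear head on the [CLS] token.
num_classes = 2
exit_heads = [torch.nn.Linear(model.config.hidden_size, num_classes)
              for _ in range(model.config.num_hidden_layers)]
cls_states = [h[:, 0, :] for h in outputs.hidden_states[1:]]   # [CLS] representation per block
exit_logits = [head(h) for head, h in zip(exit_heads, cls_states)]
```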

5.4 Training Methodologies for EE Strategies

To introduce EE strategies, the early classifiers need to be trained in addition to the base models. We can categorize the training methodologies used in EE studies into two main classes: joint training and separate training, illustrated in Figure 8 and described in the next sections.

Fig. 8. Examples of joint and separate training methods for a DNN with early exits.

Joint Training

Most of the training methods used in existing works belong to this category. Joint training trains all the (early) classifiers in a model simultaneously (left part of Figure 8). Specifically, these studies [24, 67, 87, 109, 132, 140, 141, 148, 150, 155, 157, 158, 168, 172] define a loss function for each of the classifiers and minimize the weighted sum of cross-entropy losses per sample as follows: (8) \( \begin{equation} \mathcal {L}_\text{Joint}([\mathbf {\hat{y}}^{1}, \ldots , \mathbf {\hat{y}}^{N}], c) = \sum _{j=1}^{N} \lambda _{j} \mathcal {L}_\text{CE}(\mathbf {\hat{y}}^{j}, c), \end{equation} \) where \( [\mathbf {\hat{y}}^{1}, \ldots , \mathbf {\hat{y}}^{N}] \) indicates outputs from N (early) classifiers, and the correct label c is shared across all the classifiers in a model. Note that the base model (final classifier) is also counted as one of the N classifiers, and \( N-1 \) early classifiers are introduced to the base model.
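As a minimal sketch of this joint objective, assuming a model whose forward pass returns the logits of all N (early) classifiers (the model interface and the uniform default weights are our assumptions):

```python
import torch
import torch.nn.functional as F

def joint_exit_loss(all_logits, targets, lambdas=None):
    """Weighted sum of per-exit cross-entropy losses (cf. Equation (8)).

    all_logits: list of N tensors of shape [batch, num_classes], one per (early) classifier.
    targets:    tensor of shape [batch] with the class index shared by all classifiers.
    """
    if lambdas is None:
        lambdas = [1.0] * len(all_logits)          # equal weights as a simple default
    return sum(lam * F.cross_entropy(logits, targets)
               for lam, logits in zip(lambdas, all_logits))

# During training, assuming model(x) returns the list of per-exit logits:
# loss = joint_exit_loss(model(x), y)
# loss.backward(); optimizer.step(); optimizer.zero_grad()
```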

Instead, Li et al. [77] and Phuong and Lampert [108] use a knowledge distillation-based loss such as Equation (3) by treating the final classifier (last exit) as a teacher model and all the early classifiers as student models. This approach is based on the assumption that the last classifier will achieve the highest accuracy among all the (early) classifiers in the model, and early classifiers (students) could learn from the last classifier as a teacher model.
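A hedged sketch of such a distillation-based objective is shown below, where the final exit’s softened outputs supervise each early exit; the temperature value and the omission of the accompanying cross-entropy term are simplifying assumptions, not the exact loss used in the cited works.

```python
import torch.nn.functional as F

def exit_distillation_loss(early_logits_list, final_logits, temperature=4.0):
    """KL divergence between each early exit (student) and the last exit (teacher)."""
    teacher = F.softmax(final_logits.detach() / temperature, dim=-1)  # teacher signal, no gradient
    loss = 0.0
    for logits in early_logits_list:
        student_log = F.log_softmax(logits / temperature, dim=-1)
        loss = loss + F.kl_div(student_log, teacher, reduction="batchmean") * temperature ** 2
    return loss
```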

Separate Training

A few studies [31, 84, 94, 156] instead train the early classifiers separately. This can be interpreted as a two-stage training paradigm: the base model is trained in the first stage, and the early classifiers introduced to the pretrained model, whose parameters are kept fixed, are trained in the second stage (see Figure 8 (right)). For instance, Xin et al. [156] fine-tune a BERT model in the first stage following Devlin et al. [21]; the early classifiers are then introduced in the model and trained while all the parameters of the BERT model learned in the first stage are kept frozen. Liu et al. [84] adopt a similar approach, but in the second training stage knowledge distillation is used to train the early classifiers; different from SC studies using knowledge distillation, the teacher model is fixed, and only the additional parameters corresponding to the early classifiers are trained. Wołczyk et al. [153] also introduce early exits to a pretrained model and train them with the cross-entropy loss.
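A minimal sketch of the second stage of such a scheme, where a pretrained backbone is frozen and only the newly attached exit classifiers are optimized, is shown below; the module and helper names are hypothetical.

```python
import torch
import torch.nn.functional as F

def prepare_separate_training(backbone, exit_classifiers, lr=1e-3):
    """Freeze the pretrained model; build an optimizer over the exit parameters only."""
    for p in backbone.parameters():
        p.requires_grad = False
    backbone.eval()                                   # e.g., keep BatchNorm statistics fixed
    exit_params = [p for head in exit_classifiers for p in head.parameters()]
    return torch.optim.Adam(exit_params, lr=lr)

# optimizer = prepare_separate_training(backbone, exits)
# for x, y in loader:
#     with torch.no_grad():
#         feats = backbone_intermediate_features(x)   # hypothetical hook/helper per exit point
#     loss = sum(F.cross_entropy(head(f), y) for head, f in zip(exits, feats))
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```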


6 SPLIT COMPUTING AND EARLY EXITING: RESEARCH CHALLENGES

In this section, we describe some of the research challenges in the SC and EE domains.

Evaluation of SC and EE in More Practical Settings

Due to the cross-disciplinary nature of this research area, it is essential to design practical and convincing evaluation settings to demonstrate the effectiveness of proposed approaches. As shown in Tables 3 and 4, the techniques proposed in many recent studies are validated only on small-scale datasets such as MNIST and CIFAR, which raises concerns about the input data size in relation to compression. Indeed, Table 2 suggests that the input data size in many of these datasets is relatively small (e.g., smaller than 2 kilobytes per image at a resolution of \( 32 \times 32 \) pixels). Such low-resolution inputs may make conventional EC sufficient, where the mobile device fully offloads the computing task by transferring the input data to an edge server: transmitting such a small amount of data requires little time even in settings with limited communication capacity, so executing even small head models on a resource-limited mobile device could increase the overall delay.

Based on the above discussion, it becomes apparent that the models and datasets, in addition to the wireless and computing environments, are of paramount importance when assessing the performance of SC and EE schemes. Of particular relevance are the evaluation of accuracy, which is not provided in some of the early studies (e.g., [43, 122, 128]), and the consideration of state-of-the-art models and datasets that are largely used in the machine learning community. For instance, the use of relatively small models, such as MobileNetV2, ResNet-50, and VGG-16, which are nonetheless likely overparameterized for simple classification tasks, could lead to misleading conclusions when injecting bottlenecks. Conversely, it was shown in [90] how challenging it is to inject bottlenecks when considering complex vision tasks such as classification on the ImageNet dataset [120].

Optimization of Bottleneck Design and Placement in SC

The study of the architecture and placement of the bottleneck in a DNN model is also of considerable importance. As suggested in [96], important metrics include (1) the bottleneck data size (or compression rate), (2) the complexity of the head model executed on the mobile device, and (3) the resulting model accuracy. As a principle, the smaller the bottleneck representation, the lower the communication cost between the mobile device and the edge server. In general, the objective of SC is to produce a bottleneck whose data size is smaller than that of the compressed input (e.g., the JPEG file size), which is in turn much smaller than the size of the raw input tensor (32-bit floating point), since the communication delay is a key component of the overall inference time [90, 91, 94, 158]. Secondly, since mobile devices often have limited computing resources and may face other constraints, such as energy consumption due to their battery capacities, SC should aim at minimizing their computational load by making head models as lightweight as possible. For instance, designing a small bottleneck at a very early stage of the DNN model reduces the computational complexity of the head model [93, 94].
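As a back-of-the-envelope illustration of the first metric, the snippet below compares the size of a raw \( 3 \times 224 \times 224 \) 32-bit input tensor with a typical JPEG file and a hypothetical 8-bit quantized bottleneck tensor; the JPEG size and the bottleneck shape are assumptions chosen only to convey the orders of magnitude involved.

```python
# Rough data-size comparison (bytes) for a single 224x224 RGB input.
raw_tensor_bytes = 3 * 224 * 224 * 4           # 32-bit floating-point input tensor: 602,112 B
jpeg_bytes_typical = 30_000                    # an assumed typical JPEG of this resolution (tens of kB)
bottleneck_bytes = 12 * 28 * 28 * 1            # hypothetical 12-channel 28x28 bottleneck, 8-bit: 9,408 B

print(f"raw tensor : {raw_tensor_bytes:>8} B")
print(f"JPEG input : {jpeg_bytes_typical:>8} B")
print(f"bottleneck : {bottleneck_bytes:>8} B")
```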

On top of these two criteria, the accuracy of the bottleneck-injected model should not be compromised, since the introduced bottleneck inevitably discards some information at its placement compared to the original model. A reasonable lower bound for model accuracy in SC is that of widely recognized lightweight models, e.g., MobileNetV2 [122] for the ImageNet dataset, considering a local computing system where such lightweight models can be efficiently executed. In general, it is challenging to optimize bottleneck design and placement with respect to all three metrics, and existing studies empirically design the bottlenecks and determine their placements. A theoretical treatment of bottleneck design and placement is therefore an interesting topic for future work.

Dynamic Control of Exits in EE

In most of the recent studies, an early exit is taken when one of the introduced early classifiers (exits) is confident enough in its prediction. However, users are required to determine, beforehand, a confidence threshold for each early classifier introduced in the original model. For example, if the first classifier’s prediction score is greater than 0.9 (on a scale from 0.0 to 1.0), the inference for the input is terminated at that exit.

To achieve more efficient inference without significantly sacrificing the accuracy of the original model, the system needs to find a balance between the (early) classifiers. As recent studies introduce multiple early exits to a model at different stages, such optimizations are challenging. Besides manually defining a threshold for each classifier based on empirical results, an interesting direction is the optimization of the decision-making process itself, that is, deciding at which (early) classifier the inference should terminate for a given input, without a set of thresholds defined beforehand based on system characteristics.

Expanding the Application Domain of SC and EE

The application domains of SC and (to a lesser extent) EE remain primarily focused on image classification. This focus may be explained by the size of the input, which makes compression a relevant problem in many settings, and by the complexity of the models and tasks. However, there are many other unexplored domains that would benefit from SC. Real-time health condition monitoring via wearable sensors is a notable example of an application where a significant amount of data is transferred from sensors to edge servers such as cellular phones and home hubs. For instance, the detection and monitoring of heart anomalies (e.g., arrhythmia) from ECG [30] require the processing of high-rate samples (e.g., 100 to \( 1,\!000 \) per heart cycle) using high-complexity DNN models [41]. Health monitoring applications pose different challenges compared to CV-based applications: both the computing capacity and the bandwidth available to the system are often smaller, and conceptual advancements are required.

Toward an Information-theoretic Perspective

The key intuition behind the success of SC and EE is similar to what has led to the success of techniques such as model pruning [38, 44, 74, 160] and knowledge distillation [46, 61, 98]: most state-of-the-art DNNs are significantly overparameterized [165, 166]. A possible approach to justify SC and EE can be found in the study of information bottlenecks (IBs), introduced in [142] as a compression technique in which a random variable \( \mathbf {X} \) is compressed while preserving relevant information about another random variable \( \mathbf {Y} \). The IB method has been applied in [143] to quantify the mutual information between network layers and derive an information-theoretic limit on DNN efficiency. This has led to attempts at explaining the behavior of deep neural networks with the information bottleneck formalism [123].
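In its standard form (notation ours, following [142]), the IB framework seeks a compressed representation \( \mathbf {Z} \) of \( \mathbf {X} \) by minimizing the Lagrangian \( \min_{p(\mathbf {z} \mid \mathbf {x})} I(\mathbf {X}; \mathbf {Z}) - \beta \, I(\mathbf {Z}; \mathbf {Y}) \), where \( I(\cdot \, ; \cdot) \) denotes mutual information and \( \beta > 0 \) trades off the compression of \( \mathbf {X} \) against the preservation of information about \( \mathbf {Y} \); in SC terms, \( \mathbf {Z} \) can be interpreted as the bottleneck representation transmitted to the edge server.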

Despite these early attempts, a strong connection between this relatively new perspective and the techniques described in this article is still elusive. Some of the approaches and architectures discussed in this article are meaningful attempts to efficiently extract a compressed representation of the input and provide sufficient information toward a certain task early in the network layers. The emerging IB formalism is a promising approach to enable the first moves in the information-theoretic analysis of neural-network-based transformations. We believe that this interpretation could serve as a foundation for an in-depth study of structural properties for both SC and EE.


7 CONCLUSIONS

Mobile devices such as smartphones and drones have now become an integral part of our daily lives. These devices increasingly utilize deep neural networks (DNNs) to execute complex inference tasks such as image classification and speech recognition, among others. For this reason, in this article, we provided a comprehensive survey of the state of the art in split computing (SC) and early exiting (EE) by presenting a thorough comparison of the most relevant approaches. We also provided a set of compelling research challenges that need to be addressed to improve existing work in the field. We hope this survey will elicit further research in these emerging fields.

Footnotes

1. In Tan et al. [139], FLOP denotes number of multiply-adds.

2. To address this problem, major machine learning venues (e.g., ICML, NeurIPS, CVPR, ECCV, NAACL, ACL, and EMNLP) adopt a reproducibility checklist as part of the official review process such as the ML Code Completeness Checklist. See https://github.com/paperswithcode/releasing-research-code.

    Footnote

REFERENCES

[1] Adelantado Ferran, Vilajosana Xavier, Tuset-Peiro Pere, Martinez Borja, Melia-Segui Joan, and Watteyne Thomas. 2017. Understanding the limits of LoRaWAN. IEEE Communications Magazine 55, 9 (2017), 34–40.
[2] Juliano S. Assine, J. C. S. Santos Filho, and Eduardo Valle. 2021. Single-training collaborative object detectors adaptive to bandwidth and computation. arXiv preprint arXiv:2105.00591 (2021).
[3] Ba Jimmy and Caruana Rich. 2014. Do deep nets really need to be deep? In Neural Information Processing Systems 2014. 2654–2662.
[4] Ballé Johannes, Laparra Valero, and Simoncelli Eero P. 2017. End-to-end optimized image compression. In International Conference on Learning Representations.
[5] Ballé Johannes, Minnen David, Singh Saurabh, Hwang Sung Jin, and Johnston Nick. 2018. Variational image compression with a scale hyperprior. In International Conference on Learning Representations.
[6] Barbera Marco V., Kosta Sokol, Mei Alessandro, and Stefa Julinda. 2013. To offload or not to offload? The bandwidth and energy costs of mobile cloud computing. In Proceedings of IEEE International Conference on Computer Communications 2013. 1285–1293.
[7] Bentivogli Luisa, Clark Peter, Dagan Ido, and Giampiccolo Danilo. 2009. The fifth PASCAL recognizing textual entailment challenge. In Proceedings of Text Analysis Conference (TAC’09).
[8] Buciluǎ Cristian, Caruana Rich, and Niculescu-Mizil Alexandru. 2006. Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 535–541.
[9] Cer Daniel, Diab Mona, Agirre Eneko, Lopez-Gazpio Iñigo, and Specia Lucia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and cross-lingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval’17). 1–14.
[10] Chen Jiasi and Ran Xukan. 2019. Deep learning with edge computing: A review. Proceedings of the IEEE 107, 8 (2019), 1655–1674.
[11] Chen Liang-Chieh, Papandreou George, Schroff Florian, and Adam Hartwig. 2017. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587 (2017).
[12] Chiang Chang-Han, Liu Pangfeng, Wang Da-Wei, Hong Ding-Yong, and Wu Jan-Jan. 2021. Optimal branch location for cost-effective inference on branchynet. In 2021 IEEE International Conference on Big Data (Big Data’21). IEEE, 5071–5080.
[13] Choi Hyomin and Bajić Ivan V. 2018. Deep feature compression for collaborative object detection. In 2018 25th IEEE International Conference on Image Processing (ICIP’18). IEEE, 3743–3747.
[14] Choi Hyomin, Cohen Robert A., and Bajić Ivan V. 2020. Back-and-forth prediction for deep tensor compression. In 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP’20). IEEE, 4467–4471.
[15] Clark Kevin, Luong Minh-Thang, Le Quoc V., and Manning Christopher D. 2019. ELECTRA: Pre-training text encoders as discriminators rather than generators. In International Conference on Learning Representations.
[16] Cohen Robert A., Choi Hyomin, and Bajić Ivan V. 2020. Lightweight compression of neural network feature tensors for collaborative intelligence. In 2020 IEEE International Conference on Multimedia and Expo (ICME’20). IEEE, 1–6.
[17] Collobert Ronan, Weston Jason, Bottou Léon, Karlen Michael, Kavukcuoglu Koray, and Kuksa Pavel. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12 (2011), 2493–2537.
[18] Dagan Ido, Glickman Oren, and Magnini Bernardo. 2005. The PASCAL recognising textual entailment challenge. In Proceedings of the 1st International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual Entailment. 177–190.
[19] Dang-Nguyen Duc-Tien, Pasquini Cecilia, Conotter Valentina, and Boato Giulia. 2015. RAISE: A raw images dataset for digital image forensics. In Proceedings of the 6th ACM Multimedia Systems Conference. 219–224.
[20] Deng Li, Hinton Geoffrey, and Kingsbury Brian. 2013. New types of deep neural network learning for speech recognition and related applications: An overview. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 8599–8603.
[21] Devlin Jacob, Chang Ming-Wei, Lee Kenton, and Toutanova Kristina. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 4171–4186.
[22] Dolan William B. and Brockett Chris. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the 3rd International Workshop on Paraphrasing (IWP’05).
[23] Dong Jin-Dong, Cheng An-Chieh, Juan Da-Cheng, Wei Wei, and Sun Min. 2018. DPP-Net: Device-aware progressive search for Pareto-optimal neural architectures. In Proceedings of the European Conference on Computer Vision (ECCV’18). 517–531.
[24] Elbayad Maha, Gu Jiatao, Grave E., and Auli M. 2020. Depth-adaptive transformer. In International Conference on Learning Representations.
[25] Eshratifar Amir Erfan, Abrishami Mohammad Saeed, and Pedram Massoud. 2019. JointDNN: An efficient training and inference engine for intelligent mobile cloud computing services. IEEE Transactions on Mobile Computing (2019).
[26] Eshratifar Amir Erfan, Esmaili Amirhossein, and Pedram Massoud. 2019. BottleNet: A deep learning architecture for intelligent mobile cloud computing services. In 2019 IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED’19). 1–6. https://ieeexplore.ieee.org/document/8824955.
[27] Everingham Mark, Gool Luc Van, Williams C. K. I., Winn John, and Zisserman Andrew. 2012. The PASCAL Visual Object Classes Challenge 2012 (VOC2012).
[28] Everingham M., Gool L. Van, Williams C. K. I., Winn J., and Zisserman A. [n.d.]. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html.
[29] Fei-Fei Li, Fergus Rob, and Perona Pietro. 2006. One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence 28, 4 (2006), 594–611.
[30] Gadaleta Matteo, Rossi Michele, Steinhubl Steven R., and Quer Giorgio. 2018. Deep learning to detect atrial fibrillation from short noisy ECG segments measured with wireless sensors. Circulation 138, Suppl_1 (2018), A16177–A16177.
[31] Garg Siddhant and Moschitti Alessandro. 2021. Will this question be answered? Question filtering via answer model distillation for efficient question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. 7329–7346.
[32] Garg Siddhant, Vu Thuy, and Moschitti Alessandro. 2020. TANDA: Transfer and adapt pre-trained transformer models for answer sentence selection. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34. 7780–7788.
[33] Giampiccolo Danilo, Magnini Bernardo, Dagan Ido, and Dolan William B. 2007. The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing. 1–9.
[34] Gundersen Odd Erik and Kjensmo Sigbjørn. 2018. State of the art: Reproducibility in artificial intelligence. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 32.
[35] Guo Tian. 2018. Cloud-based or on-device: An empirical study of mobile deep inference. In 2018 IEEE International Conference on Cloud Engineering (IC2E’18). IEEE, 184–190.
[36] Gupta Lav, Jain Raj, and Vaszkun Gabor. 2015. Survey of important issues in UAV communication networks. IEEE Communications Surveys & Tutorials 18, 2 (2015), 1123–1152.
[37] Haim R. Bar, Dagan Ido, Dolan Bill, Ferro Lisa, Giampiccolo Danilo, Magnini Bernardo, and Szpektor Idan. 2006. The second PASCAL recognising textual entailment challenge. In Proceedings of the 2nd PASCAL Challenges Workshop on Recognising Textual Entailment.
[38] Han Song, Mao Huizi, and Dally William J. 2016. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In 4th International Conference on Learning Representations.
[39] Han Song, Pool Jeff, Tran John, and Dally William. 2015. Learning both weights and connections for efficient neural network. Advances in Neural Information Processing Systems 28 (2015), 1135–1143.
[40] Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, and Andrew Y. Ng. 2014. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567 (2014).
[41] Hannun Awni Y., Rajpurkar Pranav, Haghpanahi Masoumeh, Tison Geoffrey H., Bourn Codie, Turakhia Mintu P., and Ng Andrew Y. 2019. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nature Medicine 25, 1 (2019), 65–69.
[42] He Kaiming, Gkioxari Georgia, Dollár Piotr, and Girshick Ross. 2017. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision. 2961–2969.
[43] He Kaiming, Zhang Xiangyu, Ren Shaoqing, and Sun Jian. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 770–778.
[44] He Yihui, Zhang Xiangyu, and Sun Jian. 2017. Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision. 1389–1397.
[45] Geoffrey Hinton, Li Deng, Dong Yu, George Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara Sainath, and Brian Kingsbury. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine 29, 6 (2012), 82–97.
[46] Hinton Geoffrey, Vinyals Oriol, and Dean Jeff. 2014. Distilling the knowledge in a neural network. In Deep Learning and Representation Learning Workshop: NIPS 2014.
[47] Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, and Hartwig Adam. 2019. Searching for MobileNetV3. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 1314–1324.
[48] Howard Andrew G., Zhu Menglong, Chen Bo, Kalenichenko Dmitry, Wang Weijun, Weyand Tobias, Andreetto Marco, and Adam Hartwig. 2017. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017).
[49] Hu Diyi and Krishnamachari Bhaskar. 2020. Fast and accurate streaming CNN inference via communication compression on the edge. In 2020 IEEE/ACM 5th International Conference on Internet-of-Things Design and Implementation (IoTDI’20). IEEE, 157–163.
[50] Huang Gao, Chen Danlu, Li T., Wu Felix, Maaten L. V. D., and Weinberger Kilian Q. 2018. Multi-scale dense networks for resource efficient image classification. In International Conference on Learning Representations.
[51] Huang Gao, Liu Zhuang, Maaten Laurens Van Der, and Weinberger Kilian Q. 2017. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 4700–4708.
[52] Ioffe Sergey and Szegedy Christian. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning. PMLR, 448–456.
[53] Itahara Sohei, Nishio Takayuki, and Yamamoto Koji. 2021. Packet-loss-tolerant split inference for delay-sensitive deep learning in lossy wireless networks. arXiv preprint arXiv:2104.13629 (2021).
[54] Iyer Shankar, Dandekar Nikhil, and Csernai Kornél. [n.d.]. First Quora Dataset Release: Question Pairs. Retrieved January 25, 2021, from https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs.
[55] Jacob Benoit, Kligys Skirmantas, Chen Bo, Zhu Menglong, Tang Matthew, Howard Andrew, Adam Hartwig, and Kalenichenko Dmitry. 2018. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2704–2713.
[56] Jagannath Jithin, Polosky Nicholas, Jagannath Anu, Restuccia Francesco, and Melodia Tommaso. 2019. Machine learning for wireless communications in the Internet of Things: A comprehensive survey. Ad Hoc Networks 93 (2019), 101913.
[57] Jankowski Mikolaj, Gündüz Deniz, and Mikolajczyk Krystian. 2020. Joint device-edge inference over wireless links with pruning. In 2020 IEEE 21st International Workshop on Signal Processing Advances in Wireless Communications (SPAWC’20). IEEE, 1–5.
[58] Jeong Hyuk-Jin, Jeong InChang, Lee Hyeon-Jae, and Moon Soo-Mook. 2018. Computation offloading for machine learning web apps in the edge server environment. In 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS’18). 1492–1499.
[59] Jin Di, Jin Zhijing, Zhou Joey Tianyi, and Szolovits Peter. 2020. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34. 8018–8025.
[60] Kang Yiping, Hauswald Johann, Gao Cao, Rovinski Austin, Mudge Trevor, Mars Jason, and Tang Lingjia. 2017. Neurosurgeon: Collaborative intelligence between the cloud and mobile edge. In Proceedings of the 22nd International Conference on Architectural Support for Programming Languages and Operating Systems. 615–629.
[61] Kim Yoon and Rush Alexander M. 2016. Sequence-level knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. 1317–1327.
[62] Kingma Diederik P. and Ba Jimmy. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations.
[63] Krizhevsky Alex. 2009. Learning multiple layers of features from tiny images. http://www.cs.toronto.edu/kriz/cifar.html.
[64] Krizhevsky Alex, Sutskever Ilya, and Hinton Geoffrey E. 2012. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, Pereira F., Burges C. J. C., Bottou L., and Weinberger K. Q. (Eds.). 1097–1105.
[65] Kurakin Alexey, Goodfellow Ian, and Bengio Samy. 2016. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533 (2016).
[66] Lan Zhenzhong, Chen Mingda, Goodman Sebastian, Gimpel Kevin, Sharma Piyush, and Soricut Radu. 2019. ALBERT: A lite BERT for self-supervised learning of language representations. In International Conference on Learning Representations.
[67] Laskaridis Stefanos, Venieris Stylianos I., Almeida Mario, Leontiadis Ilias, and Lane Nicholas D. 2020. SPINN: Synergistic progressive inference of neural networks over device and cloud. In Proceedings of the 26th Annual International Conference on Mobile Computing and Networking. 1–15.
[68] LeCun Yann, Bengio Yoshua, and Hinton Geoffrey. 2015. Deep learning. Nature 521, 7553 (2015), 436.
[69] LeCun Yann, Bottou Léon, Bengio Yoshua, and Haffner Patrick. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86, 11 (1998), 2278–2324.
[70] Lee Joo Chan, Kim Yongwoo, Moon SungTae, and Ko Jong Hwan. 2021. A splittable DNN-based object detector for edge-cloud collaborative real-time video inference. In 2021 17th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS’21). IEEE, 1–8.
[71] Levesque Hector J., Davis Ernest, and Morgenstern Leora. 2012. The Winograd schema challenge. In Proceedings of the 13th International Conference on Principles of Knowledge Representation and Reasoning. 552–561.
[72] Levi Gil and Hassner Tal. 2015. Age and gender classification using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 34–42.
[73] Li Guangli, Liu Lei, Wang Xueying, Dong Xiao, Zhao Peng, and Feng Xiaobing. 2018. Auto-tuning neural network quantization framework for collaborative inference between the cloud and edge. In International Conference on Artificial Neural Networks. 402–411.
[74] Li Hao, Kadav Asim, Durdanovic Igor, Samet Hanan, and Graf Hans Peter. 2016. Pruning filters for efficient ConvNets. In 4th International Conference on Learning Representations.
[75] Li Hao, Kadav Asim, Durdanovic Igor, Samet Hanan, and Graf Hans Peter. 2017. Pruning filters for efficient ConvNets. In 5th International Conference on Learning Representations.
[76] Li He, Ota Kaoru, and Dong Mianxiong. 2018. Learning IoT in edge: Deep learning for the Internet of Things with edge computing. IEEE Network 32, 1 (2018), 96–101.
[77] Li Hao, Zhang Hong, Qi Xiaojuan, Yang Ruigang, and Huang Gao. 2019. Improved techniques for training adaptive deep networks. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV’19). 1891–1900.
[78] Li Jinyu, Zhao Rui, Huang Jui-Ting, and Gong Yifan. 2014. Learning small-size DNN with output-distribution-based criteria. In 15th Annual Conference of the International Speech Communication Association.
[79] Li Zhuohan, Wallace Eric, Shen Sheng, Lin Kevin, Keutzer Kurt, Klein Dan, and Gonzalez Joey. 2020. Train big, then compress: Rethinking model size for efficient training and inference of transformers. In International Conference on Machine Learning. PMLR, 5958–5968.
[80] Lin Min, Chen Qiang, and Yan Shuicheng. 2014. Network in network. In 2nd International Conference on Learning Representations.
[81] Lin Tsung-Yi, Dollár Piotr, Girshick Ross, He Kaiming, Hariharan Bharath, and Belongie Serge. 2017. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2117–2125.
[82] Lin Tsung-Yi, Goyal Priya, Girshick Ross, He Kaiming, and Dollár Piotr. 2017. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision. 2980–2988.
[83] Lin Tsung-Yi, Maire Michael, Belongie Serge, Hays James, Perona Pietro, Ramanan Deva, Dollár Piotr, and Zitnick C. Lawrence. 2014. Microsoft COCO: Common objects in context. In European Conference on Computer Vision. Springer, 740–755.
[84] Liu Weijie, Zhou Peng, Wang Zhiruo, Zhao Zhe, Deng Haotang, and Ju Qi. 2020. FastBERT: A self-distilling BERT with adaptive inference time. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 6035–6044.
[85] Liu Yinhan, Ott Myle, Goyal Naman, Du Jingfei, Joshi Mandar, Chen Danqi, Levy Omer, Lewis Mike, Zettlemoyer Luke, and Stoyanov Veselin. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019).
[86] Liu Zejian, Li Fanrong, Li Gang, and Cheng Jian. 2021. EBERT: Efficient BERT inference with dynamic structured pruning. In Findings of the Association for Computational Linguistics (ACL-IJCNLP’21). 4814–4823.
[87] Lo Chi, Su Yu-Yi, Lee Chun-Yi, and Chang Shih-Chieh. 2017. A dynamic deep neural network design for efficient workload allocation in edge computing. In 2017 IEEE International Conference on Computer Design (ICCD’17). 273–280.
[88] Mao Yuyi, You Changsheng, Zhang Jun, Huang Kaibin, and Letaief Khaled B. 2017. A survey on mobile edge computing: The communication perspective. IEEE Communications Surveys & Tutorials 19, 4 (2017), 2322–2358.
[89] Mateo Pablo Jiménez, Fiandrino Claudio, and Widmer Joerg. 2019. Analysis of TCP performance in 5G mm-wave mobile networks. In 2019 IEEE International Conference on Communications (IEEE ICC’19). IEEE, 1–7.
[90] Matsubara Yoshitomo, Baidya Sabur, Callegaro Davide, Levorato Marco, and Singh Sameer. 2019. Distilled split deep neural networks for edge-assisted real-time systems. In Proceedings of the 2019 MobiCom Workshop on Hot Topics in Video Analytics and Intelligent Edges. 21–26.
[91] Matsubara Yoshitomo, Callegaro Davide, Baidya Sabur, Levorato Marco, and Singh Sameer. 2020. Head network distillation: Splitting distilled deep neural networks for resource-constrained edge computing systems. IEEE Access 8 (2020), 212177–212193.
[92] Matsubara Yoshitomo, Callegaro Davide, Singh Sameer, Levorato Marco, and Restuccia Francesco. 2022. BottleFit: Learning compressed representations in deep neural networks for effective and efficient split computing. arXiv preprint arXiv:2201.02693 (2022).
[93] Matsubara Yoshitomo and Levorato Marco. 2020. Split computing for complex object detectors: Challenges and preliminary results. In Proceedings of the 4th International Workshop on Embedded and Mobile Deep Learning. 7–12.
[94] Matsubara Yoshitomo and Levorato Marco. 2021. Neural compression and filtering for edge-assisted real-time object detection in challenged networks. In 2020 25th International Conference on Pattern Recognition (ICPR’21). 2272–2279.
[95] Matsubara Yoshitomo, Vu Thuy, and Moschitti Alessandro. 2020. Reranking for efficient transformer-based answer selection. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. 1577–1580.
[96] Matsubara Yoshitomo, Yang Ruihan, Levorato Marco, and Mandt Stephan. 2022. SC2: Supervised compression for split computing. arXiv preprint arXiv:2203.08875 (2022).
[97] Matsubara Yoshitomo, Yang Ruihan, Levorato Marco, and Mandt Stephan. 2022. Supervised compression for resource-constrained edge computing systems. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2685–2695.
[98] Mirzadeh Seyed Iman, Farajtabar Mehrdad, Li Ang, Levine Nir, Matsukawa Akihiro, and Ghasemzadeh Hassan. 2020. Improved knowledge distillation via teacher assistant. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34. 5191–5198.
[99] Mnih Volodymyr, Kavukcuoglu Koray, Silver David, Graves Alex, Antonoglou Ioannis, Wierstra Daan, and Riedmiller Martin. 2013. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602 (2013).
[100] Nair Vinod and Hinton Geoffrey E. 2010. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on International Conference on Machine Learning. 807–814.
[101] Nakahara Mutsuki, Hisano Daisuke, Nishimura Mai, Ushiku Yoshitaka, Maruta Kazuki, and Nakayama Yu. 2021. Retransmission edge computing system conducting adaptive image compression based on image recognition accuracy. In 2021 IEEE 94th Vehicular Technology Conference (VTC’21-Fall). IEEE, 1–5.
[102] Neshatpour Katayoun, Behnia Farnaz, Homayoun Houman, and Sasan Avesta. 2019. Exploiting energy-accuracy trade-off through contextual awareness in multi-stage convolutional neural networks. In 20th International Symposium on Quality Electronic Design (ISQED’19). 265–270.
[103] Netzer Yuval, Wang Tao, Coates Adam, Bissacco Alessandro, Wu Bo, and Ng Andrew Y. [n.d.]. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011.
[104] Nguyen Tri, Rosenberg Mir, Song Xia, Gao Jianfeng, Tiwary Saurabh, Majumder Rangan, and Deng Li. 2016. MS MARCO: A human generated machine reading comprehension dataset. In CoCo@NIPS.
[105] Padhy Ram Prasad, Verma Sachin, Ahmad Shahzad, Choudhury Suman Kumar, and Sa Pankaj Kumar. 2018. Deep neural network for autonomous UAV navigation in indoor corridor environments. Procedia Computer Science 133 (2018), 643–650.
[106] Pagliari Daniele Jahier, Chiaro Roberta, Macii Enrico, and Poncino Massimo. 2020. CRIME: Input-dependent collaborative inference for recurrent neural networks. IEEE Transactions on Computers 70, 10 (2020), 1626–1639.
[107] Panayotov Vassil, Chen Guoguo, Povey Daniel, and Khudanpur Sanjeev. 2015. LibriSpeech: An ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP’15). IEEE, 5206–5210.
[108] Phuong Mary and Lampert Christoph H. 2019. Distillation-based training for multi-exit architectures. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV’19). 1355–1364.
[109] Pomponi Jary, Scardapane Simone, and Uncini Aurelio. 2021. A probabilistic re-interpretation of confidence scores in multi-exit models. Entropy 24, 1 (2021), 1.
[110] Pouyanfar Samira, Sadiq Saad, Yan Yilin, Tian Haiman, Tao Yudong, Reyes Maria Presa, Shyu Mei-Ling, Chen Shu-Ching, and Iyengar S. S. 2018. A survey on deep learning: Algorithms, techniques, and applications. ACM Computing Surveys (CSUR) 51, 5 (2018), 1–36.
[111] Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlıcek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. 2011. The Kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society.
[112] Qiu Y., Li Hongzheng, Li Shen, Jiang Yingdi, Hu Renfen, and Yang L. 2018. Revisiting correlations between intrinsic and extrinsic evaluations of word embeddings. In CCL.
[113] Radosavovic Ilija, Kosaraju Raj Prateek, Girshick Ross, He Kaiming, and Dollár Piotr. 2020. Designing network design spaces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 10428–10436.
[114] Rajpurkar Pranav, Zhang Jian, Lopyrev Konstantin, and Liang Percy. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. 2383–2392.
[115] Redmon Joseph and Farhadi Ali. 2017. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 7263–7271.
[116] Redmon Joseph and Farhadi Ali. 2018. YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767 (2018).
[117] Ren Shaoqing, He Kaiming, Girshick Ross, and Sun Jian. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems. 91–99.
[118] Restuccia Francesco and Melodia Tommaso. 2020. Deep learning at the physical layer: System challenges and applications to 5G and beyond. IEEE Communications Magazine 58, 10 (2020), 58–64.
[119] Roig Gemma, Boix Xavier, Shitrit Horesh Ben, and Fua Pascal. 2011. Conditional random fields for multi-camera object detection. In 2011 International Conference on Computer Vision. IEEE, 563–570.
[120] Russakovsky Olga, Deng Jia, Su Hao, Krause Jonathan, Satheesh Sanjeev, Ma Sean, Huang Zhiheng, Karpathy Andrej, Khosla Aditya, Bernstein Michael, Berg Alexander C., and Fei-Fei Li. 2015. ImageNet large scale visual recognition challenge. International Journal of Computer Vision 115, 3 (2015), 211–252.
[121] Samie Farzad, Bauer Lars, and Henkel Jörg. 2016. IoT technologies for embedded computing: A survey. In 2016 International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS’16). IEEE, 1–10.
[122] Sandler Mark, Howard Andrew, Zhu Menglong, Zhmoginov Andrey, and Chen Liang-Chieh. 2018. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 4510–4520.
[123] Saxe Andrew M., Bansal Yamini, Dapello Joel, Advani Madhu, Kolchinsky Artemy, Tracey Brendan D., and Cox David D. 2019. On the information bottleneck theory of deep learning. Journal of Statistical Mechanics: Theory and Experiment 2019, 12 (2019), 124020.
[124] Sbai Marion, Saputra Muhamad Risqi U., Trigoni Niki, and Markham Andrew. 2021. Cut, distil and encode (CDE): Split cloud-edge deep inference. In 2021 18th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON’21). IEEE, 1–9.
[125] Sermanet Pierre, Eigen David, Zhang Xiang, Mathieu Michaël, Fergus Rob, and LeCun Yann. 2014. OverFeat: Integrated recognition, localization and detection using convolutional networks. In 2nd International Conference on Learning Representations.
[126] Shao Jiawei and Zhang Jun. 2020. BottleNet++: An end-to-end approach for feature compression in device-edge co-inference systems. In 2020 IEEE International Conference on Communications Workshops (ICC Workshops’20). IEEE, 1–6.
[127] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. 2017. Mastering the game of Go without human knowledge. Nature 550, 7676 (2017), 354.
[128] Simonyan Karen and Zisserman Andrew. 2015. Very deep convolutional networks for large-scale image recognition. In 3rd International Conference on Learning Representations.
[129] Singh Amarjot, Patil Devendra, and Omkar S. N. 2018. Eye in the sky: Real-time drone surveillance system (DSS) for violent individuals identification using ScatterNet hybrid deep learning network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 1629–1637.
[130] Snell Jake, Swersky Kevin, and Zemel Richard. 2017. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems. 4077–4087.
[131] Socher Richard, Perelygin Alex, Wu Jean, Chuang Jason, Manning Christopher D., Ng Andrew Y., and Potts Christopher. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. 1631–1642.
[132] Soldaini Luca and Moschitti Alessandro. 2020. The cascade transformer: An application for efficient answer sentence selection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 5697–5708.
  133. [133] Srivastava Nitish, Hinton Geoffrey, Krizhevsky Alex, Sutskever Ilya, and Salakhutdinov Ruslan. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15, 1 (2014), 19291958.Google ScholarGoogle ScholarDigital LibraryDigital Library
  134. [134] Steiner Andreas, Kolesnikov Alexander, Zhai Xiaohua, Wightman Ross, Uszkoreit Jakob, and Beyer Lucas. 2021. How to train your ViT? Data, augmentation, and regularization in vision transformers. arXiv preprint arXiv:2106.10270 (2021).Google ScholarGoogle Scholar
  135. [135] Szegedy Christian, Liu Wei, Jia Yangqing, Sermanet Pierre, Reed Scott, Anguelov Dragomir, Erhan Dumitru, Vanhoucke Vincent, and Rabinovich Andrew. 2015. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 19.Google ScholarGoogle ScholarCross RefCross Ref
  136. [136] Szegedy Christian, Vanhoucke Vincent, Ioffe Sergey, Shlens Jon, and Wojna Zbigniew. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 28182826.Google ScholarGoogle ScholarCross RefCross Ref
  137. [137] Taigman Yaniv, Yang Ming, Ranzato Marc’Aurelio, and Wolf Lior. 2014. Deepface: Closing the gap to human-level performance in face verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 17011708.Google ScholarGoogle ScholarDigital LibraryDigital Library
  138. [138] Tan Mingxing, Chen Bo, Pang Ruoming, Vasudevan Vijay, Sandler Mark, Howard Andrew, and Le Quoc V.. 2019. MnasNet: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 28202828.Google ScholarGoogle ScholarCross RefCross Ref
  139. [139] Tan Mingxing, Pang Ruoming, and Le Quoc V.. 2020. EfficientDet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 1078110790.Google ScholarGoogle ScholarCross RefCross Ref
  140. [140] Teerapittayanon Surat, McDanel Bradley, and Kung Hsiang-Tsung. 2016. BranchyNet: Fast inference via early exiting from deep neural networks. In 2016 23rd International Conference on Pattern Recognition (ICPR’16). IEEE, 24642469.Google ScholarGoogle ScholarCross RefCross Ref
  141. [141] Teerapittayanon Surat, McDanel Bradley, and Kung H. T.. 2017. Distributed deep neural networks over the cloud, the edge and end devices. In 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS’17). 328339.Google ScholarGoogle ScholarCross RefCross Ref
  142. [142] Tishby Naftali, Pereira Fernando C., and Bialek William. 2000. The information bottleneck method. arXiv preprint physics/0004057 (2000).Google ScholarGoogle Scholar
  143. [143] Tishby Naftali and Zaslavsky Noga. 2015. Deep learning and the information bottleneck principle. In 2015 IEEE Information Theory Workshop (ITW’15). IEEE, 15.Google ScholarGoogle Scholar
  144. [144] Ultralytics. [n.d.]. YOLOv5. https://github.com/ultralytics/yolov5.Google ScholarGoogle Scholar
  145. [145] Vaswani Ashish, Shazeer Noam, Parmar Niki, Uszkoreit Jakob, Jones Llion, Gomez Aidan N., Kaiser Łukasz, and Polosukhin Illia. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, Guyon I., Luxburg U. V., Bengio S., Wallach H., Fergus R., Vishwanathan S., and Garnett R. (Eds.), Vol. 30. Curran Associates, Inc., 59986008.Google ScholarGoogle Scholar
  146. [146] Wang Alex, Singh Amanpreet, Michael Julian, Hill Felix, Levy Omer, and Bowman Samuel R.. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations.Google ScholarGoogle Scholar
  147. [147] Wang Fei, Diao Boyu, Sun Tao, and Xu Yongjun. 2020. Data security and privacy challenges of computing offloading in FINs. IEEE Network 34, 2 (2020), 1420.Google ScholarGoogle ScholarCross RefCross Ref
  148. [148] Wang Meiqi, Mo Jianqiao, Lin Jun, Wang Zhongfeng, and Du Li. 2019. DynExit: A dynamic early-exit strategy for deep residual networks. In 2019 IEEE International Workshop on Signal Processing Systems (SiPS’19). IEEE, 178183.Google ScholarGoogle Scholar
  149. [149] Wang Mengqiu, Smith Noah A., and Mitamura Teruko. 2007. What is the jeopardy model? A quasi-synchronous grammar for QA. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL’07). 2232.Google ScholarGoogle Scholar
  150. [150] Wang Yue, Shen Jianghao, Hu Ting-Kuei, Xu P., Nguyen Tan, Baraniuk Richard, Wang Zhangyang, and Lin Yingyan. 2020. Dual dynamic inference: Enabling more efficient, adaptive, and controllable deep inference. IEEE Journal of Selected Topics in Signal Processing 14 (2020), 623633.Google ScholarGoogle ScholarCross RefCross Ref
  151. [151] Warstadt Alex, Singh Amanpreet, and Bowman Samuel. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics 7 (2019), 625641.Google ScholarGoogle ScholarCross RefCross Ref
  152. [152] Williams Adina, Nangia Nikita, and Bowman Samuel. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). 11121122.Google ScholarGoogle ScholarCross RefCross Ref
  153. [153] Wołczyk Maciej, Wójcik Bartosz, Bałazy Klaudia, Podolak Igor, Tabor Jacek, Śmieja Marek, and Trzcinski Tomasz. 2021. Zero time waste: Recycling predictions in early exit neural networks. Advances in Neural Information Processing Systems 34 (2021).Google ScholarGoogle Scholar
  154. [154] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. 3845.Google ScholarGoogle Scholar
  155. [155] Xin Ji, Nogueira Rodrigo, Yu Yaoliang, and Lin Jimmy. 2020. Early exiting BERT for efficient document ranking. In Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing. 8388.Google ScholarGoogle ScholarCross RefCross Ref
  156. [156] Xin Ji, Tang Raphael, Lee Jaejun, Yu Yaoliang, and Lin Jimmy. 2020. DeeBERT: Dynamic early exiting for accelerating BERT inference. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 22462251.Google ScholarGoogle ScholarCross RefCross Ref
  157. [157] Xing Qunliang, Xu Mai, Li Tianyi, and Guan Zhenyu. 2020. Early exit or not: Resource-efficient blind quality enhancement for compressed images. In Computer Vision (ECCV’20). Springer International Publishing.Google ScholarGoogle Scholar
  158. [158] Yang L., Han Yizeng, Chen X., Song Shiji, Dai Jifeng, and Huang Gao. 2020. Resolution adaptive networks for efficient inference. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’20). 23662375.Google ScholarGoogle ScholarCross RefCross Ref
  159. [159] Yang Taojiannan, Zhu Sijie, Chen Chen, Yan Shen, Zhang Mi, and Willis Andrew. 2020. MutualNet: Adaptive convnet via mutual learning from network width and resolution. In European Conference on Computer Vision. Springer, 299315.Google ScholarGoogle ScholarDigital LibraryDigital Library
  160. [160] Yang Tien-Ju, Chen Yu-Hsin, and Sze Vivienne. 2017. Designing energy-efficient convolutional neural networks using energy-aware pruning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 56875695.Google ScholarGoogle ScholarCross RefCross Ref
  161. [161] Yang Tien-Ju, Howard Andrew, Chen Bo, Zhang Xiao, Go Alec, Sandler Mark, Sze Vivienne, and Adam Hartwig. 2018. NetAdapt: Platform-aware neural network adaptation for mobile applications. In Proceedings of the European Conference on Computer Vision (ECCV’18). 285300.Google ScholarGoogle ScholarDigital LibraryDigital Library
  162. [162] Yang Yibo, Bamler Robert, and Mandt Stephan. 2020. Variational Bayesian quantization. In International Conference on Machine Learning. PMLR, 1067010680.Google ScholarGoogle Scholar
  163. [163] Yang Yi, Yih Wen-tau, and Meek Christopher. 2015. WikiQA: A challenge dataset for open-domain question answering. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. 20132018.Google ScholarGoogle ScholarCross RefCross Ref
  164. [164] Yao Shuochao, Li Jinyang, Liu Dongxin, Wang Tianshi, Liu Shengzhong, Shao Huajie, and Abdelzaher Tarek. 2020. Deep compressive offloading: Speeding up neural network inference by trading edge computation for network latency. In Proceedings of the 18th Conference on Embedded Networked Sensor Systems. 476488.Google ScholarGoogle ScholarDigital LibraryDigital Library
  165. [165] Yu Shujian and Principe Jose C.. 2019. Understanding autoencoders with information theoretic concepts. Neural Networks 117 (2019), 104123.Google ScholarGoogle ScholarDigital LibraryDigital Library
  166. [166] Yu Shujian, Wickstrøm Kristoffer, Jenssen Robert, and Príncipe José C.. 2020. Understanding convolutional neural networks with information theory: An initial exploration. IEEE Transactions on Neural Networks and Learning Systems Vol. 32, 1 (2020), 435–442.Google ScholarGoogle Scholar
  167. [167] Zagoruyko Sergey and Komodakis Nikos. 2016. Wide residual networks. In Proceedings of the British Machine Vision Conference (BMVC’16). BMVA Press, 87.1–87.12.Google ScholarGoogle ScholarCross RefCross Ref
  168. [168] Zeng Liekang, Li En, Zhou Zhi, and Chen X.. 2019. Boomerang: On-demand cooperative deep neural network inference for edge intelligence on the industrial Internet of Things. IEEE Network 33 (2019), 96103.Google ScholarGoogle ScholarDigital LibraryDigital Library
  169. [169] Zhang Menglei, Polese Michele, Mezzavilla Marco, Zhu Jing, Rangan Sundeep, Panwar Shivendra, and Zorzi Michele. 2019. Will TCP work in mmWave 5G cellular networks? IEEE Communications Magazine 57, 1 (2019), 6571.Google ScholarGoogle ScholarDigital LibraryDigital Library
  170. [170] Zhang Shizhou, Zhang Qi, Yang Yifei, Wei Xing, Wang Peng, Jiao Bingliang, and Zhang Yanning. 2020. Person re-identification in aerial imagery. IEEE Transactions on Multimedia 23 (2020), 281291.Google ScholarGoogle ScholarCross RefCross Ref
  171. [171] Zhang X., Zhao J., and LeCun Y.. 2015. Character-level convolutional networks for text classification. In NIPS.Google ScholarGoogle Scholar
  172. [172] Zhou Wangchunshu, Xu Canwen, Ge Tao, McAuley Julian, Xu Ke, and Wei Furu. 2020. BERT loses patience: Fast and robust inference with early exit. Advances in Neural Information Processing Systems 33 (2020), 18330–18341.Google ScholarGoogle Scholar
  173. [173] Zoph Barret and Le Quoc. 2017. Neural architecture search with reinforcement learning. In International Conference on Learning Representations.Google ScholarGoogle Scholar
  174. [174] Zoph Barret, Vasudevan Vijay, Shlens Jonathon, and Le Quoc V.. 2018. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 86978710.Google ScholarGoogle ScholarCross RefCross Ref

          Published in

          ACM Computing Surveys, Volume 55, Issue 5 (May 2023), 810 pages
          ISSN: 0360-0300
          EISSN: 1557-7341
          DOI: 10.1145/3567470

          Copyright © 2022 held by the owner/author(s). This work is licensed under a Creative Commons Attribution-NoDerivs International 4.0 License.

          Publisher: Association for Computing Machinery, New York, NY, United States

          Publication History

          • Received: 7 March 2021
          • Revised: 15 November 2021
          • Accepted: 14 March 2022
          • Online AM: 31 March 2022
          • Published: 3 December 2022

          Qualifiers

          • survey
          • Refereed
