1 Introduction

Typical neural network training starts from a random initialization and proceeds until convergence to some local optimum. The final result is quite sensitive to the starting random seed, as reported in [1, 2], who observed a 0.5% difference in accuracy between the worst and the best seed on the Imagenet dataset and a 1.8% difference on the CIFAR-10 dataset. Thus, one might need to run an experiment several times to avoid an unlucky seed. The final selected network is simply the one with the best validation accuracy.

We believe that the performance discrepancy between starting seeds can be explained by each initialization selecting slightly different features in the hidden layers. One might ask: can we somehow select better features for network training? One approach is to train a bigger network and select the most important channels via channel pruning [3,4,5,6]. However, training a big network that is subsequently pruned might in many cases be prohibitive, since increasing the network width by a factor of two results in roughly a fourfold increase in FLOPs and might also require changing some hyperparameters (e.g. regularization, learning rate).

Fig. 1 Comparison between (a) training a bigger network and then pruning it and (b) training two separate networks and then merging them together. The width of each rectangle denotes the number of channels in the layer

Fig. 2 Set of filters in the first layer of two ResNet20 networks trained on the CIFAR-100 dataset with different starting seeds. Each row shows filters from one network. Filters selected for the merged network are marked with a red outline

Here, we propose an alternative approach, illustrated in Fig. 1. Instead of training a bigger network and pruning it, we train two same-sized networks and merge them into one. The idea is that each training run falls into a different local optimum and thus ends up with a different set of filters in each layer, as shown in Fig. 2. We can then select a better set of filters than in either of the original networks and achieve better accuracy.

In summary, in this paper:

  • We propose a procedure for merging two networks with the same architecture into one with the same architecture as the original ones.

  • We demonstrate that our procedure produces a network with better performance than the best of the original ones. On top of that, we also show that the resulting network is better than the same network trained for an extended number of epochs (matching the whole training budget of the merged network).

1.1 Related Work

Multiple approaches try to improve the accuracy/size tradeoff of neural networks without the need for specialized sparse computation (as in the case of weight pruning). The most notable one is channel pruning [3,4,5,6]: first, a bigger network is trained, and then the most important channels are selected in each layer. The selection process usually involves assigning a score to each channel and removing the channels with the lowest scores.

Another approach is knowledge distillation [7]. Knowledge distillation first trains a bigger network (teacher) and then uses its outputs as targets for a smaller network (student). It is hypothesized that by using the outputs of the larger network, the smaller network can also learn hidden aspects of the data which are not visible in the dataset labels. However, it was shown that successful knowledge distillation requires training for a huge number of epochs (e.g. 1200) [8]. A slight twist on distillation was applied in [9], where the bigger and smaller networks were co-trained.

One can also use auxiliary losses to reduce redundancy and increase diversity across different parts of the neural network [10].

A closely related approach to ours is known as model fusion [11, 12] (with further advancements detailed in [13]). Model fusion, like our method, combines multiple networks into a unified model. Given that neural networks exhibit permutation invariance, model fusion initially focuses on aligning neurons across various models, essentially identifying the optimal permutation of neurons before averaging their respective parameters. Notably, when neurons are aligned only by looking at relevant weights, there is no need for training data, which offers a significant advantage in federated learning scenarios. In contrast, our approach directly selects an optimal subset of neurons from both networks. Our strategy might be particularly advantageous in scenarios where specific neurons from one network cannot be seamlessly aligned with those of another.

2 Methods

Here, we describe our training and merging procedure. We denote the two networks to be merged as teachers and the resulting network as the student.

Our training strategy is composed of three stages:

  1. Training of two teachers

  2. Merging procedure, i.e. creating a student, which consists of the following substeps:

     (a) Layerwise concatenation of teachers into a big student
     (b) Learning importance of big student neurons
     (c) Compression of big student

  3. Fine-tuning of the student

Training of the teachers and fine-tuning of the student are just standard neural network training by backpropagation. Below, we describe how we derive a student from two teachers.

2.1 Layerwise Concatenation of Teachers into a Big Student

First, we create a “big” student by layerwise concatenation of the teachers. The big student simulates the two teachers and averages their predictions in the final layer. This phase is just a network transformation without any training, see Fig. 3. Concatenation of a convolutional layer is done in the channel dimension, see Fig. 4. Concatenation of a linear layer is done analogously in the feature dimension. We call the model “big student” because it has twice the width.

Fig. 3 Concatenation of a linear layer. Orange and green weights are copies of the teachers’ weights. Gray weights are initialized to zero. In the beginning, the big student simulates two separate computational flows, but during training they can become interconnected

Fig. 4 PyTorch code for concatenation of a convolutional layer in ResNet. Since the convolutions are followed by batch normalization, they do not use biases

2.2 Learning Importance of Big Student Neurons

We want the big student to learn to use only half of the neurons in every layer. So, after removing unimportant neurons, we will end up with the original architecture. Besides learning the relevance of neurons, we also want the two computational flows to interconnect.

There are multiple ways to find the most relevant channels. One can assign scores to individual channels [3, 4], or use an auxiliary loss to guide the network to select the most relevant channels. We have chosen the latter approach, inspired by [14]. It leverages the \(L_0\) loss presented in [15].

Let \(\ell \) be a linear layer with k input features. Let \(g_i\) be the gate assigned to feature \(f_i\). A gate can be either open, \(g_i = 1\) (the student uses the feature), or closed, \(g_i = 0\) (the student does not use the feature). Before computing the outputs of the layer, we first multiply the inputs by the gates, i.e. instead of computing \(Wf + b\), we compute \(W(f\cdot g) + b\). To make our model use only half of the features, we want \(\frac{1}{k}\sum _{i=1}^k g_i = \frac{1}{2}\).

The problem with this approach is that \(g_i\) is discrete and therefore not trainable by gradient descent. To overcome this issue, we use the stochastic gates and the continuous relaxation of the \(L_0\) norm presented in [15]. A stochastic gate contains a random variable that has nonzero probability of being 0, \(P[g_i=0]>0\), nonzero probability of being 1, \(P[g_i=1]>0\), and is continuous on the interval (0, 1). The reparameterization trick makes the distribution of the gates trainable by gradient descent.

To encourage the big student to use only half of the features of the layer, we use an auxiliary loss:

$$\begin{aligned} L_{half}^\ell = \left( \frac{1}{2} - \frac{1}{k}\sum _{i=1}^k P[g_i>0]\right) ^2 \end{aligned}$$

Note that our loss is different from the loss used in [15]. Whereas our loss forces the model to have exactly half of the gates opened, their loss pushes the model to use as few gates as possible.
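As a concrete illustration, a gate layer built on the hard-concrete stochastic gates of [15] together with the \(L_{half}\) loss above could look roughly as follows (a minimal sketch; the class name and the constants \(\beta = 2/3\), \(\gamma = -0.1\), \(\zeta = 1.1\) follow the common defaults of [15] and are not taken from the paper's implementation):

```python
import math
import torch
import torch.nn as nn

class GateLayer(nn.Module):
    """Per-channel stochastic gates using the hard-concrete relaxation of the
    L0 norm from [15]. Multiplies its input by sampled gates and exposes the
    L_half auxiliary loss defined above."""

    def __init__(self, num_gates, beta=2/3, gamma=-0.1, zeta=1.1):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(num_gates))
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def sample_gates(self):
        if self.training:
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.beta)
        else:
            s = torch.sigmoid(self.log_alpha)
        # stretch to (gamma, zeta), then clip: exact zeros and ones are possible
        return (s * (self.zeta - self.gamma) + self.gamma).clamp(0.0, 1.0)

    def prob_open(self):
        # P[g_i > 0] under the hard-concrete distribution
        return torch.sigmoid(self.log_alpha - self.beta * math.log(-self.gamma / self.zeta))

    def loss_half(self):
        # L_half: push the expected fraction of open gates towards 1/2
        return (0.5 - self.prob_open().mean()) ** 2

    def forward(self, x):
        g = self.sample_gates()
        shape = (1, -1) + (1,) * (x.dim() - 2)   # broadcast over (N, C, ...) tensors
        return x * g.view(shape)
```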

Thus we are optimizing \(L = L_{E} + \lambda \sum _\ell L_{half}^\ell \), where \(L_{E}\) is the error loss measuring fit on the dataset and the new hyperparameter \(\lambda \) controls the relative importance of the error loss and the auxiliary loss.

The hyperparameter \(\lambda \) is sensitive and needs proper tuning. At the beginning of the training, it cannot be too big, or the student will set every gate to be closed with probability 0.5. At the end of the training, it cannot be too small, or the student will ignore the auxiliary loss in favor of the error loss; it will then use more than half of the neurons of a layer and its performance will drop significantly after compression. We found that a quadratic increase of \(\lambda \) during the big student's training works sufficiently well, see Fig. 5.
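Putting the pieces together, the combined objective and the schedule could be implemented along these lines (a sketch reusing the GateLayer above; the initial value of \(\lambda \) and the use of cross-entropy as the error loss are assumptions, while the 0.05 update coefficient is the value quoted for the sine problem in Fig. 5):

```python
import torch.nn.functional as F

def train_big_student(big_student, loader, optimizer, num_epochs, lam=1e-3):
    """Train the big student with L = L_E + lambda * sum_l L_half^l.
    The initial lambda and the cross-entropy error loss are assumptions;
    the 0.05 schedule coefficient is the one quoted for the sine problem."""
    gate_layers = [m for m in big_student.modules() if isinstance(m, GateLayer)]
    for _ in range(num_epochs):
        for x, y in loader:
            error_loss = F.cross_entropy(big_student(x), y)        # L_E
            aux_loss = sum(g.loss_half() for g in gate_layers)     # sum over layers
            loss = error_loss + lam * aux_loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        lam = lam + 0.05 * lam ** 0.5   # grows roughly quadratically with the epoch
    return big_student
```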

Fig. 5 Evolution of \(\lambda \) during training. For the sine problem we have used \(\lambda _{t+1} = \lambda _t + 0.05\,\sqrt{\lambda _t}\) (where t is the epoch number)

We have implemented the gates in a separate layer. We used two designs of gate layers, one for 2D channels and one for 1D data. The position of the gate layers is critical. For example, if a gate layer were positioned right before a batch norm, its effect (e.g. multiplying a channel by 0.1) would be countered by the batch norm, see Fig. 6.

Fig. 6 Positions of gate layers in (a) the sine problem, (b) LeNet, and (c) two consecutive blocks in ResNet. Two of the ResNet gate layers have to be identical. If the layers were not linked and, for some channel i, the first gate were closed while the second gate were open, the result of the second block for that channel would be \(0+f(x)\) instead of \(x_i + f(x)\), which would defeat the whole purpose of ResNet and skip connections

2.3 Compression of Big Student

After the learning of importance is finished, we select the most important half of the neurons in every layer. Then, we compress each layer by keeping only the selected neurons, as visualized in Fig. 7.
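For example, a merged convolutional layer could be compressed by ranking channels with the gates' \(P[g_i>0]\) and slicing out the selected rows and columns of the weight tensor (a sketch building on the GateLayer above; the function and argument names are illustrative):

```python
import torch
import torch.nn as nn

def compress_conv(big_conv, in_gates, out_gates, keep_in, keep_out):
    """Keep only the most important input/output channels of a big-student
    conv, as ranked by the preceding and following GateLayers."""
    in_idx = torch.topk(in_gates.prob_open(), keep_in).indices.sort().values
    out_idx = torch.topk(out_gates.prob_open(), keep_out).indices.sort().values
    small = nn.Conv2d(keep_in, keep_out, big_conv.kernel_size,
                      stride=big_conv.stride, padding=big_conv.padding, bias=False)
    with torch.no_grad():
        small.weight.copy_(big_conv.weight[out_idx][:, in_idx])
    return small
```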

Fig. 7 Compression of the big student. On the left is a linear layer of the big student; the preceding and following gate layers decide which neurons are important. On the right is the compressed layer, consisting only of the important neurons

3 Experimental Results

We compare our merging strategy to generic neural network training on several problems. First, we test our training strategy on a synthetic problem and show that it can learn better features than typical training. Then, we test various architectures on image classification problems and show that, after the merging procedure, the resulting network is better than the original ones and also better than one network trained for an extended amount of time.

3.1 Training Strategies

We compare our network merging with generic training strategies using the same number of training epochs. Except for Imagenet and the comparison with model fusion, we compare our training strategy with the bo3 model and one model strategies. In the CIFAR-100 experiment, we also compare with a pruning strategy. We run each test multiple times and report summary statistics (minimum, maximum, median, mean, and standard deviation of accuracy or mean squared error) over the runs.

In our merging strategy, student, we use two-thirds of the epochs to train the teachers and one-third to train the student (one-sixth to find important neurons and one-sixth to fine-tune).

In the bo3 model strategy, we train three models, each for one-third of the epochs, and then choose the best one.

In the one model strategy, we use all epochs to train one model.

In the pruning strategy, we use the channel pruning method from [4]. We first train a network with doubled width using two thirds of the training budget, then incrementally prune for one sixth of the training budget, and finally fine-tune for the rest of the time.

Fig. 8 (a) Training dataset for the sine problem, consisting of 10000 samples where \(x \sim \mathcal {U}(0,1)\) and \(y = \sin (10\pi x) + z\), \(z \sim \mathcal {N}(0,\,0.2)\). (b) Architecture of the model for the sine problem

Note that each strategy uses a similar training budget. Also, during inference, all models use an equivalent amount of resources, since they have exactly the same architecture.

3.2 Synthetic Dataset: Sine Problem

Table 1 Summary of experimental results for sine problem

First, we want to confirm the idea that a network trained from a random initialization might end up in a suboptimal local optimum, while our merging procedure finds a higher quality local optimum. To verify this, we created a synthetic dataset—five sine waves with noise. The input is a scalar x and the target is \(y = \sin (10\pi x)+z\) where \(z \sim \mathcal {N}(0,\,0.2)\), see Fig. 8.

Our architecture is composed of two linear layers (i.e. one hidden layer) with 100 hidden neurons (Fig. 8). Every strategy used 900 epochs and SGD with a starting learning rate of 0.01 and momentum 0.9. We then decreased the learning rate to 0.001 after the 100th epoch for student fine-tuning, after the 250th epoch for the teachers and the bo3 models, and after the 800th epoch for the model in one model. We repeat all experiments 50 times.
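For reference, the data and the model can be reproduced roughly as follows (a sketch; the noise parameter 0.2 is treated as a standard deviation and the hidden activation is assumed to be ReLU, neither of which is stated explicitly in the text):

```python
import math
import torch
import torch.nn as nn

# Synthetic sine dataset: 10000 samples, x ~ U(0, 1), y = sin(10*pi*x) + noise
x = torch.rand(10000, 1)
y = torch.sin(10 * math.pi * x) + 0.2 * torch.randn(10000, 1)

# Two linear layers with 100 hidden neurons (hidden activation is an assumption)
model = nn.Sequential(nn.Linear(1, 100), nn.ReLU(), nn.Linear(100, 1))
```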

We can observe that our strategy has a significantly smaller error than the other strategies (Table 1, Fig. 9).

Digging deeper (Fig. 10), we observe that networks trained by our strategy predict all the peaks correctly, whereas networks trained by the generic strategies often miss some peaks. This indicates that our training strategy helps the network select better features for later use. In some cases, a baseline network is lucky and predicts all the peaks correctly, which can be seen in its minimal error being similar to that of our training strategy.

Fig. 9 Box plot of testing losses over 50 experiments on the sine problem. The vertical line inside the box represents the median, and the cross represents the mean

Fig. 10 Plots of the sine curves learned by models trained with the different strategies. We plot each training result as one line and overlay them on top of each other. As we can see, all the students resulting from merging capture all the peaks, whereas models without merging often miss some peaks

Fig. 11 Box plot of the testing accuracies of 10 experiments on Imagewoof with LeNet

3.3 Image Classification

Here, we test our training strategy on various combinations of datasets and architectures. First, we use the Imagewoof dataset (Imagenet-1k restricted to 10 classes of dog breeds) [16, 17] with the LeNet [18] and ResNet18 [19] architectures. Then, we test our approach on the CIFAR-100 dataset [20] using the ResNet20 [19] architecture. To compare with other model fusion approaches, we run a test on the CIFAR-10 dataset using the VGG-11 architecture used in [11, 13]. Finally, we evaluate our approach on the Imagenet-1k dataset [21].

In all cases, our training strategy with merging provides better results than the generic training strategies. Results are summarized in Tables 2 and 4, and details about the training setup are provided below.

Table 2 Summary of experimental results on image classification tasks

3.3.1 Imagewoof on LeNet

LeNet is composed of two convolutional layers followed by three linear layers. The shape of an input image is (28, 28, 3). The convolutional layers have 6 and 16 output channels, respectively. The linear layers have 400, 120, and 80 input features, respectively. For the architecture of the big student, see Fig. 6.
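A sketch of this architecture (kernel sizes, padding, and pooling are assumptions chosen in the spirit of the classic LeNet so that the stated feature counts line up; only the channel/feature counts and the (28, 28, 3) input shape come from the text):

```python
import torch.nn as nn

lenet = nn.Sequential(
    nn.Conv2d(3, 6, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),  # 28 -> 28 -> 14
    nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),            # 14 -> 10 -> 5
    nn.Flatten(),                     # 16 * 5 * 5 = 400 features
    nn.Linear(400, 120), nn.ReLU(),
    nn.Linear(120, 80), nn.ReLU(),
    nn.Linear(80, 10),                # 10 Imagewoof classes
)
```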

Every strategy used 6000 epochs cumulatively and SGD with a starting learning rate of 0.01 and momentum 0.9. Every training phase except finding important neurons (i.e. teachers, student fine-tuning, bo3 models, and one model) decreased the learning rate to 0.001 in the third quarter and to 0.0001 in the last quarter of the training.

We have conducted 10 experiments; see Fig. 11 for a visualisation and Table 2 for detailed statistics. Our strategy has consistently better results than the other strategies. It has a greater sample variance than one model due to an outlier, see Fig. 11.

3.3.2 Imagewoof on ResNet18

ResNet has two information flows (one through blocks, one through skip connections). Throughout the computation, its update is \(x = f(x) + x\) instead of the original \(x = f(x)\). To conserve this property, some gate layers have to be synchronized, i.e. share weights and the realization of random variables, see Fig. 6.

Every strategy used 600 epochs cumulatively. The optimizer and the learning rate scheduler are analogous to the LeNet experiment.

We have conducted 5 experiments; see Fig. 12 for a visualisation and Table 2 for detailed statistics. As with LeNet, our strategy has consistently better results than the other strategies.

Fig. 12 Results of 5 experiments on Imagewoof with ResNet18. The worst student (0.819) had slightly better accuracy than the best long teacher (0.818)

3.3.3 CIFAR-100 on ResNet20

We also tested our approach on the CIFAR-100 dataset using ResNet20. Our total training budget is 600 epochs. We optimize models using SGD with a starting learning rate of 0.1, which is divided by 10 at one half and at three quarters of the training of one network. We run all strategies 5 times and report results in Table 2. We can see that our strategy is more than \(1\%\) better than training one model for an extended period of time. Our strategy is also competitive with typical channel pruning but, unlike channel pruning, it can reuse already trained networks of the target size.

3.3.4 Comparison with Other Model Fusion Algorithms on CIFAR-10 Using VGG-11

Model fusion as presented in [11,12,13] can achieve the same goal as our approach. However, these methods first select the optimal alignment of neurons from the teacher networks, and the resulting student is just an average of the aligned teachers. Our approach instead tries to find the best subset of neurons in each layer. In this test, we compare on the CIFAR-10 dataset using the VGG-11 [22] network, a setup used in both [11] and [13]. We took the baseline networks provided by [11] and merged them using our strategy. We found important neurons during 100 epochs and fine-tuned the resulting student for another 100 epochs. As shown in Table 3, our approach provides more accurate final results than the model fusion approaches.

Table 3 Comparison with model fusion algorithms: OTFusion from [11] and GAMF from [13] on CIFAR-10 dataset using VGG-11

3.3.5 Imagenet on ResNet18

Table 4 Results on ResNet-18 on Imagenet benchmark

We also tested our merging approach on the Imagenet-1k dataset [21]. However, as seen in [2], high quality training requires 300 to 600 epochs, which is quite prohibitive. We opted for the approach from Torchvision [23], which achieves decent results in 90 epochs. We train networks using SGD with a starting learning rate of 0.1, which is decreased by a factor of 10 at one third and at two thirds of the training. For the final fine-tuning of the student, we used a slightly smaller starting learning rate of 0.07.

For merging, we used a slightly different approach than in the previous experiments. We trained the teachers only for a short period of 20 epochs, which gives a teacher accuracy of around \(65\%\). Then we spent 20 epochs tuning the big student and finding important neurons, and finally fine-tuned for 90 epochs. With a total of 150 epochs, we get better results than ordinary training for 90 epochs and also better results than training for an equivalent 150 epochs. Results are summarized in Table 4.

4 Conclusions and Future Work

We proposed a simple scheme for merging two neural networks trained from different initializations into one. Our scheme can be used as a finalization step after one trains multiple copies of one network with varying starting seeds. Alternatively, we can use our scheme to get higher quality networks under a similar training budget, which we experimentally demonstrated.

One of our scheme’s downsides is that we need to instantiate a rather big neural network during the selection of important neurons. In the future, we would like to optimize this step to be more resource efficient. One option is to select important neurons in a layerwise fashion. Another option is to align neurons first, as in other model fusion approaches, find highly similar neurons, and run the selection step with the remaining neurons.

Other options for future research include merging more than two networks and merging networks pretrained on different datasets.