ISCA Archive Interspeech 2018

Inference-Invariant Transformation of Batch Normalization for Domain Adaptation of Acoustic Models

Masayuki Suzuki, Tohru Nagano, Gakuto Kurata, Samuel Thomas

Batch normalization, or batchnorm, is a popular technique often used to accelerate and improve the training of deep neural networks. When existing models containing batchnorm layers are used as initial models for domain adaptation or transfer learning, the new input feature distributions of the adapted domains considerably change the batchnorm transformations learnt in training mode relative to those applied in inference mode. We empirically find that this mismatch can degrade the performance of domain adaptation for acoustic modeling. To mitigate this degradation, we propose an inference-invariant transformation of batch normalization, a method which reduces the mismatch between training-mode and inference-mode transformations without changing the inference results. This invariance property is achieved by adjusting the weight and bias terms of the batchnorm to compensate for differences in the mean and variance terms when using the adaptation data. Experimental results show that our proposed method performs best on several acoustic model adaptation tasks, with up to 5% relative improvement in recognition performance in both supervised and unsupervised domain adaptation settings.
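The invariance property described in the abstract can be illustrated with a minimal sketch. The NumPy code below (hypothetical function and variable names, not the authors' implementation) re-parameterizes a batchnorm layer's weight (gamma) and bias (beta) so that, after the running mean and variance are replaced by statistics estimated on adaptation data, the inference-mode output is numerically unchanged.

import numpy as np

def inference_invariant_bn_update(gamma, beta, mean_src, var_src,
                                  mean_adapt, var_adapt, eps=1e-5):
    """Re-parameterize a batchnorm layer for new (adaptation-data) statistics.

    Returns gamma', beta' such that, for every input x,
        gamma' * (x - mean_adapt) / sqrt(var_adapt + eps) + beta'
          == gamma * (x - mean_src) / sqrt(var_src + eps) + beta,
    i.e. the inference-mode transform is unchanged even though the
    stored mean/variance now match the adaptation domain.
    """
    inv_std_src = 1.0 / np.sqrt(var_src + eps)
    gamma_new = gamma * np.sqrt(var_adapt + eps) * inv_std_src
    beta_new = beta + gamma * (mean_adapt - mean_src) * inv_std_src
    return gamma_new, beta_new

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, eps = 4, 1e-5
    gamma, beta = rng.normal(size=dim), rng.normal(size=dim)
    mean_src, var_src = rng.normal(size=dim), rng.uniform(0.5, 2.0, size=dim)
    # Statistics re-estimated on (hypothetical) adaptation data.
    mean_adapt, var_adapt = rng.normal(size=dim), rng.uniform(0.5, 2.0, size=dim)

    gamma_new, beta_new = inference_invariant_bn_update(
        gamma, beta, mean_src, var_src, mean_adapt, var_adapt, eps)

    x = rng.normal(size=(8, dim))
    y_old = gamma * (x - mean_src) / np.sqrt(var_src + eps) + beta
    y_new = gamma_new * (x - mean_adapt) / np.sqrt(var_adapt + eps) + beta_new
    assert np.allclose(y_old, y_new)  # inference output is unchanged

Under this assumption, adaptation then starts from a model whose training-mode batch statistics already match the new domain, while the inference-mode mapping is identical to the original model's.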


doi: 10.21437/Interspeech.2018-1563

Cite as: Suzuki, M., Nagano, T., Kurata, G., Thomas, S. (2018) Inference-Invariant Transformation of Batch Normalization for Domain Adaptation of Acoustic Models. Proc. Interspeech 2018, 2893-2897, doi: 10.21437/Interspeech.2018-1563

@inproceedings{suzuki18_interspeech,
  author={Masayuki Suzuki and Tohru Nagano and Gakuto Kurata and Samuel Thomas},
  title={{Inference-Invariant Transformation of Batch Normalization for Domain Adaptation of Acoustic Models}},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={2893--2897},
  doi={10.21437/Interspeech.2018-1563}
}