ISCA Archive Interspeech 2022

On monoaural speech enhancement for automatic recognition of real noisy speech using mixture invariant training

Jisi Zhang, Catalin Zorila, Rama Doddipatla, Jon Barker

In this paper, we explore an improved framework for training a monaural neural enhancement model for robust speech recognition. The proposed training framework extends the existing mixture invariant training criterion to exploit both unpaired clean speech and real noisy data. We find that the unpaired clean speech is crucial for improving the quality of speech separated from real noisy recordings. The proposed method also remixes processed and unprocessed signals to alleviate processing artifacts. Experiments on the single-channel CHiME-3 real test sets show that the proposed method significantly improves speech recognition performance over enhancement systems trained either on mismatched simulated data in a supervised fashion or on matched real data in an unsupervised fashion. Relative WER reductions of between 16% and 39% over the unprocessed signal are achieved with end-to-end and hybrid acoustic models, without retraining them on distorted data.
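For context, the base mixture invariant training (MixIT) criterion that the paper extends can be sketched as follows. This is a minimal NumPy illustration of the standard criterion (two input mixtures, best binary assignment of estimated sources to mixtures), not the authors' extended framework; the function name and the use of MSE as the reconstruction loss are our assumptions for illustration.

```python
import itertools
import numpy as np

def mixit_loss(est_sources, mix1, mix2):
    """Base MixIT loss: each of the M estimated sources is assigned
    to one of the two input mixtures, and the loss is the minimum
    total reconstruction error over all 2^M binary assignments.
    MSE is used here for simplicity (illustrative choice).

    est_sources: array of shape (M, T); mix1, mix2: arrays of shape (T,).
    """
    num_src = est_sources.shape[0]
    best = np.inf
    # Enumerate every binary assignment of sources to the two mixtures.
    for assign in itertools.product([0, 1], repeat=num_src):
        a = np.array(assign, dtype=float)[:, None]   # (M, 1) mask
        rec1 = (est_sources * (1.0 - a)).sum(axis=0)  # sources assigned to mix1
        rec2 = (est_sources * a).sum(axis=0)          # sources assigned to mix2
        err = np.mean((rec1 - mix1) ** 2) + np.mean((rec2 - mix2) ** 2)
        best = min(best, err)
    return best
```

For example, if the estimated sources sum exactly to the two mixtures under some assignment, the loss is zero; the exhaustive search over assignments is what makes the criterion "mixture invariant" (no source-level references are needed, only mixtures).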


doi: 10.21437/Interspeech.2022-11359

Cite as: Zhang, J., Zorila, C., Doddipatla, R., Barker, J. (2022) On monoaural speech enhancement for automatic recognition of real noisy speech using mixture invariant training. Proc. Interspeech 2022, 1056-1060, doi: 10.21437/Interspeech.2022-11359

@inproceedings{zhang22fa_interspeech,
  author={Jisi Zhang and Catalin Zorila and Rama Doddipatla and Jon Barker},
  title={{On monoaural speech enhancement for automatic recognition of real noisy speech using mixture invariant training}},
  year=2022,
  booktitle={Proc. Interspeech 2022},
  pages={1056--1060},
  doi={10.21437/Interspeech.2022-11359}
}