Published November 2, 2020 | Version 1.0
Journal article | Open Access

Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation

  • 1. Univ. Grenoble Alpes, CNRS, LIG
  • 2. Facebook AI

Description

We introduce the dual-decoder Transformer, a new model architecture that jointly performs automatic speech recognition (ASR) and multilingual speech translation (ST). Our models are based on the original Transformer architecture (Vaswani et al., 2017) but consist of two decoders, each responsible for one task (ASR or ST). Our major contribution lies in how these decoders interact with each other: one decoder can attend to different information sources from the other via a dual-attention mechanism. We propose two variants of these architectures corresponding to two different levels of dependencies between the decoders, called the parallel and cross dual-decoder Transformers, respectively. Extensive experiments on the MuST-C dataset show that our models outperform the previously reported highest translation performance in the multilingual settings, and also outperform bilingual one-to-one results. Furthermore, our parallel models demonstrate no trade-off between ASR and ST compared to the vanilla multi-task architecture. Our code and pretrained models are available at https://github.com/formiel/speech-translation.
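To make the dual-attention idea concrete, below is a minimal PyTorch sketch of one decoder layer that, besides the usual masked self-attention and encoder attention, also attends to the other decoder's hidden states (roughly the parallel variant, where both decoders exchange same-layer states). The class and argument names, the dimensions, and the merge-by-residual-sum choice are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

    # Hypothetical sketch of a dual-attention decoder layer (illustration only).
    import torch
    import torch.nn as nn

    class DualAttentionDecoderLayer(nn.Module):
        """Decoder layer that also attends to the *other* decoder's states."""

        def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
            super().__init__()
            self.self_attn = nn.MultiheadAttention(d_model, n_heads,
                                                   dropout=dropout, batch_first=True)
            self.encoder_attn = nn.MultiheadAttention(d_model, n_heads,
                                                      dropout=dropout, batch_first=True)
            # Extra attention over the other decoder's hidden states
            # (the "dual" attention of the paper, in sketch form).
            self.dual_attn = nn.MultiheadAttention(d_model, n_heads,
                                                   dropout=dropout, batch_first=True)
            self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                    nn.Linear(d_ff, d_model))
            self.norms = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(4)])
            self.dropout = nn.Dropout(dropout)

        def forward(self, x, encoder_out, other_decoder_out, tgt_mask=None):
            # 1) Masked self-attention over this decoder's own prefix.
            h, _ = self.self_attn(x, x, x, attn_mask=tgt_mask)
            x = self.norms[0](x + self.dropout(h))
            # 2) Standard cross-attention over the shared speech encoder output.
            h, _ = self.encoder_attn(x, encoder_out, encoder_out)
            x = self.norms[1](x + self.dropout(h))
            # 3) Dual attention over the other decoder's hidden states;
            #    merging via a residual sum is one simple choice among several.
            h, _ = self.dual_attn(x, other_decoder_out, other_decoder_out)
            x = self.norms[2](x + self.dropout(h))
            # 4) Position-wise feed-forward.
            return self.norms[3](x + self.dropout(self.ff(x)))

    # Example: the ST decoder attending to the ASR decoder's states.
    layer = DualAttentionDecoderLayer()
    st_states = torch.randn(2, 9, 512)    # (batch, st_len, d_model)
    asr_states = torch.randn(2, 7, 512)   # (batch, asr_len, d_model)
    enc_out = torch.randn(2, 50, 512)     # speech encoder output
    out = layer(st_states, enc_out, asr_states)

In the full model, each task keeps its own decoder stack, so joint training does not force the two tasks to share one output sequence; the dual attention is what lets the ST decoder condition on the emerging transcript (and vice versa).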

Files (893.7 MB)

slides.zip

MD5 checksum                          Size
md5:dfcdf07a007c766b58d731fb61d8741c  281.5 kB
md5:2793bb368e851dc4cfd9a9a31e88fb1b  201.5 MB
md5:f3efb7c7b2d872ae88744d0cb24aeb32  205.4 MB
md5:2c984b6b88acfac2d21b2d5477c0f35c  178.1 MB
md5:5f56433b765530fa416fc7fcecb5073d  189.8 MB
md5:629a197312446db46352821a3704131a  116.3 MB
md5:5a15a01a999944ff8d2c8212030b21a9  2.2 MB