ISCA Archive Interspeech 2018

Speech2Vec: A Sequence-to-Sequence Framework for Learning Word Embeddings from Speech

Yu-An Chung, James Glass

In this paper, we propose Speech2Vec, a novel deep neural network architecture for learning fixed-length vector representations of audio segments excised from a speech corpus. The vectors carry semantic information about the underlying spoken words: segments whose spoken words are semantically similar lie close to one another in the embedding space. The proposed model can be viewed as a speech version of Word2Vec. Its design is based on an RNN Encoder-Decoder framework and borrows the skip-grams and continuous bag-of-words methodologies for training. Learning word embeddings directly from speech enables Speech2Vec to exploit semantic information carried by speech that does not exist in plain text. The learned word embeddings are evaluated and analyzed on 13 widely used word similarity benchmarks, where they outperform word embeddings learned by Word2Vec from the transcriptions.
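To make the skip-grams variant concrete, the sketch below shows how an RNN Encoder-Decoder can be trained to embed one word's audio segment while reconstructing a neighboring word's acoustic features. It is a minimal illustration in PyTorch under assumed choices (GRU cells, 13-dimensional MFCC inputs, a 50-dimensional embedding, teacher forcing); the class and all hyperparameters are hypothetical and not the authors' released implementation.

# Minimal sketch of the skip-grams variant of Speech2Vec (PyTorch).
# All names, dimensions, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class Speech2VecSkipgram(nn.Module):
    def __init__(self, feat_dim=13, hidden_dim=50):
        super().__init__()
        # Encoder: consumes the acoustic frames of the current word and
        # summarizes them into one fixed-length vector (the word embedding).
        self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Decoder: conditioned on that embedding, reconstructs the acoustic
        # frames of a neighboring word in the utterance.
        self.decoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, feat_dim)

    def embed(self, segment):
        # segment: (batch, frames, feat_dim) features of one spoken word.
        _, h = self.encoder(segment)
        return h.squeeze(0)  # (batch, hidden_dim) fixed-length embedding

    def forward(self, segment, neighbor):
        # Teacher forcing: the decoder sees the neighbor's frames shifted
        # right by one step, starting from a zero frame.
        _, h = self.encoder(segment)
        start = torch.zeros_like(neighbor[:, :1, :])
        dec_in = torch.cat([start, neighbor[:, :-1, :]], dim=1)
        dec_out, _ = self.decoder(dec_in, h)
        return self.out(dec_out)  # predicted neighbor frames

# Toy usage: batch of 4 word segments, 13-dim MFCC frames.
model = Speech2VecSkipgram()
cur = torch.randn(4, 30, 13)   # current word's features (30 frames)
nbr = torch.randn(4, 25, 13)   # a neighboring word's features (25 frames)
pred = model(cur, nbr)
loss = nn.functional.mse_loss(pred, nbr)  # reconstruction loss on neighbor
emb = model.embed(cur)         # (4, 50) word embeddings

In training, each word's segment would be paired with every neighbor inside a context window, mirroring skip-grams; since the same word type occurs in many utterances, its per-instance embeddings would then need to be aggregated (e.g., averaged) into a single representation for the word-similarity evaluations.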


doi: 10.21437/Interspeech.2018-2341

Cite as: Chung, Y.-A., Glass, J. (2018) Speech2Vec: A Sequence-to-Sequence Framework for Learning Word Embeddings from Speech. Proc. Interspeech 2018, 811-815, doi: 10.21437/Interspeech.2018-2341

@inproceedings{chung18c_interspeech,
  author={Yu-An Chung and James Glass},
  title={{Speech2Vec: A Sequence-to-Sequence Framework for Learning Word Embeddings from Speech}},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={811--815},
  doi={10.21437/Interspeech.2018-2341}
}