ISCA Archive Interspeech 2016

Facing Realism in Spontaneous Emotion Recognition from Speech: Feature Enhancement by Autoencoder with LSTM Neural Networks

Zixing Zhang, Fabien Ringeval, Jing Han, Jun Deng, Erik Marchi, Björn Schuller

During the last decade, speech emotion recognition technology has matured well enough to be used in some real-life scenarios. However, these scenarios require an almost silent environment so as not to compromise the system's performance. Speech emotion recognition technology thus needs to evolve and face more challenging conditions, such as additive environmental and convolutional noise, in order to broaden its applicability to real-life settings. This contribution evaluates the impact of a front-end feature enhancement method based on an autoencoder with long short-term memory (LSTM) neural networks for robust emotion recognition from speech. Support vector regression (SVR) is then used as a back-end for time- and value-continuous emotion prediction from the enhanced features. We perform extensive evaluations with both non-stationary additive noise and convolutional noise on a database of spontaneous and natural emotions. Results show that the proposed method significantly outperforms a system trained on raw features, for both the arousal and valence dimensions, while showing almost no degradation when applied to clean speech.
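For readers who want a concrete picture of the two-stage pipeline the abstract describes (LSTM autoencoder front-end for feature enhancement, SVR back-end for continuous prediction), the following is a minimal sketch assuming Keras and scikit-learn. The feature dimensionality, sequence length, layer sizes, training settings, and placeholder data are illustrative assumptions, not the configuration reported in the paper.

# Minimal sketch (not the authors' code): an LSTM-based feature-enhancement
# autoencoder followed by SVR for time- and value-continuous emotion prediction.
# Assumptions: Keras + scikit-learn; all dimensions and hyperparameters below
# are illustrative only.
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, TimeDistributed, Dense
from sklearn.svm import SVR

N_FEAT = 88    # assumed per-frame acoustic feature dimension
SEQ_LEN = 100  # assumed number of frames per sequence

# Front-end: train the network to map noisy feature sequences to clean ones.
enhancer = Sequential([
    LSTM(128, return_sequences=True, input_shape=(SEQ_LEN, N_FEAT)),
    TimeDistributed(Dense(N_FEAT)),  # reconstruct a clean feature vector per frame
])
enhancer.compile(optimizer="adam", loss="mse")

# noisy_feats / clean_feats: parallel corpora of shape (n_seqs, SEQ_LEN, N_FEAT);
# random arrays stand in for real data here.
noisy_feats = np.random.randn(32, SEQ_LEN, N_FEAT).astype("float32")
clean_feats = np.random.randn(32, SEQ_LEN, N_FEAT).astype("float32")
enhancer.fit(noisy_feats, clean_feats, epochs=5, batch_size=8, verbose=0)

# Back-end: SVR on the enhanced frame-level features, predicting a continuous
# emotion value (e.g. arousal) for every frame.
enhanced = enhancer.predict(noisy_feats).reshape(-1, N_FEAT)
arousal = np.random.randn(enhanced.shape[0])  # placeholder gold labels
svr = SVR(kernel="linear", C=1.0)
svr.fit(enhanced, arousal)
predictions = svr.predict(enhanced)

In this setup the autoencoder is trained once on parallel noisy/clean features and then frozen, so the SVR back-end never sees raw noisy input; whether training is staged this way or done differently in the paper is not specified in the abstract.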


doi: 10.21437/Interspeech.2016-998

Cite as: Zhang, Z., Ringeval, F., Han, J., Deng, J., Marchi, E., Schuller, B. (2016) Facing Realism in Spontaneous Emotion Recognition from Speech: Feature Enhancement by Autoencoder with LSTM Neural Networks. Proc. Interspeech 2016, 3593-3597, doi: 10.21437/Interspeech.2016-998

@inproceedings{zhang16j_interspeech,
  author={Zixing Zhang and Fabien Ringeval and Jing Han and Jun Deng and Erik Marchi and Björn Schuller},
  title={{Facing Realism in Spontaneous Emotion Recognition from Speech: Feature Enhancement by Autoencoder with LSTM Neural Networks}},
  year=2016,
  booktitle={Proc. Interspeech 2016},
  pages={3593--3597},
  doi={10.21437/Interspeech.2016-998}
}