ISCA Archive Interspeech 2016

Context-Sensitive and Role-Dependent Spoken Language Understanding Using Bidirectional and Attention LSTMs

Chiori Hori, Takaaki Hori, Shinji Watanabe, John R. Hershey

To understand speaker intentions accurately in a dialog, it is important to consider the context of the surrounding sequence of dialog turns. Furthermore, each speaker may play a different role in the conversation, such as agent versus client, and thus features related to these roles may be important to the context. In previous work, we proposed context-sensitive spoken language understanding (SLU) using role-dependent long short-term memory (LSTM) recurrent neural networks (RNNs), and showed improved performance at predicting concept tags representing the intentions of agent and client in a human-human hotel reservation task. In the present study, we use bidirectional and attention-based LSTMs to train a role-dependent context-sensitive model to jointly represent both the local word-level context within each utterance, and the left and right context within the dialog. The different roles of client and agent are modeled by switching between role-dependent layers. We evaluated label accuracies in the hotel reservation task using a variety of models, including logistic regression, RNNs, LSTMs, and the proposed bidirectional and attention-based LSTMs. The bidirectional and attention-based LSTMs yield significantly better performance in this task.
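The abstract describes a role-dependent, context-sensitive tagger built from bidirectional LSTMs with attention. As a rough illustration only (not the authors' implementation), the sketch below shows one plausible way such a model could be structured: a bidirectional LSTM encodes word-level context within an utterance, an attention layer summarizes the encoded states, and separate output layers are switched depending on whether the current speaker is the agent or the client. All layer sizes, the attention form, and the tagging head are illustrative assumptions.

```python
import torch
import torch.nn as nn


class RoleDependentSLUTagger(nn.Module):
    """Hypothetical sketch of a role-dependent BLSTM tagger with attention."""

    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bidirectional LSTM captures left and right word-level context
        # within each utterance.
        self.encoder = nn.LSTM(emb_dim, hidden_dim,
                               bidirectional=True, batch_first=True)
        # Simple additive attention over encoder states (assumed form).
        self.attn = nn.Linear(2 * hidden_dim, 1)
        # Role-dependent output layers: switched by speaker role.
        self.out = nn.ModuleDict({
            "agent": nn.Linear(4 * hidden_dim, num_tags),
            "client": nn.Linear(4 * hidden_dim, num_tags),
        })

    def forward(self, word_ids, role):
        # word_ids: (batch, seq_len) token indices; role: "agent" or "client".
        states, _ = self.encoder(self.embed(word_ids))          # (B, T, 2H)
        weights = torch.softmax(self.attn(states), dim=1)        # (B, T, 1)
        context = (weights * states).sum(dim=1, keepdim=True)    # (B, 1, 2H)
        context = context.expand(-1, states.size(1), -1)         # repeat per word
        features = torch.cat([states, context], dim=-1)          # (B, T, 4H)
        return self.out[role](features)                          # per-word tag scores


# Toy usage: tag a 6-word client utterance with a 1000-word vocabulary
# and 20 concept tags (all numbers are arbitrary examples).
model = RoleDependentSLUTagger(vocab_size=1000, num_tags=20)
scores = model(torch.randint(0, 1000, (1, 6)), role="client")
print(scores.shape)  # torch.Size([1, 6, 20])
```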


doi: 10.21437/Interspeech.2016-1171

Cite as: Hori, C., Hori, T., Watanabe, S., Hershey, J.R. (2016) Context-Sensitive and Role-Dependent Spoken Language Understanding Using Bidirectional and Attention LSTMs. Proc. Interspeech 2016, 3236-3240, doi: 10.21437/Interspeech.2016-1171

@inproceedings{hori16_interspeech,
  author={Chiori Hori and Takaaki Hori and Shinji Watanabe and John R. Hershey},
  title={{Context-Sensitive and Role-Dependent Spoken Language Understanding Using Bidirectional and Attention LSTMs}},
  year=2016,
  booktitle={Proc. Interspeech 2016},
  pages={3236--3240},
  doi={10.21437/Interspeech.2016-1171}
}