Abstract
Affective databases play an important role in affective computing, which has become an attractive field of AI research. Based on an analysis of existing databases, a Chinese affective database (CHAD) is designed and established for seven emotional states: neutral, happiness, sadness, fear, anger, surprise and disgust. Instead of relying on the personal suggestion method, audiovisual materials are collected in four ways: three types of laboratory recording, plus movies. Broadcast programmes are also included as a source of vocal corpus. Comparing the five sources yields two findings. First, although broadcast programmes achieve the best performance in the listening experiment, they still suffer from copyright restrictions, a lack of visual information, and an inability to represent the characteristics of everyday speech. Second, laboratory recording using sentences with appropriate emotional content is an outstanding source of material, with performance comparable to that of broadcasts.
© 2005 Springer-Verlag Berlin Heidelberg
You, M., Chen, C., Bu, J. (2005). CHAD: A Chinese Affective Database. In: Tao, J., Tan, T., Picard, R.W. (eds) Affective Computing and Intelligent Interaction. ACII 2005. Lecture Notes in Computer Science, vol 3784. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11573548_70
Print ISBN: 978-3-540-29621-8
Online ISBN: 978-3-540-32273-3
eBook Packages: Computer Science (R0)