Speaker identification using laughter in a close social network

Authors

  • Elliott Land University of York
  • Erica Gold University of Huddersfield

DOI:

https://doi.org/10.1558/ijsll.34552

Keywords:

laughter, naïve-speaker identification, speaker-specificity

Abstract

Forensically relevant research on laughter is extremely limited in the literature; however, experts have reported analysing laughter in forensic speaker comparison casework (Gold and French 2011). This article describes a preliminary investigation into the potential speaker-specificity of laughter. A close social network of seven undergraduate university students took part in an open speaker identification task containing 4-second samples of their laughter. Overall, the network members performed much worse than in a similar study using speech samples (see Foulkes and Barron 2000), as each network member identified only one speaker correctly. The largest number of correct identifications of any speaker was three, while another three of the network members were never correctly identified. Previous studies that have also investigated laughter using voice line-ups have reported higher identification rates (Philippon, Randall and Cherryman 2013; Yarmey 2004). The differences between the results of the present study and previous studies may be explained by qualitative and quantitative differences in the laughter samples used, particularly differences in voicing and sample length. This suggests that longer samples of specifically voiced laughter may facilitate higher naïve speaker identification rates. Further research is still needed on the possible speaker-specificity of voiced laughter but it may have the potential to be developed for use as a speaker discriminant in forensic phonetic casework.

Author Biographies

  • Elliott Land, University of York
Elliott Land is a postgraduate student at the University of York, where he is currently studying for the MSc in Forensic Speech Science. He holds a BA (Hons) in English Language and Linguistics from the University of Huddersfield. He is also a research assistant on the WYRED Project (funded by the ESRC, ES/N003268/1).
  • Erica Gold, University of Huddersfield
Erica Gold is a Senior Lecturer in Forensic Speech Science at the University of Huddersfield. Erica holds an MSc and PhD in Forensic Speech Science from the University of York, and a BA in Linguistics from the University of California San Diego. She is currently the Principal Investigator on the West Yorkshire Regional English Database (WYRED) Project, funded by the Economic and Social Research Council (ES/N003268/1), through which she is collecting the largest forensically relevant database of English.

References

Armony, J. L., Chochol, C., Fecteau, S., and Belin, P. (2007). Laugh (or cry) and you will be remembered: influence of emotional expression on memory for vocalizations. Psychological Science 18(12): 1027--1029. doi:10.1111/j.1467-9280.2007.02019.x

Bachorowski, J., Smoski, M. J., and Owren, M. J. (2001). The acoustic features of human laughter. Journal of the Acoustical Society of America 110(3): 1581--1597. doi:10.1121/1.1391244

Bartle, A., and Dellwo, V. (2015). Auditory speaker discrimination by forensic phoneticians and naive listeners in voiced and whispered speech. The International Journal of Speech, Language and the Law 22(2): 229--248. doi:10.1558/ijsll.v22i2.23101

Blatchford, H., and Foulkes, P. (2006). Identification of voices in shouting. International Journal of Speech, Language and the Law 13(2): 241--254. doi:10.1558/ijsll.2006.13.2.241

Boersma, P. and Weenink, D. (2017). Praat: Doing Phonetics by Computer [Computer program]. Version 6.0.26, retrieved from http://www.praat.org/ on 24/11/2016.

Braun, A. (1995). Fundamental frequency: how speaker-specific is it? In A. Braun and J. Köster (eds.) Studies in Forensic Phonetics 9--23. Trier: Wissenschaftlicher Verlag.

Clifford, B. R., Rathborn, H., and Bull, R. (1981). The effects of delay on voice recognition accuracy. Law and Human Behavior 5(2/3): 201--208. doi:10.1007/BF01044763

Cosentino, S., Sessa, S., and Takanishi, A. (2016). Quantitative laughter detection, measurement, and classification – a critical survey. IEEE Reviews in Biomedical Engineering 9(1): 148--162. doi:10.1109/RBME.2016.2527638

Drew, P. (2008). Conversation analysis. In J. Smith (ed.), Qualitative Psychology: A Practical Guide to Research Methods (2nd ed.) 133--159. Los Angeles: SAGE.

Foulkes, P., and Barron, A. (2000). Telephone speaker recognition amongst members of a close social network. The International Journal of Speech, Language and the Law 7(2): 180--198.

Glenn, P. J., and Holt, E. (2013). Studies of Laughter in Interaction. London: Bloomsbury.

Gold, E., and French, P. (2011). International practices in forensic speaker comparison. The International Journal of Speech, Language and the Law 18(2): 293--307. doi:10.1558/ijsll.v18i2.293

Hofmann, J., Platt, T., Ruch, W., Niewiadomski, R., and Urbain, J. (2015). The influence of a virtual companion on amusement when watching funny films. Motivation and Emotion 39(3): 434--447. doi:10.1007/s11031-014-9461-y

Holt, E. (2016). Laughter at last: playfulness and laughter in interaction. Journal of Pragmatics 100: 89--102.

Hudenko, W. J., Stone, W., and Bachorowski, J. (2009). Laughter differs in children with autism: an acoustic analysis of laughs produced by children with and without the disorder. Journal of Autism and Developmental Disorders 39(10): 1392--1400. doi:10.1007/s10803-009-0752-1

Makagon, M. M., Funayama, E. S., and Owren, M. J. (2008). An acoustic analysis of laughter produced by congenitally deaf and normally hearing college students. Journal of the Acoustical Society of America 124(1): 472--483. doi:10.1121/1.2932088

Microsoft. (2012). Windows Movie Maker [Video Editing Software]. Retrieved from https://support.microsoft.com/en-gb/help/14220/windows-movie-maker-download on 24/11/2016.

Mowrer, D. E., LaPointe, L. L., and Case, J. (1987). Analysis of five acoustic correlates of laughter. Journal of Nonverbal Behavior 11(3): 191--199. doi:10.1007/BF00990237

Nolan, F. (2003). A recent voice parade. The International Journal of Speech, Language and the Law 10(2): 277--291.

Petridis, S., Martinez, B., and Pantic, M. (2013). The MAHNOB laughter database. Image and Vision Computing 31(2): 186--202. doi:10.1016/j.imavis.2012.08.014

Philippon, A. C., Randall, L. M., and Cherryman, J. (2013). The impact of laughter in earwitness identification performance. Psychiatry, Psychology and Law 20(6): 887--898. doi:10.1080/13218719.2013.768194

Provine, R. R., and Emmorey, K. (2006). Laughter among deaf signers. Journal of Deaf Studies and Deaf Education 11(4): 403--409. doi:10.1093/deafed/enl008

Schiller, N. O., and Köster, O. (1998). The ability of expert witnesses to identify voices: a comparison between trained and untrained listeners. The International Journal of Speech Language and the Law 5(1): 1--9.

Sidnell, J. (2010). Conversation Analysis: An Introduction. Chichester: Wiley-Blackwell.

Smoski, M., and Bachorowski, J. (2003a). Antiphonal laughter between friends and strangers. Cognition & Emotion 17(2): 327--340. doi:10.1080/02699930302296

Smoski, M. J., and Bachorowski, J. (2003b). Antiphonal laughter in developing friendships. Annals of the New York Academy of Sciences 1000(1): 300--303. doi:10.1196/annals.1280.030

Sørensen, M. H. (2012). Voice line-ups: speakers’ F0 values influence the reliability of voice recognitions. The International Journal of Speech Language and the Law 19(2): 145--158. doi:10.1558/ijsll.v19i2.145

Szameitat, D. P., Darwin, C. J., Szameitat, A. J., Wildgruber, D., and Alter, K. (2011). Formant characteristics of human laughter. Journal of Voice 25(1): 32--37. doi:10.1016/j.jvoice.2009.06.010

Vettin, J., and Todt, D. (2004). Laughter in conversation: features of occurrence and acoustic structure. Journal of Nonverbal Behavior 28(2): 93--115. doi:10.1023/B:JONB.0000023654.73558.72

Yarmey, A. D. (2004). Common-sense beliefs, recognition and the identification of familiar and unfamiliar speakers from verbal and non-linguistic vocalizations. The International Journal of Speech, Language and the Law 11(2): 267--277. doi:10.1558/sll.2004.11.2.267

Published

2017-12-20

Issue

Section

Articles

How to Cite

Land, E., & Gold, E. (2017). Speaker identification using laughter in a close social network. International Journal of Speech, Language and the Law, 24(2), 201-225. https://doi.org/10.1558/ijsll.34552