Abstract
The objective of the “Sign Language Recognition, Translation & Production” (SLRTP 2020) Workshop was to bring together researchers who focus on the various aspects of sign language understanding using tools from computer vision and linguistics. The workshop sought to promote a greater linguistic and historical understanding of sign languages within the computer vision community, to foster new collaborations and to identify the most pressing challenges for the field going forwards. The workshop was held in conjunction with the European Conference on Computer Vision (ECCV), 2020.
Acknowledgements
We would like to thank our sponsors Microsoft (AI for Accessibility Program), Google, and the EPSRC project “ExTOL” (EP/R03298X/1). We would also like to thank Ben Saunders, Bencie Woll, Liliane Momeni, Oscar Koller, and Sajida Chaudhary for their help and advice; Robert Adam for ASL and BSL translations; Akbar Sikder and Esther Rose Bevan for BSL interpretations; Anna Michaels and Brett Best from Arrow Interpreting for ASL interpretations; and Katy Ryder and Tara Meyer from MyClearText for the live captioning.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Camgöz, N.C. et al. (2020). SLRTP 2020: The Sign Language Recognition, Translation & Production Workshop. In: Bartoli, A., Fusiello, A. (eds) Computer Vision – ECCV 2020 Workshops. ECCV 2020. Lecture Notes in Computer Science(), vol 12536. Springer, Cham. https://doi.org/10.1007/978-3-030-66096-3_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-66095-6
Online ISBN: 978-3-030-66096-3