Abstract
This paper addresses the problem of isolated number recognition using visual information only. We use an intensity transformation and a spatial filter to estimate the minimum enclosing rectangle of the mouth in each frame. For each utterance, we obtain two vectors composed of the widths and heights of the mouth, respectively. We then present a method that recognizes the speech based on polynomial fitting. First, both the width and height vectors are normalized and resampled to a constant length via interpolation. Second, the least-squares method is used to produce two third-order polynomials that represent the main trends of the two vectors and reduce the noise caused by estimation error. Last, the positions of three crucial points (i.e., the maximum, the minimum, and the right boundary point) of each third-order polynomial curve form a feature vector. For each utterance, we average all feature vectors of the training data to make a template, and use the Euclidean distance between the template and the testing data to perform classification. Experiments show promising results for the proposed approach in comparison with existing methods.
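The pipeline described in the abstract (normalize, resample to a constant length, fit a third-order polynomial by least squares, extract three crucial points, and classify by Euclidean distance to per-class templates) can be sketched as follows. This is a minimal illustration assuming NumPy; the function names `crucial_points` and `classify`, the resampled length of 100, and the (position, value) encoding of the crucial points are assumptions, not details taken from the paper.

```python
import numpy as np

def crucial_points(seq, n=100):
    """Normalize a mouth width/height sequence, resample it to length n,
    fit a third-order polynomial by least squares, and return a feature
    vector built from the curve's maximum, minimum, and right boundary point."""
    seq = np.asarray(seq, dtype=float)
    seq = (seq - seq.min()) / (np.ptp(seq) + 1e-12)   # normalize to [0, 1]
    t_old = np.linspace(0.0, 1.0, len(seq))
    t = np.linspace(0.0, 1.0, n)
    resampled = np.interp(t, t_old, seq)              # constant length via interpolation
    coeffs = np.polyfit(t, resampled, 3)              # least-squares cubic fit
    curve = np.polyval(coeffs, t)
    i_max, i_min = int(curve.argmax()), int(curve.argmin())
    # feature: (position, value) pairs for the max, min, and right boundary point
    return np.array([t[i_max], curve[i_max],
                     t[i_min], curve[i_min],
                     t[-1], curve[-1]])

def classify(feature, templates):
    """Return the label of the nearest template by Euclidean distance.
    templates maps each utterance label to its averaged training feature vector."""
    return min(templates, key=lambda lbl: np.linalg.norm(feature - templates[lbl]))
```

In use, each template would be the mean of `crucial_points` over all training utterances of one number, with width and height features concatenated before the distance computation.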
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
Cite this paper
Li, M., Cheung, Y.M. (2010). A Novel Automatic Lip Reading Method Based on Polynomial Fitting. In: An, A., Lingras, P., Petty, S., Huang, R. (eds) Active Media Technology. AMT 2010. Lecture Notes in Computer Science, vol 6335. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15470-6_31
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-15469-0
Online ISBN: 978-3-642-15470-6
eBook Packages: Computer Science (R0)