ABSTRACT
Analyzing human gaze is a fundamental way to investigate human attention. Likewise, a person's view image contains the visual information of whatever he or she is attending to. This paper proposes an interface system that extracts the region of an object viewed by a person from a view image sequence by analyzing the history of gaze points. All gaze points, each recorded as a 2D point in a view image, are transferred to the image from which the object region is to be extracted. The points are then divided into groups based on their colors and positions, and the gaze points in each group form an initial region. After every region has been extended, outlier regions are removed by comparing the colors and optical flows of the extended regions. The remaining regions are merged into one to form the gaze region.
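To make the pipeline concrete, the following is a minimal Python sketch of the flow the abstract describes: group gaze points by color and position, grow a region from each group, reject outlier regions by motion, and merge the survivors. It assumes the view frame is an RGB numpy array, the gaze points are (x, y) pixel coordinates already transferred into that frame, and a dense optical-flow field of shape (H, W, 2) is available. The greedy grouping, flood-fill growing, median-flow outlier test, and all thresholds are illustrative stand-ins, not the paper's actual methods (the paper also compares region colors, not only flows, when rejecting outliers).

```python
import numpy as np


def group_gaze_points(image, points, color_thresh=30.0, dist_thresh=40.0):
    """Greedily group 2D gaze points whose colors and positions are close.

    A stand-in for the paper's grouping step; thresholds are arbitrary.
    """
    groups = []
    for x, y in points:
        color = image[y, x].astype(float)
        placed = False
        for g in groups:
            if (np.linalg.norm(color - g["color"]) < color_thresh
                    and np.hypot(x - g["cx"], y - g["cy"]) < dist_thresh):
                g["pts"].append((x, y))
                n = len(g["pts"])
                # Update running means of the group's color and centroid.
                g["color"] += (color - g["color"]) / n
                g["cx"] += (x - g["cx"]) / n
                g["cy"] += (y - g["cy"]) / n
                placed = True
                break
        if not placed:
            groups.append({"pts": [(x, y)], "color": color,
                           "cx": float(x), "cy": float(y)})
    return groups


def grow_region(image, seeds, color_thresh=30.0):
    """Flood-fill region growing from one group's gaze points (4-connected)."""
    h, w = image.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    ref = np.mean([image[y, x] for x, y in seeds], axis=0)
    stack = list(seeds)
    while stack:
        x, y = stack.pop()
        if not (0 <= x < w and 0 <= y < h) or mask[y, x]:
            continue
        if np.linalg.norm(image[y, x].astype(float) - ref) > color_thresh:
            continue
        mask[y, x] = True
        stack += [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return mask


def extract_gaze_region(image, gaze_points, flow, flow_thresh=2.0):
    """Grow one region per group, drop regions whose mean optical flow
    disagrees with the robust (median) motion, and merge the rest."""
    groups = group_gaze_points(image, gaze_points)
    masks = [grow_region(image, g["pts"]) for g in groups]
    masks = [m for m in masks if m.any()]
    if not masks:
        return np.zeros(image.shape[:2], dtype=bool)
    flows = np.array([flow[m].mean(axis=0) for m in masks])
    median = np.median(flows, axis=0)  # robust reference motion
    keep = [m for m, f in zip(masks, flows)
            if np.linalg.norm(f - median) < flow_thresh]
    if not keep:
        return np.zeros(image.shape[:2], dtype=bool)
    return np.logical_or.reduce(keep)
```

The median-flow test here plays the role of the paper's outlier removal: regions belonging to the gazed object should share a coherent apparent motion relative to the moving wearable camera, while spurious regions picked up along the gaze history will not.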