Abstract
This paper presents a two-camera motion capture system capable of estimating a constrained set of human postures in real time. We first obtain a 3D shape model of the person to be tracked and build a posture dictionary containing many example poses. To estimate the current posture, the 3D shape model is deformed to each dictionary pose and projected onto the image plane; the resulting silhouettes are hierarchically matched against the silhouette observed in the current image. Based on this method, we have developed a virtual fashion show system that renders a computer graphics model moving synchronously with a real fashion model, but wearing different clothes.
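The dictionary-based matching described above can be sketched as a coarse-to-fine search over pre-rendered silhouettes. The abstract does not specify the similarity measure or the hierarchy, so this minimal sketch assumes an intersection-over-union score and a simple two-level cluster tree (`silhouette_score`, `hierarchical_match`, and the `tree` structure are all illustrative, not the authors' implementation):

```python
import numpy as np

def silhouette_score(observed, candidate):
    """Similarity between two binary silhouette masks (higher is better).
    Intersection-over-union is used here as an illustrative stand-in for
    whatever matching cost the system actually uses."""
    inter = np.logical_and(observed, candidate).sum()
    union = np.logical_or(observed, candidate).sum()
    return inter / union if union else 0.0

def hierarchical_match(observed, dictionary, tree):
    """Coarse-to-fine search over a posture dictionary.

    dictionary: {pose_index: binary silhouette rendered from the deformed
                 3D shape model for that pose}
    tree:       {representative pose_index: list of member pose indices},
                i.e. a one-level clustering of the dictionary.

    First pick the best cluster representative (coarse pass), then refine
    among that cluster's members (fine pass)."""
    best_rep = max(tree, key=lambda r: silhouette_score(observed, dictionary[r]))
    return max(tree[best_rep],
               key=lambda i: silhouette_score(observed, dictionary[i]))
```

A two-level search like this touches only the representatives plus one cluster per frame, which is what makes dictionary lookup feasible at video rate; the real system would render the silhouettes offline when the dictionary is built.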
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Okada, R., Stenger, B., Ike, T., Kondoh, N. (2006). Virtual Fashion Show Using Real-Time Markerless Motion Capture. In: Narayanan, P.J., Nayar, S.K., Shum, HY. (eds) Computer Vision – ACCV 2006. ACCV 2006. Lecture Notes in Computer Science, vol 3852. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11612704_80
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-31244-4
Online ISBN: 978-3-540-32432-4