
Qualitative detection of motion by a moving observer

Published in International Journal of Computer Vision

Abstract

Two complementary methods for the detection of moving objects by a moving observer are described. The first is based on the fact that, in a rigid environment, the projected velocity at any point in the image is constrained to lie on a 1-D locus in velocity space whose parameters depend only on the observer motion. If the observer motion is known, an independently moving object can, in principle, be detected because its projected velocity is unlikely to fall on this locus. We show how this principle can be adapted to use partial information about the motion field and observer motion that can be rapidly computed from real image sequences. The second method utilizes the fact that the apparent motion of a fixed point due to smooth observer motion changes slowly, while the apparent motion of many moving objects such as animals or maneuvering vehicles may change rapidly. The motion field at a given time can thus be used to place constraints on the future motion field which, if violated, indicate the presence of an autonomously maneuvering object. In both cases, the qualitative nature of the constraints allows the methods to be used with the inexact motion information typically available from real image sequences. Implementations of the methods that run in real time on a parallel pipelined image processing system are described.
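To make the two constraints concrete, the following is a minimal sketch, not the paper's real-time pipelined implementation. It assumes a purely translating observer with a known focus of expansion, a dense flow field from any available estimator, and illustrative thresholds (angle_thresh_deg, min_speed, change_thresh) that are placeholders rather than values from the paper.

```python
import numpy as np

def flag_independent_motion(flow, foe, angle_thresh_deg=20.0, min_speed=0.5):
    """Flag pixels whose image motion violates the rigid-world constraint
    for a purely translating observer.

    Under pure translation, the flow of any static point lies along the ray
    from the focus of expansion (FOE) through that pixel: its direction is
    fixed and only its magnitude varies with depth (the 1-D locus).  A flow
    vector pointing well off that ray suggests an independently moving object.

    flow : (H, W, 2) array of image velocities (u, v)
    foe  : (x, y) focus of expansion in pixel coordinates
    """
    h, w, _ = flow.shape
    ys, xs = np.mgrid[0:h, 0:w]
    rx, ry = xs - foe[0], ys - foe[1]          # radial constraint direction
    r_norm = np.hypot(rx, ry) + 1e-9
    u, v = flow[..., 0], flow[..., 1]
    speed = np.hypot(u, v)
    cos_a = (u * rx + v * ry) / (speed * r_norm + 1e-9)
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    # Only trust the direction test where the flow is large enough to measure.
    return (speed > min_speed) & (angle > angle_thresh_deg)

def flag_maneuvering(flow_prev, flow_curr, change_thresh=1.0):
    """Flag pixels where the apparent motion changes faster between frames
    than smooth observer motion should allow (a crude temporal-coherence test)."""
    diff = flow_curr - flow_prev
    return np.hypot(diff[..., 0], diff[..., 1]) > change_thresh
```

In the paper's setting only partial, qualitative information about the motion field and observer motion is assumed to be available, so the exact angle and change thresholds above stand in for much looser consistency tests.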




About this article

Cite this article

Nelson, R.C. Qualitative detection of motion by a moving observer. Int J Comput Vision 7, 33–46 (1991). https://doi.org/10.1007/BF00130488


Keywords

Navigation