Pattern Recognition

Volume 35, Issue 6, June 2002, Pages 1339-1353
Estimating relative vehicle motions in traffic scenes

https://doi.org/10.1016/S0031-3203(01)00119-4

Abstract

Autonomous operation of a vehicle on a road calls for understanding of various events involving the motions of the vehicles in its vicinity. In this paper we show how a moving vehicle which is carrying a camera can estimate the relative motions of nearby vehicles. We show how to “smooth” the motion of the observing vehicle, i.e. to correct the image sequence so that transient motions (primarily rotations) resulting from bumps, etc. are removed and the sequence corresponds more closely to the sequence that would have been collected if the motion had been smooth. We also show how to detect the motions of nearby vehicles relative to the observing vehicle. We present results for several road image sequences which demonstrate the effectiveness of our approach.

Introduction

Autonomous operation of a vehicle on a road calls for understanding of various events involving the motions of the vehicles in its vicinity. In normal traffic flow, most of the vehicles on a road move in the same direction without major changes in their distances and relative speeds. When a nearby vehicle deviates from this norm (e.g. when it passes or changes lanes), or when it is on a collision course, some action may need to be taken. In this paper we show how a vehicle carrying a camera can estimate the relative motions of nearby vehicles.

Understanding the relative motions of vehicles requires modeling both the motion of the observing vehicle and the motions of the other vehicles. In Ref. [1] we showed that the motions of vehicles could be represented using a Darboux motion model that corresponds to the motion of an object moving along a smooth curve that lies on a smooth surface. We showed that deviations from Darboux motion correspond primarily to small, rapid rotations around the roll and pitch axes of the vehicle. These rotations arise from the vehicle's suspension elements in response to unevenness of the road. We derived estimated bounds on both the smooth rotations due to Darboux motion (from highway design principles) and the non-smooth rotations due to the suspension, and showed that both types of rotational motion, as well as the non-smooth translational component of the motion (bounce), are small relative to the smooth (Darboux) translational motion of the vehicle.

By our analysis, both the rotational and translational velocity components of the observing vehicle are important. On the other hand, the rotational velocity components of an observed vehicle are negligible compared to its translational velocity. As a consequence we need to estimate the rotational velocity components only for the observing vehicle. This is the case even when an observed vehicle is changing its direction of motion relative to the observing vehicle (turning or changing lanes); the turn shows up as a gradual change in the direction of the relative translational velocity.

An important consequence of the Darboux motion model is that for a fixed forward-looking camera mounted on the observing vehicle the direction of translation (and therefore the position of the focus of expansion, FOE) remains the same in the images obtained by the camera. We use this fact to estimate the observing vehicle's rotational velocity components; this is done by finding the rotational flow which, when subtracted from the observed flow, leaves a radial flow pattern (radiating from the FOE) of minimal magnitude.
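The derotation step described above admits a direct linear formulation: the rotational component of the image flow is depth independent and linear in the angular velocity, so the rotation whose flow, when subtracted, leaves the most nearly radial field about the FOE can be found by least squares. The following sketch illustrates the idea, assuming the standard small-rotation image flow equations and an illustrative sign convention; the function and variable names are ours, not the paper's.

```python
import numpy as np

def derotate_flow(pts, flow, foe, f=1.0):
    """Estimate a small camera rotation (wx, wy, wz) as the rotation whose
    flow, subtracted from the observed flow, minimizes the component
    perpendicular to the radial direction from the FOE.
    pts: (N, 2) image points; flow: (N, 2) observed flow; foe: (x0, y0);
    f: focal length. Sign conventions here are illustrative only."""
    x, y = pts[:, 0], pts[:, 1]
    # Rotational part of the image motion field (depth independent):
    #   u_rot = wx*x*y/f - wy*(f + x**2/f) + wz*y
    #   v_rot = wx*(f + y**2/f) - wy*x*y/f - wz*x
    Mu = np.stack([x * y / f, -(f + x**2 / f), y], axis=1)
    Mv = np.stack([f + y**2 / f, -x * y / f, -x], axis=1)
    # Unit vectors perpendicular to the radial direction from the FOE.
    r = pts - np.asarray(foe, dtype=float)
    r /= np.linalg.norm(r, axis=1, keepdims=True)
    n = np.stack([-r[:, 1], r[:, 0]], axis=1)
    # The perpendicular residual is linear in (wx, wy, wz): least squares.
    A = n[:, :1] * Mu + n[:, 1:] * Mv
    b = (n * flow).sum(axis=1)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    derotated = flow - np.stack([Mu @ w, Mv @ w], axis=1)
    return w, derotated
```

On synthetic flow built from a known rotation plus a purely radial translational field, the least-squares solve recovers the rotation exactly, since the radial part contributes nothing to the perpendicular residual.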

We describe the motion field using full perspective projection, estimate its rotational components, and derotate the field. The flow fields of nearby vehicles are then, under the Darboux motion model, pure translational fields. We analyze the motions of the other vehicles under weak perspective projection, and derive their motion parameters. We present results for several road image sequences obtained from cameras carried by moving vehicles. The results demonstrate the effectiveness of our approach.
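Under weak perspective, the flow of a purely translating object reduces to a uniform component plus an isotropic scaling: u = a + s·x, v = b + s·y, where (a, b) encodes the lateral translation scaled by the object's mean depth and s = -Tz/Z0 is the relative rate of approach. A minimal sketch of fitting this model by least squares (our parameterization, not necessarily the paper's):

```python
import numpy as np

def fit_weak_perspective_flow(pts, flow):
    """Fit u = a + s*x, v = b + s*y to the flow of a translating object
    seen under weak perspective. Returns (a, b, s), where s = -Tz/Z0.
    Illustrative parameterization; names are ours."""
    x, y = pts[:, 0], pts[:, 1]
    n = len(pts)
    A = np.zeros((2 * n, 3))
    A[:n, 0] = 1.0   # coefficient of a in the u equations
    A[:n, 2] = x     # coefficient of s in the u equations
    A[n:, 1] = 1.0   # coefficient of b in the v equations
    A[n:, 2] = y     # coefficient of s in the v equations
    rhs = np.concatenate([flow[:, 0], flow[:, 1]])
    (a, b, s), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, s
```

With this sign convention, s > 0 corresponds to an approaching object, and 1/s gives a rough time-to-contact estimate.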

In Section 2, we review research related to autonomous driving and analysis of road and traffic scenes, as well as selected references on independent motion detection and object motion understanding. In Section 3, we discuss the image motion field in images obtained by a vehicle-borne camera and describe a way to estimate the necessary derotation, as well as methods of estimating nearby vehicle motions from the normal flow field. Section 4 presents experimental results for several traffic scene sequences taken at different locations. More details about the Darboux motion model are provided in Appendix A.

Related work

There has been extensive research on vision tasks related to autonomous driving and traffic scenes. We will not attempt to review this literature here; we cite only a few recent references [2], [3], [4], [5].

More relevant to this paper is work on vehicle detection and tracking by a moving observer. Early work on high-level descriptions of object/vehicle trajectories in terms of such concepts as stopping/starting, object interactions, and motion verbs has been described by Nagel et al. [6], [7].

Motion of the observing vehicle and the camera

Assume that a camera is mounted on the observing vehicle; let dc be the position vector of the mass center of the vehicle relative to the nodal point of the camera. The orientation of the vehicle coordinate system Cξηζ relative to the camera is given by an orthogonal rotational matrix (a matrix of direction cosines) which we denote by Rc. The columns of Rc are the unit vectors of the vehicle coordinate system expressed in the camera coordinate system. We will assume that the position and
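The frame relation above can be made concrete: since the columns of Rc are the vehicle axes expressed in camera coordinates, a point given in vehicle coordinates maps into the camera frame by one matrix product plus the offset dc. A small sketch, with hypothetical helper names; only the notation follows the text:

```python
import numpy as np

def vehicle_to_camera(p_vehicle, Rc, dc):
    """Express a point given in vehicle coordinates (relative to the mass
    center, along the vehicle axes) in camera coordinates.
    Rc: 3x3 direction-cosine matrix whose columns are the vehicle unit
    axes written in the camera frame; dc: position of the vehicle mass
    center relative to the nodal point of the camera."""
    Rc = np.asarray(Rc, dtype=float)
    # A matrix of direction cosines is orthogonal: Rc^T Rc = I.
    assert np.allclose(Rc.T @ Rc, np.eye(3))
    return np.asarray(dc, dtype=float) + Rc @ np.asarray(p_vehicle, dtype=float)
```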

Experiments

In Sections 4.1 (Road and vehicle detection) and 4.2 (Derotation), we give examples illustrating road detection, stabilization, and vehicle detection. In Section 4.3, we present results for several sequences showing vehicles in motion.

Conclusions and plans for future work

Understanding the motions of vehicles from images taken by a moving camera requires a mathematical formulation of the relationships between the camera's motion and the image motion field, as well as a model for the other vehicles’ trajectories and their contributions to the image motion field. The use of the Darboux frame provides a vocabulary appropriate for describing long motion sequences.

We have derived equations for understanding the relative motions of vehicles in traffic scenes from a

Acknowledgements

The support of this effort by the Office of Naval Research under grant N00014-95-1-0521 is gratefully acknowledged.

References (44)

  • W. Kasprzak et al., Adaptive road recognition and ego-state tracking in the presence of obstacles, Int. J. Comput. Vision (1998)
  • L. Dorst, Analyzing the behaviors of a car: a study in abstraction of goal-directed motions, IEEE Trans. Systems, Man, Cybernet.—Part A: Systems and Humans (1998)
  • D. Koller, H. Heinze, H.-H. Nagel, Algorithmic characterization of vehicle trajectories from image sequences by motion...
  • H. Kollnig, H.-H. Nagel, M. Otte, Association of motion verbs with vehicle movements extracted from dense optical flow...
  • H. Kollnig et al., 3D pose estimation by directly matching polyhedral models to gray value gradients, Int. J. Comput. Vision (1997)
  • T.N. Tan et al., Model-independent recovery of object orientations, IEEE Trans. Robot. Automat. (1997)
  • T.N. Tan et al., Model-based localization and recognition of road vehicles, Int. J. Comput. Vision (1998)
  • M. Betke, E. Haritaoglu, L.S. Davis, Multiple vehicle detection and tracking in hard real time, Technical Report...
  • P.H. Batavia, D.A. Pomerleau, C.E. Thorpe, Detecting overtaking vehicles with implicit optical flow, Technical Report...
  • M. Aste et al., Visual routines for real-time monitoring of vehicle behavior, Mach. Vision Appl. (1998)
  • A. Giachetti et al., The use of optical flow for road navigation, IEEE Trans. Robot. Automat. (1998)
  • S.-C. Pei et al., Vehicle-type motion estimation by the fusion of image point and line features, Pattern Recognition (1998)

About the Author—ZORAN DURIC received a Ph.D. in Computer Science from the University of Maryland at College Park in 1995. From 1995 to 1997 he was an Assistant Research Scientist at the Machine Learning and Inference Laboratory at George Mason University and at the Center for Automation Research at the University of Maryland. From 1996 to 1997 he was also a Visiting Assistant Professor at the Computer Science Department of George Mason University. He joined the faculty of George Mason University in the Fall of 1997 as an Assistant Professor of Computer Science.

    About the Author—ROMAN GOLDENBERG received the B.A. degree (with honors) in computer science from the Technion, Israel Institute of Technology, Haifa in 1995. From 1994 to 1996, 1999 he was with IBM Research Lab in Haifa. Currently he is a Ph.D. student at the Computer Science Department, Technion. His research interests include video analysis, tracking, motion based recognition, PDE methods for image processing, medical imaging, etc.

About the Author—EHUD RIVLIN received the B.Sc. and M.Sc. degrees in computer science and the M.B.A. degree from the Hebrew University in Jerusalem, and the Ph.D. from the University of Maryland.

    Currently, he is an Associate Professor in the Computer Science Department at the Technion, Israel Institute of Technology. His current research interests are in machine vision and robot navigation.

    About the Author—AZRIEL ROSENFELD is a tenured Research Professor, a Distinguished University Professor, and Director of the Center for Automation Research at the University of Maryland in College Park. He also holds affiliate professorships in the Departments of Computer Science, Electrical Engineering, and Psychology. He holds a Ph.D. in mathematics from Columbia University (1957), rabbinic ordination (1952) and a Doctor of Hebrew Literature degree (1955) from Yeshiva University, and honorary Doctor of Technology degrees from Linköping University, Sweden (1980) and Oulu University, Finland (1994) and an honorary Doctor of Humane Letters degree from Yeshiva University (2000).

    Dr. Rosenfeld is widely regarded as the leading researcher in the world in the field of computer image analysis. Over a period of 35 years he has made many fundamental and pioneering contributions to nearly every area of that field. He wrote the first textbook in the field (1969); was founding editor of its first journal (1972); and was co-chairman of its first international conference (1987). He has published over 30 books and over 600 book chapters and journal articles, and has directed over 50 Ph.D. dissertations. In 1985, he served as chairman of a panel appointed by the National Research Council to brief the President's Science Advisor on the subject of computer vision; he has also served (1985–1988) as a member of the Vision Committee of the National Research Council. In honor of his 65th birthday, a book entitled “Advances in Image Understanding—A Festschrift for Azriel Rosenfeld”, edited by Kevin Bowyer and Narendra Ahuja, was published by IEEE Computer Society Press in 1996.

He is a Fellow of the Institute of Electrical and Electronics Engineers (1971), won its Emanuel Piore Award in 1985, and received its Third Millennium Medal in 2000; he is a founding Fellow of the American Association for Artificial Intelligence (1990) and of the Association for Computing Machinery (1993); he is a Fellow of the Washington Academy of Sciences (1988), and won its Mathematics and Computer Science Award in 1988; he was a founding Director of the Machine Vision Association of the Society of Manufacturing Engineers (1985–1988), won its President's Award in 1987 and is a certified Manufacturing Engineer (1988); he was a founding member of the IEEE Computer Society's Technical Committee on Pattern Analysis and Machine Intelligence (1965), served as its Chairman (1985–1987), and received the Society's Meritorious Service Award in 1986, its Harry Goode Memorial Award in 1995, and became a Golden Core member of the Society in 1996; he received the IEEE Systems, Man, and Cybernetics Society's Norbert Wiener Award in 1995; he received an IEEE Standards Medallion in 1990, and the Electronic Imaging International Imager of the Year Award in 1991; he was a founding member of the Governing Board of the International Association for Pattern Recognition (1978–1985), served as its President (1980–1982), won its first K.S. Fu Award in 1988, and became one of its founding Fellows in 1994; he received the Information Science Award from the Association for Intelligent Machinery in 1998; he was a Foreign Member of the Academy of Science of the German Democratic Republic (1988–1992), and is a Corresponding Member of the National Academy of Engineering of Mexico (1982).
