Abstract

The Zinf algorithm is a novel method for navigation and obstacle avoidance based on optical flow. The algorithm computes the rotation and the translational direction of the camera and additionally detects obstacles, providing a swerve direction. Although the method requires only little computing power, its accuracy is high and can be adapted to the circumstances at hand.



Motivation and some details

To stabilize a robot or to execute a coarse movement, it is sufficient to know the relative position of the robot. So far, mainly active sensors, such as laser sensors, have been used to fulfill this task. However, it would often be advantageous to compute the relative location accurately and in real time with only a single camera, owing to its compactness, passivity, low power consumption, and light weight.

Therefore, an algorithm has been developed which is able to calculate the rotation and the translation from a frame sequence. This algorithm was designed to operate in 3D space without any a priori knowledge and to be real-time capable. The approach is based on the assumption that the pixel-discretized camera positions of points far away from the camera are affected only by the rotational component of the motion. The rotation is calculated with the so-called Arun algorithm, and the translation is estimated from the centroid of a weighted point cloud of the optical flow. Both estimators are embedded in a RANSAC framework. Furthermore, obstacle recognition has been implemented to be able to avoid imminent dangers in the environment and to obtain a swerve direction.
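The rotation step can be sketched with the standard SVD-based Arun method. The snippet below is a minimal illustration, assuming the inputs are corresponding 3D viewing directions of distant features in two frames; the function and variable names are illustrative and not taken from the thesis:

```python
import numpy as np

def arun_rotation(p, q):
    """Least-squares rotation R such that q_i ~ R p_i (Arun et al.'s SVD method).

    p, q: (N, 3) arrays of corresponding direction vectors, e.g. unit
    bearing vectors of features assumed to be far from the camera.
    """
    # Center both point sets so only the rotation remains to be solved.
    p_c = p - p.mean(axis=0)
    q_c = q - q.mean(axis=0)
    # Cross-covariance matrix and its singular value decomposition.
    H = p_c.T @ q_c
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    # Guard against a reflection (det = -1) in degenerate configurations.
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R
```

In the full algorithm this least-squares step would sit inside the RANSAC loop mentioned above: rotation hypotheses are fitted on subsets of the flow, the hypothesis with the largest consensus set is kept, and the translation direction is then estimated from the derotated flow of the remaining, nearer points.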

zinf.jpg

Several applications for this algorithm are planned, e.g. on MAVs or for the 3D reconstruction of a wasp's flight.



Videos


A short Flash film about the basic principle of the algorithm: Principle of the Algorithm
The results of the motion estimation and obstacle avoidance on a test run in real time: Testrun - Fast


Prizes

This master's thesis was honored with the Siemens Prize for the best master's thesis 2007 at the TUM.


References

[1] Elmar Mair, Direkte Schätzung der Lageveränderung mit einer monokularen Kamera (Direct Estimation of Pose Change with a Monocular Camera), Master's Thesis.
[2] Darius Burschka and Elmar Mair, Direct Pose Estimation with a Monocular Camera, RobVis08.