


Comparative Presentation of Real-Time Obstacle Avoidance Algorithms Using Solely Stereo Vision

Ioannis Kostavelis, Lazaros Nalpantidis and Antonios Gasteratos
Robotics and Automation Lab., Production and Management Engineering Dept., Democritus University of Thrace, Greece

Abstract. This work presents a comparison between vision-based obstacle avoidance algorithms for mobile robot navigation. The issue of obstacle avoidance in robotics demands a reliable solution, since mobile platforms often have to maneuver in arbitrary environments with a high level of risk. The most significant advantage of the presented work is the use of only one sensor, i.e. a stereo camera, which significantly diminishes the computational cost. Three different versions of the proposed method have been developed. The implementation of these algorithms consists of a stereo vision module, which is common to all the versions, and a decision making module, which differs in each version and proposes an efficient way of processing stereo information in order to navigate a robotic platform. The algorithms have been implemented in C++ and the achieved frame rate ensures that the robot is able to execute the proposed decisions in real time. The presented algorithms have been tested on various input images and their results are shown and discussed.

1. Introduction

The main purpose of this work is the development and comparison of three vision-based obstacle avoidance algorithms. A successful obstacle avoidance algorithm should be able to adapt to local conditions and at the same time be computationally efficient, even in unstructured and unknown environments. This requirement becomes even more demanding due to the restricted computational resources that a mobile platform usually provides. The only sensor used in the presented implementations is a stereo camera.

Stereo vision is a technique that offers a lot of information and can produce efficient results when applied to robot navigation tasks. As previously mentioned, one of the implemented modules performs the required stereo processing. This module produces reliable and detailed disparity images, i.e. depth maps, providing depth information about the scenery in front of the mobile robot. The second module that has been developed takes advantage of the previously acquired depth information and finds the most appropriate direction for the robot in order to avoid any possible obstacles. The disparity images are created using the C++ application program interface (API) of Point Grey Research [1], which is also the manufacturer of the stereo camera used. The decision making methods are also written in the C++ programming language and comprise innovative methods for stereo vision obstacle avoidance.

2. Related Work

In mobile robot navigation many techniques are used, such as odometry, active beacons, and GPS systems, as extensively discussed in [2]. These techniques can coexist as part of combined efforts to define a mobile robot's position and determine the required navigation instructions. All the aforementioned methods demand a variety of sensors that must be installed on the platform [3]. There are also hybrid implementations that involve stereo vision systems and ultrasonic sensors, which are used in localization and mapping problems [4]. Furthermore, solely stereo vision can be applied to the efficient detection of 3D objects, as described in [5]. Considering the above as background, the contribution of this work is the development of an algorithm for obstacle avoidance using only one stereoscopic camera, shown in Figure 1. This choice has the additional advantage that the proposed system could be easily integrated with other vision-based methods such as object recognition and tracking.

Figure 1. The stereoscopic camera Bumblebee2 of Point Grey Research.

3. Stereo Vision Module

The stereo vision equipment utilized in this work is the Bumblebee2 stereo camera by Point Grey Research. Point Grey Bumblebee2 stereo vision cameras are factory calibrated. The Bumblebee2 uses two CCD image sensors and provides quality 3D data at real-time processing speed. It is able to produce as output 640x480 pixel images at 48 frames per second, or 1024x768 pixel images at 20 FPS, through its IEEE-1394 interface.

The stereo camera is used to capture two pre-calibrated images. The images are aligned and corrected in order to remove the lens distortion and to ensure that the epipolar lines are parallel to the horizontal axis. A successful alignment ensures the production of correct disparity images, because disparity then occurs only along the horizontal direction. The disparity is usually computed as a leftward shift of an image feature when it is viewed in the right image: a point that appears at horizontal coordinate x in the left image may be present at horizontal coordinate x-d in the right image, where d denotes the point's disparity in pixels. Obstacles that are closer to the stereo camera have greater disparity values than obstacles located in the background of the scene. In the present work the depth maps are calculated using the fixed functions provided by Point Grey's software development kit (SDK). The stereo SDK supplies an optimized fast-correlation stereo process that rapidly applies the Sum of Absolute Differences (SAD) stereo correlation method. This is a very quick and robust method and produces dense disparity images. The reference (left) image of a self-captured stereo pair is shown in Figure 2a, while Figure 2b depicts the result of the stereo processing, i.e. the disparity map, of that stereo image pair.
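The SAD correlation itself is performed internally by the Point Grey SDK; purely as an illustration of the underlying principle, the following minimal C++ sketch estimates the disparity of a single pixel of a rectified stereo pair by comparing a small window in the left image against horizontally shifted windows in the right image. The image structure, function name, window radius and maximum disparity are assumptions made for this sketch and are not part of the authors' implementation.

    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    // Grayscale image stored row-major; values in [0, 255].
    struct GrayImage {
        int width = 0;
        int height = 0;
        std::vector<uint8_t> data;                       // size = width * height
        uint8_t at(int x, int y) const { return data[y * width + x]; }
    };

    // Returns the disparity d (in pixels) minimizing the sum of absolute
    // differences between a (2r+1)x(2r+1) window centered at (x, y) in the
    // left image and the window centered at (x - d, y) in the right image.
    // The caller must keep (x, y) at least r pixels away from the borders.
    int sadDisparity(const GrayImage& left, const GrayImage& right,
                     int x, int y, int maxDisparity = 64, int r = 3) {
        int bestDisparity = 0;
        long bestCost = -1;
        for (int d = 0; d <= maxDisparity; ++d) {
            if (x - d - r < 0) break;                    // window would leave the right image
            long cost = 0;
            for (int dy = -r; dy <= r; ++dy)
                for (int dx = -r; dx <= r; ++dx)
                    cost += std::abs(static_cast<int>(left.at(x + dx, y + dy)) -
                                     static_cast<int>(right.at(x - d + dx, y + dy)));
            if (bestCost < 0 || cost < bestCost) {
                bestCost = cost;
                bestDisparity = d;                       // closer objects yield larger d
            }
        }
        return bestDisparity;
    }

Applying this matching to every pixel yields a dense disparity map of the kind shown in Figure 2b, in which nearby obstacles appear with larger disparity values than the background.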

Figure 2. The reference image (a) and the produced disparity map (b) for a stereo pair.

Figure 3. Reference image (a) and disparity map divided into three windows (b).

4. Implementation Methods for the Decision Making Module

The second module of this work takes advantage of the information stored in the disparity image in order to navigate the mobile robot. When obstacles are detected, the algorithm has to decide whether to move the robot forward or to steer it left or right. Three different methods have been developed, and all of them share the common goal of navigating the robot towards the direction with the fewest obstacles. Another common characteristic of the proposed methods is that all of them initially divide the disparity map into three horizontally tiled sub-regions, or windows.

4.1. The mean estimation method

First, the disparity map is divided into a left-side window, a central window and a right-side window, as shown in Figure 3. For each window, the average disparity value is calculated. The window with the smallest average disparity value indicates the direction with the fewest obstacles; an illustrative sketch of this decision rule is given below. For example, for the disparity map shown in Figure 3 the mean values for each window are: Left = 78.7 pixels, Central = 79.2 pixels and Right = 44.5 pixels. Comparing the three mean values shows that the right window has the smallest one. As a result, there should be fewer obstacles in that direction and the robot should decide to steer right. This method is very efficient when there are few obstacles in at least one of the three windows. To verify this conclusion, one more scene is tested, as shown in Figure 4. The processing of this image set gives the following mean disparity values for each direction: Left = 66.8 pixels, Central = 80.2 pixels and Right = 61.5 pixels. In this case the algorithm would decide to steer right. However, as can be seen, there is enough space in front of the robot to move before it would have to steer in order to avoid a collision. The conclusion is that the algorithm occasionally exhibits hesitant behavior. As a consequence, another method has to be defined in order to overcome this behavior. Thus, the threshold estimation method has been developed.

Figure 4. The reference image (a) and the produced disparity map (b).

4.2 The threshold estimation method

This method also divides the disparity map into three windows of pixels, as shown in Figure 3. The flow of this method is as follows:
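For the mean estimation method of Section 4.1, a minimal C++ sketch of the decision rule is given here. It assumes the disparity map is available as an 8-bit, row-major buffer; the structure, enumeration and function names are illustrative and are not taken from the authors' implementation.

    #include <cstdint>
    #include <vector>

    enum class Direction { Left, Forward, Right };

    // Disparity map as produced by the stereo vision module (Section 3).
    struct DisparityMap {
        int width = 0;
        int height = 0;
        std::vector<uint8_t> data;                       // disparity values, row-major
    };

    // Mean disparity of the vertical strip [x0, x1) over the full image height.
    static double meanDisparity(const DisparityMap& d, int x0, int x1) {
        long long sum = 0;
        for (int y = 0; y < d.height; ++y)
            for (int x = x0; x < x1; ++x)
                sum += d.data[y * d.width + x];
        return static_cast<double>(sum) / ((x1 - x0) * d.height);
    }

    // Mean estimation method: split the disparity map into three horizontally
    // tiled windows and steer towards the one with the smallest mean disparity,
    // i.e. the direction with the fewest (or most distant) obstacles.
    Direction meanEstimationDecision(const DisparityMap& d) {
        const int w = d.width / 3;
        const double left    = meanDisparity(d, 0,     w);
        const double central = meanDisparity(d, w,     2 * w);
        const double right   = meanDisparity(d, 2 * w, d.width);

        if (left <= central && left <= right)  return Direction::Left;
        if (right <= central && right <= left) return Direction::Right;
        return Direction::Forward;                       // central window is least obstructed
    }

For the scene of Figure 3 (Left = 78.7, Central = 79.2, Right = 44.5 pixels) this rule returns Right, matching the decision discussed in the text; the hesitant behavior observed for Figure 4 arises because the rule always picks the minimum, even when the central window still leaves enough free space ahead.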
