For the past two years, I have been researching and developing a self-driving golf cart. ROS (Robot Operating System) is an open-source, robust, and convenient robotics middleware that I chose for the golf cart. In this three-part series, I will discuss how I developed the golf cart’s autonomous navigation stack with ROS.
There are three major components in the navigation stack of the autonomous vehicle: mapping, path planning, and motion execution. This post will only focus on mapping. You can learn more about path planning and motion control in the upcoming posts.
Before we get started, you should have some basic knowledge of ROS, such as nodes, topics, and packages. If you want to learn more about ROS and the design principles behind it, please visit the documentation on its website.
Navigation Stack Overview
The self-driving vehicle’s navigation stack is based on the ROS navigation stack; you can learn more about that here. The flow chart above is composed of three major components.
- ZED vision system
The ZED vision system is no ordinary camera. It uses stereoscopic vision, largely inspired by human eyes, to perceive depth information. It not only captures RGB images and generates depth maps, but also provides crucial odometry information for the move_base node and rtabmap, using the camera’s built-in IMU. If you are interested in learning more about ZED, or perhaps trying it out, please visit Stereolabs’ website. (There are other great stereoscopic cameras out there; ZED just happens to be my favorite.)
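To build some intuition for how stereoscopic depth perception works, here is a minimal sketch of the underlying triangulation: a point that shifts more between the left and right views is closer to the camera. The focal length and baseline numbers below are purely illustrative, not the ZED’s actual calibration (the real values come from the camera’s factory calibration).

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Stereo depth: depth = f * B / d, where d is the pixel shift
    (disparity) of the same point between the two views."""
    if disparity_px <= 0:
        return float("inf")  # no measurable shift: point is effectively at infinity
    return focal_length_px * baseline_m / disparity_px

# Illustrative values: ~700 px focal length, 12 cm baseline.
print(depth_from_disparity(35.0, 700.0, 0.12))  # 2.4 (meters)
```

Note how depth resolution degrades with distance: a one-pixel disparity error matters far more for faraway objects, which is why stereo cameras like the ZED have a practical maximum depth range.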
RTAB-Map (Real-Time Appearance-Based Mapping) uses the depth maps and laser scan data to generate a global map of the environment, and also creates local maps for the local navigation planner. Don’t worry if you are confused about global maps vs. local maps; I will elaborate on those topics later in this post, and discuss local and global path planning in the upcoming blog posts.
Last but not least, the source code for the move_base node is taken from the ROS navigation stack. Its name implies its function: the node calculates velocity commands used to move the base of the robot. More specifically, move_base includes a global planner, a local planner, and local and global costmaps.
When the navigation stack is running in ROS, a lot is going on. The graph below shows all of the running nodes. There are also auxiliary nodes, such as the joystick node and linear actuator feedback.
Before the vehicle can navigate in a geo-fenced environment, it needs to have a basic understanding of the world. We can call this a global map. A global map contains the locations and dimensions of static objects, such as buildings, poles, and curbs. I use RTAB-Map to construct this global map for the self-driving vehicle. According to the RTAB-Map project website:
“RTAB-Map (Real-Time Appearance-Based Mapping) is a RGB-D, Stereo and Lidar Graph-Based SLAM approach based on an incremental appearance-based loop closure detector. The loop closure detector uses a bag-of-words approach to determinate how likely a new image comes from a previous location or a new location. When a loop closure hypothesis is accepted, a new constraint is added to the map’s graph, then a graph optimizer minimizes the errors in the map… RTAB-Map can be used alone with a handheld Kinect, a stereo camera or a 3D lidar for 6DoF mapping, or on a robot equipped with a laser rangefinder for 3DoF mapping.”
http://introlab.github.io/rtabmap/
RTAB-Map uses the stereo images captured by the ZED vision system to construct a 3D map of the environment. Furthermore, I can drive the vehicle through the same environment multiple times to improve the accuracy of the map. Every time I repeat a path, RTAB-Map automatically detects matches with any previous maps in memory.
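To give you a feel for how appearance-based loop closure works, here is a toy sketch of the bag-of-words idea quoted above: each image is summarized as a histogram of quantized visual features (“words”), and histograms from different images are compared for similarity. RTAB-Map’s actual detector is far more sophisticated (it runs a Bayesian filter over these likelihoods), so treat this as intuition only; the word IDs below are made up.

```python
import math
from collections import Counter

def word_histogram(visual_words):
    """Count how often each visual word (a quantized image feature) appears."""
    return Counter(visual_words)

def cosine_similarity(h1, h2):
    """Compare two word histograms; 1.0 = identical mix of words, 0.0 = disjoint."""
    dot = sum(h1[w] * h2[w] for w in h1)
    n1 = math.sqrt(sum(v * v for v in h1.values()))
    n2 = math.sqrt(sum(v * v for v in h2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

# Toy "images" described by lists of visual-word IDs
current   = word_histogram([3, 3, 7, 12, 12, 12])  # what the camera sees now
revisit   = word_histogram([3, 7, 7, 12, 12])      # a previously mapped place
elsewhere = word_histogram([1, 2, 5, 9])           # an unrelated place

print(cosine_similarity(current, revisit))    # high: likely loop closure
print(cosine_similarity(current, elsewhere))  # 0.0: no shared words
```

When the similarity to a stored location is high enough, the detector hypothesizes a loop closure, and the graph optimizer can then pull the accumulated drift out of the map.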
After mapping the environment, RTAB-Map creates a 2D grid map from the generated 3D mesh. This step matters because the ROS global path planner works with 2D occupancy grids, not RTAB-Map’s 3D meshes. You can learn more about the occupancy grid message type here. Here is the visualization of the 2D occupancy grid in rviz.
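To make the occupancy grid format concrete, here is a tiny sketch of how its cells are interpreted. The nav_msgs/OccupancyGrid message stores cells in row-major order, with -1 meaning unknown and 0–100 meaning occupancy probability in percent; the 65% threshold below is an illustrative choice, not a fixed rule.

```python
# nav_msgs/OccupancyGrid convention: -1 = unknown, 0..100 = occupancy probability (%)
UNKNOWN = -1

def classify_cell(value, occupied_threshold=65):
    """Map a raw grid value to the three states the planner cares about."""
    if value == UNKNOWN:
        return "unknown"
    return "occupied" if value >= occupied_threshold else "free"

# A tiny 3x3 map, flattened row-major exactly like OccupancyGrid.data
grid = [ 0,  0, 100,
         0, -1, 100,
        -1, -1,   0]
print([classify_cell(v) for v in grid])
```

In the rviz screenshot, this is exactly the mapping you are looking at: free, occupied, and unknown cells rendered in different shades.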
The gray area in the map indicates mapped free space, the black cells indicate occupied space, and the remaining area is unmapped, unknown space.
Once we have a global occupancy grid, we can generate a global costmap. If the concept of a costmap is new to you, here is a short description of costmap_2d from ROS:
This package provides an implementation of a 2D costmap that takes in sensor data from the world, builds a 2D or 3D occupancy grid of the data… and inflates costs in a 2D costmap based on the occupancy grid and a user specified inflation radius. This package also provides support for … rolling window based costmaps, and parameter based subscription to and configuration of sensor topics.
Authors: Eitan Marder-Eppstein, David V. Lu!!, Dave Hershberger
Occupied, free, inflated, and unknown space.
Each cell in the costmap can take one of 255 different cost values. In practice, each cell is either free, occupied, or unknown, and each status maps to a special cost value (which you can specify in the configuration). Columns that contain a certain number of occupied cells are assigned a LETHAL_OBSTACLE cost value, columns that contain a certain number of unknown cells are assigned a NO_INFORMATION cost value, and the remaining columns are assigned a FREE_SPACE cost value.
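To make these cost values concrete, here is a sketch of how a cell’s cost might be computed from its distance to the nearest obstacle, following the exponential decay curve described in the costmap_2d documentation. The special cost constants match the ROS convention; the inscribed radius and cost scaling factor below are made-up example tunings, not my cart’s actual configuration.

```python
import math

# Special cost values used by costmap_2d (from the ROS documentation)
NO_INFORMATION = 255
LETHAL_OBSTACLE = 254
INSCRIBED_INFLATED_OBSTACLE = 253
FREE_SPACE = 0

def inflation_cost(distance_m, inscribed_radius_m=0.5, cost_scaling_factor=3.0):
    """Cost decays exponentially with distance from the nearest obstacle."""
    if distance_m <= 0:
        return LETHAL_OBSTACLE  # the cell is an obstacle itself
    if distance_m <= inscribed_radius_m:
        return INSCRIBED_INFLATED_OBSTACLE  # robot footprint would overlap the obstacle
    decay = math.exp(-cost_scaling_factor * (distance_m - inscribed_radius_m))
    return int((INSCRIBED_INFLATED_OBSTACLE - 1) * decay)

# Cost falls off smoothly as we move away from an obstacle
for d in (0.0, 0.3, 0.6, 1.0, 2.0):
    print(d, inflation_cost(d))
```

A larger cost_scaling_factor makes the cost drop off faster, so the planner hugs obstacles more tightly; a smaller one keeps the vehicle further away. That trade-off is exactly what you see as the purple gradient in the visualization below.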
In the visualization below, lethal obstacles are dark purple, inflated spaces are shown in light blue, and the purple gradient indicates the cost scaling, which is utilized by the global planner.
The costmap generated by projecting the occupancy grid allows the global planner to plan a global path for the robot.
In general, the local costmap is very similar to the global costmap. However, there are some key differences. The local costmap is not static; it uses a rolling window. In other words, the local map moves with the base of the robot.
The content of the local costmap is dynamic, not constant. Imagine that we have a global map that only contains information about buildings and roads. What if a vehicle is parked on the side of the street, or a pedestrian is crossing the road? A static global map alone can’t handle these unique and dynamic situations. Local maps address the biggest weakness of the global map: based on the sensor data from the ZED camera, lethal obstacles are constantly being added to and cleared from the local map. The local path planner then plans a path that allows the vehicle to navigate around immediate obstacles while trying to stick to the global path.
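Here is a toy sketch of the rolling-window idea: each cycle, re-center a small window on the robot’s current cell, then mark fresh sensor hits as lethal in that window. The real costmap_2d layers are far more sophisticated (ray-tracing to clear cells, time-stamped observation buffers), so this is intuition only, and all the grid values are made up.

```python
LETHAL = 254
FREE = 0
UNKNOWN = -1

def rolling_window(global_grid, robot_row, robot_col, half=1):
    """The local costmap 'follows' the robot: re-center a small window
    on the robot's current cell every update cycle."""
    window = []
    for r in range(robot_row - half, robot_row + half + 1):
        row = []
        for c in range(robot_col - half, robot_col + half + 1):
            inside = 0 <= r < len(global_grid) and 0 <= c < len(global_grid[0])
            row.append(global_grid[r][c] if inside else UNKNOWN)
        window.append(row)
    return window

def mark_obstacle(window, row, col):
    """A fresh sensor hit (e.g. a pedestrian) becomes lethal in the local
    map even though the static global map knows nothing about it."""
    window[row][col] = LETHAL

global_grid = [[FREE] * 5 for _ in range(5)]   # static map: all free space
local = rolling_window(global_grid, 2, 2)      # 3x3 window around the robot
mark_obstacle(local, 0, 1)                     # pedestrian detected ahead
print(local)
```

The local planner only ever reasons about this small, frequently refreshed window, which is what lets the vehicle react to obstacles that the global map has never seen.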
Thanks for checking out my blog. I hope you found this article informative and helpful. Mapping is an integral component of the navigation stack of the autonomous vehicle. In the next two parts, I will discuss path planning and motion execution. Furthermore, if you have any questions, comments, or concerns, please reach out to me at firstname.lastname@example.org.
If you are interested in learning more about the self-driving golf cart project, you might enjoy the following posts.
- Deep learning steering prediction
- Semantic segmentation
- Robot Operating System