Introduction

For the past two years, I have been researching and developing a self-driving golf cart. ROS (Robot Operating System) is an open source, robust, and convenient robotics middleware that I chose for the golf cart. In this three-part series, I will discuss how I developed the golf cart’s autonomous navigation stack with ROS.

There are three major components in the navigation stack of the autonomous vehicle: mapping, path planning, and motion execution. This post focuses only on mapping; path planning and motion execution will be covered in the upcoming posts.

Before we get started, you should have some basic knowledge of ROS concepts such as nodes, topics, and packages. If you want to learn more about ROS and the design principles behind it, please visit the documentation on their website.

Navigation Stack Overview

The flowchart of the navigation stack for the self-driving vehicle. This diagram doesn’t include any of the machine learning components.

The self-driving vehicle’s navigation stack is based on the ROS navigation stack; you can learn more about that here. The flowchart above is composed of three major components.

  • ZED vision system

The ZED vision system is no ordinary camera. It uses stereoscopic vision, inspired by human binocular vision, to perceive depth. Not only does it capture RGB images and generate depth maps, it also provides crucial odometry information for the move_base node and rtabmap, using the camera’s built-in IMU. (A minimal sketch of reading these topics in ROS follows this list.) If you are interested in learning more about the ZED, or perhaps trying it out, please visit StereoLab’s website. (There are other great stereoscopic cameras out there; the ZED just happens to be my favorite.)

  • RTABMap

RTAB-Map (Real-Time Appearance-Based Mapping) uses the depth maps and laser scan data to generate global maps of the environment, and also creates local maps for the local navigation planner. Don’t worry if you are confused about global maps vs. local maps; I will elaborate on those topics later in this post, and I will discuss local and global path planning in the upcoming blog post.

  • move_base

Last but not least, the source code for the move_base node comes from the ROS navigation stack. Its name implies its function: the node calculates the velocity commands used to move the base of the robot. More specifically, move_base includes a global planner, a local planner, and local and global costmaps.
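As promised above, here is a minimal sketch of a node that listens to the depth and odometry topics published by the ZED ROS wrapper. The topic names are assumptions — they depend on the zed_ros_wrapper version and launch configuration — so treat this as an illustration rather than the cart’s exact setup.

```python
#!/usr/bin/env python
# Minimal sketch: listening to the ZED wrapper's depth and odometry topics.
# Topic names are assumptions; check your zed_ros_wrapper configuration.
import rospy
from sensor_msgs.msg import Image
from nav_msgs.msg import Odometry

def on_depth(msg):
    rospy.loginfo("depth frame %dx%d", msg.width, msg.height)

def on_odom(msg):
    p = msg.pose.pose.position
    rospy.loginfo("visual-inertial odometry: x=%.2f y=%.2f", p.x, p.y)

if __name__ == "__main__":
    rospy.init_node("zed_listener")
    rospy.Subscriber("/zed/zed_node/depth/depth_registered", Image, on_depth)
    rospy.Subscriber("/zed/zed_node/odom", Odometry, on_odom)
    rospy.spin()
```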
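To give you a feel for how move_base is used, here is a minimal sketch that sends it a navigation goal through actionlib; move_base then plans a path and publishes the velocity commands to the base. The goal pose and frame are placeholders, not actual waypoints from the golf cart.

```python
#!/usr/bin/env python
# Minimal sketch: sending a navigation goal to move_base via actionlib.
# The goal pose below is a placeholder.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

if __name__ == "__main__":
    rospy.init_node("send_goal")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 10.0    # placeholder: 10 m ahead in the map frame
    goal.target_pose.pose.orientation.w = 1.0  # no rotation

    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo("move_base finished with state %d", client.get_state())
```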

When the navigation stack is running in ROS, a lot is going on. The graph below shows all of the running nodes, including auxiliary nodes such as the joystick node and linear actuator feedback.

All active nodes while navigating and mapping. This graph doesn’t include the motion execution component.

RTABMap

Before the vehicle can navigate in a geo-fenced environment, it needs to have a basic understanding of the world. We can call this a global map. A global map contains the locations and dimensions of static objects, such as buildings, poles, and curbs. I use RTAB-Map to construct this global map for the self-driving vehicle. According to their project website:

“RTAB-Map (Real-Time Appearance-Based Mapping) is a RGB-D, Stereo and Lidar Graph-Based SLAM approach based on an incremental appearance-based loop closure detector. The loop closure detector uses a bag-of-words approach to determinate how likely a new image comes from a previous location or a new location. When a loop closure hypothesis is accepted, a new constraint is added to the map’s graph, then a graph optimizer minimizes the errors in the map… RTAB-Map can be used alone with a handheld Kinect, a stereo camera or a 3D lidar for 6DoF mapping, or on a robot equipped with a laser rangefinder for 3DoF mapping.”

http://introlab.github.io/rtabmap/
The visualization tool for rtabmap.

RTAB-Map uses the stereo images captured by the ZED vision system to construct a 3D map of the environment. Furthermore, I can drive the vehicle through the same environment multiple times to improve the accuracy of the map. Every time I repeat a path, RTAB-Map automatically detects matches with the previous maps in memory.

Global Map

After mapping the environment, RTAB-Map creates a grid map based on the generated 3D mesh, because the ROS global path planner works with 2D occupancy grids instead of RTAB-Map’s 3D meshes. You can learn more about the occupancy grid message type here. Below is the visualization of the 2D occupancy grid in rviz.

The global occupancy grid.

The gray areas in the map indicate mapped free space, the black cells indicate occupied space, and the rest is unmapped, unknown space.
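If you want to poke at this map programmatically, here is a minimal sketch of a node that subscribes to the occupancy grid and tallies its cells. The topic name /rtabmap/grid_map is an assumption that depends on your rtabmap_ros configuration; in the nav_msgs/OccupancyGrid message, -1 means unknown, 0 means free, and 100 means occupied.

```python
#!/usr/bin/env python
# Minimal sketch: inspecting an nav_msgs/OccupancyGrid.
# The topic name is an assumption; check your rtabmap_ros configuration.
import rospy
from nav_msgs.msg import OccupancyGrid

def on_map(grid):
    # grid.data is a row-major array: -1 = unknown, 0 = free, 100 = occupied
    free = sum(1 for c in grid.data if c == 0)
    occupied = sum(1 for c in grid.data if c >= 50)
    unknown = sum(1 for c in grid.data if c == -1)
    rospy.loginfo("%dx%d grid at %.2f m/cell: %d free, %d occupied, %d unknown",
                  grid.info.width, grid.info.height, grid.info.resolution,
                  free, occupied, unknown)

if __name__ == "__main__":
    rospy.init_node("grid_inspector")
    rospy.Subscriber("/rtabmap/grid_map", OccupancyGrid, on_map)
    rospy.spin()
```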

Global Costmap

Once we have a global occupancy grid, we can generate a global costmap. If the concept of a costmap is unfamiliar to you, here is a short description of costmap_2d from ROS.

This package provides an implementation of a 2D costmap that takes in sensor data from the world, builds a 2D or 3D occupancy grid of the data… and inflates costs in a 2D costmap based on the occupancy grid and a user specified inflation radius. This package also provides support for … rolling window based costmaps, and parameter based subscription to and configuration of sensor topics.

Author: Eitan Marder-Eppstein, David V. Lu!!, Dave Hershberger

Occupied, free, inflated, and unknown space.

Although each cell in the costmap can take one of 255 different cost values, the underlying occupancy grid only distinguishes three states: free, occupied, and unknown. Each state has a special cost value assigned to it (you can specify these values in the configuration). Columns that have a certain number of occupied cells are assigned the LETHAL_OBSTACLE cost value, columns that have a certain number of unknown cells are assigned the NO_INFORMATION cost value, and other columns are assigned the FREE_SPACE cost value.
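For reference, these special values roughly mirror the constants defined in costmap_2d’s cost_values.h; the exact thresholds and mappings are configurable, so treat this as a sketch rather than the golf cart’s actual tuning.

```python
# Special cost values used by costmap_2d (mirroring cost_values.h in the ROS navigation stack).
NO_INFORMATION = 255               # unknown cells
LETHAL_OBSTACLE = 254              # the cell contains an actual obstacle
INSCRIBED_INFLATED_OBSTACLE = 253  # the robot's footprint would overlap an obstacle here
FREE_SPACE = 0                     # known free space

def classify(cost):
    """Rough interpretation of a single costmap cell (sketch only)."""
    if cost == NO_INFORMATION:
        return "unknown"
    if cost >= INSCRIBED_INFLATED_OBSTACLE:
        return "in collision"
    if cost == FREE_SPACE:
        return "free"
    return "inflated (cost %d)" % cost
```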

In the visualization below, lethal obstacles are dark purple, the inflated regions are shown in light blue, and the purple gradient indicates the cost scaling used by the global planner.

The global costmap with the inflated regions.

The costmap generated from the projected occupancy grid allows the global planner to plan a global path for the robot.
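The cost scaling mentioned above is, to my understanding, an exponential decay with distance from the nearest obstacle. Here is a rough sketch of how the inflation layer assigns cost; inscribed_radius and cost_scaling_factor are example numbers, not the cart’s actual parameters.

```python
import math

INSCRIBED_INFLATED_OBSTACLE = 253

def inflated_cost(distance, inscribed_radius=1.0, cost_scaling_factor=3.0):
    """Sketch of the inflation layer's exponential cost decay.

    distance is the distance (in meters) from the cell to the nearest obstacle.
    inscribed_radius and cost_scaling_factor are placeholder example values.
    """
    if distance <= inscribed_radius:
        # The robot's footprint would overlap the obstacle here.
        return INSCRIBED_INFLATED_OBSTACLE
    return int((INSCRIBED_INFLATED_OBSTACLE - 1) *
               math.exp(-cost_scaling_factor * (distance - inscribed_radius)))
```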

Local Costmap

In general, the local costmap is very similar to the global costmap. However, there are some key differences. The local costmap is not static and has a rolling window. In other words, the local map moves with the base of the robot.

The local costmap is visualized with pink and blue. On the bottom left side of the image, you can see the costmap overlaid on the image captured by the front-facing camera.

The content of the local costmap is dynamic, not constant. Imagine that we have a global map that only contains information about buildings and roads. What if a vehicle is parked on the side of the street, or a pedestrian is crossing the road? A static global map alone can’t handle these unique and dynamic situations. Local maps address the biggest weakness of the global map: based on the sensor data from the ZED camera, lethal obstacles are constantly being added to and cleared from the local map. The local path planner then plans a path that lets the vehicle navigate around immediate obstacles while trying to stick to the global path.
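You can watch the rolling window in action by subscribing to the local costmap that move_base publishes; its origin keeps shifting so that the window stays centered on the robot. The topic name below is an assumption and depends on how move_base names its costmap namespaces.

```python
#!/usr/bin/env python
# Minimal sketch: watching the rolling-window local costmap move with the robot.
# The topic name is an assumption; check your move_base configuration.
import rospy
from nav_msgs.msg import OccupancyGrid

def on_local_costmap(grid):
    o = grid.info.origin.position
    # With rolling_window enabled, this origin shifts as the robot drives,
    # while obstacles from the ZED data are added and cleared within the window.
    rospy.loginfo("local costmap window origin: x=%.2f y=%.2f", o.x, o.y)

if __name__ == "__main__":
    rospy.init_node("local_costmap_watcher")
    rospy.Subscriber("/move_base/local_costmap/costmap", OccupancyGrid, on_local_costmap)
    rospy.spin()
```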

Conclusion

Thanks for checking out my blog. I hope you found this article informative and helpful. Mapping is an integral component of the autonomous vehicle’s navigation stack. In the next two parts, I will discuss path planning and motion execution. If you have any questions, comments, or concerns, please reach out to me at contact@neilnie.com

If you are interested in learning more about the self-driving golf cart project, you might enjoy the following posts.

  1. Deep learning steering prediction
    1. Visualizing the Steering Model with Attention Maps
    2. Successfully Tested the Autonomous Steering System for the Self-Driving Golf Cart
    3. Predicting Steering Angle with Deep Learning — Part 2
    4. Predicting Steering Angle with Deep Learning — Part 1
  2. Semantic segmentation
    1. The Robustness of the Semantic Segmentation Network
    2. Autonomous Cruise Control System
    3. Understanding the World Through Semantic Segmentation
  3. Robot Operating System
    1. Hello, ROS
    2. Open Street Map with ROS
    3. Self-Driving Software + Carla Simulator
    4. GPS Localization with ROS, rviz, and OSM
