Introduction

This is part two of a four-part series about autonomous navigation with ROS (Robot Operating System). In the previous blog post, I discussed the use of RTAB-Map (Real-Time Appearance-Based Mapping) in the self-driving vehicle. In this blog post, I will talk about how to plan a path once we have a high-definition map of the environment.

Before We Get Started

If you are unfamiliar with my project, I have been building a self-driving golf cart for the past two years, mainly using deep learning and ROS. ROS is an open-source robotics middleware. The term “operating system” can be misleading in ROS’s case: ROS provides open-source libraries for the most basic and most commonly used robotics functionalities and algorithms. ROS is usually installed and run on Linux computers.

The exterior of the golf cart.

What’s the Problem?

We are trying to create a robot vehicle that can navigate autonomously in a known environment, given some unknown obstacles and conditions. In part one, I talked about how a robot gets to know the environment (a.k.a. mapping). In part two, we assume that the robot has a sufficient understanding of the environment. Now we will try to solve the next problem: how does the robot navigate in a known environment?

This is not a trivial problem at all. The robot must be able to do at least three things:

  1. Determine its starting location within the known environment.
  2. Interpret the given destination.
  3. Plan a path between the starting location and destination, minimizing the chances of a collision.

Take a Look at the Map

Let’s jump back to part one, which is about mapping. Understanding the map that we created with RTAB-Map is the foundation of developing a suitable path planning system. RTAB-Map uses 3D point cloud data and RGB images to generate the 2D occupancy grid shown above. The gray regions are traversable, the dark purple dots are lethal obstacles, and the light blue regions are in close proximity to the lethal obstacles. The vehicle needs to navigate in this environment without colliding with any obstacles. In other words, it must go around the blue and purple regions and stay only in the gray regions. If absolutely necessary, the robot will drive into the blue inflated region, which is less desirable.
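In ROS, a map like this is published as a nav_msgs/OccupancyGrid message, which stores the cells in a flat, row-major array. Below is a minimal sketch of inspecting such a grid. It assumes the standard convention of -1 for unknown, 0 for free, and 100 for occupied cells, and the topic name /rtabmap/grid_map is an assumption that may differ in other configurations.

```python
# Minimal sketch: count the free, lethal, and unknown cells in the 2D grid.
# Assumes the standard nav_msgs/OccupancyGrid convention (-1/0/100) and a
# typical RTAB-Map topic name; both may differ in other setups.
import rospy
from nav_msgs.msg import OccupancyGrid

def grid_callback(msg):
    # msg.data is a flat, row-major list; msg.info holds width, height,
    # and resolution (meters per cell).
    free = sum(1 for cell in msg.data if cell == 0)
    lethal = sum(1 for cell in msg.data if cell == 100)
    unknown = sum(1 for cell in msg.data if cell == -1)
    rospy.loginfo("free: %d, lethal: %d, unknown: %d", free, lethal, unknown)

rospy.init_node("grid_inspector")
rospy.Subscriber("/rtabmap/grid_map", OccupancyGrid, grid_callback)
rospy.spin()
```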

Global Path Planner

Once the vehicle has the global map, which is predefined and constant, it needs to localize itself on that map before planning a path. The car uses a built-in localization feature provided by RTAB-Map: it compares the features of the current camera image against RTAB-Map’s memory, and if there is a match, the vehicle can infer its position. Under the right conditions, especially when the environment has been thoroughly mapped, this localization method performs very well, and the robot is properly localized within several seconds. However, when the conditions in the environment change, the algorithm can no longer find matches in the map’s memory. Because of this drawback, I needed to map the environment two or three times before testing the vehicle, even though I was testing on the same road every day.
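As a rough illustration, the localization result can be consumed like any other ROS topic. The sketch below assumes rtabmap_ros publishes the pose as a geometry_msgs/PoseWithCovarianceStamped on /rtabmap/localization_pose; the exact topic name depends on how RTAB-Map is launched.

```python
# Sketch: listen for RTAB-Map's localization result. The topic name and
# message type are assumptions based on a typical rtabmap_ros setup.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

def pose_callback(msg):
    p = msg.pose.pose.position
    rospy.loginfo("Localized at x=%.2f, y=%.2f", p.x, p.y)

rospy.init_node("localization_listener")
rospy.Subscriber("/rtabmap/localization_pose", PoseWithCovarianceStamped, pose_callback)
rospy.spin()
```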

With the 2D occupancy map, the vehicle’s position, and some arbitrary destination, we can generate a path from the origin to the destination.

An overview of the global planner

There are multiple path planning algorithms to choose from; Dijkstra’s algorithm and A* are very popular choices for self-driving vehicles. In fact, according to Sebastian Thrun, the Stanford team used A* in the DARPA challenge. Generating this global path is essentially running a search algorithm on a graph, where the graph is the 2D grid map (the global map).
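To make the idea concrete, here is a minimal A* sketch on a toy 2D grid (0 = free, 1 = obstacle) with a Manhattan-distance heuristic. It is illustrative only; the actual global planner searches the weighted costmap rather than a binary grid.

```python
# Minimal A* on a 4-connected 2D grid. Illustrative only: the ROS global
# planner runs a similar search over the weighted global costmap instead.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]                 # (f-score, cell)
    came_from = {}
    g = {start: 0}
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:
            path = [current]
            while current in came_from:     # walk back to the start
                current = came_from[current]
                path.append(current)
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                tentative = g[current] + 1
                if tentative < g.get((nr, nc), float("inf")):
                    came_from[(nr, nc)] = current
                    g[(nr, nc)] = tentative
                    # Manhattan distance: admissible on a 4-connected grid.
                    h = abs(nr - goal[0]) + abs(nc - goal[1])
                    heapq.heappush(open_set, (tentative + h, (nr, nc)))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the obstacle row
```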

Local Path Planner

Once the global path has been computed, the car is ready to start following it. While the car is driving, there will most likely be changes in the environment. For example, a pedestrian might walk toward the vehicle, and that person’s presence will be sensed by the car’s stereoscopic camera. This pedestrian has just introduced a change to the map, and more importantly, he or she might be standing right in the middle of the global path that the vehicle just calculated. To avoid a catastrophe, the golf cart must abandon the already-planned path for something new.

rviz and ROS navigation stack running in real-time

This is why the local path planner is crucial in the navigation stack. The local planner works with the local map, which is much smaller than the global map but updates frequently (at around 10 Hz), always checking for new obstacles or for obstacles that have left the field of view. The local planner also updates frequently: it tries to stay true to the global path initially generated while avoiding any lethal collisions. In the case of a pedestrian walking directly in front of the golf cart, the local map will add him or her almost immediately, and the local planner will calculate a short trajectory that slows the vehicle down and steers it around the person, ideally avoiding a collision entirely.

Let’s take a closer look at the image above; I hope the visual aid makes it easier to understand. The yellow path is the global path. It’s long and winding, starting at the base of the robot (the green rectangle) and extending all the way to the goal position. The red path is the local path. It’s short and curved. The local path follows the general direction of the global path but might deviate when there are new obstacles.

The bright pink regions are lethal obstacles, the blue regions are in close proximity to the obstacles, and the gray regions are drivable space.
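Conceptually, the local map is just a small window of cells centered on the robot, cut out of the larger map and refreshed on a fast timer. The helper below is hypothetical, purely to illustrate the idea; in practice, the ROS costmap_2d package maintains the rolling local costmap directly from live sensor data.

```python
# Hypothetical sketch of the idea behind a local map: a small window of
# cells centered on the robot, re-extracted every update cycle. In the real
# navigation stack, costmap_2d builds this window from live sensor data.
def local_window(grid, robot_row, robot_col, half_size=20):
    rows, cols = len(grid), len(grid[0])
    window = []
    for r in range(robot_row - half_size, robot_row + half_size + 1):
        row = []
        for c in range(robot_col - half_size, robot_col + half_size + 1):
            if 0 <= r < rows and 0 <= c < cols:
                row.append(grid[r][c])
            else:
                row.append(-1)  # cells outside the known map count as unknown
        window.append(row)
    return window
```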

The Algorithms Behind the Path Planners

An illustration from the ROS base_local_planner package. The figure illustrates the principle behind the Trajectory Rollout approach, which is used by the local planner of the golf cart.

For the path planners of the self-driving golf cart, I am using the base_local_planner package provided by ROS, with small modifications. How this planner works behind the scenes is quite interesting. Using the map, the planner creates what’s called a “kinematic trajectory” for the vehicle from the start to the goal. The planner also creates a value function, represented as another grid map. This value function is also known as the cost function: lethal obstacles carry the highest cost, while free space has the lowest cost. The controller uses the values from the cost function to determine the dx, dy, and dtheta velocities for the vehicle.

According to the base_local_planner package documentation:

“The basic idea of both the Trajectory Rollout and Dynamic Window Approach (DWA) algorithms is as follows:

1. Discretely sample in the robot’s control space (dx,dy,dtheta)
2. For each sampled velocity, perform forward simulation from the robot’s current state to predict what would happen if the sampled velocity were applied for some (short) period of time.
3. Evaluate (score) each trajectory resulting from the forward simulation, using a metric that incorporates characteristics such as: proximity to obstacles, proximity to the goal, proximity to the global path, and speed. Discard illegal trajectories (those that collide with obstacles).
4. Pick the highest-scoring trajectory and send the associated velocity to the mobile base.
5. Rinse and repeat.”

http://wiki.ros.org/base_local_planner

This is a very basic overview of how the local planner actually works.
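To make the sample-simulate-score loop concrete, here is a toy sketch using a simple unicycle model with (v, w) commands instead of the full (dx, dy, dtheta) control space. The velocity samples, scoring weights, and obstacle model are all assumptions for illustration; the real base_local_planner scores trajectories against costmap cells and several more criteria.

```python
# Toy version of the sample-simulate-score loop described above. All
# constants here are assumptions for illustration, not the real planner's.
import math

def simulate(x, y, theta, v, w, sim_time=1.0, dt=0.1):
    """Forward-simulate a constant (v, w) command and return the trajectory."""
    traj = []
    for _ in range(int(sim_time / dt)):
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += w * dt
        traj.append((x, y))
    return traj

def score(traj, goal, obstacles, min_clearance=0.5):
    # Discard trajectories that pass too close to an obstacle.
    for (x, y) in traj:
        for (ox, oy) in obstacles:
            if math.hypot(x - ox, y - oy) < min_clearance:
                return float("-inf")
    ex, ey = traj[-1]
    return -math.hypot(ex - goal[0], ey - goal[1])  # closer to goal is better

def pick_velocity(pose, goal, obstacles):
    # (0, 0) stops the robot if every sampled trajectory collides.
    best, best_cmd = float("-inf"), (0.0, 0.0)
    for v in (0.5, 1.0, 1.5):                      # sampled linear velocities (m/s)
        for w in (-0.5, -0.25, 0.0, 0.25, 0.5):    # sampled angular velocities (rad/s)
            s = score(simulate(*pose, v, w), goal, obstacles)
            if s > best:
                best, best_cmd = s, (v, w)
    return best_cmd

print(pick_velocity((0.0, 0.0, 0.0), goal=(5.0, 2.0), obstacles=[(2.0, 0.0)]))
```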

Challenges with Tuning the Path Planner

The mapping system and the global and local planners work together to create reasonable trajectories for the golf cart. However, the vehicle can’t drive autonomously just yet: the path planners have a lot of parameters that need to be tuned. Some of them are quite intuitive. For example, I can change the grid size of the map. By increasing the resolution (decreasing the grid size), I can make the motion more precise; however, this drastically increases the computation time.

One of the best ways to tune those parameters is to test the robot in the real world. Starting from reasonable values, I used trial and error to optimize them.
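As an illustration, many of these parameters live on the ROS parameter server. The parameter names below follow costmap_2d and base_local_planner conventions, but the exact namespaces depend on how move_base is launched, so treat them as assumptions.

```python
# Sketch of adjusting planner parameters on the ROS parameter server. The
# names follow costmap_2d / base_local_planner conventions, but the exact
# namespaces depend on the move_base launch configuration.
import rospy

rospy.init_node("planner_tuning")

# Finer grid: more precise motion, but drastically more computation.
rospy.set_param("/move_base/local_costmap/resolution", 0.05)  # meters per cell

# How far ahead (in seconds) each sampled velocity is forward-simulated.
rospy.set_param("/move_base/TrajectoryPlannerROS/sim_time", 1.5)

# Relative weights for staying near the global path vs. reaching the goal.
rospy.set_param("/move_base/TrajectoryPlannerROS/pdist_scale", 0.6)
rospy.set_param("/move_base/TrajectoryPlannerROS/gdist_scale", 0.8)
```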

The End

This is a brief overview of the local and global path planners for the golf cart. I hope you have gained some insight into the ROS navigation stack. In parts three and four of this series, I will discuss motion control as well as the end result.

There are limitations to this path planning approach, and to the navigation stack in general. My implementation doesn’t rely on machine learning, and some might argue that the vehicle therefore can’t adapt to different environments. In the future, I will try to combine machine-learning-based perception with the robust navigation stack architecture. Computer vision will allow the car to identify more types of obstacles more accurately, thus building more detailed global and local maps for the golf cart.

If you are interested in the self-driving golf cart project, you might want to check out the following blog posts.

  1. Deep learning steering prediction
    1. Visualizing the Steering Model with Attention Maps
    2. Successfully Tested the Autonomous Steering System for the Self-Driving Golf Cart
    3. Predicting Steering Angle with Deep Learning — Part 2
    4. Predicting Steering Angle with Deep Learning — Part 1
  2. Semantic segmentation
    1. The Robustness of the Semantic Segmentation Network
    2. Autonomous Cruise Control System
    3. Understanding the World Through Semantic Segmentation
  3. Robot Operating System
    1. Hello, ROS
    2. Open Street Map with ROS
    3. Self-Driving Software + Carla Simulator
    4. GPS Localization with ROS, rviz, and OSM
