Localization and path planning are two of the most important components of an autonomous robot. The Robot Operating System (ROS) provides a great foundation for working with maps and path planning. In this post, I will talk about how to use data provided by OpenStreetMap (OSM) and visualize that data in rviz. I still have a long way to go before the localization and path planning modules are finished.
Some of you might know that the original self-driving software was written entirely in Python, with custom-made robotics middleware. I am currently working on integrating ROS into the self-driving golf cart. Here is a post about that.
OpenStreetMap
OpenStreetMap (OSM) is a collaborative project to create a free, editable map of the world. The creation and growth of OSM have been motivated by restrictions on the use or availability of map information across much of the world, and by the advent of inexpensive portable satellite navigation devices. (Google does not allow self-driving vehicles to use its maps.)
Since its creation in 2004, OpenStreetMap has grown to over 2 million registered users, who collect data using manual surveys, GPS devices, aerial photography, and other free sources. Rather than the map itself, the data generated by the OpenStreetMap project is considered its primary output. OSM provides detailed information about paths, buildings, and other landmarks, which plays a crucial role in self-driving car navigation and localization.
osm_cartography ROS Node
In order to visualize the data from OSM, I need a ROS node that can read the .osm (XML) file and publish visualization markers for rviz. osm_cartography is an open-source package that accomplishes just that. Once the .osm file is visualized in rviz, it looks something like this:
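Under the hood, an .osm file is plain XML: `node` elements (points with a latitude and longitude) and `way` elements (ordered lists of node references, tagged as roads, buildings, and so on). Here is a minimal sketch of reading that structure with Python's standard library; the embedded map snippet and the helper name `parse_osm` are my own illustration, not code from the osm_cartography package:

```python
import xml.etree.ElementTree as ET

# A tiny hand-written .osm (XML) snippet, standing in for a real map download.
OSM_XML = """<osm version="0.6">
  <node id="1" lat="42.544" lon="-72.605"/>
  <node id="2" lat="42.545" lon="-72.604"/>
  <way id="10">
    <nd ref="1"/>
    <nd ref="2"/>
    <tag k="highway" v="footway"/>
  </way>
</osm>"""

def parse_osm(xml_text):
    """Return (nodes, ways): node id -> (lat, lon); way id -> ordered node ids."""
    root = ET.fromstring(xml_text)
    nodes = {n.get("id"): (float(n.get("lat")), float(n.get("lon")))
             for n in root.findall("node")}
    ways = {w.get("id"): [nd.get("ref") for nd in w.findall("nd")]
            for w in root.findall("way")}
    return nodes, ways

nodes, ways = parse_osm(OSM_XML)
print(nodes["1"])   # (42.544, -72.605)
print(ways["10"])   # ['1', '2']
```

A marker-publishing node essentially walks these ways, looks up each referenced node's coordinates, and emits line-strip markers for rviz.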
Below is the map of Deerfield Academy. The red lines represent pathways, while the blue lines represent buildings and other structures. If you look very closely, you will find yellow marks, which are waypoints along a path. In the center of the image, you will find a tiny robot: the golf cart. This kind of visualization is quite useful for localization and path planning.
One crucial component of this visualization is the coordinate transformation. I use the ROS tf module to apply a transform to the geographic coordinates before they can appear on screen. If you want to see the details of how I did that, please check out this file in my GitHub project.
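As a rough illustration of what such a transform involves (this is a sketch, not the actual code from my project), the OSM latitude/longitude pairs have to be projected into a local planar frame in meters before rviz can draw them. An equirectangular approximation around a fixed origin is good enough at campus scale:

```python
import math

EARTH_RADIUS = 6378137.0  # WGS-84 equatorial radius, in meters

def latlon_to_local(lat, lon, origin_lat, origin_lon):
    """Project (lat, lon) to local planar (x, y) meters around an origin,
    using an equirectangular approximation (fine for small map areas)."""
    x = math.radians(lon - origin_lon) * EARTH_RADIUS * math.cos(math.radians(origin_lat))
    y = math.radians(lat - origin_lat) * EARTH_RADIUS
    return x, y

# A point 0.001 degrees of latitude north of the origin lands roughly
# 111 meters up the local y axis.
x, y = latlon_to_local(42.545, -72.605, 42.544, -72.605)
```

In the real pipeline, tf then relates this local map frame to the robot's own frames, so the cart and the map markers render consistently.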
A Long Road Ahead
Simply visualizing the .osm data is not enough. I still have a long way to go before I can successfully localize the robot and plan a path. The goal is to use particle filters (and other filters) to accurately estimate the robot's location by comparing its sensor measurements to the map's landmarks. Once I can localize the robot, I can also plan a path based on the map and the motion of the vehicle.
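To make the particle-filter idea concrete, here is a toy one-dimensional sketch; the landmark position, noise levels, and helper names are entirely made up for illustration. Each step moves the particles with the motion command, weights them by how well they explain a range measurement to a known landmark, and resamples:

```python
import math
import random

random.seed(0)

LANDMARK = 10.0      # hypothetical landmark position along a 1-D road
SENSOR_SIGMA = 0.5   # assumed range-sensor noise (std. dev., meters)

def likelihood(particle, measured_range):
    """Gaussian likelihood of a range measurement given a particle's position."""
    expected = abs(LANDMARK - particle)
    err = measured_range - expected
    return math.exp(-err * err / (2 * SENSOR_SIGMA ** 2))

def step(particles, motion, measured_range):
    # Predict: apply the motion command plus some process noise.
    moved = [p + motion + random.gauss(0, 0.2) for p in particles]
    # Update: weight each particle by how well it explains the measurement.
    weights = [likelihood(p, measured_range) for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: keep particles in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(moved))

# The robot starts at 0 and drives toward the landmark at 1 m per step.
true_pos = 0.0
particles = [random.uniform(-5, 5) for _ in range(500)]
for _ in range(10):
    true_pos += 1.0
    measured = abs(LANDMARK - true_pos) + random.gauss(0, SENSOR_SIGMA)
    particles = step(particles, 1.0, measured)

estimate = sum(particles) / len(particles)
```

The real problem is harder in every dimension: 2-D pose with heading, many landmarks from the OSM map, and lidar or camera measurements instead of a single range. But the predict/update/resample loop stays the same.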
There are a few caveats regarding ROS as well. I need to learn more about tf (transforms) and robot motion before I can move the robot around in the simulated environment.
Overall, it should be a very challenging and rewarding experience.
Feel free to reach out to me if you have any questions, comments or concerns. My email is firstname.lastname@example.org. You might also like these topics and posts:
- Drive-by-wire system (DBW)
- Robot Operating System