For this project you will write code to build an occupancy grid of the Turtlebot's environment using Kinect sensor data.
Create a ROS package named "mapping" to contain your project code, and add the Python files above to a nodes directory.
Read over the code and comments in those two files. Briefly, the provided mapper node just subscribes to the /scan topic and publishes a dummy OccupancyGrid message every time the scan callback is called. The localizer node broadcasts a tf transform from the /map coordinate frame to the /odom coordinate frame.
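For reference, a tf broadcaster like the one in localizer.py might look roughly like the following sketch; the identity transform and the 10 Hz rate here are assumptions, so check the provided file for the actual details.

#!/usr/bin/env python
import rospy
import tf

if __name__ == '__main__':
    rospy.init_node('localizer')
    broadcaster = tf.TransformBroadcaster()
    rate = rospy.Rate(10)  # rebroadcast periodically so the transform stays fresh
    while not rospy.is_shutdown():
        # Identity transform: treats the odom frame as coinciding with the map frame.
        broadcaster.sendTransform((0.0, 0.0, 0.0),       # translation (x, y, z)
                                  (0.0, 0.0, 0.0, 1.0),  # rotation quaternion (x, y, z, w)
                                  rospy.Time.now(),
                                  '/odom',  # child frame
                                  '/map')   # parent frame
        rate.sleep()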
The following steps will bring up the entire mapping application. (In the long run you will probably want to create a launch file to simplify this process; a sketch of such a launch file follows the commands below.)
roslaunch turtlebot_bringup minimal.launch
roslaunch turtlebot_bringup 3dsensor.launch
rosrun mapping localizer.py
rosrun mapping mapper.py
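A launch file covering these four steps might look something like this sketch (the node names are illustrative, and it assumes the package layout described above):

<launch>
  <!-- Bring up the Turtlebot base and the Kinect. -->
  <include file="$(find turtlebot_bringup)/launch/minimal.launch"/>
  <include file="$(find turtlebot_bringup)/launch/3dsensor.launch"/>
  <!-- Start the nodes from the mapping package. -->
  <node pkg="mapping" type="localizer.py" name="localizer"/>
  <node pkg="mapping" type="mapper.py" name="mapper"/>
</launch>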
Once all of the necessary nodes have been started, try visualizing the map and the scan messages in rviz:
rosrun rviz rviz
This project will be much easier if you get comfortable using rviz.
You will need to convert the range readings in each scan into points that can be marked in the occupancy grid. There are several reasonable ways to do this:

- Use trigonometry to convert each range reading into a point in the sensor coordinate frame, packaging each point as a PointStamped message object. (This is the approach I recommend.)
- Write a separate node that subscribes to the /scan topic and publishes PointCloud messages to a new topic named something like /scan_points. This node would be nice and reusable, but it adds communication overhead to an already slow process.
- Skip the /scan topic altogether and subscribe instead to /camera/depth_registered/points. This topic includes all of the Kinect data as points in the sensor coordinate frame. Using this topic would avoid the need for trigonometry, but it would require you to figure out which points correspond to the appropriate scan line. This Python module would probably be helpful: https://code.ros.org/trac/ros-pkg/browser/stacks/common_msgs/trunk/sensor_msgs/src/sensor_msgs

Whichever approach you choose, you will need to use tf to transform the points into the /map coordinate frame. A sketch of the recommended approach is shown below.
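Here is a minimal sketch of the recommended approach, assuming a tf.TransformListener; the variable names and the 0.5-second timeout are illustrative, not part of the provided code.

import math
import rospy
import tf
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import PointStamped

tf_listener = None  # create a tf.TransformListener in main, after rospy.init_node()

def scan_callback(scan):
    for i, r in enumerate(scan.ranges):
        if math.isnan(r) or r < scan.range_min or r > scan.range_max:
            continue  # skip invalid or out-of-range readings
        # Convert the polar reading to Cartesian coordinates in the sensor frame.
        angle = scan.angle_min + i * scan.angle_increment
        point = PointStamped()
        point.header = scan.header
        point.point.x = r * math.cos(angle)
        point.point.y = r * math.sin(angle)
        # Transform the point into the map frame before marking the grid.
        tf_listener.waitForTransform('/map', scan.header.frame_id,
                                     scan.header.stamp, rospy.Duration(0.5))
        map_point = tf_listener.transformPoint('/map', point)
        # ... mark the occupancy grid cell containing map_point ...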
Once all of this is working correctly you should be able to watch your map being updated in rviz. Marked grid squares should correspond to the scan data.
Completing this version of the mapping algorithm correctly is worth 85% of the overall grade for this project.
The mapping code described above only handles the problem of constructing a map from sensor data; it doesn't handle the problem of controlling the robot during the mapping process. The assumption is that the robot will be controlled through teleoperation.
If you have extra time, write a simple wandering robot node that randomly drives the robot around its environment, using the scan data (or the map!) to avoid colliding with obstacles. A sketch of one possible starting point follows.
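This sketch drives forward until the scan reports a nearby obstacle, then turns in place; the velocity topic name, the 0.7 m threshold, and the speeds are assumptions you would need to adjust for your robot.

#!/usr/bin/env python
import math
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

closest = float('inf')  # range to the nearest obstacle in the latest scan

def scan_callback(scan):
    global closest
    valid = [r for r in scan.ranges if not math.isnan(r)]
    closest = min(valid) if valid else float('inf')

if __name__ == '__main__':
    rospy.init_node('wander')
    rospy.Subscriber('/scan', LaserScan, scan_callback)
    cmd_pub = rospy.Publisher('/cmd_vel_mux/input/teleop', Twist, queue_size=1)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        twist = Twist()
        if closest < 0.7:
            twist.angular.z = 0.8  # something is close: turn in place
        else:
            twist.linear.x = 0.2   # path looks clear: creep forward
        cmd_pub.publish(twist)
        rate.sleep()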
Submit this project by putting a copy of your mapping package in
your submit folder. Please include a README file that briefly
describes what your code does, how it works, and how I should run
it. You must also include a copy of your completed map in the
format generated by the map_server
package.
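Once your map looks good in rviz, you can save it in that format with the map_saver tool from the map_server package (this assumes your node publishes the grid on the /map topic):
rosrun map_server map_saver -f mymap
This writes mymap.pgm and mymap.yaml in the current directory.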