The purpose of this lab is to gain experience incorporating vision code into ROS nodes and making use of depth data from the Kinect.
$ cd cs354_ws
$ mkdir src
$ cd src
$ catkin_init_workspace
three_d_vision/
    package.xml
    CMakeLists.txt
    src/                      <-- Importable Python modules go here.
        three_d_vision/       <-- Package for importable modules.
            __init__.py       <-- This file turns the folder into a package.
            point_cloud2.py   <-- Module for reading PointCloud2 messages.
            kinect_subscriber.py  <-- Module for subscribing to BOTH
                                      PointCloud2 and image messages
                                      simultaneously.
    rviz/                     <-- rviz configuration files.
        vision.rviz
        depth.rviz
    launch/                   <-- Launch files.
        view_vision.launch
        view_depth.launch
    setup.py
    scripts/                  <-- Scripts that create Python ROS nodes.
        kinect_vision_demo.py
        kinect_depth_demo.py
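The point_cloud2.py module decodes PointCloud2 messages into usable (x, y, z) points. As a rough sketch of what that decoding involves, the snippet below unpacks points from a raw byte buffer, assuming the common Kinect field layout (float32 x, y, z at offsets 0, 4, and 8 of each point); the provided module should be consulted for the actual interface, since a real decoder reads the message's fields list rather than hard-coding offsets.

```python
import struct

def read_xyz(data, point_step, width, height, row_step=None):
    """Yield (x, y, z) tuples from a PointCloud2-style byte buffer.

    Assumes x, y, z are stored as little-endian float32 at offsets
    0, 4, 8 of each point (check msg.fields in a real message).
    """
    if row_step is None:
        row_step = point_step * width
    for v in range(height):
        for u in range(width):
            offset = v * row_step + u * point_step
            yield struct.unpack_from('<fff', data, offset)

# Two fake points, 16 bytes each (x, y, z, then 4 bytes of padding):
buf = (struct.pack('<ffff', 1.0, 2.0, 3.0, 0.0) +
       struct.pack('<ffff', 4.0, 5.0, 6.0, 0.0))
points = list(read_xyz(buf, point_step=16, width=2, height=1))
```

Kinect clouds are "organized": the buffer holds one point per image pixel in row-major order, which is what makes the depth demo later in this lab possible.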
Run catkin_make in the root folder of the workspace. This is necessary to properly set up the Python package for imports. At this point, you should either open a new terminal or source the setup.bash script in your devel directory (source devel/setup.bash).
Run the node in scripts/kinect_vision_demo.py. Then start view_vision.launch and look at the published image topic. You can view the topic in rviz, or by starting an image_view node:
$ rosrun image_view image_view image:=/red_marked_image
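The vision demo marks a red pixel in the Kinect color image and republishes the result on /red_marked_image. One simple way to pick such a pixel (a sketch of the idea, not necessarily the provided node's exact method) is to score each pixel by how much its red channel dominates green and blue:

```python
def reddest_pixel(rgb_rows):
    """Return (row, col) of the pixel whose red channel most exceeds
    the mean of its green and blue channels.

    rgb_rows is a list of rows of (r, g, b) tuples -- a stand-in for
    the image array a node would get from the Kinect color stream.
    """
    best, best_score = (0, 0), float('-inf')
    for i, row in enumerate(rgb_rows):
        for j, (r, g, b) in enumerate(row):
            score = r - (g + b) / 2.0
            if score > best_score:
                best, best_score = (i, j), score
    return best

# Tiny 2x2 test image; the bright-red pixel is at row 0, column 1.
img = [[(10, 10, 10), (200, 30, 30)],
       [(50, 60, 70), (90, 90, 90)]]
```

In the actual node, this pixel coordinate is what gets circled in the republished image, and it is the coordinate the depth demo below looks up in the point cloud.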
Run the node in scripts/kinect_depth_demo.py. Then start view_depth.launch and look at the published topics in rviz. You should be able to see the location in space of the red pixel relative to the robot.
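Because the Kinect cloud is organized (one point per pixel, row-major), finding the red pixel's location in space reduces to an index lookup in the cloud buffer. Below is a minimal sketch of that lookup; point_at is a hypothetical helper, not part of the lab's point_cloud2 module, and it again assumes float32 x, y, z at offsets 0, 4, 8.

```python
import struct

def point_at(u, v, cloud_data, point_step, row_step):
    """Look up the (x, y, z) point for image pixel (u, v) in an
    organized PointCloud2 buffer (one point per pixel, row-major).

    Hypothetical helper; assumes little-endian float32 x, y, z at
    offsets 0, 4, 8 of each point.
    """
    offset = v * row_step + u * point_step
    return struct.unpack_from('<fff', cloud_data, offset)

# A 2x2 organized cloud, 16 bytes per point:
pts = [(0.0, 0.0, 1.0), (0.5, 0.0, 1.0),
       (0.0, 0.5, 1.25), (0.5, 0.5, 1.25)]
buf = b''.join(struct.pack('<ffff', x, y, z, 0.0) for x, y, z in pts)
xyz = point_at(1, 1, buf, point_step=16, row_step=32)
```

The returned coordinates are in the camera's optical frame; rviz uses tf to display them relative to the robot, which is why the marker appears in the right place in space. Note that Kinect points can be NaN where no depth was measured, so a robust node must check for that.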