Once again, students in the Engineering Department have meddled with forces they do not understand. In the early morning hours of December 5th, 2015, members of an unauthorized capstone project accidentally released massive quantities of deadly zeta radiation into the second-floor hallways of the HHS building on the campus of James Madison University.
An unknown number of students and staff members were in the affected area at the time of the incident. It is assumed that these individuals have been incapacitated by the effects of the zeta radiation. Zeta radiation poisoning progresses through two stages: in the first stage, the victim loses consciousness and transforms into a pumpkin-headed monstrosity. (See Figure 1 below.) The second stage is death. Fortunately, most victims make a full recovery if they receive treatment in time.
A response team is on the way with the necessary equipment to decontaminate the affected areas and treat the victims. Unfortunately, the decontamination process is slow, and the victims don't have much time. It is crucially important that we determine the location and identity of all victims so that the response team can allocate their resources as efficiently as possible.
The high levels of zeta radiation in the contaminated areas will interfere with any wireless communications. The only hope is to send in fully autonomous robots to find the victims and report back on their positions.
Your finished application will take three command line arguments: a pre-created map file of the area to search, a configuration file indicating the robot's starting location, and a data file listing a set of named locations in the provided map.
Once the application has been launched, no interaction with the robot will be allowed. The robot will be given ten minutes to search for victims within the area defined by the map. By the end of the ten-minute time period, the robot must return to its initial location and announce that it is ready to report its findings.
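Budgeting that window is worth planning early. The minimal sketch below, in Python using rospy, shows one way to reserve time for the return trip. The SEARCH_BUDGET value and the go_home() helper are hypothetical names for illustration, not anything defined by the assignment.

#!/usr/bin/env python
# Sketch only: budget the ten-minute search window so the robot
# starts heading home before time expires.
import rospy

SEARCH_BUDGET = rospy.Duration(8 * 60)  # reserve ~2 minutes for the trip home

def go_home():
    # Hypothetical helper: send a navigation goal for the starting pose
    # and announce readiness once it is reached.
    rospy.loginfo("Returning to the starting location...")

def main():
    rospy.init_node('search_timer_sketch')
    start = rospy.Time.now()
    while not rospy.is_shutdown():
        if rospy.Time.now() - start > SEARCH_BUDGET:
            go_home()
            break
        # ... continue the search behavior here ...
        rospy.sleep(1.0)

if __name__ == '__main__':
    main()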
Once the user requests a report by clicking on the appropriate button, the robot must provide location and identity information for as many victims as possible.
You are free to use any existing ROS nodes in your solution. Your solution must use a Turtlebot, but you are free to re-configure the robot or add additional sensors.
You should use the following unfinished ROS package as the starting point for your project: zeta_rescue.zip. This package is organized as follows:
zeta_rescue/
    package.xml
    CMakeLists.txt
    data/
        map.yaml
        map.pgm
        initial_pose.yaml
        landmarks.csv
        logitech_calibration.yaml
    launch/
        rescue.launch
        ar_pose_webcam_demo.launch
        ar_pose_kinect_demo.launch
    scripts/
        button.py
map.yaml, map.pgm
- This is a pre-constructed map of the hallway area outside of HHS 2002. You may use this for testing your application, but you should not hard-code your solution for this map.
initial_pose.yaml
- This configuration file contains ROS parameter settings representing the estimated starting position of the robot in the provided map. This file places the robot at the south end of the hallway outside of HHS 2002, facing north.
landmarks.csv
- This file contains the names and coordinates of designated locations in the provided map. This information may be used for reporting victim locations: "Victim 1 is approximately 1.3 meters from the drinking fountain." (A sketch of this computation appears after this file listing.) Again, this file is provided for testing purposes. A different file may be provided at the time of the competition.

logitech_calibration.yaml
- This is the calibration file for the Logitech C905 cameras. This calibration data is necessary to use the Logitech cameras for detecting markers.
rescue.launch
- This will be the main launch file for your search and rescue application. You will need to modify this file to start any nodes that are used in your application. It must be possible to start the search process by executing the following command in a terminal window:

roslaunch zeta_rescue rescue.launch
This launch file is configured to accept three command line arguments specifying the map to use, the initial pose in the map, and the set of landmarks:
roslaunch zeta_rescue rescue.launch map_file:=new_map.yaml initial_pose:=provided_pose.yaml landmark_file:=new_landmarks.csv
ar_pose_webcam_demo.launch, ar_pose_kinect_demo.launch
- These launch files demonstrate the use of the ar_pose node for locating markers in images.
button.py
- This ROS node publishes an empty message to the report_requested topic whenever the button is pressed.
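Your application will need to react when that message arrives. The following minimal subscriber sketch assumes the empty message is of type std_msgs/Empty; verify the type that button.py actually publishes.

#!/usr/bin/env python
# Sketch only: listen for button presses relayed by button.py and
# trigger the victim report. The report logic itself is left as a stub.
import rospy
from std_msgs.msg import Empty

def report_callback(msg):
    # Triggered when the user presses the button in button.py.
    rospy.loginfo("Report requested; announcing victim locations...")
    # ... deliver the report here ...

def main():
    rospy.init_node('report_listener_sketch')
    rospy.Subscriber('report_requested', Empty, report_callback)
    rospy.spin()

if __name__ == '__main__':
    main()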
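As noted in the landmarks.csv entry above, victim positions may be phrased relative to the nearest named landmark. Here is a minimal sketch of that computation; the (name, x, y) column order in the CSV is an assumption that should be checked against the actual file.

import csv
import math

def load_landmarks(path):
    # Assumed column order: name, x, y (check against the real file).
    landmarks = []
    with open(path) as f:
        for row in csv.reader(f):
            landmarks.append((row[0], float(row[1]), float(row[2])))
    return landmarks

def describe_location(x, y, landmarks):
    # Find the closest landmark and phrase the distance to it.
    name, lx, ly = min(landmarks,
                       key=lambda lm: math.hypot(lm[1] - x, lm[2] - y))
    dist = math.hypot(lx - x, ly - y)
    return "approximately %.1f meters from the %s" % (dist, name)

With the provided data file, a call like describe_location(1.3, 2.0, load_landmarks('data/landmarks.csv')) would yield a phrase like the drinking-fountain example above.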
There will be some number of "victims" distributed throughout the map area. Each victim will look something like the following:
The black square in the center of the box is a marker that should be detectable by the ar_pose package. The red name tag will contain a unique name for each victim.
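Detecting the marker is only half of the problem: the victim's position must be expressed in the map coordinate frame. One way to do this, sketched below, is to ask tf for the transform from the map frame to the marker frame. The frame name 'ar_marker' is an assumption, so check the frames that ar_pose actually broadcasts (rosrun tf view_frames is useful here).

#!/usr/bin/env python
# Sketch only: look up the marker's pose in the map frame via tf.
# The frame names below are assumptions, not given by the assignment.
import rospy
import tf

def main():
    rospy.init_node('marker_locator_sketch')
    listener = tf.TransformListener()
    rate = rospy.Rate(2)
    while not rospy.is_shutdown():
        try:
            (trans, rot) = listener.lookupTransform('map', 'ar_marker',
                                                    rospy.Time(0))
            rospy.loginfo("Victim candidate at x=%.2f, y=%.2f",
                          trans[0], trans[1])
        except (tf.LookupException, tf.ConnectivityException,
                tf.ExtrapolationException):
            pass  # marker not currently visible
        rate.sleep()

if __name__ == '__main__':
    main()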
Minimally, victim locations must be reported through the following two channels when the report button is pressed.
A text file stored in the results directory of the zeta_rescue package. Each line of this file must contain the comma-separated x and y coordinates of a single victim in the map coordinate frame, followed by the name of the image file corresponding to that victim. For example, if the robot detected three victims, then the resulting file might have the following contents:

1.22,2.13,victim1.jpg
-1.57,1.68,victim2.jpg
3.78,0.33,victim3.jpg

The file names must correspond to JPEG image files stored in the results folder. Each image file should store a legible view of the corresponding victim's name tag. Coordinate values must be limited to two decimal places.
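Producing this file is straightforward; the sketch below writes records in exactly this format. The victims.txt file name and the list-of-tuples input are illustrative assumptions, since they are not fixed by the text above.

import os

def write_report(victims, results_dir):
    # victims: list of (x, y, image_filename) tuples in map coordinates.
    if not os.path.isdir(results_dir):
        os.makedirs(results_dir)
    path = os.path.join(results_dir, 'victims.txt')  # hypothetical name
    with open(path, 'w') as f:
        for x, y, image_name in victims:
            f.write("%.2f,%.2f,%s\n" % (x, y, image_name))

# Reproduces the three-victim example shown above.
write_report([(1.22, 2.13, 'victim1.jpg'),
              (-1.57, 1.68, 'victim2.jpg'),
              (3.78, 0.33, 'victim3.jpg')], 'results')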
For full credit, your application must reliably recognize victims and report their locations according to the specification above.
It is possible to receive full credit even if your robot does not discover all of the victims during the competition.
The final project presentations will be organized as a friendly competition between the project teams. Results will be scored on the basis of the number of victims that each team locates and reports.
The exact scoring rubric will be released as the competition date gets closer. The following factors will be considered:
The scoring rubric will not explicitly reward smart search strategies, but random wandering will probably result in fewer detections than a systematic search.
Your grade for this project will be based on the quality of your final solution, as well as on making adequate progress on intermediate checkpoints.
For each checkpoint you must submit a complete snapshot of your code, with any accompanying documentation placed in the doc folder in your package.
There are no specific requirements for the functionality that should be finished by each checkpoint deadline. However, for full credit, there must be clear progress in functionality from one checkpoint to the next. Also, the code submitted for each checkpoint must represent a complete, executable application. I should be able to run your code after each submission and evaluate the level of functionality.
The following checkpoint schedule has an example of the kind of thing I have in mind.

Overall final project grades will be calculated as follows:
Checkpoint 1                         | 10%
Checkpoint 2                         | 10%
Checkpoint 3                         | 10%
Peer Evaluation*                     | 20%
Final Functionality and Code Quality | 40%
Competition Score                    | 10%