Once again, students in the Engineering Department have meddled with forces they do not understand. In the early morning hours of April 30th, 2019, members of an unauthorized capstone project accidentally released massive quantities of deadly zeta radiation into the first-floor hallways of the EnGeo building on the campus of James Madison University.
An unknown number of students and staff members were in the affected area at the time of the incident. It is assumed that these individuals have been incapacitated by the effects of the zeta radiation. Zeta radiation poisoning progresses through two stages: in the first stage, the victim loses consciousness and transforms into a pumpkin-headed monstrosity (see Figure 1 below). The second stage is death. Fortunately, most victims make a full recovery if they receive treatment in time.
A response team is on the way with the necessary equipment to decontaminate the affected areas and treat the victims. Unfortunately, the decontamination process is slow, and the victims don't have much time. It is crucially important that we determine the location and identity of all victims so that the response team can allocate their resources as efficiently as possible.
The high levels of zeta radiation in the contaminated areas will interfere with any wireless communications. The only hope is to send in fully autonomous robots to find the victims and report back on their positions.
Your finished application will take three command line arguments: a pre-created map file of the area to search, a configuration file indicating the robot's starting location, and a data file listing a set of named locations in the provided map.
Once the application has been launched, no interaction with the robot will be allowed. The robot will have a limited amount of time to search for victims within the area defined by the map. By the end of the time period the robot must return to its initial location and announce that it is ready to report its findings.
Once the user requests a report by clicking on the appropriate button, the robot must provide location and identity information for as many victims as possible.
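As a point of reference, one piece of this behavior, returning to the starting location before time expires, is usually handled by sending a navigation goal. The sketch below is a minimal illustration, assuming the standard move_base action server from the navigation stack is running and that the application recorded its starting coordinates at launch; it is not a required implementation.

    #!/usr/bin/env python
    """Hypothetical sketch: send the robot back to a remembered starting pose."""
    import actionlib
    import rospy
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    def return_to_start(start_x, start_y):
        # Assumes the standard move_base action server is running (navigation stack).
        client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
        client.wait_for_server()

        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = 'map'
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = start_x
        goal.target_pose.pose.position.y = start_y
        goal.target_pose.pose.orientation.w = 1.0  # arbitrary heading

        client.send_goal(goal)
        client.wait_for_result()
        return client.get_state()

    if __name__ == '__main__':
        rospy.init_node('return_home_demo')
        # Placeholder coordinates; a real node would record the starting pose
        # (for example, from amcl) when the application launches.
        return_to_start(0.0, 0.0)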
You are free to use any existing ROS nodes in your solution. Your solution must use a Turtlebot, but you are free to re-configure the robot or add additional sensors.
This starter package is organized as follows:
zeta_rescue/
    package.xml
    CMakeLists.txt
    data/
        engeo_map.yaml
        engeo_map.pgm
        initial_pose.yaml
        landmarks.csv
        logitech_calibration.yaml
        marker_0.pdf
    launch/
        rescue.launch
        kinect_alvar_demo.launch
        usb_cam_alvar_demo.launch
    msg/
        Victim.msg
    rviz/
        kinect_alvar_demo.rviz
        usb_cam_alvar_demo.rviz
    scripts/
        button.py
        config_ros_network.py
engeo_map.yaml, engeo_map.pgm - This is a pre-constructed map of the classroom and hallway area outside of EnGeo 1203. You may use this for testing your application, but you should not hard-code your solution for this map.
initial_pose.yaml - This configuration file contains ROS parameter settings representing the estimated starting position of the robot in the provided map. This file places the robot just inside the doorway of 1203, facing into the room.
landmarks.csv - This file contains the names and coordinates of designated locations in the provided map. This information may be used for reporting victim locations: "Victim 1 is approximately 1.3 meters from the podium". Again, this file is provided for testing purposes. A different file will be provided at the time of the competition. (A sketch showing one way to use this file appears after these file descriptions.)

logitech_calibration.yaml - This is the calibration file for the Logitech C905 cameras. This calibration data is necessary to use the Logitech cameras for detecting markers.
marker_0.pdf - This is a PDF of a sample alvar marker that you can use for testing.
rescue.launch - This will be the main launch file for your search and rescue application. You will need to modify this file to start any nodes that are used in your application. It must be possible to start the search process by executing the following command in a terminal window:
roslaunch zeta_rescue rescue.launch
This launch file is configured to accept three command line arguments specifying the map to use, the initial pose in the map, and the set of landmarks:
roslaunch zeta_rescue rescue.launch map_file:=new_map.yaml initial_pose:=provided_pose.yaml landmark_file:=new_landmarks.csv
kinect_alvar_demo.launch, usb_cam_alvar_demo.launch - These launch files demonstrate the use of the ar_track_alvar node for locating markers in images. The corresponding .rviz files in the rviz folder provide RViz configurations for these demos.
button.py - This ROS node publishes an empty message to the report_requested topic whenever the button is pressed.
config_ros_network.py - This script automates the process of configuring the robot laptop so that ROS can be operated across multiple computers.
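The following is the sketch referred to in the landmarks.csv description above: a rough, hedged example of loading the landmark file and finding the landmark nearest a victim position so that reports like "Victim 1 is approximately 1.3 meters from the podium" can be generated. The assumed column layout (name, x, y) and the ~landmark_file parameter name are guesses; check them against the actual file and your launch configuration.

    #!/usr/bin/env python
    """Hypothetical sketch: describe victim positions relative to named landmarks."""
    import csv
    import math
    import rospy

    def load_landmarks(path):
        # Assumed column layout: name, x, y (verify against the provided landmarks.csv).
        landmarks = []
        with open(path) as f:
            for row in csv.reader(f):
                try:
                    landmarks.append((row[0], float(row[1]), float(row[2])))
                except (IndexError, ValueError):
                    continue  # skip a header row or malformed line
        return landmarks

    def nearest_landmark(landmarks, x, y):
        # Return (name, distance) for the landmark closest to the point (x, y).
        name, lx, ly = min(landmarks, key=lambda lm: math.hypot(lm[1] - x, lm[2] - y))
        return name, math.hypot(lx - x, ly - y)

    if __name__ == '__main__':
        rospy.init_node('landmark_report_demo')
        # '~landmark_file' is an assumed private parameter that rescue.launch
        # could set from its landmark_file argument.
        landmarks = load_landmarks(rospy.get_param('~landmark_file'))
        name, dist = nearest_landmark(landmarks, 2.0, 1.5)  # example victim position
        rospy.loginfo("Victim 1 is approximately %.1f meters from the %s.", dist, name)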
There will be some number of "victims" distributed through the map area. Each victim will look something like the following:
The black square in the center of the box is a marker that should be detectable by the ar_track_alvar package. The red name tag will contain a unique name for each victim.
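As a rough illustration of the detection side (not part of the starter package), a node might watch the marker detections and convert each one into a map-frame position. The sketch below assumes the ar_track_alvar detector is publishing on its usual ar_pose_marker topic and that a map frame is available from the navigation stack; both are assumptions that depend on how the detector is launched.

    #!/usr/bin/env python
    """Hypothetical sketch: convert alvar marker detections into map-frame points."""
    import rospy
    import tf
    from ar_track_alvar_msgs.msg import AlvarMarkers

    class VictimSpotter(object):
        def __init__(self):
            rospy.init_node('victim_spotter')
            self.tf_listener = tf.TransformListener()
            # 'ar_pose_marker' is the usual ar_track_alvar output topic; confirm
            # against the demo launch files included in this package.
            rospy.Subscriber('ar_pose_marker', AlvarMarkers, self.marker_callback)

        def marker_callback(self, msg):
            for marker in msg.markers:
                # The stamped pose inside each marker sometimes arrives without a
                # frame_id, so copy the outer header before transforming it.
                pose = marker.pose
                pose.header = marker.header
                try:
                    map_pose = self.tf_listener.transformPose('map', pose)
                except (tf.LookupException, tf.ConnectivityException,
                        tf.ExtrapolationException):
                    continue  # transform not available yet; skip this detection
                rospy.loginfo("Marker %d seen near (%.2f, %.2f) in the map frame.",
                              marker.id, map_pose.pose.position.x,
                              map_pose.pose.position.y)

    if __name__ == '__main__':
        VictimSpotter()
        rospy.spin()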
During execution your application must publish victim information to the following three topics:
/victim_image (sensor_msgs/Image) - Image of the most recently detected victim.

/victim_marker (visualization_msgs/Marker) - RViz marker illustrating the location of the most recently detected victim. This must be a spherical orange marker .5 meters in diameter. It should remain visible until the application exits.

/victim (zeta_rescue/Victim) - This message will contain the full information about a single victim, including an image and a location. Note that the image here does not need to match the image published to /victim_image. The image in this message should provide the clearest possible view of the victim, ideally with a legible view of the name tag.

Additionally, when the report button is pressed the robot must present a spoken summary report describing the locations of the victims that were located.
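The fragment below is a minimal sketch, not a required design, showing how the marker publication and the spoken report might fit together: it publishes the required orange sphere for each detected victim and speaks a short summary whenever button.py publishes to report_requested. The use of sound_play for speech is just one option, and the victim list here is a stand-in for whatever bookkeeping your application actually performs; the /victim and /victim_image publications are omitted because they depend on the fields defined in the provided Victim.msg.

    #!/usr/bin/env python
    """Hypothetical sketch: publish victim markers and speak a summary on request."""
    import rospy
    from std_msgs.msg import Empty
    from visualization_msgs.msg import Marker
    from sound_play.libsoundplay import SoundClient  # requires the sound_play node

    class VictimReporter(object):
        def __init__(self):
            rospy.init_node('victim_reporter')
            self.marker_pub = rospy.Publisher('victim_marker', Marker, queue_size=10)
            self.speech = SoundClient()
            self.victims = []  # list of (x, y) map coordinates, filled in elsewhere
            rospy.Subscriber('report_requested', Empty, self.report_callback)

        def publish_marker(self, victim_id, x, y):
            # Spherical orange marker, .5 meters in diameter, as required above.
            marker = Marker()
            marker.header.frame_id = 'map'
            marker.header.stamp = rospy.Time.now()
            marker.ns = 'victims'
            marker.id = victim_id
            marker.type = Marker.SPHERE
            marker.action = Marker.ADD
            marker.pose.position.x = x
            marker.pose.position.y = y
            marker.pose.orientation.w = 1.0
            marker.scale.x = marker.scale.y = marker.scale.z = 0.5
            marker.color.r = 1.0
            marker.color.g = 0.5
            marker.color.b = 0.0
            marker.color.a = 1.0
            marker.lifetime = rospy.Duration(0)  # zero lifetime means never expire
            self.marker_pub.publish(marker)

        def report_callback(self, _msg):
            # Triggered by button.py; speak one sentence per discovered victim.
            for i, (x, y) in enumerate(self.victims, start=1):
                self.speech.say("Victim %d is near x %.1f, y %.1f." % (i, x, y))
                rospy.sleep(3.0)  # crude pause so the sentences do not overlap

    if __name__ == '__main__':
        reporter = VictimReporter()
        rospy.spin()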
For full credit, your application must reliably recognize victims and report their locations according to the specification above.
It is possible to receive full credit even if your robot does not discover all of the victims during the competition.
The final project presentations will be organized as a friendly competition between the project teams. Results will be scored on the basis of the number of victims that each team locates and reports.
The exact scoring rubric will be released as the competition date gets closer. The following factors will be considered:
The scoring rubric will not explicitly reward smart search strategies, but random wandering will probably result in fewer detections than a systematic search.
Your grade for this project will be based on the quality of your final solution, as well as on making adequate progress on intermediate checkpoints.
For the first two checkpoints you must submit your code and include your documentation in the doc folder in your package.
There are no specific requirements for the functionality that should be finished for the first two checkpoints. However, for full credit, there must be clear progress in functionality from one checkpoint to the next. Also, the code submitted for each checkpoint must represent a complete, executable application. I should be able to run your code after each submission and evaluate the level of functionality.
The following checkpoint schedule gives an example of the kind of thing I have in mind:

For the final checkpoint, you will need to make an appointment with me to conduct a competition dry-run.
Overall final project grades will be calculated as follows:
Checkpoint 1                            10%
Checkpoint 2                            10%
Checkpoint 3 (Dress Rehearsal)          15%
Peer Evaluation*                        15%
Final Functionality and Code Quality    40%
Competition Score                       10%