Once again, students in the Engineering Department have meddled with forces they do not understand. In the early morning hours of Dec. 13th, 2023, members of an unauthorized capstone project accidentally released massive quantities of deadly zeta radiation into King Hall on the campus of James Madison University.
An unknown number of students and staff members were in the affected area at the time of the incident. It is assumed that these individuals have been incapacitated by the effects of the zeta radiation. Zeta radiation poisoning progresses through two stages: In the first stage, the victim loses consciousness and transforms into a pumpkin-headed monstrosity. (See Figure 1 below.) The second stage is death. Fortunately, most victims make a full recovery if they receive treatment in time.
Figure 1
A response team is on the way with the necessary equipment to decontaminate the affected areas and treat the victims. Unfortunately, the decontamination process is slow, and the victims don’t have much time. It is crucially important that we determine the location and identity of all victims so that the response team can allocate their resources as efficiently as possible.
The high levels of zeta radiation in the contaminated areas will interfere with any wireless communications. The only hope is to send in fully autonomous robots to find the victims and report back on their positions.
Once your application has been launched, no interaction with the robot will be allowed. The robot will have a limited amount of time to search for victims within the area defined by the map. By the end of the time period the robot must return to its initial location.
Once the user requests a report by clicking on the appropriate button, the robot must provide location and identity information for as many victims as possible.
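Since the robot must be back at its starting location before the time limit expires, the search logic needs to decide when to abandon exploration and head home. The following is a minimal sketch of that decision; the function name, the average-speed estimate, and the safety margin are all illustrative values that would need to be tuned against your actual robot and navigation stack.

```python
def should_return_home(elapsed_s, time_limit_s, distance_to_start_m,
                       avg_speed_mps=0.15, safety_margin_s=20.0):
    """Return True when remaining time is only just enough to get home.

    elapsed_s: seconds since the competition started.
    time_limit_s: the competition time limit (e.g. from the time_limit
        launch argument).
    distance_to_start_m: straight-line or planned-path distance back to
        the starting area.
    avg_speed_mps and safety_margin_s are illustrative tuning parameters.
    """
    estimated_return_s = distance_to_start_m / avg_speed_mps
    remaining_s = time_limit_s - elapsed_s
    return remaining_s <= estimated_return_s + safety_margin_s
```

In practice the distance estimate would come from the navigation stack's planned path rather than a straight line, and the margin should absorb replanning and obstacle delays.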
You are free to use any existing ROS Packages in your solution.
The following three repositories provide ROS packages that will be necessary or helpful in developing your application:
A (simulated) round of competition would proceed as follows:
The judge launches the simulator:
ros2 launch zeta_competition sim.launch.py
The judge launches the competition:
ros2 launch zeta_competition competition.launch.py
or
ros2 launch zeta_competition competition.launch.py scoring:=true
The second version brings up an extra RViz window that is configured to allow comparisons between reported victim locations and ground-truth victim locations.
This launch file also has an optional gui argument that makes it possible to start the Gazebo simulator with no GUI. This should reduce computational overhead.
ros2 launch zeta_competition competition.launch.py gui:=false
This launch file starts the following:
The judge starts YOUR rescue application:
ros2 launch zeta_rescue rescue.launch.py
This launch file takes an optional time_limit argument indicating the time limit for the competition in seconds:
ros2 launch zeta_rescue rescue.launch.py time_limit:=60
You are free to modify rescue.launch.py, but you should not add or remove any command line arguments.
The judge waits until the time limit has elapsed. By this point, the robot should have returned to the starting area.
The judge will click on the “CLICK FOR REPORT” button, indicating to the rescue application that it should publish victim information.
The judge will then compare the reported victim locations to the true victim locations.
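Because any victim information published before the button press is ignored, one reasonable design is to buffer every sighting during the run and publish only when the report is requested. Here is a minimal, ROS-free sketch of that idea; the class and method names are illustrative, and in a real node `publish` would be a ROS 2 publisher's `publish()` method on the victim topic.

```python
class VictimReportBuffer:
    """Accumulates victim sightings during the run and releases them only
    when the report is requested, matching the competition rule that
    anything published before the button press is ignored."""

    def __init__(self):
        self._victims = []

    def record_victim(self, victim):
        # Called each time the perception pipeline confirms a victim.
        self._victims.append(victim)

    def on_report_requested(self, publish):
        # 'publish' stands in for a ROS publisher's publish() method.
        for victim in self._victims:
            publish(victim)


# Illustrative usage with plain dictionaries in place of Victim messages.
buffer = VictimReportBuffer()
buffer.record_victim({"id": 7, "x": 1.2, "y": -0.5})
buffer.record_victim({"id": 3, "x": 4.0, "y": 2.1})
published = []
buffer.on_report_requested(published.append)
```

In the actual node, `on_report_requested` would be the callback attached to the subscription on the report-request topic.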
Each of the launch files described above takes additional command-line arguments that will be used to run the competition on the real robots instead of the simulator. You can always see the full list of command-line arguments for a launch file by using the --show-args flag.
The node(s) that you write for the competition must conform to the following API.
- /report_requested (std_msgs/msg/Empty) - When a message is received on this topic, victim information must be published. Any victim information published before the button is pressed will be ignored.
- /victim (zeta_competition_interfaces/msg/Victim) - This message will contain the full information about a single victim, including an image and a location. The image in this message should provide the clearest possible view of the victim, ideally with a legible view of the name tag.

The simplest search strategy is to select random navigation targets until the time elapses. The time limit will be short enough that a random strategy will be unlikely to discover all victims. There are many other approaches, ranging from simple greedy strategies to complex search algorithms. If you are interested in exploring the literature in this area, "A survey on coverage path planning for robotics" by Enric Galceran and Marc Carreras [1] would provide a reasonable starting point.
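The random-target baseline described above can be sketched in a few lines. This is only an illustration: the map bounds are hypothetical, and a real application would draw them from the occupancy grid and check each candidate target for reachability before sending it to the navigator.

```python
import random


def random_waypoints(x_range, y_range, rng=None):
    """Generate random (x, y) navigation targets within the map bounds.

    x_range and y_range are (min, max) tuples in map coordinates. The
    bounds here are illustrative; in practice they would come from the
    map, and unreachable targets should be filtered out.
    """
    rng = rng or random.Random()
    while True:
        yield (rng.uniform(*x_range), rng.uniform(*y_range))


# Example: draw five candidate targets inside a hypothetical 10 m x 6 m map.
gen = random_waypoints((0.0, 10.0), (0.0, 6.0), random.Random(42))
targets = [next(gen) for _ in range(5)]
```

A systematic alternative would replace the generator with a lawnmower-style sweep over the free cells of the map, which is where the coverage path planning literature cited above becomes relevant.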
Fortunately, all of the victims were wearing name tags with ArUco augmented reality markers [2] at the time of the accident. The ros2_aruco package provides a ROS2 wrapper for performing ArUco marker identification. You can try it out on the competition markers by starting the competition launch file and then executing the following launch file to start the ArUco detector and an appropriate visualization:
ros2 launch zeta_rescue aruco_demo.launch.py
The marker detection is not perfectly accurate or reliable, and it only works if the victim is observed from the correct direction. You are welcome to use additional strategies for identifying and locating victims.
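Because single-frame detections are noisy, a common mitigation is to merge repeated sightings of the same marker by position before reporting. The sketch below clusters sightings with a running average; the radius threshold and the tuple format are illustrative assumptions, not part of the competition API.

```python
import math


def merge_detections(detections, same_victim_radius_m=0.5):
    """Cluster repeated sightings of the same victim by marker ID and position.

    detections: list of (marker_id, x, y) tuples in map coordinates.
    Sightings of the same marker within same_victim_radius_m of an
    existing cluster are averaged into it, smoothing noisy single-frame
    pose estimates. The radius is an illustrative tuning parameter.
    """
    clusters = []  # each: {"id", "x", "y", "count"}
    for marker_id, x, y in detections:
        for c in clusters:
            if (c["id"] == marker_id
                    and math.hypot(x - c["x"], y - c["y"]) <= same_victim_radius_m):
                n = c["count"]
                # Running average of the cluster position.
                c["x"] = (c["x"] * n + x) / (n + 1)
                c["y"] = (c["y"] * n + y) / (n + 1)
                c["count"] = n + 1
                break
        else:
            clusters.append({"id": marker_id, "x": x, "y": y, "count": 1})
    return clusters
```

A refinement would be to also keep, per cluster, the camera image from the sighting with the best view of the marker, since the Victim message should carry the clearest possible image.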
The actual competition will not use the simulator. There is no guarantee that solutions that work in simulation will work well (or at all) on the real robots. Make sure to leave time to evaluate and tune your solution in the real world.
The final project day will be organized as a friendly competition between the project teams. Results will be scored on the basis of the number of victims that each team locates and reports.
The exact scoring rubric will be released as the competition date gets closer. The following factors will be considered:
The scoring rubric will not explicitly reward smart search strategies, but random wandering will probably result in fewer detections than a systematic search.
Your grade for this project will be based on the quality of your final solution, as well as on making adequate progress on intermediate checkpoints.
For the first two checkpoints you must:
The text documents should be named README1.md, README2.md, etc., and should be stored in a doc folder in your package.
There are no specific requirements for the functionality that should be finished for the first two checkpoints. However, for full credit, there must be clear progress in functionality from one checkpoint to the next. Also, the code submitted for each checkpoint must represent a complete, executable application. I should be able to run your code after each submission and evaluate the level of functionality. The following checkpoint schedule has an example of the kind of thing I have in mind:
Overall final project grades will be calculated as follows:
| Checkpoint 1 | 15% |
| Checkpoint 2 | 20% |
| Peer Evaluation* | 15% |
| Final Functionality and Code Quality | 40% |
| Competition Score | 10% |
[1] Galceran, Enric, and Marc Carreras. "A survey on coverage path planning for robotics." Robotics and Autonomous Systems 61.12 (2013): 1258-1276.
[2] Garrido-Jurado, Sergio, et al. “Automatic generation and detection of highly reliable fiducial markers under occlusion.” Pattern Recognition 47.6 (2014): 2280-2292.
* We reserve the right to increase the weight of this factor if there is strong evidence that a group member has not made a good-faith effort to contribute to the project.