Zeta Rescue!

Introduction

Once again, students in the Engineering Department have meddled with forces they do not understand. In the early morning hours of Dec. 13th, 2023, members of an unauthorized capstone project accidentally released massive quantities of deadly zeta radiation into King Hall on the campus of James Madison University.

An unknown number of students and staff members were in the affected area at the time of the incident. It is assumed that these individuals have been incapacitated by the effects of the zeta radiation. Zeta radiation poisoning progresses through two stages: in the first stage, the victim loses consciousness and transforms into a pumpkin-headed monstrosity. (See Figure 1 below.) The second stage is death. Fortunately, most victims make a full recovery if they receive treatment in time.

Figure 1

A response team is on the way with the necessary equipment to decontaminate the affected areas and treat the victims. Unfortunately, the decontamination process is slow, and the victims don’t have much time. It is crucially important that we determine the location and identity of all victims so that the response team can allocate their resources as efficiently as possible.

The high levels of zeta radiation in the contaminated areas will interfere with any wireless communications. The only hope is to send in fully autonomous robots to find the victims and report back on their positions.

Project Outline

Once your application has been launched, no interaction with the robot will be allowed. The robot will have a limited amount of time to search for victims within the area defined by the map. By the end of the time period the robot must return to its initial location.
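Since the robot must be back at its starting location before time runs out, the search loop needs some way to decide when to abandon exploration and head home. Here is a minimal sketch of that decision; the function name, the safety margin, and the straight-line travel estimate are all assumptions for illustration, not part of the provided code:

```python
def should_return_home(elapsed, time_limit, distance_home, avg_speed, margin=1.2):
    """Return True when the remaining time is no longer safely sufficient
    to drive back to the starting location.

    elapsed and time_limit are in seconds, distance_home in meters, and
    avg_speed in meters per second. The margin factor inflates the
    straight-line travel estimate to allow for obstacles and replanning.
    """
    remaining = time_limit - elapsed
    travel_estimate = margin * distance_home / avg_speed
    return remaining <= travel_estimate
```

In a real node this check would run between navigation goals, using the planner's path length (rather than straight-line distance) for a less optimistic estimate.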

Once the user requests a report by clicking on the appropriate button, the robot must provide location and identity information for as many victims as possible.

You are free to use any existing ROS Packages in your solution.

Provided Code

The following three repositories provide ROS packages that will be necessary or helpful in developing your application:

Competition Steps

A (simulated) round of competition would proceed as follows:

Each of the launch files described above takes additional command-line arguments that will be used to run the competition on the real robots instead of the simulator. You can always see the full list of command line arguments for a launch file by using the --show-args flag.

Competition API

The node(s) that you write for the competition must conform to the following API.

Issues To Consider

Searching

The simplest search strategy is to select random navigation targets until time elapses. The time limit will be short enough that a random strategy will be unlikely to discover all victims. There are many other approaches ranging from simple greedy strategies to complex search algorithms. If you are interested in exploring the literature in this area, A survey on coverage path planning for robotics by Enric Galceran and Marc Carreras [1] would provide a reasonable starting point.
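As a baseline, the random strategy described above can be sketched as sampling goal points uniformly from the map's bounding box. This is a simplified illustration, not the provided API: a real implementation would also reject samples that land inside obstacles on the occupancy grid.

```python
import random

def random_targets(x_min, x_max, y_min, y_max, count, seed=None):
    """Sample candidate navigation goals uniformly within the map's
    bounding box, returned as (x, y) tuples in map coordinates."""
    rng = random.Random(seed)
    return [(rng.uniform(x_min, x_max), rng.uniform(y_min, y_max))
            for _ in range(count)]
```

Even this baseline benefits from minor refinements, such as rejecting targets too close to ones already visited so the robot does not revisit covered ground.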

Identifying Victims

Fortunately, all of the victims were wearing name tags with ArUco augmented reality markers [2] at the time of the accident. The ros2_aruco package provides a ROS2 wrapper for performing ArUco marker identification. You can try it out on the competition markers by starting the competition launch file and then executing the following launch file to start the ArUco detector and an appropriate visualization:

ros2 launch zeta_rescue aruco_demo.launch.py

The marker detection is not perfectly accurate or reliable, and it only works if the victim is observed from the correct direction. You are welcome to use additional strategies for identifying and locating victims.
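Because individual detections are noisy, one simple mitigation is to accumulate every observation and merge those that share a marker id, averaging their estimated positions. The function below is a hypothetical sketch of that idea; the tuple format and names are assumptions, not part of the ros2_aruco interface:

```python
from collections import defaultdict

def merge_detections(detections):
    """Combine repeated, noisy marker detections into one estimated
    position per victim. detections is a list of (marker_id, x, y)
    tuples in map coordinates; returns {marker_id: (x_avg, y_avg)}."""
    grouped = defaultdict(list)
    for marker_id, x, y in detections:
        grouped[marker_id].append((x, y))
    merged = {}
    for marker_id, points in grouped.items():
        n = len(points)
        merged[marker_id] = (sum(p[0] for p in points) / n,
                             sum(p[1] for p in points) / n)
    return merged
```

Averaging in map coordinates only works if each detection has been transformed out of the camera frame first; outlier rejection (e.g. discarding detections far from the cluster median) would make the estimate more robust.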

Real Robots vs. Simulator

The actual competition will not use the simulator. There is no guarantee that solutions that work in simulation will work well (or at all) on the real robots. Make sure to leave time to evaluate and tune your solution in the real world.

Competition

The final project day will be organized as a friendly competition between the project teams. Results will be scored on the basis of the number of victims that each team locates and reports.

The exact scoring rubric will be released as the competition date gets closer. The following factors will be considered:

The scoring rubric will not explicitly reward smart search strategies, but random wandering will probably result in fewer detections than a systematic search.

Deadlines and Grading

Your grade for this project will be based on the quality of your final solution, as well as on making adequate progress on intermediate checkpoints.

For the first two checkpoints you must:

The text documents should be named README1.md, README2.md etc., and should be stored in a doc folder in your package.

There are no specific requirements for the functionality that should be finished for the first two checkpoints. However, for full credit, there must be clear progress in functionality from one checkpoint to the next. Also, the code submitted for each checkpoint must represent a complete, executable application. I should be able to run your code after each submission and evaluate the level of functionality. The following checkpoint schedule has an example of the kind of thing I have in mind:

Overall final project grades will be calculated as follows:

Checkpoint 1: 15%
Checkpoint 2: 20%
Peer Evaluation*: 15%
Final Functionality and Code Quality: 40%
Competition Score: 10%

References

[1] Galceran, Enric, and Marc Carreras. “A survey on coverage path planning for robotics.” Robotics and Autonomous Systems 61.12 (2013): 1258-1276.

[2] Garrido-Jurado, Sergio, et al. “Automatic generation and detection of highly reliable fiducial markers under occlusion.” Pattern Recognition 47.6 (2014): 2280-2292.


* We reserve the right to increase the weight of this factor if there is strong evidence that a group member has not made a good-faith effort to contribute to the project.


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.