CADDY Underwater Gestures Dataset
10K stereo pair images collected in 8 different scenarios
Diver sign language (CADDIAN) to communicate with Autonomous Underwater Vehicles (AUVs)
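The gesture data amounts to stereo image pairs annotated with CADDIAN gesture labels. As a minimal sketch of one way to iterate such a layout, assume a hypothetical annotations CSV with `left`, `right`, and `label` columns (the file names and column names here are illustrative, not the dataset's actual schema):

```python
import csv
import io

# Hypothetical annotations file: one row per stereo pair.
# The real dataset's layout may differ; adjust paths/columns accordingly.
ANNOTATIONS = io.StringIO(
    "left,right,label\n"
    "scen1/left/0001.jpg,scen1/right/0001.jpg,start_comm\n"
    "scen1/left/0002.jpg,scen1/right/0002.jpg,end_comm\n"
)

def iter_pairs(fh):
    """Yield (left_path, right_path, gesture_label) tuples from a CSV stream."""
    for row in csv.DictReader(fh):
        yield row["left"], row["right"], row["label"]

pairs = list(iter_pairs(ANNOTATIONS))
```

Each tuple can then be fed to an image loader of choice to obtain the actual left/right frames.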
CADDY Underwater Stereo-Vision Dataset
Human-Robot Interaction (HRI) for diver and AUV activities
This is an open-access dataset distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A. Gomez Chavez, A. Ranieri, D. Chiarella, E. Zereik, A. Babić, and A. Birk, "CADDY underwater stereo-vision dataset for human–robot interaction (HRI) in the context of diver activities," Journal of Marine Science and Engineering, vol. 7, no. 1, 2019.
D. Chiarella, M. Bibuli, G. Bruzzone, M. Caccia, A. Ranieri, E. Zereik, L. Marconi, and P. Cutugno, "A novel gesture-based language for underwater human–robot interaction," Journal of Marine Science and Engineering, vol. 6, no. 3, Special Issue "Intelligent Marine Robotics Modelling, Simulation and Applications," 2018.
12K stereo pair images synchronized with diver body-pose measurements
Diver tracked by an AUV using a stereo camera and a suit of IMUs (DiverNet)
Quick links to software for underwater camera calibration and for parsing the datasets
Links to the main project website, project results, and the gallery from the final review meeting
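Calibration software of the kind linked here typically estimates pinhole intrinsics plus lens distortion for each camera. As a reference for what those parameters mean, here is a dependency-free sketch that projects a 3D camera-frame point through a pinhole model with two radial distortion coefficients (all parameter values are illustrative and not taken from the dataset's actual calibration files):

```python
def project(point, fx, fy, cx, cy, k1=0.0, k2=0.0):
    """Project a 3D point (camera frame, Z > 0) to pixel coordinates
    using a pinhole model with Brown radial distortion terms k1, k2."""
    X, Y, Z = point
    x, y = X / Z, Y / Z                 # normalized image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2   # radial distortion factor
    u = fx * x * d + cx
    v = fy * y * d + cy
    return u, v

# With zero distortion, a point on the optical axis maps to the principal point.
print(project((0.0, 0.0, 2.0), fx=800, fy=800, cx=320, cy=240))  # (320.0, 240.0)
```

Underwater housings add refraction effects on top of this model, which is one reason dedicated underwater calibration tooling exists.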