ABSTRACT
In the maritime environment, reliable diver-robot communication is a bottleneck for collaborative work underwater. Many applications, such as companion or safety robots, require a robust underwater communication system. In this paper, we propose a four-step vision-based underwater hand gesture recognition system to address this need. First, the input images are enhanced to compensate for the frequency-dependent loss underwater and made scale and rotation invariant, which allows the arm region to be segmented and extracted. In the second step, a wrist line normalization is performed to extract the visible hand region. Then, hand posture detection is performed; for this purpose, a new underwater posture dataset is created. Two detection algorithms are compared: a convex hull defect-based approach achieves 92.22% accuracy, and a finger segmentation-based algorithm achieves 94.81%. In the last step, dynamic hand gesture recognition is implemented. The presented system is based on the creation of generalizable posture parameters and can be adapted to any given posture. This makes the system easy to customize and thus enables a wide range of new applications for human-robot interaction underwater.
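To illustrate the general idea behind the convex hull defect-based posture detection mentioned above, the following is a minimal OpenCV-based sketch, not the implementation used in this work: it counts extended fingers in a binary hand mask by looking for deep convexity defects, and the area-relative depth threshold is purely an illustrative assumption.

```python
import cv2
import numpy as np

def count_extended_fingers(mask: np.ndarray) -> int:
    """Estimate the number of extended fingers in a binary hand mask
    by counting deep convexity defects (valleys between fingertips)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)        # largest blob assumed to be the hand
    hull = cv2.convexHull(hand, returnPoints=False)  # hull as indices into the contour
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    deep = 0
    for start, end, far, depth in defects[:, 0]:
        # depth is given in 1/256 pixel units; keep only defects that are deep
        # relative to the hand size (the 0.1 factor is an assumed threshold)
        if depth / 256.0 > 0.1 * np.sqrt(cv2.contourArea(hand)):
            deep += 1
    # n deep valleys between fingers correspond roughly to n + 1 extended fingers
    return deep + 1 if deep > 0 else 0
```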