Reinforcement Learning in Topology-based Representation for Human Body Movement with Whole Arm Manipulation

Abstract

Moving a human body or a large, bulky object may require the strength of whole-arm manipulation (WAM). This type of manipulation places the load on the robot's arms and relies for success on global properties of the interaction rather than on local contacts such as grasping or non-prehensile pushing. In this paper, we learn to generate motions that enable WAM for holding and transporting humans in rescue or patient care scenarios. We model the task as a reinforcement learning problem to obtain a robot behavior that can directly respond to external perturbations and human motion. For this, we represent global properties of the robot-human interaction with topology-based coordinates that are computed from arm and torso positions. These coordinates also allow transferring the learned policy to other body shapes and sizes. For training and evaluation, we simulate a dynamic sea rescue scenario and show in quantitative experiments that the policy can solve unseen scenarios with differently shaped humans, floating humans, or perception noise. Our qualitative experiments show that transporting the human after holding is achieved, and we demonstrate that the policy can be transferred directly to a real-world setting.
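The abstract does not spell out how the topology-based coordinates are computed, but coordinates of this kind are typically derived from the Gauss linking integral (writhe) between the robot's arm chain and the human's body segments. The sketch below is a minimal, hypothetical illustration of such a writhe-matrix computation; the function name `writhe_matrix` and the chain inputs are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def writhe_matrix(chain_a: np.ndarray, chain_b: np.ndarray) -> np.ndarray:
    """Approximate the Gauss linking integral contribution of every
    segment pair between two open polygonal chains.

    chain_a: (N+1, 3) joint positions, e.g. the robot arm links (assumed input).
    chain_b: (M+1, 3) joint positions, e.g. the human torso/arm (assumed input).
    Returns an (N, M) matrix whose sum approximates the writhe between the chains.
    """
    seg_a = np.diff(chain_a, axis=0)             # (N, 3) segment vectors
    seg_b = np.diff(chain_b, axis=0)             # (M, 3)
    mid_a = 0.5 * (chain_a[:-1] + chain_a[1:])   # (N, 3) segment midpoints
    mid_b = 0.5 * (chain_b[:-1] + chain_b[1:])   # (M, 3)

    # Pairwise midpoint differences and segment cross products
    r = mid_a[:, None, :] - mid_b[None, :, :]                 # (N, M, 3)
    cross = np.cross(seg_a[:, None, :], seg_b[None, :, :])    # (N, M, 3)
    dist3 = np.linalg.norm(r, axis=-1) ** 3 + 1e-9            # avoid division by zero

    # Discretized Gauss integrand per segment pair
    return np.einsum('nmk,nmk->nm', cross, r) / (4.0 * np.pi * dist3)
```

Summing the entries of this matrix gives a scalar measure of how much one chain wraps around the other, which is the kind of compact, shape-invariant observation that can feed a reinforcement learning policy and transfer across body shapes and sizes.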

Publication
IEEE-RAS International Conference on Robotics and Automation