Localization error, sensor noise, and occlusions can produce an imperfect model of the environment, which can lead to collisions between a robot arm and unobserved obstacles during manipulation. The robot must navigate around these obstacles despite not knowing their shape or location. Without tactile sensors, the robot observes only that a contact occurred somewhere on its surface, a measurement containing very little information. We present the Collision Hypothesis Sets representation for computing a belief over occupancy from these observations, and we introduce a planning and control architecture that uses this representation to navigate through unknown environments. Despite the dearth of information, we demonstrate through experiments that our algorithms can navigate around unseen obstacles and into narrow passages. We test in multiple simulated environments and on a physical robot arm, both with and without the aid of a 2.5D depth sensor. Compared to a baseline representation, Collision Hypothesis Sets produce an approximately 1.5-3x speed-up and improve the success rate from 40-60% to 100% in tested scenarios with narrow passages.
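As a rough illustration of the core idea (this is a toy sketch under our own assumptions, not the paper's implementation), each contact event can be treated as a set of workspace cells the robot's surface occupied when the contact was sensed: any one of those cells could contain the obstacle. Aggregating these hypothesis sets, while excluding cells the robot has already swept through collision-free, yields a crude occupancy belief. All function and variable names below are hypothetical.

```python
# Illustrative sketch only: a "collision hypothesis set" here is the set of
# grid cells the robot's surface touched at the moment a contact was reported.

def contact_hypothesis(robot_surface_cells):
    """When a contact is sensed, any cell the surface occupies could
    contain the obstacle; return them all as one hypothesis set."""
    return set(robot_surface_cells)

def occupancy_belief(hypothesis_sets, free_cells):
    """Weight each cell by how many contact events it could explain.
    Cells verified free by collision-free motion get zero weight."""
    belief = {}
    for hyp in hypothesis_sets:
        candidates = hyp - free_cells
        if not candidates:
            continue  # contact cannot be explained by remaining cells
        w = 1.0 / len(candidates)  # spread unit mass uniformly over the set
        for cell in candidates:
            belief[cell] = belief.get(cell, 0.0) + w
    return belief

# Toy example on a 1-D grid: two contacts whose hypothesis sets overlap
# at cell 2, and cells 3-4 are known free, so cell 2 accumulates the most mass.
h1 = contact_hypothesis([1, 2, 3])
h2 = contact_hypothesis([2, 3, 4])
free = {3, 4}  # cells verified empty by collision-free motion
b = occupancy_belief([h1, h2], free)
```

Here the second contact, combined with the known-free cells, concentrates belief on cell 2, sketching how even a "contact happened somewhere" signal can localize an obstacle once enough evidence accumulates.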
Watch the demo: