Synergies Between Affordance and Geometry: 6-DoF Grasp Detection via Implicit Representations


Zhenyu Jiang (University of Texas at Austin),
Yifeng Zhu (University of Texas at Austin),
Maxwell Svetlik (University of Texas at Austin),
Kuan Fang (Stanford University),
Yuke Zhu (University of Texas at Austin)
Paper #024

Abstract

Grasp detection in clutter requires the robot to reason about the 3D scene from incomplete and noisy perception. In this work, we build on the insight that 3D reconstruction and grasp learning are two intimately connected tasks, both of which require a fine-grained understanding of local geometric detail. We thus propose to exploit the synergies between grasp affordance and 3D reconstruction through multi-task learning of a shared representation. Our model takes advantage of deep implicit functions, a continuous and memory-efficient representation, to enable differentiable training of both tasks. We train the model on self-supervised grasp trials collected in simulation. We evaluate on a clutter removal task, where the robot clears cluttered objects by grasping them one at a time. Experimental results in simulation and on a real robot demonstrate that implicit neural representations and joint learning of grasp affordance and 3D reconstruction lead to state-of-the-art grasping performance. Our method outperforms baselines by over 10% in grasp success rate.
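To make the multi-task idea in the abstract concrete, below is a minimal, hypothetical sketch of such a model: a shared encoder maps the observed scene (assumed here to be a TSDF volume) to a latent feature, and two implicit decoders are queried at continuous 3D points, one predicting grasp affordance and the other occupancy for 3D reconstruction. All module names, dimensions, and the loss setup are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Shared scene encoder (illustrative): TSDF volume -> latent feature."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(64 * 10 * 10 * 10, feat_dim)

    def forward(self, tsdf):                      # tsdf: (B, 1, 40, 40, 40)
        return self.fc(self.conv(tsdf).flatten(1))

class ImplicitDecoder(nn.Module):
    """Deep implicit function: (3D query point, scene feature) -> prediction."""
    def __init__(self, feat_dim=128, out_dim=1):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, points, feat):              # points: (B, N, 3)
        feat = feat.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.mlp(torch.cat([points, feat], dim=-1))

encoder = SharedEncoder()
grasp_head = ImplicitDecoder(out_dim=1)   # grasp affordance per query point
occ_head = ImplicitDecoder(out_dim=1)     # occupancy per query point

# Dummy batch standing in for simulated self-supervised grasp trials.
tsdf = torch.rand(2, 1, 40, 40, 40)
pts = torch.rand(2, 64, 3)                # continuous query locations
grasp_gt = torch.randint(0, 2, (2, 64, 1)).float()
occ_gt = torch.randint(0, 2, (2, 64, 1)).float()

feat = encoder(tsdf)
bce = nn.functional.binary_cross_entropy_with_logits
loss = bce(grasp_head(pts, feat), grasp_gt) + bce(occ_head(pts, feat), occ_gt)
loss.backward()  # both task losses update the shared representation
```

Because both decoders are plain MLPs over continuous query coordinates, the joint objective is end-to-end differentiable and memory usage does not grow with reconstruction resolution, which is the property the abstract attributes to deep implicit functions.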

