Occlusion-Aware Reconstruction and Manipulation
of 3D Articulated Objects
Xiaoxia Huang, Ian Walker, and Stan Birchfield
Abstract
We present a method to recover complete 3D models of articulated objects. Structure-from-motion techniques are used to capture 3D point cloud models of the object in two different configurations. A novel combination of Procrustes analysis and RANSAC facilitates a straightforward geometric approach to recovering the joint axes, as well as classifying them automatically as either revolute or prismatic. With the resulting articulated model, a robotic system is able to manipulate the object along its joint axes at a specified grasp point in order to exercise its degrees of freedom. Because the models capture all sides of the object, they are occlusion-aware, enabling the robotic system to plan paths to parts of the object that are not visible in the current view. Our algorithm does not require prior knowledge of the object, nor does it make any assumptions about the planarity of the object or scene. Experiments with a PUMA 500 robotic arm demonstrate the effectiveness of the approach on a variety of objects with both revolute and prismatic joints.
Algorithm
The proposed system automatically learns the properties of a priori unknown articulated objects in unstructured environments in order to facilitate further manipulation of those objects. Figure 1 shows an overview of our system. First, a camera captures a set of images of the object from different viewpoints while the object remains stationary. Structure-from-motion techniques are applied to the imagery to build a 3D model of the object. To learn the object's kinematic structure, the configuration of the object is then interactively changed by exercising its degrees of freedom.
Fig. 1. Overview of the system.
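To make the reconstruction step concrete, the following is a minimal two-view structure-from-motion sketch in Python with OpenCV. It is illustrative only: the actual system uses many views, and the calibrated intrinsics matrix K, the matching ratio, and the RANSAC thresholds below are assumed inputs, not values from the paper.

```python
# Minimal two-view structure-from-motion sketch (illustrative only).
# Assumes a calibrated camera with known intrinsics K and two images
# of the stationary object from different viewpoints.
import cv2
import numpy as np

def two_view_reconstruction(img1, img2, K):
    # Detect and match SIFT features between the two views.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Estimate the relative camera pose from the essential matrix.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate inlier correspondences into a 3D point cloud.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inl = mask.ravel() > 0
    X = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
    return (X[:3] / X[3]).T   # N x 3 points, up to scale
```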
Additional images are gathered of the object in the new configuration, and structure-from-motion is again applied to obtain a second 3D reconstruction. These two 3D models are segmented into the object's constituent components (rigid links) using Procrustes analysis with a RANSAC sampling strategy. From this information, the axis of each joint between neighboring links is found using a geometric method based on an axis-angle representation. With these models, the robot, equipped with an eye-in-hand camera, can automatically compute the transformation between the object and robot coordinate systems, enabling it to manipulate the object about the articulation axis at a given grasp point. Because the model is complete, the robot can also interact with occluded, unseen parts of the object, as shown in Figure 2.
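The segmentation step can be sketched as follows. This is a minimal illustration, assuming the two reconstructions have already been put into point-to-point correspondence (arrays A and B below); the sample size, iteration count, and inlier tolerance are assumptions, not the paper's settings.

```python
import numpy as np

def procrustes_rigid(A, B):
    """Least-squares rigid transform (R, t) mapping A onto B (Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def ransac_segment_link(A, B, iters=500, tol=0.01):
    """Find the largest set of correspondences moving rigidly together."""
    best = np.zeros(len(A), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        idx = rng.choice(len(A), size=3, replace=False)
        R, t = procrustes_rigid(A[idx], B[idx])
        err = np.linalg.norm((A @ R.T + t) - B, axis=1)
        inliers = err < tol
        if inliers.sum() > best.sum():
            best = inliers
    # Refit on all inliers for the final link motion.
    R, t = procrustes_rigid(A[best], B[best])
    return R, t, best

def segment_links(A, B, min_points=20):
    """Peel off the largest rigid group, then repeat on the remainder."""
    remaining = np.arange(len(A))
    links = []
    while len(remaining) >= min_points:
        R, t, inl = ransac_segment_link(A[remaining], B[remaining])
        if inl.sum() < min_points:
            break
        links.append((R, t, remaining[inl]))
        remaining = remaining[~inl]
    return links
```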
Fig. 2. The PUMA 500 robotic arm manipulates a toy truck.
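Given the relative motion (R, t) of one link expressed in the frame of its neighbor, the joint axis and type follow from the axis-angle representation. A minimal sketch, with an assumed angle threshold for distinguishing the two joint types:

```python
import numpy as np

def classify_joint(R, t, angle_tol=np.deg2rad(5.0)):
    """Classify a joint as revolute or prismatic from the relative
    rigid motion (R, t) of a link between two configurations."""
    # Rotation angle from the axis-angle representation of R.
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

    if angle < angle_tol:
        # Negligible rotation: prismatic joint. The axis is the
        # unit direction of the translation.
        return "prismatic", t / np.linalg.norm(t), None

    # Revolute joint: the axis direction comes from the
    # skew-symmetric part of R, since R - R^T = 2 sin(angle) [w]_x.
    # (Angles near pi would need a more careful extraction.)
    w = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(angle))

    # A point p on the axis is fixed by the motion: (I - R) p = t.
    # The system is rank-deficient along the axis, so solve it in
    # the least-squares sense.
    p, *_ = np.linalg.lstsq(np.eye(3) - R, t, rcond=None)
    return "revolute", w, p
```

Solving (I - R)p = t in the least-squares sense returns the axis point closest to the origin, which together with the direction w fully specifies the revolute axis; the recovered angle also tells the robot how far the joint was exercised.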
Results
We evaluated the performance of our approach on a variety of objects:
Fig. 3. Axis estimation of the toy dump truck and the Barrett robot hand, each with one revolute joint. The red line is the estimated 3D axis.
Fig. 4. Axis estimation of the drawer with a prismatic joint and the scraper truck with two revolute joints. Red lines are estimated 3D axes.
Videos
A video describing the system is available here (AVI, 4.8 MB, XVID codec).
Publication
X. Huang, I. Walker, and S. Birchfield, "Occlusion-Aware Reconstruction and Manipulation of 3D Articulated Objects," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), St. Paul, Minnesota, May 2012.
Acknowledgement
This work was supported by NSF grant IIS-1017007.