Model for Unfolding Laundry using Interactive Perception

Bryan Willimon, Stan Birchfield, and Ian Walker

Abstract

We present an algorithm for automatically unfolding a piece of clothing. A piece of laundry is pulled in different directions at various points of the cloth in order to flatten it. Features of the cloth are extracted to determine a suitable location and orientation at which to interact with it; these features include the peak region, corner locations, and the continuity / discontinuity of the cloth. We present a two-stage algorithm, introducing a novel solution to the unfolding / flattening problem using interactive perception. Simulations in 3D simulation software and experiments with robot hardware demonstrate the ability of the algorithm to flatten pieces of laundry from different starting configurations. In the best case, the algorithm flattened a piece of cloth from 11.1% to 95.6% of its canonical (fully flattened) configuration.

Big Picture

Our laundry unfolding system involves two parts:

(A) First Phase
(B) Second Phase

(A) First Phase The first phase unfolds / flattens the laundry through a series of grasp-and-pull interactions. Each step of the process is numbered along with the orientation used to transform one configuration into another. In each step, the outer edge of the piece of clothing is grasped and pulled away from the center of the object (a minimal sketch of this grasp selection appears below).
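As a rough illustration of this grasp selection, the sketch below picks the outer-contour point farthest from the cloth's centroid in a binary overhead mask and pulls along the outward radial direction. This is a minimal sketch under our own assumptions (the mask input and the farthest-point choice are illustrative, not the paper's implementation):

    import numpy as np
    import cv2

    def first_phase_grasp(mask):
        """Pick an outer-edge grasp point and an outward pull direction.

        mask: binary uint8 overhead image (cloth = 255, table = 0).
        Returns (grasp_point_xy, unit_pull_direction).
        """
        # Outer boundary of the cloth silhouette.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        contour = max(contours, key=cv2.contourArea).reshape(-1, 2)

        # Centroid of the silhouette via image moments.
        m = cv2.moments(mask, binaryImage=True)
        center = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

        # Grasp the outer-edge point farthest from the center and pull
        # radially away from it, as described in the first phase.
        dists = np.linalg.norm(contour - center, axis=1)
        grasp = contour[np.argmax(dists)].astype(float)
        direction = (grasp - center) / dists.max()
        return grasp, direction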

(B) Second Phase The second phase uses depth information to locate possible folds in the cloth and to find grasp points and directions that enable the folds to be removed. Each iteration of this phase involves six steps; minimal sketches of steps 1) through 3) follow the list.

1) Locate peak ridge - In the depth image, larger values correspond to points farther from the table, so the peak is the highest point of the cloth above the table.
2) Find corner locations - To detect corners along the edge of the cloth, we run the Harris corner detector on the binary image that results from thresholding the depth image so that points on the table are zero and points on the cloth are one.
3) Find discontinuities on object - This step slides a window over the depth image to locate large slopes and contour discontinuities, determining which areas of the object are connected.
4) Continuity check of the peak ridge - This step combines steps 1) and 3) to find the contour / region on top of the object.
5) Continuity check for all peak corner locations - This step combines steps 1) and 2) to determine which corners will be used to grasp the article.
6) Cloth model grasp point and direction - This final step determines the grasp point from step 5) and the direction in which to pull / push the article to further flatten the object.
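Steps 1) and 2) map directly onto standard depth-image operations. The sketch below is a minimal Python / OpenCV version, assuming an overhead depth image in which larger values mean greater height above the table; the table threshold and Harris parameters are illustrative guesses, not the paper's settings:

    import numpy as np
    import cv2

    def locate_peak(depth):
        """Step 1: the peak is the pixel with the largest height above the table."""
        row, col = np.unravel_index(np.argmax(depth), depth.shape)
        return (col, row)

    def find_corners(depth, table_eps=2.0, response_frac=0.01):
        """Step 2: Harris corners of the thresholded (table = 0, cloth = 1) image."""
        binary = (depth > table_eps).astype(np.float32)  # table -> 0, cloth -> 1
        response = cv2.cornerHarris(binary, 5, 3, 0.04)  # block size 5, Sobel 3
        ys, xs = np.where(response > response_frac * response.max())
        return list(zip(xs, ys))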
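Step 3)'s sliding-window test can be approximated by flagging windows whose local depth range exceeds a jump threshold; the connected (continuous) areas that steps 4) and 5) intersect with the peak and corners are then the components that remain. The window size and threshold below are assumptions, not the paper's exact criterion:

    import numpy as np
    from scipy import ndimage

    def find_discontinuities(depth, win=5, max_jump=1.5):
        """Step 3: flag pixels whose local window spans a large depth jump."""
        local_range = (ndimage.maximum_filter(depth, size=win) -
                       ndimage.minimum_filter(depth, size=win))
        discontinuity = local_range > max_jump

        # The connected (continuous) areas of the cloth are the components
        # left after removing discontinuity pixels; steps 4) and 5) then
        # intersect these regions with the peak and corner locations.
        labels, num_regions = ndimage.label(~discontinuity)
        return discontinuity, labels, num_regions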

Experimental Results

The proposed approach was applied in the Houdini 3D simulation software to a variety of initial cloth configurations to test its performance under various scenarios. We demonstrate the algorithm's process on a single washcloth as a representative piece of laundry.

Taxonomy of Possible Starting Configurations

  • Three different starting configurations are shown before and after eight iterations of the first phase of the proposed algorithm.
  • The dropped cloth was created by dropping the cloth onto the table from a predefined height.
  • The folded cloth was created by sliding the article across the corner of the table and allowing it to fold on top of itself.
  • The placed cloth was set down on the table from the same position used for the dropped cloth.


Test to Fully Flatten the Cloth

  • This experiment tested whether the proposed algorithm could completely flatten a piece of clothing.
  • The test used both phases of the algorithm, grasping the cloth at various locations and moving it in various orientations until the cloth reached a flattened percentage greater than 95% (a sketch of this percentage computation follows).
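The stopping criterion can be expressed as the ratio of the cloth's current silhouette area to its area in the canonical (fully flattened) configuration. The sketch below assumes boolean overhead masks of each and is our own formulation of that percentage:

    def flattened_percentage(mask, canonical_mask):
        """Ratio of the current silhouette area to the canonical flat area.

        Both inputs are boolean overhead masks (True = cloth). The phases
        repeat until this value exceeds 0.95 (i.e., 95%).
        """
        return mask.sum() / float(canonical_mask.sum())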

Experiment using PUMA 500

  • The goal of this experiment was to test the performance of our algorithm in a real-world environment using a PUMA 500 manipulator.
  • We used a Logitech QuickCam 4000 for an overhead view to capture the configuration of the cloth.

  • The four steps illustrated above show how the robot interacted with the cloth in each iteration (a minimal capture sketch follows).
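A minimal capture-and-segment loop for the overhead webcam might look like the following; the camera index and the simple fixed threshold are assumptions, since the paper's segmentation details are not given here:

    import cv2

    cap = cv2.VideoCapture(0)  # Logitech QuickCam 4000, assumed device index 0
    ok, frame = cap.read()     # one overhead view of the cloth configuration
    cap.release()
    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Assumed fixed threshold separating a light cloth from a darker table.
        _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)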

Publications

Acknowledgements

We wish to acknowledge Brian Peasley for helpful insight and feedback during our research discussions. This research was supported by the U.S. National Science Foundation under grants IIS-1017007, IIS-0844954, and IIS-0904116.