Using Geometry to Detect Grasp Poses in 3D Point Clouds (ten Pas, Platt) - PowerPoint PPT Presentation



SLIDE 1

Using Geometry to Detect Grasp Poses in 3D Point Clouds

ten Pas, Platt
Northeastern University
September 15, 2015

Helping Hands Lab

SLIDE 2

Objective

Three possibilities:
– Instance-level grasping
– Category-level grasping
– Novel object grasping

SLIDE 3

Objective

Three possibilities:
– Instance-level grasping
– Category-level grasping
– Novel object grasping

The robot has a detailed description of the object to be grasped.

SLIDE 4

Objective

Grasp the banana

Three possibilities:
– Instance-level grasping
– Category-level grasping
– Novel object grasping

The robot has general information about the object to be grasped.

SLIDE 5

Objective

Grasp the thing in the box

The robot has no information about the object to be grasped.

Three possibilities:
– Instance-level grasping
– Category-level grasping
– Novel object grasping

SLIDE 6

Objective

“Easier” vs. “Harder”

Three possibilities:
– Instance-level grasping
– Category-level grasping
– Novel object grasping

SLIDE 7

Objective

Most research assumes this

“Easier” vs. “Harder”

Three possibilities:
– Instance-level grasping
– Category-level grasping
– Novel object grasping

SLIDE 8

Objective

Our focus:

  • 1. Grasping novel or partially known objects
  • 2. Robustness in clutter

Three possibilities:
– Instance-level grasping
– Category-level grasping
– Novel object grasping

Related Work:

  • 1. Fischinger and Vincze. Empty the basket - a shape based learning approach for grasping piles of unknown objects. IROS 2012.
  • 2. Fischinger et al. Learning grasps for unknown objects in cluttered scenes. IROS 2013.
  • 3. Jiang et al. Efficient grasping from rgbd images: Learning using a new rectangle representation. IROS 2011.
  • 4. Klingbeil et al. Grasping with application to an autonomous checkout robot. IROS 2011.
  • 5. Lenz et al. Deep learning for detecting robotic grasps. RSS 2013.
SLIDE 9

Differences to Prior Work

– Localizing 6-DOF poses instead of 3-DOF grasps
– Point clouds obtained from multiple range sensors instead of a single RGBD image
– Systematic evaluation in clutter
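The jump from 3-DOF to 6-DOF means each detection carries a full position and orientation for the hand, not just an image-plane rectangle. A minimal sketch of packing such a pose into a homogeneous transform (the axis convention and function name are illustrative assumptions, not the paper's):

```python
import numpy as np

def hand_pose(position, approach, closing):
    """Pack a 6-DOF hand pose into a 4x4 homogeneous transform.

    position: hand position in the base frame (3-vector).
    approach: unit vector the hand approaches along.
    closing:  unit vector the fingers close along.
    (Axis conventions here are illustrative, not the paper's.)
    """
    approach = np.asarray(approach, dtype=float)
    closing = np.asarray(closing, dtype=float)
    normal = np.cross(approach, closing)  # completes a right-handed frame
    T = np.eye(4)
    T[:3, 0] = approach
    T[:3, 1] = closing
    T[:3, 2] = normal
    T[:3, 3] = position
    return T

# Hand 20 cm above the table, approaching straight down, closing along y.
pose = hand_pose([0.4, 0.0, 0.2], [0, 0, -1], [0, 1, 0])
```

A 3-DOF grasp, by contrast, would only fix two translations and one in-plane rotation.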

SLIDE 10

Novel Object Grasping

SLIDE 11

Novel Object Grasping

Input: a point cloud
Output: hand poses where a grasp is feasible.

SLIDE 12

Novel Object Grasping

Input: a point cloud
Output: hand poses where a grasp is feasible.

Each blue line represents a full 6-DOF hand pose

SLIDE 13

Novel Object Grasping

Input: a point cloud
Output: hand poses where a grasp is feasible.

– don't use any information about object identity

Each blue line represents a full 6-DOF hand pose

SLIDE 14

what was there vs. what the robot saw

Why Novel Object Grasping is Hard

SLIDE 15

Why Novel Object Grasping is Hard

SLIDE 16

what was there vs. what the robot saw (monocular depth) vs. what the robot saw (stereo depth)

Why Novel Object Grasping is Hard

SLIDE 17
Our Algorithm has Three Steps

  • 1. Hypothesis generation
  • 2. Classification
  • 3. Outlier removal
SLIDE 18
Our Algorithm has Three Steps

  • 1. Hypothesis generation
  • 2. Classification
  • 3. Outlier removal
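The three steps above can be sketched as a pipeline. Every function body below is a placeholder assumption for illustration only; the real system operates on dense point clouds with a learned classifier, not these toy rules:

```python
# Illustrative three-step pipeline; here a "cloud" is just a list of
# (x, y, z) points rather than a real sensor point cloud.

def generate_hypotheses(cloud):
    # Step 1: sample candidate hand poses near the cloud (placeholder:
    # one hypothesis per point, approaching straight down).
    return [{"position": p, "approach": (0, 0, -1)} for p in cloud]

def classify(hypotheses):
    # Step 2: keep hypotheses the classifier scores as graspable
    # (placeholder rule: accept points below a height threshold).
    return [h for h in hypotheses if h["position"][2] < 0.5]

def remove_outliers(grasps):
    # Step 3: discard isolated detections (placeholder: require at
    # least one other grasp within 5 cm along x).
    keep = []
    for g in grasps:
        near = sum(abs(g["position"][0] - o["position"][0]) < 0.05
                   for o in grasps) - 1  # subtract self-match
        if near >= 1:
            keep.append(g)
    return keep

cloud = [(0.10, 0.0, 0.2), (0.12, 0.0, 0.2), (0.90, 0.0, 0.9)]
grasps = remove_outliers(classify(generate_hypotheses(cloud)))
```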
SLIDE 19

Step 2: Grasp Classification

We want to check each hypothesis to see if it is an antipodal grasp

SLIDE 20

If we had a “perfect” point cloud...

… then we could check geometric sufficient conditions for a grasp.

We would check whether an antipodal grasp would be formed when the fingers close
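One common form of that geometric condition: two contact points whose surface normals oppose each other along the line between them, within the friction cone. A simplified numeric sketch, not necessarily the paper's exact test:

```python
import math

def is_antipodal(p1, n1, p2, n2, friction_deg=10.0):
    """Check a simplified antipodal condition for two finger contacts.

    p1, p2: contact points; n1, n2: unit outward surface normals.
    Each normal must lie within `friction_deg` degrees of the line
    joining the contacts, pointing in opposite directions along it.
    Illustrative only; the real condition is over full point clouds.
    """
    line = [b - a for a, b in zip(p1, p2)]          # p1 -> p2
    norm = math.sqrt(sum(c * c for c in line))
    line = [c / norm for c in line]
    cos_t = math.cos(math.radians(friction_deg))
    d1 = sum(a * b for a, b in zip(n1, line))       # n1 should oppose the line
    d2 = sum(a * b for a, b in zip(n2, line))       # n2 should follow the line
    return d1 <= -cos_t and d2 >= cos_t

# Opposite faces of a 5 cm box satisfy the condition.
is_antipodal((0, 0, 0), (-1, 0, 0), (0.05, 0, 0), (1, 0, 0))
```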

SLIDE 21

If we had a “perfect” point cloud...

But, this is closer to reality...

Missing these points!

So, how do we check for a grasp now?

SLIDE 22

If we had a “perfect” point cloud...

But, this is closer to reality...

Missing these points!

So, how do we check for a grasp now?

Machine Learning (i.e. classification)

SLIDE 23

Classification

We need two things:

  • 1. Learning algorithm + feature representation
  • 2. Training data
SLIDE 24

Classification

We need two things:

  • 1. Learning algorithm + feature representation

– SVM + HOG
– CNN

  • 2. Training data
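HOG features are built from histograms of gradient orientations over an image patch; the classifier then scores each grasp hypothesis from such features. An illustrative orientation histogram, the core ingredient of HOG (real HOG adds cells, blocks, and block normalization, e.g. `skimage.feature.hog`):

```python
import numpy as np

def orientation_histogram(img, bins=9):
    """Magnitude-weighted histogram of gradient orientations.

    Illustrative building block of HOG; not the paper's feature code.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    return hist / (hist.sum() + 1e-9)             # L1-normalize

# Toy "depth image" with a vertical edge -> horizontal gradients,
# so all the weight lands in the 0-degree bin.
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0
feat = orientation_histogram(patch)
```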
SLIDE 25

Classification

We need two things:

  • 1. Learning algorithm + feature representation

– SVM + HOG
– CNN

  • 2. Training data

– automatically extract training data from arbitrary point clouds containing graspable objects
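The key idea is that labels come from geometry rather than human annotation: where the cloud is good enough to evaluate the grasp condition directly, each hypothesis can be labeled positive or negative automatically. A hedged sketch of that labeling step (the contact model and threshold are illustrative assumptions, not the paper's):

```python
import math

def label_hypothesis(contacts, friction_deg=10.0):
    """Auto-label a grasp hypothesis from geometry (illustrative).

    contacts: list of (point, outward_normal) pairs the fingers would
    touch on closing. Label positive (1) if any pair satisfies a
    simplified antipodal condition, else negative (0).
    """
    cos_t = math.cos(math.radians(friction_deg))
    for i, (p1, n1) in enumerate(contacts):
        for p2, n2 in contacts[i + 1:]:
            line = [b - a for a, b in zip(p1, p2)]
            norm = math.sqrt(sum(c * c for c in line)) or 1.0
            line = [c / norm for c in line]
            d1 = sum(a * b for a, b in zip(n1, line))
            d2 = sum(a * b for a, b in zip(n2, line))
            if d1 <= -cos_t and d2 >= cos_t:
                return 1  # positive training example
    return 0              # negative training example

# Opposite box faces -> positive; same-facing contacts -> negative.
pos = label_hypothesis([((0, 0, 0), (-1, 0, 0)), ((0.05, 0, 0), (1, 0, 0))])
neg = label_hypothesis([((0, 0, 0), (0, 1, 0)), ((0.05, 0, 0), (0, 1, 0))])
```

Running this over many hypotheses in many clouds yields a labeled training set with no manual effort.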

SLIDE 26

Training Set

SLIDE 27

Training Set

97.8% accuracy (10-fold cross validation)
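10-fold cross validation trains on nine folds and tests on the held-out tenth, rotating through all ten folds and averaging the accuracies. A minimal sketch of the fold split (contiguous folds for simplicity; real evaluations usually shuffle indices first):

```python
def kfold_indices(n, k=10):
    """Split sample indices 0..n-1 into k near-equal folds (sketch)."""
    folds = []
    base, extra = divmod(n, k)
    start = 0
    for i in range(k):
        size = base + (1 if i < extra else 0)  # spread the remainder
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = kfold_indices(100, k=10)
```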

SLIDE 28

Test Set

SLIDE 29

Test Set

94.3% accuracy on novel objects

SLIDE 30

Experiment: Grasping Objects in Isolation

SLIDE 31

Results: Objects Presented in Isolation

SLIDE 32

Experiment: grasping objects in clutter

SLIDE 33

Results: Clutter

73% average grasp success rate in 10-object dense clutter

SLIDE 34

Conclusions

  • New approach to novel object grasping
  • Use grasp geometry to label hypotheses automatically
  • Average grasp success rates:
    – 88% for single objects
    – 73% in dense clutter
SLIDE 35

Questions?

atp@ccs.neu.edu
http://www.ccs.neu.edu/home/atp

ROS packages
– Grasp pose detection: wiki.ros.org/agile_grasp
– Grasp selection: github.com/atenpas/grasp_selection