  1. Image Processing-Based Object Recognition and Manipulation with a 5-DOF Smart Robotic Arm through a Smartphone Interface Using Human Intent Sensing Haiming Gang hg1169@nyu.edu

  2. Background • Some manipulators work by following fixed, built-in commands • Some manipulators work by following human commands • They need many sensors or markers • Low intelligence • Difficult to take part in people's daily lives

  3. Solution • Adjustable manipulator: adjusts itself to varying requirements and environments • Easy-to-use manipulator: fewer sensors and markers • Smart manipulator • Human partner: helps people finish tasks based on data about their daily activities

  4. System diagram

  5. Structure of the robot arm: finger, wrist joint, elbow joint, shoulder lift joint, shoulder pan joint, base

  6. How it works 1. The mobile phone or tablet receives image data from the camera for the target object 2. The mobile phone or tablet processes the image frame by frame 3. The mobile phone or tablet sends the position information of the target object and the robot 4. The Raspberry Pi solves the inverse kinematics equations to obtain the joint angles and sends them to the Arbotix-M microcontroller (a minimal IK sketch follows below) 5. The Arbotix-M converts the joint angles to electrical signals and drives the robot to pick up the target object 6. The robot places the target object at a fixed, preset position
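
The slides do not give the kinematic details of step 4, but it amounts to a standard inverse-kinematics computation. Below is a minimal Python sketch, not the authors' solver: the shoulder pan joint aims the arm at the target, and a planar two-link solution gives the shoulder-lift and elbow angles. The link lengths, frame convention, and elbow configuration are illustrative assumptions.

    import math

    def inverse_kinematics(x, y, z, L1=0.15, L2=0.15):
        """Return (pan, shoulder, elbow) in radians for a target (x, y, z), in metres,
        expressed in the robot base frame (z up). L1/L2 are assumed link lengths."""
        pan = math.atan2(y, x)                    # rotate the base toward the target
        r = math.hypot(x, y)                      # horizontal reach in the arm plane
        d2 = r * r + z * z                        # squared shoulder-to-target distance
        # Law of cosines for the elbow angle; clamp for numerical safety.
        cos_elbow = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
        elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
        # Shoulder lift = angle to the target minus the offset caused by the bent elbow.
        shoulder = math.atan2(z, r) - math.atan2(L2 * math.sin(elbow),
                                                 L1 + L2 * math.cos(elbow))
        return pan, shoulder, elbow

    print(inverse_kinematics(0.20, 0.05, 0.10))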

  7. Mobile phone or tablet receives image data from the camera for the target object • The camera connects to the Raspberry Pi over a USB cable and is normally on • The Raspberry Pi streams image frames to the mobile phone via mjpg-streamer (wireless transmission; see the sketch below) • The object is chosen by tapping a button in the interface, or the phone chooses a preset object automatically based on the user's walking pace (step-count algorithm on the iPhone 5, motion coprocessor on newer models)
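
For illustration, a minimal frame-grabbing sketch is shown below, assuming mjpg-streamer is serving its default HTTP endpoint on the Raspberry Pi and that the receiver uses OpenCV; the IP address and port are placeholders (the actual receiver in the project is an iOS app).

    import cv2

    # mjpg-streamer's default output_http endpoint; address and port are placeholders.
    STREAM_URL = "http://192.168.1.10:8080/?action=stream"

    cap = cv2.VideoCapture(STREAM_URL)        # requires an OpenCV build that can open MJPEG over HTTP
    while cap.isOpened():
        ok, frame = cap.read()                # one JPEG frame from the wireless stream
        if not ok:
            break
        # ... hand the frame to the recognition stage described on the next slides ...
        cv2.imshow("stream", frame)
        if cv2.waitKey(1) == 27:              # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()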

  8. Mobile phone or tablet processes the image frame by frame • Object recognition with a Haar feature classifier at 10 FPS (sketched below) • Position information between the target object and the manipulator is obtained at 1 FPS • The position information of the mark can be saved on the phone, so it does not have to be recomputed on every use. Target objects: box, beer, toothpaste, pump, cup
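
A minimal sketch of the per-frame detection step with OpenCV's Haar cascade classifier is shown below; the cascade file name and the detection parameters (scaleFactor, minNeighbors) are assumptions, not the authors' settings.

    import cv2

    cascade = cv2.CascadeClassifier("cup_cascade.xml")    # trained cascade (placeholder name)

    def detect(frame):
        """Run the Haar classifier on one frame and return the bounding boxes."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in boxes:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        return boxes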

  9. Haar cascade Initially, the algorithm needs many positive images (images of faces) and negative images (images without faces) to train the classifier. Features then need to be extracted from them. For this, the Haar features shown in the image below are used; they are just like convolutional kernels. Each feature is a single value obtained by subtracting the sum of pixels under the white rectangle from the sum of pixels under the black rectangle.
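
The rectangle sums can be computed in constant time with an integral image; the sketch below illustrates one two-rectangle Haar feature exactly as the slide describes it (sum under the black rectangle minus sum under the white rectangle). The 24x24 window and the rectangle placement are arbitrary.

    import cv2
    import numpy as np

    def rect_sum(ii, x, y, w, h):
        """Sum of pixels in rectangle (x, y, w, h) using the integral image ii."""
        return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

    gray = np.random.randint(0, 256, (24, 24), dtype=np.uint8)   # stand-in detection window
    ii = cv2.integral(gray)                                      # (25, 25) integral image

    # Two-rectangle edge feature: left half "white", right half "black".
    white = rect_sum(ii, 0, 0, 12, 24)
    black = rect_sum(ii, 12, 0, 12, 24)
    feature_value = int(black) - int(white)
    print(feature_value)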

  10. For example, consider the image below. The top row shows two good features. The first selected feature seems to focus on the property that the eye region is often darker than the nose and cheek region. The second selected feature relies on the property that the eyes are darker than the bridge of the nose. A Haar feature thus reflects changes in the gray-scale values of the image.

  11. Haar cascade classifier training data (positive / negative images): Box 700 / 3500, Beer 550 / 2750, Toothpaste 700 / 3500, Pump 700 / 3500, Cup 450 / 2250
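
The slides only list the sample counts; below is a hedged sketch of how such counts might feed into OpenCV's cascade trainer, invoked here from Python. The stage count, window size, and file names are assumptions, and the positive samples are presumed to have been packed into a .vec file with opencv_createsamples beforehand.

    import subprocess

    # Train the "box" cascade with the counts from the table above.
    subprocess.run([
        "opencv_traincascade",
        "-data", "cascade_box",          # output directory for the trained stages
        "-vec", "box_positives.vec",     # packed positive samples (placeholder name)
        "-bg", "negatives.txt",          # list of negative/background images (placeholder name)
        "-numPos", "700",                # in practice often set slightly below the total positives
        "-numNeg", "3500",
        "-numStages", "15",              # assumed stage count; not stated in the slides
        "-featureType", "HAAR",
        "-w", "24", "-h", "24",          # assumed training window size
    ], check=True)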

  12. Distinguishing conditions on the B, G, R channel values: Box: B > G > R; Beer: G > B, G > R, G < 40 and B > 30; Toothpaste box: R > G > B; Pump: R > G, R > B and R < 27; Cup: R > B > G
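
A sketch of these rules in Python is shown below, applied to the mean B, G, R values inside a detected bounding box; using the region mean (rather than individual pixels) is an assumption, since the slide only states the inequalities.

    import cv2

    def classify_colour(frame, box):
        """Apply the slide's channel inequalities to the mean colour of a detection."""
        x, y, w, h = box
        b, g, r = cv2.mean(frame[y:y + h, x:x + w])[:3]   # frame is in OpenCV's BGR order
        if g > b and g > r and g < 40 and b > 30:
            return "beer"
        if r > g and r > b and r < 27:
            return "pump"           # checked before the toothpaste rule because of its R < 27 bound
        if b > g > r:
            return "box"
        if r > g > b:
            return "toothpaste"
        if r > b > g:
            return "cup"
        return "unknown"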

  13. Obtain position information 1. Features2D + homography to find the mark (sketched below) 2. The HoughLines function to find the lines' equations, which are used to compute the positions of the intersection points 3. An affine transformation to get the bird's-eye view and compute the pixel distance between the mark and the object
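
Step 1 can be illustrated with OpenCV's Features2D API as below; ORB is used here as the detector, and the mark image name is a placeholder, since the slides do not say which detector or reference image the authors used.

    import cv2
    import numpy as np

    mark = cv2.imread("mark.png", cv2.IMREAD_GRAYSCALE)   # reference image of the mark (placeholder)

    def find_mark(frame_gray):
        """Locate the mark in a grayscale frame and return its four projected corners."""
        orb = cv2.ORB_create()
        kp1, des1 = orb.detectAndCompute(mark, None)
        kp2, des2 = orb.detectAndCompute(frame_gray, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:30]
        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        # Project the mark's corners into the camera frame to get its outline.
        h, w = mark.shape
        corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
        return cv2.perspectiveTransform(corners, H)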

  14. Affine transformation • Perspective transform • Distance measurement • Fixed, known object height. Figure: bird's-eye view showing the bottom of the object, the robot, and the mark in the global (x, y) coordinate frame
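
A sketch of the bird's-eye-view measurement is shown below: the four mark corners found above are mapped to a square of known size, and the object's image point is mapped into the same top-down coordinates so the pixel distance converts to metres. The 200-pixel square and the 5 cm mark size are illustrative assumptions.

    import cv2
    import numpy as np

    def birds_eye_distance(frame, mark_corners, obj_px, mark_size_m=0.05):
        """Return (dx, dy) in metres from the mark's corner to obj_px, plus the warped view."""
        side = 200                                            # warped mark edge length, in pixels
        dst = np.float32([[0, 0], [side, 0], [side, side], [0, side]])
        M = cv2.getPerspectiveTransform(np.float32(mark_corners).reshape(4, 2), dst)
        top_down = cv2.warpPerspective(frame, M, (4 * side, 4 * side))   # bird's-eye view, for inspection
        # Map the object's image point into the same top-down coordinates.
        p = cv2.perspectiveTransform(np.float32([[obj_px]]), M)[0][0]
        m_per_px = mark_size_m / side                         # metres per pixel in the warped view
        return float(p[0] * m_per_px), float(p[1] * m_per_px), top_down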

  15. The position information is published to the robot by the iOS device

  16. Improvements • Faster image processing • A more accurate recognition algorithm • More functions combined with the mobile phone: a) set an alarm to pick up a preset object b) use people's habits to determine different target objects at different times

  17. Demo

  18. Thank you! Questions?
