Machine Learning Applications for Personal Service Robots
Jun Miura


SLIDE 1

TOYOHASHI UNIVERSITY OF TECHNOLOGY

Machine Learning Applications for Personal Service Robots

Jun Miura

Active Intelligent Systems Laboratory (AISL) Department of Computer Science and Engineering

Toyohashi University of Technology

LAIAR-2018, Baden-Baden, Germany, June 11, 2018

SLIDE 2

Personal Service Robot Projects at AISL

Attend, watch, guide, stay aside; detect potential dangers; measure physical conditions (light, heat, and air); give care to the person; control appliances.

SLIDE 3

Our person-following robots

SLIDE 4

Robotic lifestyle support

• Speech-based control
• Delivery task
• BMI-controlled robot (visual stimuli)
• TOYOTA Human Support Robot

SLIDE 5

Functional Components of a Robot

[Diagram: Recognition and Planning link the robot to an Environment with Human through Attend and Action; robotics as viewed within computer science]

SLIDE 6

Important functions of personal service robots

• Person recognition
– Person detection
– Person identification
– Person state estimation

• Person-aware behavior generation
– Person's awareness estimation
– Person's intention recognition
– Attending behavior generation

• Autonomous navigation
– Localization and path planning

• Object recognition and manipulation
– Specific and general object recognition
– Hand motion generation

SLIDE 7

Person detection and identification

SLIDE 8

People detection

• Shape-based person modeling: 2D LIDAR ×2 (waist and legs) [Koide 2016]; 3D LIDAR [Kidono 2011]
• Image-based person modeling (YOLO)

SLIDE 9

Person identification

• Use of multiple features [Koide 2016]
– Which features are effective cannot be known in advance → adaptive feature selection using an online boosting algorithm

[Figure: clothing features and face features feed people detection, orientation estimation, and identification]
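The adaptive feature selection idea can be sketched as an online weighted-majority scheme, a simplified stand-in for the online boosting algorithm on the slide; the class name, the beta penalty, and the two feature channels are illustrative assumptions, not the paper's implementation:

```python
class OnlineFeatureBooster:
    """Online weighted-majority sketch: each feature channel votes on
    "is this the tracked person?" and its weight adapts to its accuracy."""

    def __init__(self, feature_names, beta=0.7):
        self.weights = {name: 1.0 for name in feature_names}
        self.beta = beta  # multiplicative penalty for a wrong vote (assumed)

    def predict(self, votes):
        """votes: {feature: +1 (target person) or -1 (someone else)}."""
        score = sum(self.weights[f] * v for f, v in votes.items())
        return 1 if score >= 0 else -1

    def update(self, votes, label):
        """Penalize features that voted against the true label."""
        for f, v in votes.items():
            if v != label:
                self.weights[f] *= self.beta
        total = sum(self.weights.values())
        for f in self.weights:  # renormalize to keep weights comparable
            self.weights[f] /= total

booster = OnlineFeatureBooster(["clothing", "face"])
# Simulate frames where the face is unreliable (e.g., person seen from behind).
for _ in range(20):
    booster.update({"clothing": 1, "face": -1}, label=1)
print(booster.weights["clothing"] > booster.weights["face"])  # True
```

After a run of frames where the face cue keeps disagreeing with the identity label, the clothing cue dominates the vote, which is the behavior the slide motivates.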

SLIDE 10

Person state estimation

SLIDE 11

Combined tracking and body orientation estimation [Ardiyanto 2014]

• Image-based orientation estimation (8 orientations)
• Use of motion-orientation consistency for a better estimation

[Figure: image-based (shape) and motion-based orientation estimates are combined; at higher speed the correlation with the motion direction is weighted more, while near standstill the image-based estimate dominates]
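One way to realize this motion-orientation consistency is a speed-dependent blend of the two estimates; the weighting function and the gain `k` below are illustrative assumptions, not the published method:

```python
import math

def fuse_orientation(shape_deg, motion_deg, speed, k=1.0):
    """Blend image(shape)-based and motion-based orientation estimates.
    Assumption (illustrative): the motion direction becomes more reliable
    as walking speed grows, so its weight rises with speed."""
    w = speed / (speed + k)  # 0 at standstill, approaches 1 at high speed
    # Average the two angles on the unit circle to handle wraparound.
    sx = (1 - w) * math.cos(math.radians(shape_deg)) + w * math.cos(math.radians(motion_deg))
    sy = (1 - w) * math.sin(math.radians(shape_deg)) + w * math.sin(math.radians(motion_deg))
    return math.degrees(math.atan2(sy, sx)) % 360.0

print(round(fuse_orientation(0.0, 90.0, speed=0.0)))  # 0  (standing: trust shape)
print(round(fuse_orientation(0.0, 90.0, speed=9.0)))  # 84 (walking fast: trust motion)
```

Averaging on the unit circle rather than on raw degrees avoids the 359°/1° wraparound problem.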

SLIDE 12

• CNN-based body orientation estimation [Kohari 2018]
– Train the network using the SURREAL dataset

One-shot body orientation estimation

Accuracy [%] (±0 deg): 47.7
Accuracy [%] (±10 deg): 89.7
Accuracy [%] (±20 deg): 97.5
Average error [deg]: 6.94
Average time [msec]: 8.73
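Metrics of the form reported above (tolerance-based accuracy and average angular error) can be computed as follows; the toy predictions below are made up for illustration, not the experiment's data:

```python
def angular_error_deg(pred, truth):
    """Smallest absolute difference between two angles in degrees."""
    d = abs(pred - truth) % 360.0
    return min(d, 360.0 - d)

def orientation_metrics(preds, truths, tolerances=(0.0, 10.0, 20.0)):
    """Accuracy within each tolerance [%], plus mean angular error [deg]."""
    errors = [angular_error_deg(p, t) for p, t in zip(preds, truths)]
    acc = {tol: 100.0 * sum(e <= tol for e in errors) / len(errors)
           for tol in tolerances}
    return acc, sum(errors) / len(errors)

# Hypothetical predictions vs. ground-truth orientations.
preds  = [0.0, 95.0, 178.0, 270.0, 350.0]
truths = [0.0, 90.0, 180.0, 250.0, 10.0]
acc, mean_err = orientation_metrics(preds, truths)
print(acc[0.0], acc[10.0], acc[20.0], mean_err)  # 20.0 60.0 100.0 9.4
```

Note the last pair (350° vs. 10°): the wraparound-aware error is 20°, not 340°.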

SLIDE 13

Pose estimation from depth images using a deep neural network [Nishi 2017]

[Figure panels: experimental scene, input depth data, pose estimation result, extracted human region, thermal point cloud]

SLIDE 14

Illumination normalization for face detection and recognition [Dewantara 2016]

• Appearance of the face changes due to illumination changes.
• A fast GA-optimized fuzzy inference is used online.
• A similar face image is obtained under any illumination condition.

[Figure: input and output face images]

SLIDE 15

Person-aware robotic behavior

SLIDE 16

Estimating person’s awareness of an obstacle [Koide 2016]

• Key assumption:

– If a person is not aware of an obstacle, the person acts as if there is no obstacle.

[Diagram: machine learning classifies aware / unaware and obstacle exists / does not exist; visualization: red = aware, green = not aware]

SLIDE 17

Social force guiding model [Dewantara 2016]

SLIDE 18

Q-learning in perceived state

Target state: relative position
Robot state: body and head orientation
Obstacles state: static and dynamic obstacles
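The Q-learning mechanism can be conveyed by a minimal tabular loop; the 1-D toy guiding task, rewards, and hyperparameters below are illustrative assumptions, not the perceived-state formulation on the slide:

```python
import random

random.seed(0)

# Tabular Q-learning on a toy 1-D guiding task (illustrative; the real
# state space on the slide combines target, robot, and obstacle states).
# The robot starts at position 0 and must reach the target at position 4.
ACTIONS = (-1, +1)
Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != 4:
        if random.random() < eps:  # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(4, max(0, s + a))
        r = 1.0 if s2 == 4 else -0.01  # goal reward, small step penalty
        best_next = 0.0 if s2 == 4 else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy moves toward the target from every state.
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(4)))  # True
```

The same update rule applies unchanged when the state is the richer perceived state (relative position, orientations, obstacles) described above; only the state and action sets grow.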

SLIDE 19

Autonomous navigation

SLIDE 20

View-based localization using two-stage SVM [Miura 2008]

[Diagram: a recognizer is constructed for every location. Learning: object recognition results for Locations A, B, ..., X are fed as positive/negative examples to the SVM learning algorithm, yielding a Location A recognizer, Location B recognizer, and so on. Recognition: an object recognition result is tested by each localization SVM ("is location A" vs. "is not location A").]
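The one-recognizer-per-location scheme is a one-vs-rest setup; in this sketch a tiny perceptron stands in for each SVM so the example stays dependency-free, and the two-dimensional "object recognition result" features and the two locations are made up for illustration:

```python
# One recognizer per location (sketch): the slide trains an SVM per
# location; a perceptron substitutes for the SVM here.
def train_perceptron(X, y, epochs=20):
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            if t * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + t * xi for wi, xi in zip(w, x)]
                b += t
    return w, b

def score(model, x):
    w, b = model
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Toy object-recognition features for images taken at two locations.
data = {"A": [[1.0, 0.0], [0.9, 0.1]], "B": [[0.0, 1.0], [0.1, 0.9]]}

# Train one one-vs-rest recognizer per location.
recognizers = {}
for loc in data:
    X = [x for l in data for x in data[l]]
    y = [1 if l == loc else -1 for l in data for x in data[l]]
    recognizers[loc] = train_perceptron(X, y)

# A query image is assigned to the highest-scoring location model.
query = [0.95, 0.05]
best = max(recognizers, key=lambda loc: score(recognizers[loc], query))
print(best)  # A
```

Taking the argmax over per-location scores is exactly the step that fails on the next slide when several models fire, or none does, which motivates the Markov localization stage.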

SLIDE 21

Single SVM-based localization: results

• The input image is tested against all location models.
– A single location is not always chosen: multiple location models may give positive outputs, or the highest output may be negative.

[Figure: input image, outputs of all location models, best-matched learned image; one good result and two failure cases]
SLIDE 22

Introducing Markov localization

• The output of the localization SVM is used as likelihood values in Markov localization.

[Figure: input image, distribution of locations, best-matched learned image]

SLIDE 23

Combination with Bayesian filtering

[Figure: input image (Jun. 22, 5 pm, rainy) and best-matched image (Jun. 20, 5 pm, sunny); estimated probability distribution over locations, indexed by location ID, shown on the map. Trained image-location relations are manually assigned.]

• SVM output is used as the likelihood in the correction step of a discrete Bayes filter.
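The correction step can be sketched as a discrete Bayes filter in which each location's SVM output, mapped through exp() to a positive value, serves as the likelihood; the corridor motion model and the margin values are illustrative assumptions:

```python
import math

def predict(belief, stay=0.5):
    """Motion model on a corridor of locations: stay put or advance one."""
    out = [0.0] * len(belief)
    for i, p in enumerate(belief):
        out[i] += stay * p
        out[min(i + 1, len(belief) - 1)] += (1 - stay) * p
    return out

def correct(belief, likelihood):
    """Bayes correction: multiply by likelihood and renormalize."""
    post = [p * l for p, l in zip(belief, likelihood)]
    s = sum(post)
    return [p / s for p in post]

belief = [1.0, 0.0, 0.0]          # start: certainly at location 0
svm_margins = [-1.2, 1.5, -0.3]   # hypothetical per-location SVM outputs
likelihood = [math.exp(m) for m in svm_margins]  # keep likelihoods positive

belief = correct(predict(belief), likelihood)
print(max(range(len(belief)), key=lambda i: belief[i]))  # 1
```

Because the motion model rules out location 2 in one step, a strong SVM response there could not hijack the estimate; this temporal filtering is what suppresses the single-SVM failure cases.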

SLIDE 24

To apply machine learning to robotics

• Simulation for training and testing
• Learning from humans
• Post-processing of module outputs before applying to robots

SLIDE 25

Dataset generation for depth image-based pose estimation

SLIDE 26

Person state monitoring in unusual situations

• Head position estimation for various postures
• Generating training data for head position estimation [Nishi 2015]

Estimation results

SLIDE 27

Generating depth data with body part labels [Nishi 2017]

1. Generate CG models: construct various human body models
2. Add body parts and skeletal information: attach body part and skeletal data to the models
3. Add pose data: use a motion capture system to provide various poses
4. Generate depth data and body-part label images

SLIDE 28

Motion capture for generating various pose data

SLIDE 29

Generated data examples

SLIDE 30

Dataset for partially-occluded cases

• Put occluding objects before rendering [Nishi 2017]

[Figure panels: input depth data, correct labeling, results; test scene (RGB, thermal)]
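Compositing an occluding object into a rendered depth/label pair amounts to a per-pixel nearest-surface test; the tiny 2x2 "images" and the label codes below are illustrative assumptions:

```python
# Per-pixel occlusion compositing (sketch): keep the nearer surface and
# erase body-part labels hidden by the occluder.
def occlude(depth, labels, occ_depth, occ_mask, occ_label=0):
    out_d = [row[:] for row in depth]
    out_l = [row[:] for row in labels]
    for i in range(len(depth)):
        for j in range(len(depth[0])):
            if occ_mask[i][j] and occ_depth[i][j] < depth[i][j]:
                out_d[i][j] = occ_depth[i][j]  # occluder is nearer
                out_l[i][j] = occ_label        # 0 = "not a body part"
    return out_d, out_l

depth  = [[2.0, 2.0], [2.0, 2.0]]  # person surface at 2 m everywhere
labels = [[1, 1], [2, 2]]          # 1 = head, 2 = torso (toy labels)
occ_d  = [[1.0, 3.0], [1.0, 3.0]]  # occluder at 1 m in the left column
occ_m  = [[1, 1], [1, 1]]

d2, l2 = occlude(depth, labels, occ_d, occ_m)
print(d2)  # [[1.0, 2.0], [1.0, 2.0]]
print(l2)  # [[0, 1], [0, 2]]
```

Inserting the occluder before rendering, as the slide does, gives the same effect with correct self-shadowing, since the renderer's depth test performs this comparison per pixel.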

SLIDE 31

Behavior simulation

• Simulating human behaviors in a cafeteria [Shigemura 2011]
– Walking towards a destination while avoiding collisions, queueing, searching for a seat, ...
– The number of people, their objectives, the floor plan, etc. can be specified.

SLIDE 32

To apply machine learning to robotics

• Simulation for training and testing
• Learning from humans
• Post-processing of module outputs before applying to robots

SLIDE 33

Learning how to attend from people (which attending actions are comfortable for people)

• Attending behavior measurement system [Koide 2017]
– Positions and poses are measured with a 3D LIDAR; long, wide-area measurement is possible.
– Person-person-environment behaviors are measured using a pre-constructed map.

SLIDE 34

Measurement example

• An attending task of a caregiver was observed at a nearby hospital.

[Figure panels: scene, person detection, relative distance, relative position]

SLIDE 35

To apply machine learning to robotics

• Simulation for training and testing
• Learning from humans
• Post-processing of module outputs before applying to robots

SLIDE 36

Summary

• Machine learning methods are applicable to many robotic recognition and planning tasks, and are quite useful given a sufficient amount and variety of data.

• To use machine learning methods in robotics:
– Simulation for training and testing
– Learning from humans
– Post-processing of their outputs before applying to robots

• How to introduce ML methods?
– End-to-end learning directly?
– In a step-by-step fashion (e.g., replacing one module after another)?

SLIDE 37

Thanks to:

Kenji Koide, Igi Ardiyanto, Bima Sena Bayu Dewantara, Kaichiro Nishi