LAIAR-2018, Baden-Baden, Germany, June 11, 2018
Machine Learning Applications for Personal Service Robots
Jun Miura
Active Intelligent Systems Laboratory (AISL)
Department of Computer Science and Engineering
Toyohashi University of Technology
Personal Service Robot Projects at AISL
Roles: guide, stay aside, attend, watch
– Measure physical conditions (light, heat, air)
– Control appliances
– Detect potential dangers
– Give care to the person
Our person following robots
Robotic lifestyle support
– Speech-based control
– Delivery task
– BMI-controlled robot (visual stimuli)
– TOYOTA Human Support Robot
Functional Components of a Robot
Robotics in computer science: Recognition, Planning, and Action, interacting with the environment (with humans).
Important functions of personal service robots
Person recognition
– Person detection
– Person identification
– Person state estimation
Person-aware behavior generation
– Person's awareness estimation
– Recognizing the person's intention
– Attending behavior generation
Autonomous navigation
– Localization and path planning
Object recognition and manipulation
– Specific and general object recognition
– Hand motion generation
Person detection and identification
People detection
Shape-based person modeling
– 3D LIDAR [Kidono 2011]
– Two 2D LIDARs (waist and leg height) [Koide 2016]
Image-based person modeling (YOLO)
A leg-clustering sketch for the 2D LIDAR route follows this list.
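As a minimal sketch of the 2D LIDAR route (not the published detectors), a scan can be split at range discontinuities and clusters kept whose width is plausible for a leg; all thresholds below are illustrative assumptions.

```python
import numpy as np

def leg_candidates(ranges, angles, jump=0.1, min_w=0.05, max_w=0.25):
    """Cluster a 2D LIDAR scan at range discontinuities and keep
    clusters whose width is plausible for a human leg.
    Thresholds (jump, min_w, max_w) are illustrative assumptions."""
    # Cartesian scan points
    pts = np.stack([ranges * np.cos(angles), ranges * np.sin(angles)], axis=1)
    # Split the scan where consecutive ranges jump by more than `jump` [m]
    breaks = np.where(np.abs(np.diff(ranges)) > jump)[0] + 1
    clusters = np.split(pts, breaks)
    cands = []
    for c in clusters:
        if len(c) < 3:
            continue
        width = np.linalg.norm(c[-1] - c[0])   # chord length of the cluster
        if min_w <= width <= max_w:
            cands.append(c.mean(axis=0))       # candidate leg center
    return cands
```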
Person identification
Use of multiple features [Koide 2016]: face features and clothing features, combined with people detection and orientation estimation.
– We cannot know in advance which features are effective, so features are selected adaptively with an online boosting algorithm (a sketch follows).
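A hedged sketch of online boosting for feature selection: weak scorers (one per feature channel, e.g., face or clothing similarity) are re-weighted by their running error so the currently discriminative features dominate. The update rule is the standard weighted-error scheme, not necessarily the exact algorithm of [Koide 2016].

```python
import numpy as np

class OnlineBooster:
    """Online boosting sketch for adaptive feature weighting. Each
    weak scorer is a callable returning +1/-1 for one feature channel."""
    def __init__(self, weak_scorers):
        self.weak = weak_scorers
        self.correct = np.ones(len(weak_scorers))
        self.wrong = np.ones(len(weak_scorers))

    def update(self, x, label):
        # label in {+1, -1}: same person or not
        for i, h in enumerate(self.weak):
            if h(x) == label:
                self.correct[i] += 1
            else:
                self.wrong[i] += 1

    def predict(self, x):
        # Weight each feature channel by its running reliability
        err = self.wrong / (self.correct + self.wrong)
        alpha = 0.5 * np.log((1 - err) / np.maximum(err, 1e-6))
        score = sum(a * h(x) for a, h in zip(alpha, self.weak))
        return np.sign(score)
```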
Person state estimation
Combined tracking and body orientation estimation [Ardiyanto 2014]
– Image-based orientation estimation (8 orientations)
– Motion-orientation consistency for a better estimate: body orientation correlates with the motion direction at higher walking speed, so the fused estimate weights the motion-based cue as speed v → ∞ and the image (shape) cue as v → 0.
A speed-weighted fusion sketch follows.
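A minimal fusion sketch under these assumptions: a weight w(v) that grows with walking speed v blends the motion-based direction with the image-based estimate on the circle. The weight function and v0 are illustrative, not the published formula.

```python
import numpy as np

def fuse_orientation(theta_img, theta_motion, speed, v0=0.5):
    """Blend image-based and motion-based body orientation by walking
    speed: w -> 1 as speed -> inf (trust motion direction), w -> 0 as
    speed -> 0 (trust the shape/image cue). v0 [m/s] and the weight
    form are illustrative assumptions."""
    w = speed / (speed + v0)                      # monotone weight in [0, 1)
    # Blend on the circle to handle wrap-around at +-180 degrees
    s = w * np.sin(theta_motion) + (1 - w) * np.sin(theta_img)
    c = w * np.cos(theta_motion) + (1 - w) * np.cos(theta_img)
    return np.arctan2(s, c)
```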
One-shot body orientation estimation
CNN-based body orientation estimation [Kohari 2018]
– The network is trained on the SURREAL dataset.

Accuracy ±0° [%] | Accuracy ±10° [%] | Accuracy ±20° [%] | Average error [deg] | Averaged time [msec]
47.7 | 89.7 | 97.5 | 6.94 | 8.73

An angular-error evaluation sketch follows this table.
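A generic evaluation sketch reproducing the kind of numbers in the table (accuracy within ±tol degrees and mean absolute error, with angle wrap-around handled); this is not code from [Kohari 2018].

```python
import numpy as np

def orientation_metrics(pred_deg, true_deg, tolerances=(0, 10, 20)):
    """Wrap-around-aware angular error: accuracy within each tolerance
    in degrees, plus mean absolute error."""
    err = np.abs((np.asarray(pred_deg) - np.asarray(true_deg) + 180) % 360 - 180)
    acc = {t: float(np.mean(err <= t)) for t in tolerances}
    return acc, float(err.mean())
```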
Pose estimation from depth images using a deep neural network [Nishi 2017]
[Figure: experimental scene, thermal point cloud, extracted human region, input depth data, pose estimation result]
Illumination normalization for face detection and recognition [Dewantara 2016]
– The appearance of a face changes as illumination changes.
– Goal: obtain a similar face image under any illumination condition.
– Method: fast GA-optimized fuzzy inference applied online (sketched below).
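One plausible shape for such a normalizer, assuming fuzzy memberships over mean image brightness that blend gamma corrections (the GA would tune the membership parameters offline); the rules and constants below are invented for illustration, not the inference of [Dewantara 2016].

```python
import numpy as np

def fuzzy_normalize(gray, dark_c=60.0, bright_c=190.0):
    """Illustrative fuzzy illumination normalization: memberships for
    'dark' and 'bright' (centers dark_c/bright_c are assumed values a
    GA might tune) blend gamma corrections that brighten dark images
    and darken bright ones."""
    m = gray.mean()
    mu_dark = np.clip((dark_c * 2 - m) / (dark_c * 2), 0, 1)        # high when image is dark
    mu_bright = np.clip((m - bright_c / 2) / (bright_c / 2), 0, 1)  # high when image is bright
    mu_ok = max(0.0, 1.0 - mu_dark - mu_bright)
    # Defuzzify: weighted average of per-rule gamma values
    gamma = (mu_dark * 0.5 + mu_ok * 1.0 + mu_bright * 1.8) / max(mu_dark + mu_ok + mu_bright, 1e-6)
    return (255.0 * (gray / 255.0) ** gamma).astype(np.uint8)
```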
Person-aware robotic behavior
Estimating a person's awareness of an obstacle [Koide 2016]
Key assumption: if a person is not aware of an obstacle, the person acts as if the obstacle did not exist.
– Machine learning distinguishes the two cases: aware (obstacle exists) vs. unaware (obstacle effectively absent).
– Visualization: red = aware, green = not aware.
A feature-extraction sketch follows.
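A sketch of features such a classifier might use, following the key assumption: an unaware person's trajectory resembles the obstacle-free path. The features (maximum lateral deviation, minimum clearance) and any classifier on top are assumptions, not the model of [Koide 2016].

```python
import numpy as np

def awareness_features(traj, obstacle):
    """Features for an aware/unaware classifier: how far the observed
    path deviates from the straight obstacle-free line, and how close
    the person comes to the obstacle."""
    traj = np.asarray(traj)            # (T, 2) observed positions
    start, goal = traj[0], traj[-1]
    # Deviation from the straight (obstacle-free) line start -> goal
    d = goal - start
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)
    lateral = traj @ n - start @ n
    # Closest approach to the obstacle
    clearance = np.linalg.norm(traj - np.asarray(obstacle), axis=1).min()
    return np.array([np.abs(lateral).max(), clearance])
```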
Social force guiding model [Dewantara 2016]
Q-learning in perceived state
– State: target state, robot state, and obstacle state (static and dynamic obstacles, relative positions, body and head orientation)
A tabular Q-learning sketch follows.
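A standard tabular Q-learning sketch over a discretized perceived state; the discretization sizes, rewards, and the `env` interface are assumptions.

```python
import numpy as np

# Tabular Q-learning over a discretized perceived state (relative
# target position, obstacle occupancy, person orientation bins).
N_STATES, N_ACTIONS = 1024, 5          # assumed discretization
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.1     # learning rate, discount, exploration

def q_step(s, env, rng=np.random):
    """One Q-learning update: epsilon-greedy action, then the
    standard TD(0) target r + gamma * max_a Q(s', a).
    `env` is any simulator exposing step(a) -> (next_state, reward)."""
    a = rng.randint(N_ACTIONS) if rng.rand() < eps else int(Q[s].argmax())
    s_next, r = env.step(a)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return s_next
```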
Autonomous navigation
View-based localization using two-stage SVMs [Miura 2008]
– Learning: an SVM recognizer is constructed for every location (Location A, B, ..., X) by the SVM learning algorithm, trained to answer "is location A" vs. "is not location A" from object recognition results.
– Recognition/localization: object recognition results for the input image are fed to every location recognizer, each of which outputs positive or negative.
A per-location SVM training sketch follows.
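A per-location training sketch using scikit-learn's SVC, mirroring "a recognizer is constructed for every location"; the feature encoding of object recognition results and the SVM settings are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def train_location_svms(features, location_ids):
    """One SVM per location, trained one-vs-rest on object-recognition
    feature vectors."""
    svms = {}
    for loc in np.unique(location_ids):
        y = (location_ids == loc).astype(int)    # this location vs. all others
        svms[loc] = SVC(kernel="rbf").fit(features, y)
    return svms

def svm_scores(svms, x):
    """Signed decision values for one input; reused later as likelihoods."""
    return {loc: float(m.decision_function([x])[0]) for loc, m in svms.items()}
```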
Single SVM-based localization: results
The input image is tested against all location models, but a single location is not always chosen:
– Multiple location models may give positive output.
– The highest output may still be negative.
[Figure: input image, outputs of all location models, best matched learned image]
Introducing Markov localization
The output of each localization SVM is used as a likelihood value in Markov localization.
[Figure: input image, distribution over locations, best matched learned image]
Combination with Bayesian filtering
The SVM output is used as the likelihood in the correction step of a discrete Bayes filter. The relations between trained images and locations on the map are assigned manually.
Example: an input image taken Jun. 22, 5pm (rainy) is best matched to a trained image from Jun. 20, 5pm (sunny); the filter outputs an estimated probability distribution over location IDs.
A discrete Bayes filter sketch follows.
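A minimal discrete Bayes filter step: prediction through a location transition model, then correction by a likelihood derived from the SVM decision values. Squashing scores through a logistic is one plausible score-to-likelihood mapping, not necessarily the published one.

```python
import numpy as np

def bayes_filter_step(belief, transition, svm_scores):
    """One predict/correct step of a discrete Bayes filter over
    locations. `transition[i, j]` is P(move to j | at i), e.g., from
    the topological map; `svm_scores` holds one decision value per
    location, aligned with `belief`."""
    # Predict: spread belief through the transition model
    predicted = transition.T @ belief
    # Correct: likelihood from SVM decision values (squashed to (0, 1))
    likelihood = 1.0 / (1.0 + np.exp(-np.asarray(svm_scores)))
    posterior = likelihood * predicted
    return posterior / posterior.sum()
```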
To apply machine learning to robotics:
– Simulation for training and testing
– Learning from humans
– Post-processing of module outputs before applying to robots
Dataset generation for depth image-based pose estimation
Person state monitoring in unusual situations
– Head position estimation for various postures
– Generating training data for head position estimation [Nishi 2015]
[Figure: estimation results]
Generating depth data with body part labels [Nishi 2017]
Pipeline:
– Construct various human body CG models
– Attach body part and skeletal information to the models
– Add pose data, captured with a motion capture system
– Generate depth data and body part label images
A sensor-noise augmentation sketch for the rendered pairs follows.
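A sketch of turning one rendered (depth, label) pair into a training sample that better resembles real sensor output; the noise and dropout parameters are assumptions about the target depth sensor.

```python
import numpy as np

def make_training_pair(depth, labels, noise_sigma=0.01, dropout=0.02, rng=np.random):
    """Add depth noise and random pixel dropout to a rendered
    (depth, body-part-label) pair. noise_sigma [m] and dropout rate
    are illustrative assumptions."""
    noisy = depth + rng.normal(0.0, noise_sigma, depth.shape)
    # Simulate missing measurements (e.g., specular or oblique surfaces)
    mask = rng.rand(*depth.shape) < dropout
    noisy[mask] = 0.0                    # 0 = invalid depth
    labels = labels.copy()
    labels[mask] = 0                     # 0 = background/unknown label
    return noisy, labels
```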
Motion capture for generating various pose data
Generated data examples
Dataset for partially-occluded cases
Occluding objects are placed in the scene before rendering [Nishi 2017].
[Figure: test scene (RGB, thermal), input depth data, correct labeling, results]
An occlusion-augmentation sketch follows.
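A simplified stand-in for placing occluders before rendering: paste a box-shaped occluder into an already-rendered depth/label pair wherever it is nearer than the body.

```python
import numpy as np

def add_occluder(depth, labels, box, occ_depth):
    """Inside the given (y0, y1, x0, x1) box, overwrite any pixel
    farther than the occluder with the occluder's depth and an
    'occluder' label (-1). Modifies depth and labels in place."""
    y0, y1, x0, x1 = box
    region = depth[y0:y1, x0:x1]
    occluded = region > occ_depth        # occluder is nearer than the body
    region[occluded] = occ_depth
    labels[y0:y1, x0:x1][occluded] = -1
    return depth, labels
```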
Behavior simulation
Simulating human behaviors in a cafeteria [Shigemura 2011]
– Walking toward a destination while avoiding collisions, queueing, searching for a seat to sit, ...
– The number of people, their objectives, the floorplan, etc. can all be specified.
A social force walking sketch follows.
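A one-step sketch of a basic (Helbing-style) social force model: a driving force toward the goal plus exponential repulsion from other pedestrians. The parameters are common textbook values, not those of the cafeteria simulator [Shigemura 2011].

```python
import numpy as np

def social_force_step(pos, vel, goal, others, dt=0.1,
                      v_des=1.3, tau=0.5, A=2.0, B=0.3):
    """One Euler step of a basic social force model. v_des: desired
    speed [m/s]; tau: relaxation time [s]; A, B: repulsion strength
    and range. All values are illustrative."""
    to_goal = goal - pos
    e = to_goal / (np.linalg.norm(to_goal) + 1e-9)
    force = (v_des * e - vel) / tau                 # relax toward desired velocity
    for q in others:                                # pairwise repulsion
        diff = pos - q
        d = np.linalg.norm(diff) + 1e-9
        force += A * np.exp(-d / B) * diff / d
    vel = vel + force * dt
    pos = pos + vel * dt
    return pos, vel
```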
Learning how to attend from people (what attending actions are comfortable for people)
Attending behavior measurement system [Koide 2017]
– A 3D LIDAR measures positions and poses; long-duration, wide-area measurement is possible.
– Person-person-environment behaviors are measured using a pre-constructed map.
Measurement example
An attending task of a caregiver was observed in a nearby hospital.
[Figure: scene, person detection, relative distance, relative position]
A relative-position computation sketch follows.
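A small utility for the quantities plotted above: the attendant's position expressed in the attended person's body frame, plus the relative distance. A generic computation, not code from [Koide 2017].

```python
import numpy as np

def relative_pose(person_xy, person_theta, attendant_xy):
    """Rotate the attendant's offset into the attended person's body
    frame (x: ahead of the person, y: to the person's left) and
    return it with the relative distance."""
    d = np.asarray(attendant_xy) - np.asarray(person_xy)
    c, s = np.cos(-person_theta), np.sin(-person_theta)
    local = np.array([c * d[0] - s * d[1], s * d[0] + c * d[1]])
    return local, float(np.linalg.norm(d))
```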
Summary
Machine learning methods are applicable to many robotic recognition and planning tasks, and are quite useful given a sufficient amount and variety of data.
To use machine learning methods in robotics:
– Simulation for training and testing
– Learning from humans
– Post-processing of their outputs before applying to robots
How should ML methods be introduced?
– End-to-end learning directly?
– In a step-by-step fashion (e.g., replacing one module after another)?