Deep Learning in Smart Spaces



  1. Fakultät für Informatik Technische Universität München [1] Deep Learning in Smart Spaces Markus Loipfinger Advisor(s): Marc-Oliver Pahl, Stefan Liebald Supervisor: Prof. Dr.-Ing. Georg Carle Chair of Network Architectures and Services Department of Informatics Technical University of Munich (TUM)

  2. Outline
  ● Motivation
  ● Analysis
    • Background
    • Application
    • Related Work
  ● Design & Implementation
  ● Evaluation
    • Quantitative
    • Qualitative
  ● Conclusion

  3. Motivation
  Why Deep Learning in Smart Spaces?
  ● Enables interesting use cases, such as facilitating daily life routines (e.g. self-adapting rooms)
  ● Finds complex relationships among the data [11]
  ● Goal: enable users with little or even no prior knowledge to build and train a neural network
  ● Approach: provide easy-to-use machine learning functionality
    • Modularize into suitable building blocks
    • Enable mash-ups [9]

  4. Analysis: How does Deep Learning work?
  Deep Learning – Feed-forward Neural Network
  [Figure: one feed-forward layer with inputs x_1, x_2, x_3, weights w_1, w_2, w_3, bias b, and activation a]
  Forward pass: y = a(x · W + b)
  Backward pass: the error is propagated back through the network to update the weights
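The forward pass y = a(x · W + b) can be sketched in a few lines of NumPy. This is a minimal illustration, not the implementation behind the services; the layer sizes and the sigmoid activation are assumptions for the example.

```python
import numpy as np

def sigmoid(z):
    # activation function a(.)
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))   # batch of 4 inputs with 3 features each
W = rng.normal(size=(3, 2))   # weight matrix: 3 inputs -> 2 outputs
b = np.zeros(2)               # bias vector

# forward pass: y = a(x * W + b)
y = sigmoid(x @ W + b)
```

The backward pass would then compute the gradient of a cost function with respect to W and b and adjust them, which is what the training loop of such a network does.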

  5. Analysis
  Deep Learning Architectures – Recurrent Neural Network
  [Figure: recurrent neural network unrolled over time]
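The defining feature of a recurrent network is that each step reuses the hidden state of the previous step. A minimal sketch of unrolling a vanilla RNN (sizes and the tanh nonlinearity are assumptions, not the presented implementation):

```python
import numpy as np

def rnn_forward(xs, W_x, W_h, b):
    """Unroll a vanilla RNN: h_t = tanh(x_t W_x + h_{t-1} W_h + b)."""
    h = np.zeros(W_h.shape[0])      # initial hidden state
    states = []
    for x_t in xs:                  # one step per time step
        h = np.tanh(x_t @ W_x + h @ W_h + b)
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(1)
xs = rng.normal(size=(5, 3))        # 5 time steps, 3 features each
W_x = rng.normal(size=(3, 4)) * 0.1 # input-to-hidden weights
W_h = rng.normal(size=(4, 4)) * 0.1 # recurrent hidden-to-hidden weights
b = np.zeros(4)
states = rnn_forward(xs, W_x, W_h, b)
```

The recurrent weight matrix W_h is what lets the network take previous inputs into account, which is why related work uses RNNs for sequential tasks such as behaviour prediction.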

  6. Analysis
  Deep Learning Architectures – Deep Belief Network
  [Figure: deep belief network with a classifier on top]
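A deep belief network stacks restricted Boltzmann machines (RBMs) that are pre-trained layer by layer before a classifier on top is fine-tuned. The core building block is one RBM Gibbs step, sketched below with invented layer sizes; this illustrates the idea only, not the services' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)

# one RBM layer: 6 binary visible units, 4 hidden units
W = rng.normal(scale=0.1, size=(6, 4))    # visible-to-hidden weights
b_v = np.zeros(6)                         # visible biases
b_h = np.zeros(4)                         # hidden biases

v0 = (rng.random(6) < 0.5).astype(float)  # a binary visible vector

# one Gibbs step: sample hidden given visible, then reconstruct visible
p_h = sigmoid(v0 @ W + b_h)
h = (rng.random(4) < p_h).astype(float)
p_v = sigmoid(h @ W.T + b_v)              # reconstruction probabilities

reconstruction_error = np.mean((v0 - p_v) ** 2)
```

Unsupervised pre-training repeats such steps to lower the reconstruction error per layer; this is also why the DBN rows of the evaluation table report a reconstruction error rather than an accuracy.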

  7. Analysis
  Application of machine learning / deep learning in smart spaces
  ● Knowledge required in
    • machine learning / deep learning
    • a machine learning framework / library
  ● Design of the neural network depends on
    • the particular problem → each problem requires an appropriate learning algorithm
    • the available training data
  ● Provide machine learning as a service
  ● Reduce the complexity of machine learning
    • users do not require prior knowledge
  ● Ensure usability & reusability
    • rapid prototyping

  8. Related Work
  Use cases mentioned in related work → e.g. health & home care, comfort, security, energy saving
  ● Supporting (disabled) people [2], [3]
    → security (e.g. fire alarm) by an FFNN
    → automation (e.g. controlling all devices) by an RNN
  ● Recognizing human activities [4], [5], [6]
    → pre-training of a DBN (unsupervised learning)
    → fine-tuning of the DBN (supervised learning)
    → output: 1 out of 10 activities
  ● Predicting human behaviour [7]
    → pre-training & fine-tuning of a DBN
    → hybrid architecture with a classifier on top of the DBN
    → RNN takes previous actions into account
  ● Smart grid: Q-learning, reinforcement learning [8]

  9. Design
  Modular approach
  ● Separation of the state of a neural network and the respective learning algorithm
  ● Usability & reusability: all parameters and hyperparameters of a neural network are contained in a configuration file
  Three machine learning services:
  ● Feed-forward Neural Network
  ● Deep Belief Network
  ● Recurrent Neural Network

  10. Insertion: Neural Network – Parameters & Hyperparameters
  [Diagram: a neural network is characterized by]
  ● optimization technique
  ● regularization technique
  ● learning rate
  ● number of hidden layers
  ● type of activation function
  ● number of training epochs
  ● size of mini-batch
  ● weight initialization
  ● bias initialization
  ● cost function
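The design keeps all of these parameters and hyperparameters in a configuration file. The slides do not show its actual syntax, so the following is a hypothetical JSON sketch built from the fields listed above; every field name and value is invented for illustration.

```python
import json

# Hypothetical configuration for one neural-network service.
# Field names mirror the hyperparameters on the slide; values are made up.
config = {
    "hidden_layers": [128, 64],        # number (and sizes) of hidden layers
    "activation": "relu",              # type of activation function
    "cost_function": "cross_entropy",
    "optimizer": "sgd",                # optimization technique
    "regularization": "l2",            # regularization technique
    "learning_rate": 0.01,
    "training_epochs": 50,
    "mini_batch_size": 32,
    "weight_init": "xavier",           # weight initialization
    "bias_init": "zeros",              # bias initialization
}
config_text = json.dumps(config, indent=2)
```

Keeping the whole network description in one file like this is what allows the state of a network to be stored and reused independently of the learning algorithm.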

  11. Functionality

  12. Implementation
  Machine Learning Services
  ● One service for each neural network
  ● Based on a context model
  ● Neural network implementation with the help of TensorFlow
  ● Configuration file reader
  ● Preparation of training data → bring the data into the right shape; each neural network imposes different conditions on the input:
    • FFNN: (data, label) pairs of shape [batch_size, feature_size] and [batch_size, label_size]
    • DBN: data of shape [batch_size, feature_size]
    • RNN: (data, label) pairs of shape [batch_size, num_steps, feature_size] and [batch_size, label_size]
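The shape conditions above can be illustrated with NumPy; the concrete sizes below are invented for the example, and the real services read them from the configuration file.

```python
import numpy as np

batch_size, num_steps, feature_size, label_size = 8, 5, 3, 2
rng = np.random.default_rng(3)

# raw sequential sensor data: batch_size * num_steps feature vectors
raw = rng.normal(size=(batch_size * num_steps, feature_size))
labels = rng.integers(0, label_size, size=batch_size)
one_hot = np.eye(label_size)[labels]    # labels: [batch_size, label_size]

# FFNN / DBN: flat batches of feature vectors
ffnn_data = raw[:batch_size]            # [batch_size, feature_size]

# RNN: the same data grouped into sequences of num_steps time steps
rnn_data = raw.reshape(batch_size, num_steps, feature_size)
```

The point of doing this inside the services is that the user hands over raw data once and never has to know which reshaping each architecture requires.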

  13. Evaluation
  Main aspects: performance, usability & reusability
  Quantitative evaluation
  ● Latency of the modular approach
    ➢ expected: maybe a bit slower due to modularization
  ● Accuracy and training time
    ➢ expected: probably comparable
  Qualitative evaluation
  ● Experience with the concept
  ● Usability: lines of code for implementing the use case by using my machine / deep learning modules
    ➢ expected: significantly less
  ● Effect and application of reusability: time for
    ➢ implementing the use cases
    ➢ creating the corresponding neural networks

  14. Evaluation
  Accuracy / Reconstruction Error – Comparison

  Approach       | Accuracy / Reconstruction Error | Lines of Code
  ---------------|---------------------------------|--------------
  Service FFNN   | 97.50 %                         | 2
  Regular FFNN   | 97.58 %                         | 85
  Service DBN    | 0.02                            | 2
  Regular DBN    | 0.02                            | 148
  Service RNN    | 98.50 %                         | 2
  Regular RNN    | 98.44 %                         | 128
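The "Lines of Code" column counts only the user-facing code. The slides do not show the service API itself, so the sketch below stands in for it: the class is a stub defined here only to make the example self-contained, and all names, signatures, and the placeholder return value are invented; only the final two lines mirror what "2 lines of code" could look like.

```python
# Hypothetical stand-in for a machine learning service facade.
# Everything here is illustrative, not the actual thesis API.
class FFNNService:
    def __init__(self, config_file):
        # the configuration file carries all (hyper)parameters
        self.config_file = config_file
        self.trained = False

    def train(self, data):
        # a real service would build and train the TensorFlow graph here
        self.trained = True
        return 0.975  # placeholder accuracy

# the two user-facing lines counted in the table (names are invented):
service = FFNNService("ffnn_config.json")
accuracy = service.train(data=[([0.0, 1.0], [1, 0])])
```

Whatever the real call names are, the design goal is the same: everything that the 85-148 lines of a regular implementation spell out is moved into the service and its configuration file.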

  15. Evaluation
  Training Time – Comparison
  [Figure: training iterations [x10] (0–800) vs. training time [s] (0–3000) for Regular/Service FFNN, DBN, and RNN]

  16. Evaluation
  Training Time – Comparison
  [Figure: same training-time plot as slide 15]

  17. Evaluation
  Training Time – Comparison
  [Figure: the training-time plot with three zoomed insets at 50 (x10) iterations, around 16.0–16.8 s, 86–92 s, and 860–890 s, comparing the regular and service variants]

  18. Evaluation
  Running Time – Comparison
  [Figure: runtime [s] (0–12) per machine learning approach: Regular/Service FFNN, Regular/Service DBN, Regular/Service RNN]

  19. Evaluation
  Running Time – Comparison
  [Figure: zoomed runtime comparisons per architecture: FFNN around 0.25–0.50 s, DBN around 2–8 s, RNN around 3–11 s]

  20. Evaluation
  Main aspects: performance, usability & reusability

  Quantitative evaluation
  ● Latency of the modular approach
    ➢ expected: maybe a bit slower due to modularization
    ➢ result: difference of about 0.3 s
  ● Accuracy and time
    ➢ expected: probably comparable
    ➢ result: accuracy similar, time comparable
  ● Usability: lines of code for implementing the use case by using my machine / deep learning modules
    ➢ expected: significantly less
    ➢ result: 83, 146, 126 lines of code less (1)
  ● Reusability: time for
    ➢ implementing the use cases — service: ~30 s, regular: 5–10 min
    ➢ creating the corresponding neural networks — service: ~30 s, regular: 2–5 min

  Qualitative evaluation: experience with the concept
  ● Service handling ++
  ● Code understanding +/0/0 (1)
  ● Service understanding +
  ● Configuration file understanding ++
  ● Configuration file modifying ++
  ● Neural network creation +
  ● Neural network training +/++/0 (1)
  ● Understanding without ML/DL knowledge +
  ● Usability ++/++/+ (1)
  ● Reusability ++/++/+ (1)

  (1) FFNN/DBN/RNN

  21. Conclusion
  ● Realization of three machine learning services which
    • are easy to use
    • do not require knowledge in
      – machine learning
      – a machine learning library
    • yield
      – high usability
      – reusability
      – good performance
  ● Structure of the learning algorithms and the neural networks is already implemented
  ● Separation of the state of a neural network and the corresponding learning algorithm → configuration file

  22. Thank you

  23. Sources
  [1] https://storiesbywilliams.com/tag/ibm-watson-supercomputer/
  [2] A. Hussein, M. Adda, M. Atieh, and W. Fahs, "Smart home design for disabled people based on neural networks," Procedia Computer Science, vol. 37, pp. 117–126, 2014.
  [3] A. Badlani and S. Bhanot, "Smart home system design based on artificial neural networks," in Proc. of the World Congress on Engineering and Computer Science, 2011.
  [4] H. Fang and C. Hu, "Recognizing human activity in smart home using deep learning algorithm," in Proceedings of the 33rd Chinese Control Conference, July 2014, pp. 4716–4720.
  [5] H. D. Mehr, H. Polat, and A. Cetin, "Resident activity recognition in smart homes by using artificial neural networks," in 2016 4th International Istanbul Smart Grid Congress and Fair (ICSG), April 2016, pp. 1–5.
  [6] H. Fang, L. He, H. Si, P. Liu, and X. Xie, "Human activity recognition based on feature selection in smart home using back-propagation algorithm," ISA Transactions, vol. 53, no. 5, pp. 1629–1638, 2014, ICCA 2013.
  [7] S. Choi, E. Kim, and S. Oh, "Human behavior prediction for smart homes using deep learning," in 2013 IEEE RO-MAN, Aug 2013, pp. 173–179.
  [8] D. Li and S. K. Jayaweera, "Reinforcement learning aided smart-home decision-making in an interactive smart grid," in 2014 IEEE Green Energy and Systems Conference (IGESC), Nov 2014, pp. 1–6.
  [9] https://www.google.de/search?q=pr%C3%A4sentation+m%C3%A4nnchen+idee&source=lnms&tbm=isch&sa=X&ved=0ahUKEwihgLvYyPzUAhWIh7QKHZ07D68Q_AUICigB&biw=1855&bih=990#imgrc=dTd6OykL30b4QM:
  [11] http://www.allnetflats-in-deutschland.de/smarthome
