Deep Learning: Review and Possible Applications

2018/8/7 @ 2018, "Deep Learning: Review and Possible Applications", Masato Taki (RIKEN, iTHEMS)


  1. Neural Style Transfer [Gatys et al., 2015] (slide labels: Style 1, Style 2, Image 1, Image 2, Image 2')

  2. Neural Style Transfer [Gatys et al., 2015]

  3. Neural Style Transfer [Gatys et al., 2015]

  4. Neural Style Transfer [Gatys et al., 2015]

  5. Neural Style Transfer [Gatys et al., 2015]

  6. ‘Solving’ Games

  7. DeepMind: a venture led by the neuroscientist D. Hassabis and others (acquired by Google in 2014 for roughly 50 billion yen). Deep Q-Network [DeepMind, '16]

  8. Deep Q-Network [DeepMind, '16] https://www.youtube.com/watch?v=iqXKQf2BOSE

  9. 3. Useful methods for Basic Science

  10. 1. Curse of Dimensionality

  11. Curse of Dimensionality & Lazy Learning. Lazy learning does no training; it simply records the whole data set. To find patterns or classify a new sample, it measures the similarity (distance) between the stored data points and the new one (e.g. k-nearest-neighbour classification). (Slide labels: Near / Far / new sample)
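A minimal sketch of such a lazy learner: a k-nearest-neighbour classifier on hypothetical toy data (the data and function names below are my own illustration, not from the talk):

```python
import numpy as np

def knn_classify(X_train, y_train, x_new, k=3):
    """Lazy learning: no training phase, just store the data.
    A new point is classified by a majority vote of its k nearest neighbours."""
    dists = np.linalg.norm(X_train - x_new, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]                   # indices of k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                  # majority vote

# Two well-separated clusters in 2-D
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [3.0, 3.0], [3.1, 2.9], [2.9, 3.1]])
y = np.array([0, 0, 0, 1, 1, 1])

print(knn_classify(X, y, np.array([0.1, 0.1])))   # near cluster 0
```

In low dimensions this works because "near" is meaningful; the following slides show why it breaks down in high dimensions.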

  12. Curse of Dimensionality & Lazy Learning. Volume of a radius-R ball in D dimensions: V_D(R) = R^D × V_D(1)

  13. Curse of Dimensionality & Lazy Learning. For a ball of radius 1, the fraction of the volume in the thin skin 0.99 ≤ r ≤ 1 is (V_D(1) − V_D(0.99)) / V_D(1) = 1 − (0.99)^D

  14. Curse of Dimensionality & Lazy Learning. Thin-skin fraction 1 − (0.99)^D for a ball of radius 1:
      D = 1:  0.01   = 1%
      D = 2:  0.0199 = 2%
      D = 3:  0.0297 = 3%
      D = 10: 0.0956 = 10%

  15. Curse of Dimensionality & Lazy Learning. At D = 1000 the fraction 1 − (0.99)^D reaches 0.9999… = 99.99…%: the thin skin dominates the volume, opposite to our low-dimensional intuition, and points become maximally separated. In high dimensions data is too sparse to extract a geometric structure, even in a 'big data' situation.
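The numbers on slides 14-15 follow directly from the formula; a quick check in Python:

```python
def shell_fraction(D):
    """Fraction of a unit D-ball's volume lying in the thin shell 0.99 <= r <= 1:
    (V_D(1) - V_D(0.99)) / V_D(1) = 1 - 0.99**D, using V_D(R) = R**D * V_D(1)."""
    return 1 - 0.99 ** D

for D in (1, 2, 3, 10, 1000):
    print(f"D = {D:4d}: shell fraction = {shell_fraction(D):.4f}")
```

At D = 1000 essentially all of the volume sits in the outermost 1% shell, which is why distance-based lazy learning degrades in high dimensions.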

  16. Representation Learning & Dim. Compression: a good information representation enables clustering in a lower dimension and anomaly detection.

  17. How to make a Good Representation: 1. Unsupervised: auto-encoder (x → good rep. → x); 2. Supervised (x → good rep. → y)
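As a minimal sketch of the unsupervised route: for a *linear* auto-encoder the optimal compression is known to coincide with PCA, so the "good representation" can be computed in closed form via the SVD. The toy data below is my own illustration (the talk's models are of course nonlinear deep networks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20-D points that actually lie near a 2-D subspace
Z = rng.normal(size=(200, 2))          # hidden 2-D coordinates
A = rng.normal(size=(2, 20))           # embedding into 20-D
X = Z @ A + 0.01 * rng.normal(size=(200, 20))
X = X - X.mean(axis=0)                 # center the data

# A linear auto-encoder (encode h = x W, decode x_hat = h W^T) trained to
# reproduce its own input has PCA as its optimal solution, so we read the
# encoder weights off the top singular vectors.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:2].T        # encoder: top-2 principal directions
H = X @ W           # 2-D compressed representation ("good rep.")
X_hat = H @ W.T     # reconstruction x -> h -> x_hat

mse = np.mean((X - X_hat) ** 2)
print(f"reconstruction MSE from the 2-D code: {mse:.6f}")
```

The 2-D code H is exactly the kind of compressed representation slide 16 proposes for clustering and anomaly detection.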

  18. 2. Search

  19. Reinforcement Learning: the agent (a program) in state s takes an action a, receives a reward r (a score), and the environment moves to a new state s'.

  20. Q-Learning: the agent follows a policy a ∼ π(a|s), taking action a in state s, receiving reward r (a score) and moving to state s'. The action-value function Q^π(s, a) is the total reward obtained under the policy π after taking action a in state s.

  21. Deep Q-Learning (Monte Carlo + reinforcement learning): we know deep learning is a powerful learner, so approximate the action value with a network. Given a state s, the network outputs Q^π(s, a_1), Q^π(s, a_2), …, Q^π(s, a_n) for the n actions, and optimization selects the action.
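The Q^π(s, a) update on these slides can be made concrete with plain *tabular* Q-learning (no deep network). The toy chain environment, its states and rewards are my own illustration, not from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy chain MDP: states 0..4, actions 0 = left, 1 = right.
# Reaching state 4 gives reward +1 and ends the episode.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(s, a):
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == GOAL else 0.0
    return s2, r, s2 == GOAL

# Tabular Q-learning update:
#   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
Q = 0.01 * rng.random((N_STATES, N_ACTIONS))   # small random init
alpha, gamma, eps = 0.5, 0.9, 0.1
for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy policy a ~ pi(a|s)
        a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * Q[s2].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2

greedy = [int(np.argmax(Q[s])) for s in range(GOAL)]
print("greedy policy (1 = right):", greedy)
```

After training, the greedy policy heads right toward the reward from every state; a DQN replaces the table Q with a network that maps s to all Q^π(s, a_i) at once.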

  22. 3. Detection

  23. YOLO (You Only Look Once)

  24. 4. Libraries

  25. The language is basically Python: many libraries for DL are Python-based. Coding is not the main purpose; analyzing data and doing machine learning is (basically) our job.

  26. Libraries for DL. Libraries I have used include, for instance, TensorFlow (by Google), Keras (by a Googler), PyTorch (by Facebook), Chainer (by Preferred Networks), plus Theano etc. Which is the best? → Whichever you like.

  27. Libraries for DL. A criterion for the ordinary user: GitHub star history. http://www.timqian.com/star-history/#tensorflow/tensorflow&BVLC/caffe&caffe2/caffe2&Microsoft/CNTK&apache/incubator-mxnet&torch/torch7&pytorch/pytorch&deeplearning4j/deeplearning4j&Theano/Theano&amzn/amazon-dsstne&chainer/chainer

  28. Libraries for DL. Keras is my recommendation. https://towardsdatascience.com/battle-of-the-deep-learning-frameworks-part-i-cff0e3841750

  29. 5. Scientific Application - breast cancer -

  30. New Model for segmentation [M.T & Murata, TBA]
