
Lecture 16: Variational Autoencoders (Scribes: Ming, Colin)



  1. Lecture 16: Variational Autoencoders. Scribes: Ming, Colin.

  2. Motivation: Inferring Latent Variables from Images. Dataset: MNIST, 60k images of handwritten digits. Goal: infer two latent variables: 1. the digit label $y \in \{0, \ldots, 9\}$; 2. style variables $z \in \mathbb{R}^D$.

  3. Deep Generative Models. Idea 1: Use a neural network to define a generative model.
     Digit label: $y_n \sim \text{Discrete}(0.1, \ldots, 0.1)$ (all digits equally probable).
     Style: $z_n \sim \text{Normal}(0, I)$, $z_n \in \mathbb{R}^D$.
     Image: $x_n \sim \text{Bernoulli}(\mu^x(y_n, z_n; \theta))$, $x_n \in \mathbb{R}^P$ (pixels), where $\mu^x$ is a neural network.
     Sizes: $N = 60\text{k}$, $P = 784$, $D = 2$ to $50$.
     Joint distribution: $p(x, y, z; \theta) = \prod_{n=1}^{N} p(x_n, y_n, z_n; \theta)$.
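
A minimal PyTorch sketch of ancestral sampling from this generative model. The two-layer network `mu_x` standing in for $\mu^x(y_n, z_n; \theta)$, and its layer sizes, are assumptions for illustration, not from the lecture:

```python
import torch

D, P = 2, 784  # style dims and pixels, matching the slide

# Hypothetical decoder network for mu^x(y_n, z_n; theta)
mu_x = torch.nn.Sequential(
    torch.nn.Linear(10 + D, 500), torch.nn.Tanh(),
    torch.nn.Linear(500, P), torch.nn.Sigmoid(),  # Bernoulli means in (0, 1)
)

def sample_image():
    y = torch.randint(10, (1,))                   # y_n ~ Discrete(0.1, ..., 0.1)
    z = torch.randn(1, D)                         # z_n ~ Normal(0, I)
    y1h = torch.nn.functional.one_hot(y, 10).float()
    mean = mu_x(torch.cat([y1h, z], dim=-1))      # mu^x(y_n, z_n; theta)
    return torch.bernoulli(mean)                  # x_n ~ Bernoulli(mu^x)
```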

  4. Training Deep Generative Models. Idea 2: Use stochastic gradient ascent to perform maximum likelihood estimation:
     $\nabla_\theta \log p(x; \theta) = \sum_n \nabla_\theta \log p(x_n; \theta)$, where
     $\log p(x_n; \theta) = \log \mathbb{E}_{p(y, z)}[p(x_n \mid y, z; \theta)] \approx \log \frac{1}{S} \sum_{s=1}^{S} p(x_n \mid y^s, z^s; \theta)$, with $y^s, z^s \sim p(y, z)$.
     Problem: the prior is too broad to propose good samples.
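
To see why prior samples make a poor proposal, here is a sketch of the slide's naive Monte Carlo estimator, reusing the hypothetical `mu_x` above. Most $(y^s, z^s)$ drawn from the broad prior explain a given $x$ very poorly, so the sum is dominated by rare lucky draws and the estimate has high variance:

```python
def log_px_naive(x, S=100):
    """log (1/S) sum_s p(x | y^s, z^s; theta), with (y^s, z^s) ~ p(y, z)."""
    logps = []
    for _ in range(S):
        y = torch.randint(10, (1,))
        z = torch.randn(1, D)
        y1h = torch.nn.functional.one_hot(y, 10).float()
        mean = mu_x(torch.cat([y1h, z], dim=-1)).clamp(1e-6, 1 - 1e-6)
        # Bernoulli log-likelihood of the image under this prior sample
        logps.append((x * mean.log() + (1 - x) * (1 - mean).log()).sum())
    return torch.logsumexp(torch.stack(logps), 0) - torch.log(torch.tensor(S * 1.0))
```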

  5. Training Deep Generative Models. Idea 3: Use stochastic gradient ascent to perform variational inference:
     $\mathcal{L}(\theta, \phi) = \mathbb{E}_{q(y, z; \phi)}\left[\log \frac{p(x, y, z; \theta)}{q(y, z; \phi)}\right] \le \log p(x; \theta)$.
     Combining Ideas 2 + 3: perform gradient ascent on both, $\nabla_{\theta, \phi} \mathcal{L}(\theta, \phi)$:
     1. Learn the generative model (max likelihood): $\theta^* = \arg\max_\theta \log p(x; \theta)$.
     2. Variational inference (learns the posterior): $\phi^* = \arg\min_\phi \text{KL}(q(y, z; \phi) \,\|\, p(y, z \mid x))$.
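
The bound on this slide is Jensen's inequality applied to an importance-weighted form of the marginal likelihood:

$$\log p(x; \theta) = \log \mathbb{E}_{q(y, z; \phi)}\!\left[\frac{p(x, y, z; \theta)}{q(y, z; \phi)}\right] \ge \mathbb{E}_{q(y, z; \phi)}\!\left[\log \frac{p(x, y, z; \theta)}{q(y, z; \phi)}\right] = \mathcal{L}(\theta, \phi),$$

and the gap is exactly $\text{KL}(q(y, z; \phi) \,\|\, p(y, z \mid x; \theta))$, which is why maximizing $\mathcal{L}$ over $\phi$ (point 2) tightens the bound that point 1 maximizes over $\theta$.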

  6. Training Deep Generative Models. Idea 4: Use a neural network to define the inference model, a.k.a. the variational distribution: $q_\phi(y, z \mid x) = \prod_n q_\phi(y_n, z_n \mid x_n)$.
     Digit label (have supervision): $y_n \sim \text{Discrete}(\mu^y(x_n; \phi))$.
     Style (no supervision): $z_n \sim \text{Normal}(\mu^z(x_n, y_n; \phi), \sigma^z(x_n, y_n; \phi))$, $z_n \in \mathbb{R}^D$.
     The variational parameters $\mu^y, \mu^z, \sigma^z$ are computed by neural nets.
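
A sketch of what Idea 4 might look like in PyTorch. The architectures and sizes here are assumptions; the lecture only specifies the distributional forms:

```python
import torch

P, D = 784, 2

mu_y = torch.nn.Sequential(          # mu^y(x_n; phi): digit probabilities
    torch.nn.Linear(P, 500), torch.nn.Tanh(),
    torch.nn.Linear(500, 10), torch.nn.Softmax(dim=-1),
)
z_params = torch.nn.Sequential(      # (mu^z, log sigma^z)(x_n, y_n; phi)
    torch.nn.Linear(P + 10, 500), torch.nn.Tanh(),
    torch.nn.Linear(500, 2 * D),
)

def sample_q(x):
    y = torch.distributions.Categorical(mu_y(x)).sample()  # q(y_n | x_n; phi)
    y1h = torch.nn.functional.one_hot(y, 10).float()
    mu_z, log_sig_z = z_params(torch.cat([x, y1h], -1)).chunk(2, -1)
    z = torch.distributions.Normal(mu_z, log_sig_z.exp()).sample()
    return y, z                                             # q(z_n | x_n, y_n; phi)
```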

  7. Variational Autoencoders. Objective: Learn a deep generative model and a corresponding inference model by optimizing $\mathcal{L}(\theta, \phi) = \mathbb{E}_{q_\phi(y, z \mid x)}\left[\log \frac{p_\theta(x, y, z)}{q_\phi(y, z \mid x)}\right]$.

  8. Intermezzo: Autoencoders

  9. Encoder: Differentiable mapping from image $x$ to latent code $z$. Multi-layer perceptron:
     $h_n = \sigma(W^h x_n + b^h)$, $z_n = \sigma(W^z h_n + b^z)$,
     mapping $x_n \in \mathbb{R}^{784}$ to $h_n \in \mathbb{R}^{500}$ to $z_n \in \mathbb{R}^2$, with activation function $\sigma(\cdot)$.
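
As a sketch, the slide's encoder in PyTorch, assuming sigmoid activations for $\sigma(\cdot)$:

```python
import torch

encoder = torch.nn.Sequential(
    torch.nn.Linear(784, 500), torch.nn.Sigmoid(),  # h_n = sigma(W^h x_n + b^h)
    torch.nn.Linear(500, 2), torch.nn.Sigmoid(),    # z_n = sigma(W^z h_n + b^z)
)
```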

  10. Decoder: Mapping from code $z$ to image $x$. Multi-layer perceptron:
     $\tilde{h}_n = \sigma(\tilde{W}^h z_n + \tilde{b}^h)$, $\tilde{x}_n = \sigma(\tilde{W}^x \tilde{h}_n + \tilde{b}^x)$.
     Loss: binary cross-entropy, $L = -\sum_n \sum_p \left[ x_{n,p} \log \tilde{x}_{n,p} + (1 - x_{n,p}) \log(1 - \tilde{x}_{n,p}) \right]$; minimize this.
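
The matching decoder and loss, pairing with the `encoder` sketch above; PyTorch's `binary_cross_entropy` with `reduction="sum"` computes exactly the slide's $L$:

```python
decoder = torch.nn.Sequential(
    torch.nn.Linear(2, 500), torch.nn.Sigmoid(),    # h~_n = sigma(W~^h z_n + b~^h)
    torch.nn.Linear(500, 784), torch.nn.Sigmoid(),  # x~_n = sigma(W~^x h~_n + b~^x)
)

def autoencoder_loss(x):
    x_tilde = decoder(encoder(x))
    # -sum_{n,p} [x log x~ + (1 - x) log(1 - x~)]
    return torch.nn.functional.binary_cross_entropy(x_tilde, x, reduction="sum")
```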

  11. Autoencoder: Learned Latent Codes

  12. Variational Autoencoder: Treat the latent code as a latent variable.

  13. Variational Autoencoder: Treat the latent code as a latent variable.
     Inference model (encoder), $q(z_n \mid x_n; \phi)$: $h_n = \sigma(W^h x_n + b^h)$, $\mu^z_n = W^\mu h_n + b^\mu$, $\sigma^z_n = \exp(W^\sigma h_n + b^\sigma)$, $z_n \sim \text{Normal}(\mu^z_n, \sigma^z_n I)$.
     Generative model (decoder), $p(x_n, z_n; \theta)$: $z_n \sim \text{Normal}(0, I)$, $\tilde{h}_n = \sigma(\tilde{W}^h z_n + \tilde{b}^h)$, $\mu^x_n = \sigma(W^x \tilde{h}_n + b^x)$, $x_n \sim \text{Bernoulli}(\mu^x_n)$.
     Objective: $\mathcal{L}(\theta, \phi) = \mathbb{E}_{q(z \mid x; \phi)}\left[\log \frac{p(x, z; \theta)}{q(z \mid x; \phi)}\right]$.
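
A sketch of slide 13 in PyTorch, with the same 784 → 500 → 2 sizes as the plain autoencoder (an assumption) and images assumed binarized; note that `rsample()` already applies the reparameterization trick introduced on slide 17:

```python
import torch

P, D, H = 784, 2, 500
enc_h = torch.nn.Sequential(torch.nn.Linear(P, H), torch.nn.Sigmoid())
enc_mu = torch.nn.Linear(H, D)       # mu^z_n = W^mu h_n + b^mu
enc_logsig = torch.nn.Linear(H, D)   # log sigma^z_n = W^sigma h_n + b^sigma
dec = torch.nn.Sequential(
    torch.nn.Linear(D, H), torch.nn.Sigmoid(),
    torch.nn.Linear(H, P), torch.nn.Sigmoid(),   # mu^x_n
)

def elbo(x):
    h = enc_h(x)                                 # h_n = sigma(W^h x_n + b^h)
    q = torch.distributions.Normal(enc_mu(h), enc_logsig(h).exp())
    z = q.rsample()                              # z_n ~ Normal(mu^z_n, sigma^z_n I)
    log_px = torch.distributions.Bernoulli(probs=dec(z)).log_prob(x).sum(-1)
    log_pz = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(-1)
    return (log_px + log_pz - q.log_prob(z).sum(-1)).sum()  # E_q[log p(x,z)/q(z|x)]
```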

  14. Variational Autoencoder: Learned Latent Codes

  15. Autoencoder vs. Variational Autoencoder

  16. Training: The Reparameterization Trick. Compute the gradient on a batch of images $x_1, \ldots, x_B$:
     $\nabla_{\theta, \phi} \mathcal{L}(\theta, \phi) \approx \frac{N}{B} \sum_{b=1}^{B} \nabla_{\theta, \phi}\, \mathbb{E}_{q_\phi(z \mid x_b)}\left[\log \frac{p_\theta(x_b, z)}{q_\phi(z \mid x_b)}\right]$.
     A BBVI / REINFORCE style estimator samples $z_b^s \sim q_\phi(z \mid x_b)$ and uses the score function. Problem: its variance might be high.
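
For contrast, a sketch of the BBVI / REINFORCE style gradient for the encoder parameters $\phi$ of the slide-13 model above: it weights the score $\nabla_\phi \log q_\phi(z \mid x)$ by the detached integrand, and typically needs many samples because of the high variance the slide warns about:

```python
def score_function_surrogate(x, S=10):
    """Surrogate whose phi-gradient is (1/S) sum_s f(z^s) * grad log q(z^s | x)."""
    h = enc_h(x)
    q = torch.distributions.Normal(enc_mu(h), enc_logsig(h).exp())
    surrogate = 0.0
    for _ in range(S):
        z = q.sample()                              # no reparameterization
        log_qz = q.log_prob(z).sum(-1)
        log_px = torch.distributions.Bernoulli(probs=dec(z)).log_prob(x).sum(-1)
        log_pz = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(-1)
        f = (log_px + log_pz - log_qz).detach()     # integrand, held constant
        surrogate = surrogate + (f * log_qz).sum() / S
    return surrogate                                # call .backward() on this
```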

  17. Training: The Reparameterization Trick. Idea: Sample $z_b$ using a reparameterized distribution:
     $\epsilon_b^s \sim \text{Normal}(0, I)$, $\quad z(x_b, \epsilon_b^s; \phi) = \mu^z(x_b; \phi) + \sigma^z(x_b; \phi) \odot \epsilon_b^s$.
     Result: the reparameterized estimator
     $\nabla_{\theta, \phi} \mathcal{L}(\theta, \phi) \approx \frac{N}{B} \sum_{b=1}^{B} \frac{1}{S} \sum_{s=1}^{S} \nabla_{\theta, \phi} \log \frac{p_\theta(x_b, z(x_b, \epsilon_b^s; \phi))}{q_\phi(z(x_b, \epsilon_b^s; \phi) \mid x_b)}$.
     In practice $S = 1$ is often enough.
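
The trick in isolation, as a sketch: $z$ becomes a deterministic, differentiable function of $x_b$ and fresh noise, so gradients flow through $\mu^z$ and $\sigma^z$ into $\phi$. This is exactly what `rsample()` did in the slide-13 sketch:

```python
def reparameterized_z(x):
    h = enc_h(x)
    mu, sigma = enc_mu(h), enc_logsig(h).exp()
    eps = torch.randn_like(mu)          # eps ~ Normal(0, I)
    return mu + sigma * eps             # z(x_b, eps; phi), differentiable in phi
```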

  18. Variational Autoencoders. Objective: Learn a deep generative model and a corresponding inference model by optimizing $\mathcal{L}(\theta, \phi) = \mathbb{E}_{q_\phi(y, z \mid x)}\left[\log \frac{p_\theta(x, y, z)}{q_\phi(y, z \mid x)}\right]$.

  19. Practical: PyTorch / TensorFlow Implementations
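
A minimal end-to-end PyTorch training loop for the slide-13 model, as a sketch. Loading binarized MNIST into an `(N, 784)` tensor `data` with entries in {0, 1} is assumed, and the `N / B` rescaling matches slide 16:

```python
import torch

modules = [enc_h, enc_mu, enc_logsig, dec]
params = [p for m in modules for p in m.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

def train(data, epochs=10, B=100):
    N = data.shape[0]
    for _ in range(epochs):
        for idx in torch.randperm(N).split(B):
            loss = -elbo(data[idx]) * (N / len(idx))  # maximize the ELBO
            opt.zero_grad()
            loss.backward()
            opt.step()
```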

  20. $q_\phi(z_n \mid x_n)$: a continuous $z_n$ encodes both style & digit. $q_\phi(y_n, z_n \mid x_n)$: a discrete $y_n$ encodes the digit, and a continuous $z_n$ encodes the style.

  21. Disentangled Representations: Learn Interpretable Features. (Figure: generated images $x_n$ laid out along the latent dimensions $z_{n,1}$ and $z_{n,2}$.)
