Pixel-wise Conditioning of Generative Adversarial Networks
  1. Pixel-wise Conditioning of Generative Adversarial Networks
Cyprien Ruffino 1, Romain Hérault 1, Eric Laloy 2 and Gilles Gasso 1
December 6, 2019
1 Normandie Univ, UNIROUEN, UNIHAVRE, INSA Rouen, LITIS, 76000 Rouen, France
2 Belgian Nuclear Research, Institute Environment, Health and Safety, Boeretang 200, BE-2400 Mol, Belgium

  2. Problem
Objective
• Full-size image generation under pixel constraints
• Motivation: applications in geosciences
Differences with inpainting
• Very little information (∼0.5% of pixels constrained)
• Unstructured information
• Inpainting is an image reconstruction task
[Figure: (a) Original, (b) Regular image inpainting, (c) Pixel constraints]

  3. Generative Adversarial Networks [GPAM+14]
Two networks, a generator G and a discriminator D:
• Generator: produces synthetic data from a random z ∼ P_z, where P_z is a known distribution
• Discriminator: binary classifier, tries to distinguish real samples from fake ones
min_G max_D L(D, G) = E_{X∼P_r}[log(D(X))] + E_{z∼P_z}[log(1 − D(G(z)))]
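The value function above can be estimated on minibatches of discriminator outputs. A minimal numpy sketch of that estimate (the function name and toy values are ours, not the paper's):

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Minibatch estimate of L(D, G) = E[log D(X)] + E[log(1 - D(G(z)))].
    d_real: discriminator outputs on real samples, values in (0, 1).
    d_fake: discriminator outputs on generated samples, values in (0, 1).
    """
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# A discriminator that is completely fooled outputs 0.5 everywhere,
# giving the saddle-point value 2 * log(0.5).
v = gan_value(np.array([0.5, 0.5]), np.array([0.5, 0.5]))
```

In training, D ascends this value while G descends it (the min-max game above).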

  4. Conditional GAN [MO14]
• Conditional variant of the GAN
• A constraint/label c is simply given as an input to both G and D
• Works well for generating images with a class constraint
min_G max_D L(D, G) = E_{X∼P_r, C∼P_{C|X}}[log(D(X, C))] + E_{z∼P_z, C′∼P_C}[log(1 − D(G(z, C′), C′))]
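"Given as an input to both G and D" is typically implemented by concatenating the condition to each network's input. A sketch under that assumption (function names and sizes are ours; in the paper's pixel-wise setting c is an image-shaped constraint map, not a class label):

```python
import numpy as np

def generator_input(z, c):
    # G sees the latent draw z ~ P_z together with the flattened condition.
    return np.concatenate([z, c.ravel()])

def discriminator_input(x, c):
    # D sees the (real or generated) image together with the same condition.
    return np.concatenate([x.ravel(), c.ravel()])

z = np.zeros(100)            # latent vector, illustrative size
c = np.zeros((28, 28))       # constraint map, same spatial size as the image
assert generator_input(z, c).shape == (100 + 28 * 28,)
```

Convolutional implementations usually concatenate c as an extra channel instead of flattening, but the principle is the same.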

  5. Constrained CGAN
Theoretical objective: explicit verification, a hard constraint on the respect of C
min_G max_D L(D, G) = E_{X∼P_r, C∼P_{C|X}}[log(D(X, C))] + E_{z∼P_z, C′∼P_C}[log(1 − D(G(z, C′), C′))]
s.t. C = M(C) ⊙ G(z, C), where M(C) gives the binary mask of the constraints
Problem: the strictly constrained objective is non-differentiable
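The hard constraint C = M(C) ⊙ G(z, C) can be checked explicitly. A sketch, assuming unconstrained pixels of C are stored as 0 so that M(C) is the nonzero indicator (the paper may encode the mask differently):

```python
import numpy as np

def mask(c):
    # M(C): binary mask, 1 where a pixel constraint is set (assumption:
    # 0 encodes "unconstrained").
    return (c != 0).astype(c.dtype)

def satisfies(c, g_out, tol=1e-6):
    # C = M(C) * G(z, C): the generated image must match C exactly
    # on every constrained pixel; free pixels are untouched.
    return np.allclose(c, mask(c) * g_out, atol=tol)

c = np.array([[0.0, 0.8], [0.0, 0.0]])      # a single constrained pixel
good = np.array([[0.3, 0.8], [0.1, 0.9]])   # matches C on that pixel
bad = np.array([[0.3, 0.2], [0.1, 0.9]])    # violates it
assert satisfies(c, good) and not satisfies(c, bad)
```

This check is exact but yields no gradient, which is why the next slide relaxes it.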


  7. Relaxation of the constrained CGAN
Our approach: relax the strict constraint with a regularization term
min_G max_D L_reg(D, G) = L(D, G) + λ E_{z∼P_z, C∼P_C}[ ∥C − M(C) ⊙ G(z, C)∥² ]
[Diagram: the generator output feeds both the GAN cost through D and an L2 cost against the constraints c]
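The relaxed objective is differentiable: the constraint violation becomes a squared penalty weighted by λ. A sketch of the regularized generator loss on one sample (names and toy values are ours; `gan_loss` stands in for the adversarial term L(D, G)):

```python
import numpy as np

def reg_loss(gan_loss, c, g_out, lam):
    # lambda * ||C - M(C) * G(z, C)||^2, assuming 0 encodes "unconstrained".
    m = (c != 0).astype(c.dtype)          # M(C)
    penalty = np.sum((c - m * g_out) ** 2)
    return gan_loss + lam * penalty

c = np.array([[0.0, 1.0]])
g_out = np.array([[0.4, 0.5]])            # misses the constrained pixel by 0.5
# penalty = (1.0 - 0.5)^2 = 0.25; with lam = 10 it adds 2.5 to the GAN loss
loss = reg_loss(0.0, c, g_out, lam=10.0)
```

λ directly sets the trade-off studied in the experiments: large λ enforces the constraints more strictly, small λ favors the adversarial term.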

  8. Experiments i
Task: hyperparameter search on λ
• Objective: find evidence of a controllable trade-off between quality and respect of the constraints
• Experiments repeated 10 times each
Metrics
• Respect of the constraints: mean squared error (MSE) on constrained pixels
• Visual quality: Fréchet Inception Distance [HRU+17]: distance between the distributions of the features of real and generated samples at the output of a deep classifier
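The constraint metric averages the squared error over constrained pixels only, not over the whole image. A sketch, again assuming 0 encodes an unconstrained pixel of C (the function name is ours):

```python
import numpy as np

def constrained_mse(c, g_out):
    # MSE restricted to the constrained pixels selected by M(C).
    m = c != 0
    return np.mean((c[m] - g_out[m]) ** 2)

c = np.array([[0.0, 1.0], [0.5, 0.0]])      # two constrained pixels
g_out = np.array([[0.9, 0.8], [0.5, 0.1]])  # errors 0.2 and 0.0 on them
mse = constrained_mse(c, g_out)             # ((0.2)^2 + 0) / 2 = 0.02
```

Errors on unconstrained pixels (0.9 and 0.1 above) are ignored by design, since those pixels are free.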

  9. Experiments ii
Datasets
• MNIST and FashionMNIST
• Split into train, validation and test sets
• 10% of each set used to sample constraints, then discarded
Network architecture
• DCGAN [RMC15]-like, with only 2 convolutional / transposed-convolutional layers in D and G
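With stride-2 transposed convolutions, two layers suffice to reach the 28×28 resolution of MNIST and FashionMNIST. A sketch of the shape arithmetic (kernel/stride/padding values are illustrative, not necessarily the paper's exact ones):

```python
def tconv_out(size, kernel=4, stride=2, pad=1):
    # Output size of a transposed convolution (no output padding):
    # out = (in - 1) * stride - 2 * pad + kernel
    return (size - 1) * stride - 2 * pad + kernel

s = 7                # start from a 7x7 feature map
s = tconv_out(s)     # first layer doubles it to 14x14
s = tconv_out(s)     # second layer reaches 28x28, the dataset resolution
```

Each stride-2 layer doubles the spatial size with this kernel/padding choice, which is why only two generator layers are needed here.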

  10. Results on FashionMNIST
[Figure: (a) Original image, (b) Constraints, (c) Generated image, (d) Satisfied constraints]
This method can generate samples that respect pixel-precise constraints

  11. Results on FashionMNIST
[Plot: MSE and FID as a function of λ, compared against a conditional GAN (λ = 0) and an unconstrained GAN]
• From λ = 0.1 upwards, there seems to be a trade-off between MSE and FID
• The constraints seem able to enhance quality

  12. Results on FashionMNIST
[Plot: MSE vs. FID for varying λ, compared against a conditional GAN (λ = 0) and an unconstrained GAN]
• Trade-off clearly visible
• Regularization enhances both quality and respect of constraints
• Adding constraints can enhance visual quality

  13. Results on FashionMNIST
Some generated samples at λ = 1 (best ratio between quality and respect of the constraints)

  14. Conclusion
Conclusion
• Conditional GANs can learn pixel-wise constraints
• The L2 regularization term makes it possible to control a trade-off between visual quality and respect of the constraints
Extensions
• Applications on real-world datasets
• Extension to other kinds of constraints (moments on zones, ...)

  15. References
[GPAM+14] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[MO14] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
[RMC15] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[HRU+17] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pages 6626–6637, 2017.
