Method: Training
• Task 1: Complete relighting
  • The source image and the target light are fed through the Encoder, Bottleneck, and Decoder to produce the target image
  • Losses: L1 loss on the output image against the target image; log-L1 loss on the source light predicted at the bottleneck
  • A loss sketch follows below
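A minimal sketch of the two training losses above, assuming a PyTorch-style setup; the function and argument names are illustrative, not from the paper:

```python
import torch
import torch.nn.functional as F

def relighting_losses(pred_image, tgt_image, pred_src_light, true_src_light, eps=1e-4):
    """L1 loss on the relit image plus log-L1 loss on the predicted source light.

    pred_image / tgt_image:          (B, 3, H, W) relit output and ground-truth target image
    pred_src_light / true_src_light: (B, 3, 16, 32) environment-map light prediction and label
    """
    # Pixel-wise L1 between the network output and the target (relit) image.
    image_loss = F.l1_loss(pred_image, tgt_image)
    # L1 in log space so bright light sources do not dominate the light loss.
    light_loss = F.l1_loss(torch.log(pred_src_light + eps),
                           torch.log(true_src_light + eps))
    return image_loss + light_loss
```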
Method: Training
• Task 2: Illumination retargeting
  • The source image is passed through the Encoder; the source light is predicted at the bottleneck
  • The source light is rotated to obtain the target light, which is fed back in at the bottleneck (see the rotation sketch below)
  • The Decoder produces the target image under the rotated light
  • Losses: L1 loss on the output image against the target image; log-L1 loss on the predicted source light
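A minimal sketch of the "rotate" step, assuming the light is a 16 x 32 latitude-longitude environment map; the tensor layout and rotation amount are illustrative assumptions:

```python
import torch

def rotate_light(env_map, degrees):
    """Rotate a latitude-longitude environment map about the vertical axis.

    env_map: (B, 3, 16, 32) light map; the last axis spans 360 degrees of azimuth.
    """
    width = env_map.shape[-1]                    # 32 azimuth bins
    shift = int(round(degrees / 360.0 * width))  # convert degrees to a bin offset
    # Rolling along the azimuth axis corresponds to rotating the illumination.
    return torch.roll(env_map, shifts=shift, dims=-1)

# Example: build the target light for illumination retargeting.
# tgt_light = rotate_light(src_light, degrees=90)
```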
Method: Training
• Network structure
  • U-Net with spatial resolutions 256 x 256, 128 x 128, 64 x 64, 32 x 32, and 16 x 16
  • Predict and feed in the light at the bottleneck (a sketch of this step follows below)
  • Confidence learning module at the bottleneck
  • [Architecture diagram: encoder/decoder conv layers with skip concatenations and channel widths growing from 64 up to 512; source image in, output image compared against the target image; the 16 x 32 x 3 source light is predicted at the bottleneck and the 16 x 32 x 3 target light is tiled back in; legend covers conv layers, activations, concatenation, tiling, weighted average, losses, and inputs/labels]
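A minimal sketch of the bottleneck described above, assuming a PyTorch U-Net whose encoder and decoder are defined elsewhere; the light head, the resizing/tiling step, and the layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightBottleneck(nn.Module):
    """Predict the source light from bottleneck features and inject the target light."""

    def __init__(self, feat_ch=512, light_ch=3):
        super().__init__()
        # Conv head mapping bottleneck features to a per-pixel light prediction.
        self.light_head = nn.Conv2d(feat_ch, light_ch, kernel_size=1)
        # Fuse the injected target light back into the feature tensor for the decoder.
        self.fuse = nn.Conv2d(feat_ch + light_ch, feat_ch, kernel_size=1)

    def forward(self, feats, tgt_light):
        # feats:     (B, 512, 16, 16) bottleneck features
        # tgt_light: (B, 3, 16, 32)   target environment map
        # Predict the source light, resampled to the 16 x 32 light resolution
        # (the exact prediction head used in the paper may differ).
        pred_src_light = F.interpolate(self.light_head(feats), size=tgt_light.shape[-2:],
                                       mode='bilinear', align_corners=False)
        # Resize the target light to the feature resolution and concatenate it.
        tgt = F.interpolate(tgt_light, size=feats.shape[-2:], mode='bilinear',
                            align_corners=False)
        fused = self.fuse(torch.cat([feats, tgt], dim=1))
        return fused, pred_src_light
```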
Method: Training
• Confidence learning
  • Several conv layers at the bottleneck reduce the features to the resolution of the light
  • They output a light prediction for each image patch and a confidence for each patch's prediction
  • After a reshape, the final light is the confidence-weighted average of the per-patch predictions (see the sketch below)
  • The confidence allows the network to say "I don't know" for patches that carry little lighting information
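A minimal sketch of the confidence-weighted aggregation, assuming PyTorch; the tensor shapes and the softmax normalization of the confidences are illustrative assumptions:

```python
import torch

def aggregate_light(per_patch_light, per_patch_conf):
    """Combine per-patch light predictions using per-patch confidences.

    per_patch_light: (B, P, 3, 16, 32) one light prediction per image patch
    per_patch_conf:  (B, P)            confidence of each patch's prediction
    """
    # Normalize confidences across patches so they sum to one
    # (the exact normalization used in the paper may differ).
    weights = torch.softmax(per_patch_conf, dim=1)
    weights = weights.view(*weights.shape, 1, 1, 1)     # broadcast over the light map
    # Confidence-weighted average: patches that "don't know" get small weights.
    return (weights * per_patch_light).sum(dim=1)       # (B, 3, 16, 32)
```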