  1. Human Visual System Models in Computer Graphics Tunç O. Aydın MPI Informatik Computer Graphics Department HDR and Visual Perception Group

  2. Outline  Reality vs. Perception – Why even bother modeling visual perception  The Human Visual System (HVS) – How the “wetware” affects our perception  HVS models in Computer Graphics – Visual Significance of contrast – Contrast Detection  Our contributions – Key challenges

  3. Invisible Bits & Bytes  Reference image (bmp, 616K) vs. compressed image (jpg, 48K), with a color-coded difference image (low to high).

  4. Variations of Perception  There is no one-to-one correspondence between visual perception and reality! Work with “perceived visual data” instead of luminance or arbitrary pixel values.

  5. The Human Visual System (HVS)  Experimental Methods of Vision Science – Micro-electrode – Radioactive Marker – Vivisection – Psychophysical Experimentation

  6. HVS effects (1): Glare  Disability Glare (blooming) Video Courtesy of Tobias Ritschel

  7. Disability Glare  Model of Light Scattering – Point Spread Function in the spatial domain – Optical Transfer Function in the Fourier domain [Deeley et al. 1991] (plotted over spatial frequency [cy/deg])
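
As a sketch of this idea, glare can be simulated by attenuating the image spectrum with a low-pass OTF. The exponential fall-off below (parameters `f0`, `p`) is a hypothetical stand-in, not the pupil-diameter-dependent model of Deeley et al. [1991]:

```python
import numpy as np

def apply_glare(image, pix_per_deg=30.0, f0=8.0, p=1.3):
    """Simulate disability glare by low-pass filtering in the Fourier domain.

    `f0` (cy/deg) and `p` parameterize a hypothetical exponential OTF with
    unit gain at DC, so the mean luminance of the image is preserved.
    """
    h, w = image.shape
    # Spatial frequency of each FFT bin, converted to cycles per degree.
    fy = np.fft.fftfreq(h) * pix_per_deg
    fx = np.fft.fftfreq(w) * pix_per_deg
    rho = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    otf = np.exp(-(rho / f0) ** p)  # illustrative fall-off, not Deeley's fit
    return np.real(np.fft.ifft2(np.fft.fft2(image) * otf))
```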

  8. (2): Light Adaptation  The adaptation level changes over time, e.g. from 10^-4 cd/m^2 to 17 cd/m^2.

  9. Perceptually Uniform Space  Transfer function: maps luminance [cd/m^2] to a response measured in Just Noticeable Differences (JNDs) in luminance. [Mantiuk et al. 2004, Aydın et al. 2008]

  10. (3): Contrast Sensitivity  CSF(spatial frequency, adaptation level, temporal frequency, viewing distance, …) — plotted as contrast vs. spatial frequency.

  11. Contrast Sensitivity Function (CSF)  Steady-state CSF_S: returns the sensitivity (1/threshold contrast), given the adaptation luminance and spatial frequency [Daly 1993].
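
A simpler, luminance-independent stand-in for Daly's CSF is the classic Mannos–Sakrison approximation; it is used here only to illustrate the sensitivity/threshold relationship the slide states:

```python
import numpy as np

def csf_mannos_sakrison(rho):
    """Sensitivity as a function of spatial frequency rho [cy/deg].

    Mannos & Sakrison [1974] approximation -- a band-pass curve peaking
    around 8 cy/deg; a simpler stand-in for Daly's luminance-dependent CSF.
    """
    return 2.6 * (0.0192 + 0.114 * rho) * np.exp(-(0.114 * rho) ** 1.1)

def threshold_contrast(rho):
    # Threshold contrast is the reciprocal of sensitivity.
    return 1.0 / csf_mannos_sakrison(rho)
```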

  12. (4): Visual Channels Cortex Transform

  13. (5): Visual Masking Loss of sensitivity to a signal in the presence of a “similar frequency” signal “nearby”.

  14. Modeling Visual Masking  Example: JPEG’s pointwise extended masking, where C′ is the normalized contrast.
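
The divisive form below is a generic sketch of pointwise masking, not JPEG's exact formula; the exponents `p`, `q` and constant `k` are illustrative:

```python
import numpy as np

def masked_response(contrast, mask_contrast, p=2.3, q=2.0, k=1.0):
    """Simplified pointwise masking model (a sketch, not JPEG's formula):
    the response to a normalized contrast is divisively inhibited by the
    contrast of a nearby, similar-frequency masking signal."""
    return np.abs(contrast) ** p / (k + np.abs(mask_contrast) ** q)
```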

  15. HVS Models in Graphics/Vision  Applications: tone mapping (HDR → LDR), compression, quality assessment (“rate the quality”), panorama stitching.

  16. Visual Significance Pipeline  Per-band response differences are pooled by Minkowski summation: R_hat = ( Σ_{n=1..N} |R_tst,n − R_ref,n|^k )^(1/k)
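
The pooling step can be sketched as a Minkowski sum over band responses; the exponent `k = 4` here is an illustrative default, not a value taken from the slide:

```python
import numpy as np

def pool_responses(r_tst, r_ref, k=4.0):
    """Minkowski pooling of per-band response differences:
    R_hat = (sum_n |R_tst,n - R_ref,n|^k)^(1/k).
    Larger k weights the strongest distortion more heavily."""
    d = np.abs(np.asarray(r_tst, dtype=float) - np.asarray(r_ref, dtype=float))
    return np.sum(d ** k) ** (1.0 / k)
```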

  17. Contrast Detection Pipeline  Log contrast → threshold elevation → contrast difference → probability of detection, combined across channels by probability summation: P_hat = 1 − Π_{n=1..N} (1 − P_n)
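
The final combination step follows directly from the probability-summation formula:

```python
def prob_summation(p_n):
    """Probability summation across N channels:
    P_hat = 1 - prod_n (1 - P_n).
    The overall detection probability is the chance that at least
    one channel detects the difference."""
    prod = 1.0
    for p in p_n:
        prod *= (1.0 - p)
    return 1.0 - prod
```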

  18. CONTRIBUTIONS: VISUAL SIGNIFICANCE

  19. Visually Significant Edges  Key Idea: Use the magnitude of the HVS model’s response as the measure of edge strength, instead of gradient magnitude.  Result (1): Significant improvement in application results, especially for HDR images  Result (2): Only minor improvements observed in LDR retargeting and panorama stitching. [ Aydın , Čadík , Myszkowski, Seidel. 2010 ACM TAP ]

  20. Calibration Procedure  CSF from the Visible Differences Predictor [Daly’93]  JPEG’s pointwise extended masking  Calibration: CSF derived for sinusoidal stimuli, not for edges.  Perceptual experiment for measuring edge thresholds

  21. Calibration Function  R: visual significance for a sinusoidal stimulus; the calibration function maps it to R′: visual significance for edges. Polynomial fits match metric predictions to subjective measurements, bringing the metric response toward the ideal response.

  22. Image Retargeting

  23. Visual Significance Maps  (color coded from low to high)

  24. Display Visibility under Dynamically Changing Illumination  Key Idea: Extending steady-state HVS models with a temporal adaptation model  Result: A visibility class estimator integrated into software that simulates illumination inside an automobile. [ Aydın , Myszkowski, Seidel. 2009 EuroGraphics ]

  25. cvi for Steady-State Adaptation  Contrast vs. Intensity (cvi): returns the threshold contrast C = ΔL/L for a background luminance L, assuming perfect adaptation: cvi: L → C. Contrast vs. Intensity for adaptation (cvia) additionally takes the adaptation luminance L_a to account for maladaptation: cvia: (L, L_a) → C.
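
A toy sketch of the two interfaces; both the Weber-like cvi approximation and the log-mismatch penalty below are hypothetical, chosen only to show how cvia reduces to cvi under perfect adaptation:

```python
import math

def cvi(L):
    """Threshold contrast vs. background luminance under perfect adaptation.
    Hypothetical Weber-like shape: ~1% in the photopic range, rising at
    low luminance."""
    return 0.01 * (1.0 + (0.1 / max(L, 1e-6)) ** 0.5)

def cvia(L, L_a):
    """Maladapted threshold: the perfectly adapted threshold at L, elevated
    by a hypothetical penalty growing with the log-luminance mismatch
    between background L and adaptation level L_a."""
    mismatch = abs(math.log10(max(L, 1e-6)) - math.log10(max(L_a, 1e-6)))
    return cvi(L) * (1.0 + mismatch)
```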

  26. cvia for Maladaptation  (plot comparing cvia against cvi)

  27. Adaptation over Time  Visual significance maps (color coded from low to high) at t = 0, t = 0.2s, t = 0.4s, t = 0.8s.

  28. Rendering Adaptation Dark Adaptation Bright Adaptation [ Pająk , Čadík , Aydın , Myszkowski, Seidel. 2010 Electronic Imaging ]

  29. CONTRIBUTIONS: CONTRAST DETECTION

  30. Quality Assessment (IQA, VQA)  Subjective studies (“rate the quality”): + reliable, − high cost.

  31. Perceptually Uniform Space  Key Idea: Find a transformation from luminance to pixel values, such that: – An increment of 1 pixel value corresponds to 1 JND in luminance, in both HDR and LDR domains. – The pixel values in the LDR domain should be close to sRGB pixel values  Result: Common LDR quality metrics (SSIM, PSNR) extended to HDR through the PU space transformation [ Aydın , Mantiuk, Seidel. 2008 Electronic Imaging ]

  32. Perceptually Uniform Space  Derivation: for i = 2 to N: L_i = L_{i-1} + tvi(L_{i-1})  Then fit the absolute value and sensitivity to sRGB within the CRT luminance range.
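
The derivation loop can be sketched as follows; the `tvi` function here is a hypothetical Weber-like stand-in (constant fraction plus a dark-light floor), not the threshold-vs-intensity model used by Mantiuk et al. [2004], and the luminance bounds are illustrative:

```python
def derive_pu_lut(L_min=1e-5, L_max=1e8, n_max=100000):
    """Build the luminance -> JND-index lookup underlying the PU space:
    start at L_min and repeatedly step by one threshold-vs-intensity (tvi)
    increment, so consecutive table entries differ by exactly 1 JND."""
    def tvi(L):
        # Hypothetical tvi: 1% Weber fraction plus an additive floor
        # that dominates at very low luminance.
        return 0.01 * L + 1e-4

    lut = [L_min]
    while lut[-1] < L_max and len(lut) < n_max:
        lut.append(lut[-1] + tvi(lut[-1]))
    return lut
```

Inverting this table (luminance to index) gives the perceptually uniform encoding; the slide's final step then fits that encoding to sRGB over the CRT range.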

  33. Dynamic Range Independent IQA  Key Idea: Instead of the traditional contrast difference, use distortion measures agnostic to dynamic range difference.  Result: An IQA that can meaningfully compare an LDR test image with an HDR reference image, and vice versa. Enables objective evaluation of tone mapping operators. [ Aydın , Mantiuk, Myszkowski, Seidel. 2008 SIGGRAPH ]

  34. HDR vs. LDR  (comparison of LDR and HDR image luminance ranges)

  35. Problem with Visible Differences  Local Gaussian blur (contrast loss) applied to an LDR test image vs. an HDR reference; HDR-VDP’s detection-probability map (25%–95%).

  36. Distortion Measures  Three distortion measures between reference and test: contrast loss, contrast amplification, and contrast reversal.

  37. Novel Applications Inverse Tone Mapping Tone Mapping

  38. Video Quality Assessment  HDR video (tone mapped for presentation): DRIVQM [Aydin et al. 2010] vs. frame-by-frame DRIVDP [Aydin et al. 2008].

  39. Dynamic Range Independent VQA  Key Idea: Extend the dynamic-range-independent pipeline with temporal aspects to evaluate video sequences.  Result: An objective VQM that evaluates rendering quality, temporal tone mapping, and HDR compression. [ Aydın , Čadík , Myszkowski, Seidel. 2010 SIGGRAPH Asia ]

  40. Contrast Sensitivity Function  CSF: (ω, ρ, L_a) → S – ω: temporal frequency, – ρ: spatial frequency, – L_a: adaptation level, – S: sensitivity.

  41. Contrast Sensitivity Function  Spatio-temporal CSF_T.

  42. Contrast Sensitivity Function  Steady-state CSF_S.

  43. Contrast Sensitivity Function  Approximate the full CSF by the spatio-temporal CSF at a reference adaptation level, scaled by a steady-state correction: CSF(ω, ρ, L_a) = CSF_T(ω, ρ) · f(ρ, L_a), where CSF_T(ω, ρ) = CSF(ω, ρ, L_a = 100 cd/m^2) and f(ρ, L_a) = CSF_S(ρ, L_a) / CSF_S(ρ, 100 cd/m^2).
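
The factorization can be expressed directly; `csf_t` and `csf_s` are caller-supplied model functions (the names and signatures are assumptions of this sketch), and 100 cd/m² is the reference adaptation level from the slide:

```python
def csf_full(omega, rho, L_a, csf_t, csf_s, L_ref=100.0):
    """Approximate the full spatio-temporal, luminance-dependent CSF by
    separating it into a spatio-temporal part measured at a reference
    adaptation level and a steady-state luminance correction:

      CSF(omega, rho, L_a) = CSF_T(omega, rho) * CSF_S(rho, L_a) / CSF_S(rho, L_ref)

    At L_a == L_ref the correction factor is 1, recovering CSF_T exactly.
    """
    return csf_t(omega, rho) * csf_s(rho, L_a) / csf_s(rho, L_ref)
```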

  44. Extended Cortex Transform Sustained and Transient Temporal Channels [Winkler 2005] Spatial

  45. Evaluation of Rendering Methods With temporal filtering No temporal filtering Predicted distortion map [Herzog et al. 2010]

  46. Evaluation of Rendering Qualities High quality Low quality Predicted distortion map

  47. Evaluation of HDR Compression Medium Compression High Compression

  48. Validation Study  Noise, HDR video compression, tone mapping  “2.5D videos”  HDR-HDR, HDR-LDR, LDR-LDR

  49. Psychophysical Validation (1) Show videos side-by-side (2) Subjects mark regions on an HDR display where they detect differences [ Čadík , Aydın , Myszkowski, Seidel. 2011 Electronic Imaging ]

  50. Validation Study Results

      Stimulus   DRIVQM   PDM       HDRVDP   DRIVDP
      1          0.765    -0.0147   0.591     0.488
      2          0.883     0.686    0.673     0.859
      3          0.843     0.886    0.0769    0.865
      4          0.815     0.0205   0.211    -0.0654
      5          0.844     0.565    0.803     0.689
      6          0.761    -0.462    0.709     0.299
      7          0.879     0.155    0.882     0.924
      8          0.733     0.109    0.339     0.393
      9          0.753     0.368    0.473     0.617
      Average    0.809     0.257    0.528     0.563

  51. Conclusion  Starting Intuition: Working on “perceived” visual data, instead of “physical” visual data.

  52. Limitations and Future Work  What about the rest of the brain? – Visual Attention – Prior Knowledge – Gestalt Properties – Free will – …  User interaction?  Depth perception

  53. Acknowledgements  Advisors – Karol Myszkowski, Hans-Peter Seidel  Collaborators – Martin Čadík, Rafał Mantiuk, Dawid Pająk, Makoto Okabe  AG4 Members – Current and past  AG4 Staff – Sabine Budde, Ellen Fries, Conny Liegl, Svetlana Borodina, Sonja Lienard.  Thesis Committee – Philipp Slusallek, Jan Kautz, Thorsten Thormählen.  Family – Süheyla and Vahit Aydın, Irem Dumlupınar

  54. Tunç O. Aydın <tunc@mpii.de> THANK YOU.
