

  1. Paper #28: Addressing The False Negative Problem of Deep Learning MRI Reconstruction Models by Adversarial Attacks and Robust Training. Kaiyang Cheng*, Francesco Calivà*, Rutwik Shah, Misung Han, Sharmila Majumdar, Valentina Pedoia

  2. Disclosure I have no financial interests or relationships to disclose with regard to the subject matter of this presentation. Funding source: This project was supported by R00AR070902 (VP) and R61AR073552 (SM/VP) from the National Institute of Arthritis and Musculoskeletal and Skin Diseases, National Institutes of Health (NIH-NIAMS).

  3. Outline • Motivation • False negative problem in accelerated MRI reconstruction • Adversarial examples • FNAF attack • Adversarial robustness training • FNAF robust training • Experimental results • Conclusions

  4. Adversarial Examples in Medical Imaging Analysis

  5. Adversarial Examples in Medical Imaging Analysis

  6. IID Machine Learning vs Adversarial Machine Learning. IID (average case): $\mathbb{E}_{(x,y)\sim\mathcal{D}}[L(f, x, y)]$. Adversarial (worst case): $\mathbb{E}_{(x,y)\sim\mathcal{D}}[\max_{\delta\in\Delta} L(f, x+\delta, y)]$
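
A minimal sketch of the two objectives, assuming PyTorch-style models and data loaders; `model`, `loss_fn`, `loader`, and `attack` are hypothetical stand-ins, not code from the paper:

```python
import torch

def iid_risk(model, loss_fn, loader):
    """Average-case (IID) risk: an estimate of E[L(f, x, y)] over a dataset."""
    total, n = 0.0, 0
    with torch.no_grad():
        for x, y in loader:
            total += loss_fn(model(x), y).item() * x.shape[0]
            n += x.shape[0]
    return total / n

def adversarial_risk(model, loss_fn, loader, attack):
    """Worst-case risk: E[max_{delta in Delta} L(f, x + delta, y)], with the
    inner maximization approximated by an attack (e.g. the PGD sketch below)."""
    total, n = 0.0, 0
    for x, y in loader:
        delta = attack(model, loss_fn, x, y)      # approximate inner max
        with torch.no_grad():
            total += loss_fn(model(x + delta), y).item() * x.shape[0]
        n += x.shape[0]
    return total / n
```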

  7. Accelerated MRI Reconstruction [Figure: fully-sampled k-space, under-sampled k-space, and reconstruction methods]
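
The under-sampling step can be sketched in a few lines of NumPy: keep only a subset of k-space and zero-fill the rest, producing the aliased input a reconstruction network starts from. The Cartesian mask below is illustrative, not the paper's exact sampling scheme:

```python
import numpy as np

def undersample(image, mask):
    """u(y) = F^{-1}(M * F(y)): transform to k-space, zero out the
    frequencies not selected by the binary mask, and transform back,
    yielding the zero-filled (aliased) image."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    return np.fft.ifft2(np.fft.ifftshift(mask * kspace))

# Illustrative 4x-acceleration Cartesian mask: keep ~25% of phase-encode
# lines at random, always retaining the low-frequency center.
rng = np.random.default_rng(0)
h, w = 320, 320
cols = rng.random(w) < 0.25
cols[w // 2 - 16 : w // 2 + 16] = True
mask = np.tile(cols, (h, 1))

image = rng.standard_normal((h, w))   # stand-in for a fully-sampled slice
zero_filled = np.abs(undersample(image, mask))
```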

  8. fastMRI results: loss of a meniscal tear

  9. The False Negative Phenomenon

  10. Two hypotheses for the false negative problem: 1) The information of small abnormality features is completely lost through the under-sampling process. 2) The information of small abnormality features is not completely lost; instead, it is attenuated and lies in the tail of the task distribution, and is hence rare.

  11. FNAF: false-negative adversarial feature. A perceptible small feature which is present in the ground-truth MRI but disappears upon MRI reconstruction. [Figure: example panels A-H]

  12. Adversarial Examples and Attacks: $\max_{\delta\in\Delta} L(f, x+\delta, y)$
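
The inner maximization is usually approximated by iterative gradient ascent. A minimal PGD-style sketch, assuming an $\ell_\infty$ ball $\Delta = \{\delta : \|\delta\|_\infty \le \epsilon\}$ and a PyTorch model; step size and iteration count are illustrative:

```python
import torch

def pgd_attack(model, loss_fn, x, y, eps=0.01, step=0.0025, iters=10):
    """Projected gradient ascent on L(f, x + delta, y): repeatedly step in
    the sign of the gradient and project delta back into the l-inf ball."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()   # ascent step
            delta.clamp_(-eps, eps)             # project back into Delta
        delta.grad.zero_()
    return delta.detach()
```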

  13. Adversarial Examples and Attacks

  14. FNAF Attack: $\max_{\delta\in\Delta} L(f, x+\delta, y+\delta')$, where $y$ is the fully-sampled image, $x = u(y)$ is the under-sampled input, $u(y) = \mathcal{F}^{-1}(M \odot \mathcal{F}(y))$ is the under-sampling operator, and $\delta = u(\delta')$ is the under-sampled version of the adversarial feature $\delta'$ injected into the fully-sampled image. [Figure: FNAF attack and robust-training pipeline; the fully-sampled MRI with the injected feature, $y+\delta'$, is under-sampled and passed through the encoder-decoder reconstruction network; the attack loss, e.g. $\mathrm{MSE}(\hat{y}_\delta, y_\delta)$, compares the reconstructed and ground-truth FNAF regions]
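
A gradient-based sketch of the FNAF objective, assuming a torch-differentiable `undersample` implementing $u(\cdot)$ and a binary `feature_mask` selecting the region where the small feature is injected. The paper also searches over feature locations, which is omitted here; all names and hyperparameters are illustrative:

```python
import torch

def fnaf_loss(recon_net, undersample, y, delta_prime, feature_mask):
    """One evaluation of the FNAF objective: inject a small feature delta'
    into the fully-sampled image y, under-sample, reconstruct, and score
    the loss only on the feature region."""
    y_adv = y + delta_prime                 # ground truth with the feature
    y_hat = recon_net(undersample(y_adv))   # the net only sees u(y + delta')
    return torch.nn.functional.mse_loss(
        y_hat * feature_mask, y_adv * feature_mask)

def fnaf_attack(recon_net, undersample, y, feature_mask,
                eps=0.05, step=0.01, iters=20):
    """Maximize the feature-region loss over the localized perturbation."""
    delta_prime = torch.zeros_like(y, requires_grad=True)
    for _ in range(iters):
        loss = fnaf_loss(recon_net, undersample, y, delta_prime, feature_mask)
        loss.backward()
        with torch.no_grad():
            delta_prime += step * delta_prime.grad.sign()
            delta_prime *= feature_mask     # keep the feature localized
            delta_prime.clamp_(-eps, eps)
        delta_prime.grad.zero_()
    return delta_prime.detach()
```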

  15. Under-sampling Information Preservation: $L(x+\delta, x) > \epsilon$, i.e. the injected feature must remain measurably present in the zero-filled under-sampled input, so that a failure to reconstruct it is attributable to the network rather than to information lost in under-sampling. [Figure: the FNAF attack and robust-training pipeline from slide 14, annotated with the information-preservation constraint]

  16. Adversarial Robustness Training: $\mathbb{E}_{(x,y)\sim\mathcal{D}}[\max_{\delta\in\Delta} L(f, x+\delta, y)]$, the min-max objective in which the network is trained against the worst-case perturbation. [Figure: the FNAF robust-training pipeline from slide 14]
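
A sketch of one robust-training step under the same assumptions as the attack sketch above; the clean/adversarial weighting `alpha` is an assumption, not a value from the paper:

```python
import torch

def robust_training_step(recon_net, undersample, optimizer, y,
                         feature_mask, alpha=0.5):
    """Inner max: find a feature the current network fails to reconstruct
    (via fnaf_attack above). Outer min: descend on the clean reconstruction
    loss plus the adversarial feature-region loss."""
    mse = torch.nn.functional.mse_loss
    delta_prime = fnaf_attack(recon_net, undersample, y, feature_mask)
    y_adv = y + delta_prime
    recon_loss = mse(recon_net(undersample(y)), y)
    adv_loss = mse(recon_net(undersample(y_adv)) * feature_mask,
                   y_adv * feature_mask)
    loss = recon_loss + alpha * adv_loss    # alpha is an assumed weighting
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```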

  17. Experimental Results

  18. Qualitative Results [Figure: panels A-H] The top row (A-D) shows a "failed" FNAF attack; the bottom row (E-H) shows a "successful" FNAF attack. Column 1 contains the under-sampled zero-filled images, column 2 the fully-sampled ground-truth images, column 3 U-Net reconstructions, and column 4 FNAF-robust U-Net reconstructions. FNAF reconstruction losses: (C) 0.000229, (G) 0.00110, (D) 9.73 × 10⁻⁵, (H) 0.000449.

  19. Information Preservation (IP): $L(x+\delta, x) > \epsilon$
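
A minimal check of the IP constraint, under the same assumptions as above; the threshold `eps` is a heuristic placeholder, not the paper's value:

```python
import torch

def information_preserved(undersample, y, delta_prime, eps=1e-6):
    """IP check L(x + delta, x) > eps: the attack only counts if the
    injected feature survives under-sampling, i.e. the zero-filled inputs
    with and without the feature differ measurably."""
    x_clean = undersample(y)
    x_adv = undersample(y + delta_prime)
    return torch.mean((x_adv - x_clean).abs() ** 2).item() > eps
```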

  20. FNAF Attack Loss vs. IP Loss

  21. FNAF Location Distribution and Transferability [Figure: FNAF location distribution within the 120×120 center crop of the image for (A) U-Net, (B) I-RIM, (C) FNAF-robust U-Net] We take FNAF examples found against U-Net, apply them to I-RIM, and observe an 89.48% attack success rate.
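
A sketch of how such a transfer rate could be computed: reuse the (y, delta', mask) triples found against the source model and re-score them on the target model. `hit_threshold` stands in for the paper's heuristic hit criterion and is an assumption:

```python
import torch

def transfer_attack_rate(fnaf_examples, target_net, undersample, hit_threshold):
    """Fraction of source-model FNAF examples that still 'hit' a target
    model, i.e. whose feature-region loss exceeds the threshold."""
    hits = 0
    with torch.no_grad():
        for y, delta_prime, feature_mask in fnaf_examples:
            y_adv = y + delta_prime
            y_hat = target_net(undersample(y_adv))
            adv_loss = torch.nn.functional.mse_loss(
                y_hat * feature_mask, y_adv * feature_mask).item()
            hits += adv_loss > hit_threshold
    return hits / len(fnaf_examples)
```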

  22. Real-world Abnormality Reconstruction (A) Ground truth: small cartilage lesion in the femur. (B) U-Net: the cartilage lesion area is not defined and resembles increased signal intensity. (C) FNAF-robust U-Net: cartilage lesion preserved, but less clear.

  23. Limitations • The FNAF attack hit rate was defined heuristically • The attack's inner maximization has no optimality guarantee and can be expensive • Adversarial training is only empirically robust • Limited evaluation on real-world abnormalities

  24. Conclusions and Future Directions • Two hypotheses: 1) the information of small abnormality features is completely lost through the under-sampling process; 2) the information is not completely lost, but is attenuated and lies in the tail of the task distribution, and is hence rare • Address our limitations • Robustness in other medical imaging tasks

  25. Acknowledgements Valentina Pedoia's Lab: Francesco Calivà, Rutwik Shah. Sharmila Majumdar's Lab: Misung Han, Claudia Iriondo. Funding source: This project was supported by R00AR070902 (VP) and R61AR073552 (SM/VP) from the National Institute of Arthritis and Musculoskeletal and Skin Diseases, National Institutes of Health (NIH-NIAMS). Contact: victorcheng21@berkeley.edu, Paper #28
