Deep learning for MR imaging and analysis (PowerPoint presentation transcript)


SLIDE 1

Deep learning for MR imaging and analysis

Shanshan Wang Paul C. Lauterbur Research Center for Biomedical Imaging Shenzhen Institutes of Advanced Technology (SIAT)

2020-04-15

SLIDE 2

Learning Reconstruction and Analysis

Image reconstruction:
a. Background
b. Linear reconstruction
c. Non-linear iterative reconstruction
d. Deep learning-based reconstruction

Image analysis:
a. Stroke lesion segmentation
b. Breast tumor classification and segmentation
c. Cervical cancer classification

Summary

SLIDE 3

Background: MRI is a powerful tool for both clinical diagnosis and scientific research.

SLIDE 4

Challenges in imaging: imaging time, resolution, and SNR (signal-to-noise ratio) are correlated and interacting. A short imaging time may cause issues like low resolution and low SNR, while a long imaging time may cause issues like claustrophobia, motion artifacts, and signal distortion.

Challenges in diagnosis:
➢ Highly dependent on the doctor's experience
➢ Tedious and cumbersome manual review
➢ The data-heavy nature of imaging requires new solutions like AI

SLIDE 5

Factors determining acquisition time

MR image → sampled data (k-space, axes Kx and Ky) → reconstruction

T = N × TR

where T is the scan time, N the number of acquired lines, and TR the repetition time.

Acquiring an image with a spin echo sequence:
T1-weighted image (T1w): TR = 800 ms, N = 256, T = 3.4 min
T2-weighted image (T2w): TR = 2000 ms, N = 256, T = 8.5 min
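The scan-time arithmetic above can be checked directly:

```python
# Check of the scan-time formula T = N x TR for the spin-echo examples above.

def scan_time_minutes(n_lines, tr_ms):
    """Scan time T = N x TR, converted from milliseconds to minutes."""
    return n_lines * tr_ms / 1000.0 / 60.0

print(round(scan_time_minutes(256, 800), 1))    # T1w -> 3.4 min
print(round(scan_time_minutes(256, 2000), 1))   # T2w -> 8.5 min
```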

SLIDE 6

Fast MRI Techniques

➢ MR physics (1970s)

  • Pulse sequence design

➢ Hardware (2000s)

  • Parallel imaging with phased array coils

➢ Image reconstruction from incomplete k-space data (past decade)

  • Modeling using prior knowledge

Pipeline: radio-frequency pulse → encoding with phased array coil → k-space data → reconstruction → image for diagnosis

SLIDE 7

Image reconstruction from incomplete k-space data balances sanity (relation to measurements) against a prior or regularization, where y denotes the given measurements, x the unknown to be recovered, E the encoding matrix, and Pr(x) the prior. Different prior knowledge yields different reconstructed images.

f(x) = ½‖Ex − y‖₂² + μ · R(x)

with the regularizer R(x) encoding the prior Pr(x).
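A minimal numerical sketch of this objective, assuming single-coil Cartesian sampling (so E is a k-space undersampling mask applied after a 2D FFT) and an l1 penalty standing in for the prior; both are illustrative assumptions, not the talk's specific model:

```python
import numpy as np

def encode(x, mask):
    """Encoding operator E: 2D FFT followed by k-space undersampling."""
    return mask * np.fft.fft2(x, norm="ortho")

def objective(x, y, mask, mu):
    """f(x) = 0.5*||E x - y||^2 + mu*||x||_1 (l1 as a stand-in for Pr(x))."""
    fidelity = 0.5 * np.linalg.norm(encode(x, mask) - y) ** 2
    return fidelity + mu * np.abs(x).sum()

rng = np.random.default_rng(0)
x_true = rng.standard_normal((8, 8))
mask = rng.random((8, 8)) < 0.5            # keep roughly half of k-space
y = encode(x_true, mask)                   # simulated incomplete measurements
noisy = x_true + 0.1 * rng.standard_normal((8, 8))

print(objective(x_true, y, mask, mu=0.0))        # -> 0.0 (perfect data fidelity)
print(objective(noisy, y, mask, mu=0.0) > 0.0)   # -> True
```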

SLIDE 8

Three phases of reconstruction from incomplete k-space data (Kx/Ky):

1st phase: linear reconstruction in the image domain (IFFT, SENSE, SMASH, GRAPPA, …)
2nd phase: nonlinear iterative reconstruction in a sparsity domain (CS, low-rank, dictionary learning, …)
3rd phase: deep learning MR reconstruction with big dataset collection and deep prior learning (CNN, ADMM-Net, VN-Net, AUTOMAP, MoDL, U-Net, …)
1st phase: [1] Pruessmann, Klaas P., et al. Magnetic Resonance in Medicine, 1999. [2] Sodickson, Daniel K., and Warren J. Manning. Magnetic Resonance in Medicine, 1997. [3] Griswold, Mark A., et al. Magnetic Resonance in Medicine, 2002. [4] Lustig, et al. Magnetic Resonance in Medicine, 2007. [5] Jianhua Luo, Shanshan Wang, et al. Journal of Magnetic Resonance 224 (2012): 82-93.
2nd phase: [1] Justin H., et al. Magnetic Resonance in Medicine, 2016. [2] Zhi-Pei Liang, et al. IEEE Transactions on Medical Imaging, 2003. [3] Zhou, Yihang, et al. IEEE International Symposium on Biomedical Imaging, 2015. [4] Shanshan Wang, et al. IEEE Transactions on Medical Imaging, 37(1):251-261, 2018.
3rd phase: [1] Shanshan Wang, et al. IEEE International Symposium on Biomedical Imaging, 2016. [2] Bo Zhu, et al. Nature, 2018. [3] Florian Knoll, et al. Magnetic Resonance Imaging, 2018. [4] Yang, Guang, et al. IEEE Transactions on Medical Imaging, 2018. [5] J. Schlemper, et al. IEEE Transactions on Medical Imaging, 2018.

Partial k-space → sparsifying transform (wavelet / TV) → inverse transform

MR Recon from Incomplete K-space Data
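For concreteness, a sketch (illustrative, not from the talk) of the simplest first-phase reconstruction: zero-fill the missing k-space lines and apply the inverse FFT:

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((64, 64))                  # stand-in for an MR image
kspace = np.fft.fft2(image, norm="ortho")     # fully sampled k-space

mask = np.zeros((64, 64), dtype=bool)
mask[::2, :] = True                           # keep every other line: 2x undersampling

# Zero-filled reconstruction: inverse FFT of the masked k-space.
zero_filled = np.fft.ifft2(kspace * mask, norm="ortho")

# The missing lines show up as aliasing, i.e. a nonzero reconstruction error.
err = np.linalg.norm(np.abs(zero_filled) - image) / np.linalg.norm(image)
print(err > 0.0)   # -> True
```

The nonlinear and deep-learning phases that follow exist precisely to remove this aliasing by exploiting prior knowledge.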

SLIDE 9

➢ Layer deconvolution spectral (LDS) analysis method

Jianhua Luo, Shanshan Wang, et al. Journal of Magnetic Resonance 224 (2012): 82-93.

  • Estimate the truncated k-space from the image containing truncation artifact
  • Compute sparse representation parameters from the truncated k-space data
  • Use the computed parameters to recover the missing k-space data
  • Obtain the artifact-removed image through inverse transform of the updated k-space data

Our Work - 1st phase

➢ Main steps. Convolution/deconvolution is always a very powerful tool!

SLIDE 10
  • (a) Initial ZF image containing truncation artefacts (with truncation frequency c = 64). (b-d) are respectively the images after removing the artefacts in (a) using the Hamming window, the TV, and the proposed LDS methods.
  • Results of removing truncation artefacts in real MR images. (a and c) represent respectively a stomach MR image and a brain MR image having truncation artefacts; (b and d) show the images after removing the artefacts in (a) and (c), respectively, using the proposed LDS method.

Jianhua Luo, Shanshan Wang, et al. Journal of Magnetic Resonance 224 (2012): 82-93.

Our Work - 1st phase

SLIDE 11

1st phase Sub-summary

➢ 1st phase: Linear analytical reconstruction

➢ Pros:

  • Simple and straightforward model
  • Easy implementation

➢ Cons:

  • Object prior is not considered
  • Long scan (acquisition) time
SLIDE 12

Compressed Sensing (CS)

  • Incoherent projection
  • Signal sparsity

m × 1 measurements, with m ≈ k·log(n) ≪ n

y = Ax, where y is the measurement vector, A the sensing matrix, and x the sparse unknown n × 1 vector with k non-zeros.

Wakin, Michael, et al. "Compressive imaging for video representation and coding." Picture Coding Symposium, 2006. Takhar, Dharmpal, et al. "A new compressive imaging camera architecture using optical-domain compression." Computational Imaging IV, Vol. 6065, SPIE, 2006.
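A runnable sketch of CS recovery under these conditions (ISTA with an l1 prior, one standard solver chosen here for illustration; m = 40 random measurements of a 4-sparse length-100 vector):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrink entries toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
n, m, k = 100, 40, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)      # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                    # m incoherent measurements

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L, L the Lipschitz constant
for _ in range(500):
    # gradient step on ||Ax - y||^2, then sparsity-promoting shrinkage
    x = soft_threshold(x - step * A.T @ (A @ x - y), step * 0.01)

rel_error = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_error)   # small: the sparse vector is recovered from m << n samples
```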

SLIDE 13

Examples of priors: dynamic imaging time course (Example 1), low rank (Example 2). Prior knowledge can be roughly categorized into non-adaptive and adaptive priors. Non-adaptive: fixed transform, statistical modelling, model fitting, low-rank. Adaptive: dictionary learning, data-driven tight frame.

2nd phase nonlinear reconstruction

SLIDE 14
  • Dictionary learning in Fenchel-dual space
  • Impulse-noise removal with L1-L1 minimization
  • Improving image reconstruction accuracy with good convergence property

Dictionary Learning

Shanshan Wang, et al. IEEE Transactions on Image Processing 22.12 (2013): 5214-5225. Shanshan Wang, et al. Signal Processing 93.9 (2013): 2696-2708. Dong Pei, Shanshan Wang*, et al. IEEE Transactions on Image Processing 25.11 (2016): 5035-5049 Qiegen Liu, Shanshan Wang, et al. IEEE Transactions on Image Processing 22.12 (2013): 4652-4663

Our Work - 2nd phase ➢ Sparse representation based on fixed and adaptive dictionaries

SLIDE 15

Our Work – Dictionary learning for MRI

Shanshan Wang, Dong Liang, et al. IEEE Transactions on Medical Imaging, 37(1):251-261, 2018. Shanshan Wang, Dong Liang, et al. BioMed Research International (SCI), 2860643, 2016. Qiegen Liu, Shanshan Wang, et al. IEEE Transactions on Medical Imaging, 2013.

  • Parallel imaging with L2,1-norm adaptive joint sparse coding
  • One-layer and two-layer tight frame learning for MR imaging
  • Improved accuracy of image reconstruction and accelerated convergence

➢ Multi-channel correlation and multi-layer sparse development

  • Figure: achieves 6x acceleration in 2D with the smallest reconstruction error among DL-PI, SparseSENSE, Sparse BLIP, CaLM MRI, the proposed method, and the label.

SLIDE 16

➢ Iterative Feature Refinement-Compressed Sensing (IFR-CS) consists of three main steps:

IFR-CS

Shanshan Wang, Dong Liang, et al. Physics in Medicine and Biology, 2016,61, 3291-3316

Our Work - Iterative Feature Extraction

  • Sparsity-promoting denoising: u = arg min_u ½‖I − u‖₂² + μ‖Ψu‖₁
  • Feature refinement: I_u = u + D ⊗ r, with feature descriptor D and residual image r
  • Tikhonov regularization: I = arg min_I ‖F_p I − f‖₂² + ν‖I − I_u‖₂²
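The feature-refinement step can be sketched in 1D (a toy illustration; the 3-tap smoothing descriptor below is an assumption, not the learned descriptor of IFR-CS):

```python
import numpy as np

def refine(u, residual, descriptor):
    """Add descriptor-filtered residual detail back to the denoised image u."""
    detected = np.convolve(residual, descriptor, mode="same")
    return u + detected

u = np.array([0.0, 1.0, 0.0, 1.0])              # denoised signal
residual = np.array([0.5, 0.0, 0.5, 0.0])       # detail lost during denoising
descriptor = np.array([0.25, 0.5, 0.25])        # assumed smoothing descriptor

print(refine(u, residual, descriptor))          # values: 0.25, 1.25, 0.25, 1.125
```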
SLIDE 17

A patent has been applied for on this technology: 201410452350.1. Physics in Medicine and Biology (SCI).

Our Work - Iterative Feature Extraction ➢ Extracting fine structure and details from the residual image:

detected structure = feature descriptor ⊗ residual image

  • This paper was selected as a Featured Article and a 2016 Research Highlight by Physics in Medicine and Biology.

SLIDE 18

➢ Results:

Figure columns: Reference, IRM-TV, DLMRI, Proposed; rows: reconstruction, error, enlarged region.

Our Work - Iterative Feature Extraction

SLIDE 19

➢ 2nd phase Nonlinear Iterative reconstruction

➢ Pros:

  • Improved image quality
  • Image prior is included in the model
  • Nice theoretical explanation

➢ Cons:

  • Long reconstruction time
  • Limited prior model capacity
  • Hand tuned parameters

2nd phase Sub-summary

SLIDE 20

➢ Deep Learning (DL) MRI beyond Compressed Sensing (CS) (256 citations)

Figure: scan and reconstruction time compared for linear reconstruction, CS-MRI, and DL-MRI.

S. Wang, L. Ying, D. Liang, et al. IEEE International Symposium on Biomedical Imaging 2016: 514-517.

First work in this area, going beyond CS

Offline training learns the network parameters Θ from training data; online reconstruction maps the input to the reconstruction.

SLIDE 21

Combination with CS-MRI reconstruction methods

S Wang, L. Ying, D Liang, “Accelerating Magnetic Resonance Imaging via Deep Learning,” IEEE-ISBI 2016: 514-517.

Sequential model: the network prediction initializes a CS reconstruction model.

Integration model: the network prediction is integrated into the reconstruction of the MR image.

Pioneering work

SLIDE 22

Initial Results (figure columns: reference, sampling mask, initialization, network output, final reconstruction, error)

➢ 3T scanner ➢ 32-channel coil ➢ T1-weighted (spoiled GRE) ➢ TE = minimum full ➢ TR = 7.5 ms ➢ 256×256 matrix ➢ thickness = 17 mm ➢ R = 3

S Wang, L. Ying, D Liang, “Accelerating Magnetic Resonance Imaging via Deep Learning,” IEEE-ISBI 2016: 514-517.

SLIDE 23

➢ Unrolled iterations ➢ Automatic learning of the CS parameters ➢ Relatively reduced reconstruction time ➢ Representative works: ADMM-Net, VN-Net, ISTA-Net, MoDL, dictionary/transform learning, …

Sun, Jian, Huibin Li, and Zongben Xu. "Deep ADMM-Net for compressive sensing MRI." Advances in Neural Information Processing Systems. 2016.

Model based network learning for CSMRI
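The unrolling idea in a few lines (a toy, framework-free sketch; in ADMM-Net/ISTA-Net-style methods the per-layer step sizes and thresholds below are learned from data, here they are fixed made-up values):

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_recon(A, y, steps, thresholds):
    """Each (step, threshold) pair plays the role of one network 'layer'."""
    x = np.zeros(A.shape[1])
    for step, thr in zip(steps, thresholds):
        # one unrolled iteration: gradient step on ||Ax - y||^2, then shrinkage
        x = soft_threshold(x - step * A.T @ (A @ x - y), thr)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50)) / np.sqrt(20)
y = A @ np.eye(50)[0]                        # measurements of a 1-sparse signal

x = unrolled_recon(A, y, steps=[0.1] * 8, thresholds=[0.02] * 8)
print(x.shape)                               # -> (50,)
```

Training would backpropagate through this fixed number of iterations, which is why the reconstruction time is reduced relative to running an iterative solver to convergence.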

SLIDE 24

➢Model based network learning for CS reconstruction ➢End-to-end learning ➢ Image domain end-to-end learning

(MLP, U-NET, GAN, DAGAN, GANCS…)

➢ K-space domain end-to-end learning

(Deep ALOHA, DeepSPIRiT, FCRNN, k-space learning…)

➢ K-space to image-space hybrid learning

(DIMENSION, AUTOMAP, D5C5, SMS…)

Deep Learning Reconstruction

SLIDE 25

Deep Learning based MRI

➢ Single channel validation ➢ Multi-coil MRI ➢ Multi-contrast MRI ➢ Dynamic-MR Imaging

Our research on deep learning based MRI

Model-based CSMRI learning reconstruction

  • Multi-coil measurements
  • Parallel reconstruction
  • One combined measurement
  • Simultaneous multi-contrast MR imaging
  • Complex-valued operation
  • K-I space hybrid learning
  • Single-coil measurements
  • Constrained reconstruction

End-to-end data driven learning reconstruction

SLIDE 26

Complex-valued nature of MRI

S = X + jY,  |S| = √(X² + Y²),  ∠S = tan⁻¹(Y / X)

S(k_x, k_y) = Σ_x Σ_y I(x, y) · e^(−j2π(k_x·x + k_y·y)/N)

Real(I), Imaginary(I), Magnitude(I), Phase(I)

SLIDE 27

Two ways of dealing with complex-valued MR images:

  • 1. Reconstruct the magnitude and the phase respectively.
  • 2. Treat the real and imaginary parts independently.

In both cases, the correlation between the real and imaginary parts is not fully considered. Our Work - Complex-Valued Neural Network

Shanshan Wang, Dong Liang, et al. DeepcomplexMRI, Magnetic Resonance Imaging 68, 136-147. Shanshan Wang, Huitao Cheng, et al. ISMRM, 2018.

Complex-valued convolution needs to be explored!

SLIDE 28

➢ Network framework

Shanshan Wang, Hairong Zheng, Dong Liang, et al. DeepcomplexMRI, Magnetic Resonance Imaging 68, 136-147. Shanshan Wang, Dong Liang, et al. ISMRM, Paris, France, 2018.

Implemented in TensorFlow as real-valued networks whose channels represent the real and imaginary components, with complex-valued operations and initializations.

Our Work - Complex-Valued Neural Network
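The core building block, complex convolution implemented with real-valued operations, can be sketched as follows (a 1D toy version; the actual networks use 2D convolutions over feature maps):

```python
import numpy as np

def complex_conv(a, b, c, d):
    """Convolve input a + jb with kernel c + jd using four real convolutions:
    (a + jb) * (c + jd) = (a*c - b*d) + j(a*d + b*c)."""
    real = np.convolve(a, c) - np.convolve(b, d)
    imag = np.convolve(a, d) + np.convolve(b, c)
    return real, imag

rng = np.random.default_rng(0)
a, b = rng.standard_normal(8), rng.standard_normal(8)   # real/imaginary input
c, d = rng.standard_normal(3), rng.standard_normal(3)   # real/imaginary kernel

real, imag = complex_conv(a, b, c, d)
direct = np.convolve(a + 1j * b, c + 1j * d)            # reference complex conv
print(np.allclose(real + 1j * imag, direct))            # -> True
```

The cross terms (b*d and a*d, b*c) are exactly the real/imaginary coupling that is lost when the two parts are processed independently.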

SLIDE 29

➢ Results: Our Work - Complex-Valued Neural Network

This figure shows the comparison of SPIRiT, L1-SPIRiT, VN and the proposed method with real convolution (rc), with complex convolution at half the parameter count (cc/0.5), and with the same parameter count (cc/1), under 1D random (top two rows) and uniform (bottom two rows) sampling at a sampling rate of 33%.

SLIDE 30

Method         PSNR      RMSE     SSIM
CS-MRI         29.8495   0.1369   0.8483
DCCNN          31.8295   0.1090   0.8747
Complex-DCCNN  32.9362   0.0960   0.8759

This figure shows : (a)the reference image and the under-sampled image; reconstructed images by (b) CS-MRI, (c) DCCNN, (d) proposed Complex-DCCNN and their corresponding error maps. The sampling pattern is 2D Poisson disc and the acceleration factor is 5x.

(a) (b) (c) (d)

Our Work - Complex-Valued Neural Network

➢ Results:
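The PSNR values in the table follow the standard RMSE-based definition; a minimal sketch (assuming images normalized to a peak value of 1.0):

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB from the root-mean-square error."""
    rmse = np.sqrt(np.mean((reference - test) ** 2))
    return 20.0 * np.log10(peak / rmse)

ref = np.zeros((4, 4))
rec = ref + 0.1                    # uniform error of 0.1 -> RMSE = 0.1
print(round(psnr(ref, rec), 6))    # -> 20.0
```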

SLIDE 31

➢ Network framework

Shanshan Wang et al. International Society for Magnetic Resonance in Medicine, 2017.

Our Work - Parallel MR Imaging

SLIDE 32

Shanshan Wang et al. International Society for Magnetic Resonance in Medicine, 2017.

Net acceleration factor = 4

Figure: ground truth vs. Deep Learning, GRAPPA, SPIRiT, L1-SPIRiT (reported values: 7.69, 95.80, 48.27, 40.48).

Our Work - Parallel MR Imaging

➢ Results:

SLIDE 33

➢ Theory

Our Work –Parallel imaging

  • A de-aliasing parallel imaging model with both spatial redundancy and multi-coil correlation explored with learned filters

arg min_X ½‖AX − Y‖₂² + λ_s · φ(Ψ_s X) + λ_coils · φ(Ψ_coils X)

with one regularization term exploring spatial redundancy (learned filters Ψ_s) and one exploring multi-coil correlation (learned filters Ψ_coils).

Yanxia Chen, Shanshan Wang*, et al. MICCAI 2019. Oral (Top 3%)

Code: https://github.com/yanxiachen/ConvDe-AliasingNet. Calibration-free: no explicit sensitivity calculation.

SLIDE 34

➢ Results: Our Work - Parallel MR Imaging

PSNR/SSIM

Yanxia Chen, Shanshan Wang*, MICCAI 2019.

SLIDE 35

MEDAEP multi-contrast MRI reconstruction

Figure: undersampled k-space data x^k (with noise) are fed to the trained EDAEP model; prior gradients from the real and imaginary parts are averaged into a final prior gradient, which drives an LS-solver update from x^k to x^(k+1), with m loops per iteration.

Xiangshun Liu, Shanshan Wang*, et al. ISBI 2020.

SLIDE 36

Our Work - Multi-Contrast MR Imaging

Method       PD (PSNR/SSIM)   T1 (PSNR/SSIM)   T2 (PSNR/SSIM)
zero-filled  24.97/0.658      24.48/0.618      28.89/0.697
DLMRI        36.90/0.927      37.65/0.913      38.75/0.939
proposed     42.51/0.977      42.96/0.977      45.54/0.984

Ground truth PD, T1 and T2, random sampling at 6.7x acceleration.

SLIDE 37

➢ KI-Net: cascaded hybrid k-space and image-space learning for dynamic imaging ➢ Methodology: K-Net and I-Net are cascaded; prior knowledge is learned from both k-space and image space, and a TV constraint is adopted in the loss function for better dynamic heart imaging.

Shanshan Wang, Ziwen Ke, et al. International Society for Magnetic Resonance in Medicine, 2018.

Our Work - Dynamic MR Reconstruction

SLIDE 38

Method   MSE        PSNR      SSIM
Input    0.004213   23.7540   0.7746
D2C2     0.000406   33.9164   0.9708
KI       0.000223   36.5118   0.9818

Figure: reference, undersampled image, D2C2 recon and error, KI recon and error.

Shanshan Wang, Ziwen Ke, et al. International Society for Magnetic Resonance in Medicine, 2018.

➢ Results: Our Work - Dynamic MR Reconstruction

SLIDE 39

Shanshan Wang, Ziwen Ke, Hairong Zheng, Dong Liang, et al. NMR in Biomedicine, DOI: 10.1002/nbm.4131.

Our Work - Dynamic MR Reconstruction ➢ DIMENSION: Dynamic MR Imaging with Both K-space and Spatial Prior Knowledge Obtained via Multi-Supervised Network Training.

Code: https://github.com/Keziwen/DIMENSION

SLIDE 40

Our Work - Dynamic MR Reconstruction

Shanshan Wang, Hairong Zheng, Dong Liang, et al. NMR in Biomedicine, DOI: 10.1002/nbm.4131.

SLIDE 41

➢ 3rd phase Deep learning MR reconstruction

➢ Pros:

  • Improved image quality
  • Short reconstruction time
  • Big capacity prior is included in the model

➢ Cons:

  • Poor theoretical explanation
  • Complex hyperparameter and structure settings

3rd phase Sub-summary

The reconstruction research is still ongoing

SLIDE 42

➢ Handling hard samples ➢ Exploring multi-modality ➢ Designing light network

MR image analysis

Motivation and focus:
  • Issue 1: samples with a large range of stroke lesion scales
  • Issue 2: multi-modality data (T1C, T2W)
  • Issue 3: deep neural networks with a massive number of parameters vs. light networks

SLIDE 43

➢ Challenges

  • Large range of stroke lesion scales
  • The tissue intensity similarity
  • The insufficient use of multi-scale features and context information by the popular encoder-decoder structure

Figure: encoder-decoder (0.5x downsampling / 2x upsampling stages), T1 input and ground truth.

Stroke Segmentation

SLIDE 44

➢ Proposed CLCI-Net: Cross-Level fusion and Context Inference Network

➢ Contribution:

  • CLF: full use of the different levels of features
  • Extension of ASPP: addresses the challenge of a large variety of lesion scales
  • Context inference: more fine structures captured

Our Work - Stroke Segmentation with CLCI-Net

Hao Yang & Shanshan Wang*, MICCAI 2019.

Figure: (a) Cross-Level fusion and Context Inference Network (CLCI-Net), built from Conv 1×1 / Conv 3×3 + BN + ReLU blocks, ConvLSTM units, max pooling, up-convolutions 2×2, and strided 3×3 convolutions (stride 2/4/8/16); (b) Cross-Level Fusion (CLF) via concatenation and Conv 1×1; (c) extension of Atrous Spatial Pyramid Pooling (ASPP) with 3×3 convolutions at rates 2, 4 and 6, image pooling, and a final Conv 1×1 with sigmoid.

SLIDE 45

GT Ours Baseline DenseUnet DeepLabv3+ PSPNet FCN-8s T1

➢ Results:

Figure: lesions with different scales.

Our Work - Stroke Segmentation with CLCI-Net

Hao Yang & Shanshan Wang*, MICCAI 2019.

SLIDE 46

➢ Limitation of the proposed CLCI-Net: a large number of parameters

Our Work - Stroke Segmentation with X-Net

➢ Contribution:

  • Adoption of depthwise convolution: reduces the network size.
  • Feature Similarity Module (FSM): captures long-range dependencies.

Kehan Qi & Shanshan Wang*, MICCAI 2019.

Figure: encoder-decoder with X-Blocks and FSM; max pooling 2×2, upsampling 2×2 + Conv 3×3, skip-and-concatenate connections; channel widths 64-128-256-512-1024-512-256-128-64.

  • We further proposed X-Net, with a reduced model size and without compromising the performance.

SLIDE 47

➢ Results: Our Work - Stroke Segmentation with X-Net

Kehan Qi & Shanshan Wang*, MICCAI 2019. Figure columns: U-Net, SegNet, PSPNet, DenseUNet, Deeplab v3+, ResUNet, Ours, Ground truth, MR.

SLIDE 48

Our Work - Stroke Segmentation with D-UNet

➢ Limitations of pure 2D or 3D networks: missing spatial context or heavy resource occupation

Yongjin Zhou & Shanshan Wang*, IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2019.

  • Considering that 2D-based networks ignore spatial context information and 3D-based networks require too many computing resources, we propose D-UNet.

Figure: (a) D-UNet architecture (input 192×192×1) with 2D and 3D branches built from Conv2d / Conv3d + BN, max pooling, up-sampling, and concatenation; (b) Dimension Transform Block: a Squeeze-and-Excitation block (global pooling → FC → ReLU → FC → sigmoid → multiply, channels C reduced to C/r) combined with a Conv3d 1×1×1 squeeze and a Conv2d 3×3 that map H×W×D×C features to H×W×C, followed by an add.

➢ Contribution:

  • Dimensional fusion: extracts 3D information from the MRI data while reducing resource consumption.
  • EML (Enhanced Mixing Loss): reduces the time required to train the network.

SLIDE 49

Our Work - Stroke Segmentation with D-UNet

Yongjin Zhou & Shanshan Wang*, IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2019.

       − − − = − − =

 

= =

b f

N i N i

  • therwise

p p g if p p g p FL

1 1

), 1 log( ) 1 ( 1 ), log( ) 1 ( ) , (

 

 

( )

 

= =

+ + +  − =

N i i i N i i i

g p g p g p DL

1 2 2 1

2 1 ) , (  

)) , ( log( ) , ( 1 ) , ( g p DL g p FL N g p EML − =

➢ Traditional Focal Loss(FL) and Dice Loss(DL): ➢ Proposed Enhance Mixing Loss(EML):
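A runnable sketch of the two traditional losses that EML mixes (the focal exponent γ and the smoothing term ε below are assumed values, not necessarily those of the paper):

```python
import numpy as np

def focal_loss(p, g, gamma=2.0):
    """Focal loss summed over pixels; p are predictions, g binary labels."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -np.sum(g * (1 - p) ** gamma * np.log(p)
                   + (1 - g) * p ** gamma * np.log(1 - p))

def dice_loss(p, g, eps=1e-7):
    """1 minus the (smoothed) Dice overlap between prediction and labels."""
    return 1.0 - (2.0 * np.sum(p * g) + eps) / (np.sum(p ** 2) + np.sum(g ** 2) + eps)

g = np.array([1.0, 1.0, 0.0, 0.0])
good = np.array([0.9, 0.8, 0.1, 0.2])   # confident, mostly correct prediction
bad = np.array([0.3, 0.4, 0.6, 0.5])    # poor prediction

# Both component losses should be lower for the better prediction:
print(dice_loss(good, g) < dice_loss(bad, g))     # -> True
print(focal_loss(good, g) < focal_loss(bad, g))   # -> True
```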

SLIDE 50

Figure columns: Ground Truth, Ours, UNet, Deeplab v3+, PSPNet, Original, SegNet; rows (1)-(7).

➢ Results:

Yongjin Zhou & Shanshan Wang*, IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2019.

Our Work - Stroke Segmentation with D-UNet

Method       DSC           Precision     Recall
SegNet       0.329±0.251   0.385±0.288   0.332±0.265
PSP          0.446±0.263   0.500±0.291   0.470±0.278
Deeplab v3+  0.453±0.291   0.563±0.325   0.446±0.303
UNet         0.497±0.291   0.551±0.330   0.504±0.304
D-UNet       0.535±0.276   0.633±0.296   0.524±0.291
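The DSC column is the Dice similarity coefficient; for binary segmentation masks it can be sketched as:

```python
import numpy as np

def dsc(pred, truth):
    """Dice similarity coefficient: 2*|A n B| / (|A| + |B|)."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

pred = np.array([1, 1, 0, 0], dtype=bool)
truth = np.array([1, 0, 1, 0], dtype=bool)
print(dsc(pred, truth))   # -> 0.5
```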

SLIDE 51

➢ Multi-modal breast MRI: highly sensitive contrast-enhanced MRI (T1C) + highly specific T2-weighted MRI (T2W) → accurate diagnosis. ➢ Challenge in multi-modal image segmentation: effective multi-modal information fusion.

Figure: organs, dense glandular tissues.

Mass Segmentation on Multi-modality MRs

Cheng Li & Shanshan Wang*, MICCAI 2019.

slide-52
SLIDE 52

➢ Supervised image fusion selectively fuses the useful information from the different modalities and suppresses the respective noise signals.

Figure: overall architecture, with a master-assistant cross-modal learning framework and a cross-modal supervision learning module.

Multi-modal MR Image Segmentation

Cheng Li & Shanshan Wang*, MICCAI 2019.

slide-53
SLIDE 53

➢ Results on a two-modal in-vivo breast MR dataset (contrast-enhanced T1W & T2W)

  • Better segmentation performance achieved.
  • Cross-modal supervision is important.
  • Attention maps learned from the respective modalities: red arrows mark correct highlights, blue arrows mark incorrect highlights.

Multi-modal MR Image Segmentation

Cheng Li & Shanshan Wang*, MICCAI 2019.

slide-54
SLIDE 54

Challenges and Existing Methods

Challenges:
  • 1. LVSI can only be determined through invasive pathological diagnosis
  • 2. Slight differences in the MRI images
  • 3. Lack of MRI image data

Existing methods:
  • 1. Radiomics analysis
  • 2. Radiomics combined with clinical parameters

Limitations:
  • 1. Cannot make full use of the existing image data
  • 2. Difficulties in collecting complete clinical parameters

[1] Afshar P, Mohammadi A, Plataniotis K N, et al. From Hand-Crafted to Deep Learning-based Cancer Radiomics: Challenges and Opportunities[J]. arXiv preprint arXiv:1808.07954, 2018.

Two types of tumor

LVSI Prediction in Early-stage Cervical Cancer

slide-55
SLIDE 55

Contributions and Methods


Overall workflow

  • 1. Simultaneously explored deep learning and radiomics features for LVSI prediction.
  • 2. Multi-modal feature extraction facilitated more comprehensive representation.
  • 3. Transfer learning was applied to cope with the lack of image data.
  • 4. An ensemble of classifiers was utilized to boost the prediction performance.

Xiran Jiang, Shanshan Wang*, et al. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2019. Wenqing Hua, Shanshan Wang*, et al. Biomedical Signal Processing and Control, 2020.

Explore both tumoral and peri-tumoral; Utilize Multimodality

slide-56
SLIDE 56

Research center named after the Nobel laureate Paul C. Lauterbur. Scanners: Siemens 3T, United Imaging 3T.

Research bases

➢ Shenzhen Biomedical Imaging Technology Engineering Laboratory ➢ Shenzhen Key Laboratory of Magnetic Resonance Imaging ➢ Guangdong Province Biological Medical Equipment Technology Innovation Platform

Research projects

➢ The National 973 Program ➢ National Key Technology R&D Program ➢ The National Natural Science Foundation of China ➢ The Chinese Academy of Sciences ➢ …

Cooperation company

Paul C. Lauterbur Research Center

slide-57
SLIDE 57

My SIAT Team:

➢ Cheng Li ➢ Taohui Xiao ➢ Yanxia Chen ➢ Hao Yang ➢ Kehan Qi ➢ Haoyun Liang ➢ Chuyu Rong ➢ Weijian Huang ➢ Yu Gong ➢ Zhenkun Peng ➢ Chen Hu ➢ Rui Yang ➢ Juan Zou

Collaborators:

➢ Hairong Zheng (Chinese Academy of Sciences) ➢ David Dagan Feng (The University of Sydney) ➢ Dong Liang (Chinese Academy of Sciences) ➢ Leslie Ying (University at Buffalo, The State University of New York) ➢ Qiegen Liu (Nanchang University) ➢ Yong Xia (Northwestern Polytechnical University) ➢ Meiyun Wang (Henan Provincial People's Hospital) ➢ Zaiyi Liu (Guangdong Provincial People's Hospital) ➢ Xiran Jiang (China Medical University) ➢ Yuemin Zhu (University of Lyon 1) ➢ Wanqing Li (University of Wollongong)

Fundings:

➢ National Natural Science Foundation of China ➢ Natural Science Foundation of Guangdong Province ➢ Shenzhen Science and Technology Innovation Committee ➢ …

Acknowledgement

slide-58
SLIDE 58

Code: http://www.escience.cn/people/ShanshanWang/project73837.html