High quality ultrasonic multi-line transmission through deep learning (PowerPoint presentation)


  1. High quality ultrasonic multi-line transmission through deep learning. Sanketh Vedula, Technion, Israel. Machine Learning for Medical Image Reconstruction Workshop.

  2. Joint work with: Ortal Senouf (Technion), Grigoriy Zurakhov (Technion), Alex Bronstein (Technion), Oleg Michailovich (U of Waterloo), Michael Zibulevsky (Technion), Dan Adam (Technion), and Diana Gaitini (Technion). This research was partially supported by ERC StG RAPID.

  3. Ultrasound imaging • Non-invasive, cheap, no ionising radiation, and recently, also portable. • But all of the above advantages come at the expense of image quality. • “Point-of-care ultrasound (POCUS): the visual stethoscope of the 21st century” [Gillman et al., 2012]. [Gillman et al., 2012] Portable bedside ultrasound: the visual stethoscope of the 21st century, J. Trauma Resusc. Emerg. Med., 2012. Image credits: GE VScan, Butterfly I/Q

  4. Ultrasound acquisition pipeline: Full/reduced transmissions → Receive (Rx) → Time-delay & phase rotation → Apodization / element summation → Envelope detection → Reconstruction / artifact correction → Post-processing → Ultrasound image.

  5. Ultrasound acquisition pipeline (as on slide 4). Our goal: design end-to-end, fast, learning-based algorithms to improve the quality of point-of-care ultrasound imaging (POCUS).

  6. Ultrasound acquisition pipeline (as on slide 4), with prior work [Vedula et al., 2017] marked on the pipeline. Our goal: design end-to-end, fast, learning-based algorithms to improve the quality of point-of-care ultrasound imaging (POCUS). [Vedula et al., 2017] Towards CT-quality ultrasound imaging with deep learning, arXiv:1710.06304.

  7. Ultrasound acquisition pipeline (as on slide 4), with prior work [Vedula et al., 2017] and the stages addressed by this work & [Senouf et al., 2018] marked on the pipeline. Our goal: design end-to-end, fast, learning-based algorithms to improve the quality of point-of-care ultrasound imaging (POCUS). [Vedula et al., 2017] Towards CT-quality ultrasound imaging with deep learning, arXiv:1710.06304. [Senouf et al., 2018] High frame-rate cardiac ultrasound imaging using deep learning, MICCAI 2018.
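
To make the receive-side stages of the pipeline concrete, here is a minimal NumPy sketch of delay-and-sum beamforming for a single scan line, covering time-delay, apodization/element summation, and envelope detection. It is an illustrative sketch only, not the authors' implementation; the array names, shapes, sampling frequency, and the nearest-sample delay approximation are assumptions.

```python
# Minimal sketch of the receive-side pipeline stages for one scan line
# (illustrative only, not the authors' code). `rf` stands in for received
# per-element RF traces and `delays_s` for pre-computed focusing delays.
import numpy as np
from scipy.signal import hilbert
from scipy.signal.windows import tukey

fs = 40e6                                      # sampling frequency [Hz] (assumed)
n_elements, n_samples = 64, 2048
rf = np.random.randn(n_elements, n_samples)    # stand-in for received RF data
delays_s = np.zeros(n_elements)                # stand-in focusing delays [s]

# 1) Time-delay: shift each element trace by its focusing delay (nearest sample).
shifts = np.round(delays_s * fs).astype(int)
delayed = np.stack([np.roll(rf[i], -shifts[i]) for i in range(n_elements)])

# 2) Apodization and summation over elements (Tukey window across the aperture).
apod = tukey(n_elements, alpha=0.5)
beamformed = (apod[:, None] * delayed).sum(axis=0)

# 3) Envelope detection via the analytic signal, followed by log compression.
envelope = np.abs(hilbert(beamformed))
bmode_line = 20 * np.log10(envelope / envelope.max() + 1e-12)
```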

  8. Motivation: a high frame-rate is crucial when performing • Echocardiography: for functional analysis of the heart • 3D-sonography: to scan large volumes. Existing methods for increasing the frame-rate provide low-quality images that suffer from poor resolution and contrast.

  9. Multi-line transmission (MLT) [Mallart et al., 1992]: improved imaging rate through simultaneous transmission of several ultrasound beams.
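
A rough back-of-the-envelope calculation shows why simultaneous transmits raise the frame-rate: each transmit event costs one round trip to the imaging depth, so firing k beams at once divides the acquisition time per frame by k. The numbers below (depth, line count, speed of sound) are illustrative assumptions, not values from the talk.

```python
# Back-of-the-envelope frame-rate gain from k-MLT (illustrative numbers only).
depth_m = 0.15          # imaging depth [m] (assumed)
c = 1540.0              # speed of sound in tissue [m/s]
n_lines = 128           # scan lines per frame (assumed)

t_event = 2 * depth_m / c            # round-trip time of one transmit event [s]
for k in (1, 4, 6):                  # k = 1 corresponds to conventional SLT
    fps = 1.0 / (t_event * n_lines / k)
    print(f"{k}-MLT: ~{fps:.0f} frames/s")
```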

  10. MLT artifacts (figure: single-line transmission vs. multi-line transmission on a linear phased array of transducers). Loss of contrast due to cross-talk between the lines transmitted simultaneously.

  11. Traditional methods • Constant and adaptive apodization (Tukey, α = 0.5) [Tong et al., 2013] • Filtered delay-multiply-and-sum (FDMAS) [Matrone et al., 2017]. Limitations: • Apodization affects resolution • FDMAS exhibits poor contrast-to-noise ratio. [Tong et al., 2013] Multi-transmit beam forming for fast cardiac imaging - a simulation study, IEEE T-UFFC, 2013. [Matrone et al., 2017] High frame-rate, high resolution ultrasound imaging with multi-line transmission and filtered-delay multiply and sum beamforming, IEEE TMI, 2017.
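
For reference, the delay-multiply-and-sum idea behind FDMAS can be sketched as follows: signed square roots of the time-aligned element signals are multiplied pairwise, summed over distinct element pairs, and band-pass filtered around twice the transmit frequency. This is a generic sketch of the technique described by Matrone et al., not their code; the filter order and passband edges are assumptions.

```python
# Rough sketch of (filtered) delay-multiply-and-sum beamforming on
# already time-aligned element data. Filter parameters are placeholders.
import numpy as np
from scipy.signal import butter, filtfilt

def fdmas(delayed, fs, f0):
    """delayed: (n_elements, n_samples) time-aligned RF traces for one line."""
    s = np.sign(delayed) * np.sqrt(np.abs(delayed))   # signed square root
    n = delayed.shape[0]
    y = np.zeros(delayed.shape[1])
    for i in range(n):                 # sum of products over distinct pairs i < j
        for j in range(i + 1, n):
            y += s[i] * s[j]
    # Keep the spectral content around 2*f0, where the DMAS signal of interest lies.
    b, a = butter(4, [1.5 * f0 / (fs / 2), 2.5 * f0 / (fs / 2)], btype="band")
    return filtfilt(b, a, y)
```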

  12. Proposed CNN-based pipeline. Inputs: time-delayed MLT I/Q data (M x N x 64 elements). An apodization layer applies element-wise weights, followed by summation over the elements. The result is processed by a multi-scale encoder-decoder built from 3x3 convolutions (stride 1) and strided 3x3 convolutions, each followed by BatchNorm and ReLU, with average pooling, concatenations and skip connections between matching scales; with b bifurcations, the coarsest feature maps have size M/2^b x N/2^b x 64*2^b*2^b. Outputs: the corresponding SLT I/Q images.
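
Below is a schematic PyTorch sketch of such a pipeline: a learnable element-wise apodization layer, summation over the 64 element channels, and a small strided-convolution encoder-decoder with skip connections. The layer widths, depth, and the single-channel output are illustrative assumptions and do not reproduce the exact published architecture (which operates on I/Q components and follows the bifurcation scheme shown on the slide).

```python
# Schematic sketch of the described pipeline, under assumed layer widths.
import torch
import torch.nn as nn

class ApodizationLayer(nn.Module):
    """Learned element-wise apodization weights, then summation over elements."""
    def __init__(self, n_elements=64):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(1, n_elements, 1, 1))

    def forward(self, x):                 # x: (batch, 64, M, N) time-delayed data
        return (x * self.weights).sum(dim=1, keepdim=True)

def conv_block(cin, cout, stride=1):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, stride=stride, padding=1),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class MLTCorrectionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.apod = ApodizationLayer(64)
        self.enc1 = conv_block(1, 64)
        self.down1 = conv_block(64, 128, stride=2)    # strided conv: M/2 x N/2
        self.enc2 = conv_block(128, 128)
        self.down2 = conv_block(128, 256, stride=2)   # strided conv: M/4 x N/4
        self.bottleneck = conv_block(256, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = conv_block(256, 128)              # 256 = 128 (up) + 128 (skip)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = conv_block(128, 64)               # 128 = 64 (up) + 64 (skip)
        self.out = nn.Conv2d(64, 1, 1)                # decorrupted SLT-like image

    def forward(self, x):
        x = self.apod(x)
        e1 = self.enc1(x)
        e2 = self.enc2(self.down1(e1))
        bott = self.bottleneck(self.down2(e2))
        d2 = self.dec2(torch.cat([self.up2(bott), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)
```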

  13. Dataset • In-vivo data was collected from quasi-static organs (e.g. bladder, kidney) from 6 volunteers. • MLT data (with beam-separation angle of at least 15 degrees) can be approximated through summation of the corresponding sequentially transmitted lines from SLT [Prieur et al., 2013]. • In total, 750 frames were used for training. [Prieur et al., 2013] Multi-Line Transmission in Medical Imaging Using the Second-Harmonic Signal, IEEE UFFC, 2013
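
Since echoes superpose linearly, k-MLT data can be approximated from an SLT acquisition by summing the raw receive (channel) data of the k transmits that would fire together, and then receive-beamforming the sum toward each of the k directions. The sketch below illustrates this approximation; the data layout, shapes, and the sector-wise line grouping are assumptions for illustration, not the authors' exact procedure.

```python
# Sketch of approximating k-MLT data from sequential SLT transmits by summing
# their raw receive (channel) data, per the linear-superposition approximation
# of [Prieur et al., 2013]. Data layout and line grouping are hypothetical.
import numpy as np

def simulate_mlt_event(slt_channel_data, line_indices):
    """
    slt_channel_data: (n_lines, n_elements, n_samples) raw receive data,
                      one record per sequential SLT transmit.
    line_indices:     the k lines fired together in one MLT event (chosen so
                      that the beam-separation angle is at least 15 degrees).
    Returns the simulated channel data recorded during that MLT event; receive
    beamforming the result toward each of the k directions then yields the k
    cross-talk-contaminated MLT lines.
    """
    return slt_channel_data[list(line_indices)].sum(axis=0)

# Illustrative usage for 4-MLT with 128 lines split into 4 sectors of 32 lines:
# the first event fires lines 0, 32, 64 and 96 simultaneously.
n_lines, k = 128, 4
slt_channel_data = np.zeros((n_lines, 64, 2048), dtype=np.complex64)  # stand-in
event0 = simulate_mlt_event(slt_channel_data, range(0, n_lines, n_lines // k))
```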

  14. Results (phantom)

  15. Results (phantom)

  16. Results (in-vivo)

  17. Results (in-vivo). The cine-loop of a scan taken from the abdominal region.

  18. Results (in-vivo). The cine-loop of a scan taken from the abdominal region.

  19. Takeaway (this paper) • We proposed a CNN-based approach for MLT artifact correction. • This is the first use of deep learning for MLT artifact correction. • The proposed approach reconstructs the SLT data well and generalises to new patients, new anatomies, and phantom data.
