BE PART OF THE REVOLUTION TRANSFORMING HEALTHCARE WITH AI
AIMed19, The Ritz-Carlton, Laguna Niguel, California, 11–14 December 2019
1,000 attendees | 80 speakers | 10 workshops | 2 social events
www.aimed.events/northamerica-2019/  #AIMed19
AIMed North America, California, 11–14 December 2019
Teaching AI to Clinicians
Dennis P. Wall, Associate Professor, Stanford University
dpwall@stanford.edu | Wall-lab@Stanford.edu | @dpwall00
www.aimed.events/northamerica-2019/
Understand common failures (…but it's not just about what can go wrong)
• Diagnostic: failure to order appropriate tests or to properly interpret test results; use of outdated tests; wrong diagnosis or delay of accurate diagnosis; and failure to act on test results.
• Treatment: choosing suboptimal, outdated or wrong therapies; errors in administering the treatment; errors of medication dosing; and treatment delays.
• Prevention: failures in preventive follow-up and administration of prophylactic therapies such as vaccinations.
• Other: errors involving communication or equipment failures, etc.
Rising innovations
• Predicting atrial fibrillation and preventing heart attacks
• Diagnosing stroke or autism and interpreting electroencephalographic (EEG) recordings
• Avoiding low oxygenation during surgery
• Finding suitable clinical trials for oncologists
• Selecting viable embryos for in vitro fertilization
• Pre-empting surgery for patients with breast cancer
AI is common. Gone in 90 seconds…
Looking under the hood: PyTorch
Build a CNN

Building blocks:
torch.nn.Conv2d(in_channels, out_channels, kernel_size)
nn.MaxPool2d(kernel_size)
nn.Linear

    self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
    self.mp = nn.MaxPool2d(2)
    self.fc = nn.Linear(320, 10)
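The 320 passed to nn.Linear(320, 10) is not arbitrary; it can be checked by pushing a dummy input through the two conv/pool stages. The sketch below is only a sanity check and assumes the 28x28 MNIST digits used in the full example on the next slide.

    import torch
    import torch.nn as nn

    x = torch.randn(1, 1, 28, 28)             # one grayscale 28x28 digit (assumed MNIST input)
    conv1 = nn.Conv2d(1, 10, kernel_size=5)   # 28x28 -> 24x24
    conv2 = nn.Conv2d(10, 20, kernel_size=5)  # 12x12 -> 8x8
    mp = nn.MaxPool2d(2)                      # halves height and width

    x = mp(conv1(x))                          # shape: (1, 10, 12, 12)
    x = mp(conv2(x))                          # shape: (1, 20, 4, 4)
    print(x.view(1, -1).shape)                # torch.Size([1, 320]) -> hence nn.Linear(320, 10)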
Build a CNN in 60 minutes…

    import torch.nn as nn
    import torch.nn.functional as F

    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
            self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
            self.mp = nn.MaxPool2d(2)
            self.fc = nn.Linear(320, 10)   # 320 -> 10

        def forward(self, x):
            in_size = x.size(0)
            x = F.relu(self.mp(self.conv1(x)))   # conv -> pool -> relu
            x = F.relu(self.mp(self.conv2(x)))
            x = x.view(in_size, -1)              # flatten the tensor
            x = self.fc(x)
            return F.log_softmax(x, dim=1)

Training output, epoch 9 (excerpt):

    Train Epoch: 9 [46080/60000 (77%)] Loss: 0.108415
    ...
    Train Epoch: 9 [59520/60000 (99%)] Loss: 0.026360
    Test set: Average loss: 0.0483, Accuracy: 9846/10000 (98%)
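The epoch-9 log and 98% test accuracy above come from a standard MNIST training loop. A minimal sketch of such a loop follows; the dataset source, optimizer, and hyperparameters are assumptions for illustration, not taken from the slide.

    import torch
    import torch.nn.functional as F
    from torch import optim
    from torchvision import datasets, transforms

    # Assumed setup: MNIST via torchvision, SGD with momentum, batch size 64.
    train_loader = torch.utils.data.DataLoader(
        datasets.MNIST('./data', train=True, download=True,
                       transform=transforms.ToTensor()),
        batch_size=64, shuffle=True)

    model = Net()   # the Net class from the slide above
    optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

    for epoch in range(1, 10):
        for batch_idx, (data, target) in enumerate(train_loader):
            optimizer.zero_grad()
            output = model(data)
            loss = F.nll_loss(output, target)   # pairs with log_softmax in forward()
            loss.backward()
            optimizer.step()
            if batch_idx % 10 == 0:
                print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                    epoch, batch_idx * len(data), len(train_loader.dataset),
                    100. * batch_idx / len(train_loader), loss.item()))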
Inception v3
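The slide names Inception v3 without code. As a hedged illustration, an ImageNet-pretrained copy can be loaded from torchvision in a few lines; this is a generic sketch, not the pipeline used in any study mentioned here, and the pretrained=True flag reflects the torchvision API of that era.

    import torch
    from torchvision import models

    # Load ImageNet-pretrained Inception v3 (assumes torchvision is installed).
    model = models.inception_v3(pretrained=True)
    model.eval()

    # Inception v3 expects 299x299 RGB input.
    x = torch.randn(1, 3, 299, 299)
    with torch.no_grad():
        logits = model(x)
    print(logits.shape)   # torch.Size([1, 1000]) -> ImageNet class scores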
Object Detection/Segmentation
AI failures and false alarms
"Apple Watch Spots Heart Issues, With Limits": atrial fibrillation (AFib) notifications can raise false alarms, especially when screening a mostly healthy population.
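To see why false alarms are hard to avoid, consider a back-of-the-envelope positive predictive value calculation. The sensitivity, specificity, and prevalence below are purely hypothetical illustrations, not Apple Watch figures.

    # Hypothetical screening numbers (NOT Apple Watch figures) to show why
    # false alarms dominate when prevalence is low.
    sensitivity = 0.98   # P(alert | AFib)        -- assumed
    specificity = 0.98   # P(no alert | no AFib)  -- assumed
    prevalence  = 0.01   # P(AFib) in the screened population -- assumed

    true_pos  = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)
    print(f"PPV = {ppv:.2f}")   # ~0.33: roughly two of every three alerts are false alarms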
Object detection and segmentation: probabilities
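Modern detectors attach a confidence score, effectively a probability, to every object they find. The sketch below uses torchvision's pretrained Mask R-CNN purely as a stand-in, since the slide does not name a specific model.

    import torch
    from torchvision import models

    # Pretrained Mask R-CNN for joint detection and instance segmentation.
    model = models.detection.maskrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    image = torch.rand(3, 480, 640)        # stand-in for a real RGB image scaled to [0, 1]
    with torch.no_grad():
        predictions = model([image])[0]     # one dict per input image

    # Each detection comes with a box, a class label, a mask, and a probability.
    for box, label, score in zip(predictions['boxes'],
                                 predictions['labels'],
                                 predictions['scores']):
        if score > 0.5:                     # keep only confident detections
            print(label.item(), score.item(), box.tolist())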
Hot Dog vs. Not Hot Dog: Garbage In, Garbage Out
Therapy-to-data feedback loop
[Diagram: game play (parent guesses, child acts) generates game play data; the data are logged and fed into computer vision libraries]
Interpretation
[Figure: session timeline from T=0 to T=90 with rest and video segments, start and end markers α and β, and annotated events: face changed, acknowledgement, prompt changed]
Transfer learning
• Start from a model trained on a large dataset
• Freeze the lower layers
• Retrain the last fully connected (fc) layer on a smaller custom training set, ~1,800 frames per emotion (see the sketch below)
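A minimal sketch of that recipe in PyTorch follows. It assumes an ImageNet-pretrained ResNet-18 backbone and an illustrative count of 8 emotion classes; neither choice is specified on the slide.

    import torch.nn as nn
    from torchvision import models

    num_emotions = 8   # illustrative number of emotion classes (assumption)

    # Start from a model trained on a large dataset (ImageNet).
    model = models.resnet18(pretrained=True)

    # Freeze the lower layers so only the new head is updated.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the last fully connected layer; its weights are trainable by default.
    model.fc = nn.Linear(model.fc.in_features, num_emotions)

    trainable = [p for p in model.parameters() if p.requires_grad]
    # optimizer = torch.optim.Adam(trainable, lr=1e-3)   # then fine-tune on the custom frame set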
Performance improving…
[Chart: new FCNN Glass model vs. consumer models*]
* guesswhat.stanford.edu
Think like a data scientist
Precision health with AI
Think like a data scientist
Precision health with AI
• Inefficiencies
• Data loss
• Repetition
• Fatigue
• Waiting lists
Translational opportunity space: use a conceptual framework
Understand the regulatory structure
Breakthroughs and iterative design
• Early FDA dialogue is essential (Breakthrough designation helps)
• Personalized control: stakeholder-driven design with stakeholder input
• Vet the AI against a consistent set of reference data
• Adaptability and retest performance
Precision pediatric health in your hands
REASONS
• Privacy
• Control
• Continuity
• Transparency
• Scale
• Utility
• Better health
Ensemble classification can boost the AI

A. Mobile video rater platform: a video (~4 mins) is uploaded and raters score it, rating the quality of the child's social initiations (Excellent / Good / Satisfactory / Poor / N/A).
B. Classifiers: features are extracted and run through each classifier.
C. Autism risk classification: ASD vs. non-ASD.

Classifier   Sensitivity   Specificity
ADTree8      100%          22.4%
ADTree7      94.5%         37.3%
SVM5         100%          54.9%
LR5          94.5%         77.4%
LR9          100%          31.4%
SVM12        100%          0%
SVM10        100%          17.6%
LR10         100%          51.0%
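Each individual classifier trades sensitivity against specificity, and combining their outputs can outperform any single one. Below is a generic soft-voting sketch with scikit-learn, included only as an illustration; it is not the feature set, models, or combination rule used in the study on the slide.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Stand-in data: rows = videos, columns = rater-scored behavioral features.
    X, y = make_classification(n_samples=300, n_features=10, random_state=0)

    # Combine probability estimates from several classifiers (soft voting).
    ensemble = VotingClassifier(
        estimators=[
            ('lr', LogisticRegression(max_iter=1000)),
            ('svm', SVC(probability=True)),
            ('tree', DecisionTreeClassifier(max_depth=3)),
        ],
        voting='soft')
    ensemble.fit(X, y)
    print(ensemble.predict_proba(X[:5]))   # per-video class probabilities (ASD vs. non-ASD stand-in)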
Creating an action-to-data feedback loop
[Diagram: render decision (Dx) → expand labeled image database → build a new model → repeat]
Take-home messages for clinical AI
• AI is universal and not that hard
• Prevent common errors
• Improve speed of diagnosis
• Improve quality of treatment and care flow
• Enable remote reach
• Embrace the innovations
• Understand probabilities
• Become a data science innovator
• Understand FDA practices
Thank you! https://wall-lab.stanford.edu/ dpwall@stanford.edu
ACKNOWLEDGEMENTS
Recommender systems
Precision health with AI