Watson Cloud Platform Strategic Customer Success

Deep Learning Inferencing on IBM Cloud with NVIDIA TensorRT

Khoa Huynh – Senior Technical Staff Member (STSM), IBM
Larry Brown – Senior Software Engineer, IBM
Agenda

§ Introduction
§ Inferencing with PyCaffe
§ TensorRT Overview
§ TensorRT Implementation
§ Performance Results
§ Conclusions
§ Q & A
Introduction

§ AI, especially deep learning, has seen rapid advancement in recent years
§ Initial focus was on image processing, expanding to natural language, different neural network models, recurrent networks, and DL frameworks
§ Much attention has gone to developing networks and training models
  – Very compute-intensive; GPUs are nearly a necessity
§ As DL becomes mainstream, the focus is shifting to inferencing (use of the trained network)
§ An inferencing cloud service could handle requests from multiple users
  – One request at a time, or collect a batch of requests to inference at once
  – But it can only wait a short time to fill a batch or latency suffers
  – Or one user might submit a larger number of images to classify at once
§ An inferencing cloud service needs
  – Quick response – latency seen by the user
  – To handle large volume – overall throughput
IBM Cloud GPU Offerings

§ Bare Metal Servers
  – NVIDIA M60 GPUs (monthly & hourly)
  – NVIDIA K80 GPU PCIe cards (monthly & hourly)
  – NVIDIA P100 GPUs (monthly)
  – NVIDIA V100 GPUs (monthly)
§ Virtual Servers
  – NVIDIA P100 GPUs (monthly & hourly)
  – NVIDIA V100 GPUs (coming soon – monthly & hourly)
§ Deep Learning as a Service (DLaaS)
  – Part of Watson Machine Learning (WML)
  – Focused on deep learning training
  – Allows users to run training jobs on a cluster of GPU-enabled machines using various frameworks
§ PowerAI
  – Available in 2Q2018 with PowerAI R5
  – Delivered through the IBM Cloud Catalog & supported by IBM Trusted Partner Nimbix
  – On-demand cloud provisioning
  – Containerized
  – Native Distributed Deep Learning (DDL) and Large Model Support (LMS)
Inferencing with PyCaffe

§ Given a trained model, we want to use it to classify images.
§ Study the performance of various GPUs, FPGAs, etc.
§ Could use C++, but Python was more familiar, so that was the language of choice.
§ Unlike training, a single GPU is used. Use multiple threads, processes, or services to take advantage of more GPUs when more volume is needed.

root@V100:~/infer_caffe# python infer_caffe.py -h
usage: infer_caffe.py [-h] -m MODEL -w WEIGHTS -l LMDB [-b BATCH]
                      [-i ITERATIONS] [-c CAFFEROOT] [--blobName BLOBNAME]
                      [--labels LABELS] [--meanImage MEANIMAGE] [--debug]
                      [--gpu] [--csvFile CSVFILE] [--quiet]

Use a trained Caffe model to classify images from a LMDB database.
infer_caffe Sample Output

root@V100:~/infer_caffe# python infer_caffe.py -m ~/model_zoo/caffe/vgg16/pretrained/VGG_ILSVRC_16_layers_deploy.prototxt -w ~/model_zoo/caffe/vgg16/pretrained/VGG_ILSVRC_16_layers.caffemodel -l /datasets/x86_LMDB/LMDB/ilsvrc12_val_lmdb/ -c /opt/nvidia/caffe-0.16/caffe -b 1 -i 5 --gpu --csvFile ./pycaffe.csv --quiet

Final Stats (times in seconds)
------------------------------
Date: 03/08/2018   Time: 09:05:54   Host: V100
Iterations: 5   Batch size: 1   Data type: NA
Total run time: 3.3347

Stats for all iterations
------------------------
Total predictions: 5
Correct top 1 predictions: 5
Correct top 5 predictions: 5
Top 1 accuracy: 100.00%
Top 5 accuracy: 100.00%
Inference time -- Total: 0.1645  Mean: 0.0329  Min: 0.0076  Max: 0.0726
                  Range: 0.0649  STD: 0.0308  Median: 0.0079
Inference time/prediction: 0.0329
Images/sec: 30.40
Program Flow

Parse command line
…
# Create the neural network.
net = caffe.Net(model_def,      # defines the structure of the model
                model_weights,  # contains the trained weights
                caffe.TEST)     # use test mode (e.g., don't perform dropout)
…
for each iteration:
    read a batch of images from LMDB
    # Call the network.  Time only this step.
    out = net.forward()
…
output statistics
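Filling in the pseudocode above, a minimal sketch of the timed loop might look like the following. The model paths are illustrative, load_batch_from_lmdb is a hypothetical helper standing in for the script's LMDB-reading code, and the input blob is assumed to be named 'data'.

import time
import numpy as np
import caffe

model_def = "deploy.prototxt"         # path to the model prototxt (illustrative)
model_weights = "weights.caffemodel"  # path to the trained weights (illustrative)
batch_size, iterations = 1, 5

caffe.set_mode_gpu()                  # corresponds to the --gpu flag
net = caffe.Net(model_def, model_weights, caffe.TEST)

times = []
for i in range(iterations):
    batch = load_batch_from_lmdb(batch_size)  # hypothetical helper for the LMDB read
    net.blobs['data'].data[...] = batch       # assumes the input blob is named 'data'
    start = time.time()
    out = net.forward()                       # time only the forward pass
    times.append(time.time() - start)

print("Mean inference time/batch: %.4f s" % np.mean(times))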
TensorRT Overview

§ Speeds up inferencing by
  – Merging layers and tensors to reduce the size of the network and execute them in a single kernel.
  – Selecting the best specialized kernel for the target hardware based on layer parameters and measured performance.
§ Stages
  – Build: optimize the network (layers, weights, labels) to produce a runtime plan, or engine.
    • Optimization can take some time, so the resulting engine can be serialized to a file (see the sketch below).
  – Deploy: run the engine with given input data to get the resulting predictions.
§ Supports Python and C++.
§ TensorRT Lite is a simplified interface for Python (not used here).
§ You can create the TRT network yourself or use a TRT utility to import a framework model and convert it into TRT form.
  – Supports Caffe and UFF (Universal Framework Format)-compatible frameworks such as TensorFlow.
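A minimal sketch of the build-once, deploy-many pattern, assuming the legacy TensorRT 3 Python API used later in this deck, an already-built engine object, and an illustrative file name:

import tensorrt as trt

trt_logger = trt.infer.ConsoleLogger(trt.infer.LogSeverity.ERROR)

# Build step: save the optimized engine so the (potentially slow)
# optimization does not have to be repeated.
trt.utils.write_engine_to_file("vgg16.engine", engine.serialize())

# Deploy step: load the serialized engine and create an execution context.
engine = trt.utils.load_engine(trt_logger, "vgg16.engine")
context = engine.create_execution_context()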
Reduced Precision Inferencing

§ Model is trained in FLOAT (FP32).
§ TensorRT inferencing can use FLOAT, HALF, or INT8 (as supported by the GPU).
§ Increases speed with little or no loss of accuracy.
  – HALF may show some reduction in accuracy, generally not noticeable.
  – INT8 shows a small reduction in accuracy.
§ INT8 requires calibration files.
  – Uses sample runs of data through the net to determine the range of FLOAT values encountered.
  – Maps that range to INT8's smaller range (illustrated below).
  – A Caffe patch is available to easily generate these calibration files during a short training run when the environment variable TENSORRT_INT8_BATCH_DIRECTORY is set.
  – NVIDIA suggests "For ImageNet networks, around 500 calibration images is adequate".
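To make the range mapping concrete, the sketch below is a conceptual illustration only (TensorRT's actual calibrator is more sophisticated than a simple max-abs mapping): it scales an observed FP32 range onto INT8 and back.

import numpy as np

def int8_scale_from_calibration(activations):
    """Choose a scale so the observed FLOAT range maps onto [-127, 127]."""
    max_abs = np.abs(activations).max()
    return max_abs / 127.0

def quantize(x, scale):
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

# Example: FLOAT activations observed during calibration runs.
acts = np.random.randn(1000).astype(np.float32) * 3.0
scale = int8_scale_from_calibration(acts)
q = quantize(acts, scale)                  # INT8 representation
restored = q.astype(np.float32) * scale    # close to the original values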
TensorRT Implementation

§ Re-implementation of infer_caffe.py using TRT instead of pycaffe.
§ Shares code for getting images, collecting stats, and the overall flow.

root@V100:~/infer_caffe# python infer_caffe_trt.py -h
usage: infer_caffe_trt.py [-h] -m MODEL -w WEIGHTS -l LMDB [-b BATCH]
                          [-i ITERATIONS] [-c CAFFEROOT]
                          [--imageShape IMAGESHAPE] [--max_batch MAX_BATCH]
                          [--outputLayer OUTPUTLAYER] [--outputSize OUTPUTSIZE]
                          [--dtype {FLOAT,HALF,INT8}] [--labels LABELS]
                          [--meanImage MEANIMAGE] [--csvFile CSVFILE]
                          [--calBatchDir CALBATCHDIR]
                          [--firstCalBatch FIRSTCALBATCH]
                          [--numCalBatches NUMCALBATCHES] [--debug] [--quiet]

Uses NVidia TensorRT to optimize and run inference on a trained Caffe model
performing an image recognition task and prints performance and accuracy
results.
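For illustration, an INT8 run mirroring the pycaffe invocation shown earlier might look like this (the calibration-batch directory ~/cal_batches is hypothetical):

root@V100:~/infer_caffe# python infer_caffe_trt.py \
    -m ~/model_zoo/caffe/vgg16/pretrained/VGG_ILSVRC_16_layers_deploy.prototxt \
    -w ~/model_zoo/caffe/vgg16/pretrained/VGG_ILSVRC_16_layers.caffemodel \
    -l /datasets/x86_LMDB/LMDB/ilsvrc12_val_lmdb/ \
    -b 1 -i 5 --dtype INT8 --calBatchDir ~/cal_batches \
    --csvFile ./trt_int8.csv --quiet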
TRT Program Flow

# Create the engine using the TRT utilities for Caffe.
# Use the caffe model converter utility in tensorrt.utils.
# We provide it a logger, a path to the model prototxt, the model file, the max batch size,
# the max workspace size, the output layer(s), and the data type of the weights.
engine_dtype = trt.infer.DataType[dtype]
calibrator = None
if engine_dtype == trt.infer.DataType.INT8:
    calibrator = infer_utils.Calibrator.Calibrator(cal_batch_dir, first_cal_batch,
                                                   num_cal_batches, debug)
engine = trt.utils.caffe_to_trt_engine(trt_logger, model, weights, max_batch,
                                       1 << 25, [output_layer], engine_dtype,
                                       calibrator=calibrator)
…
# Allocate memory on the GPU with PyCUDA and register it with the engine.
# The size of each allocation is the size of the input or expected output * the batch size.
d_input = cuda.mem_alloc(batch_size * image_shape[0] * image_shape[1] * 3
                         * np.dtype(np.float32).itemsize)
d_output = cuda.mem_alloc(batch_size * output.size * output.dtype.itemsize)

# The engine needs bindings provided as pointers to the GPU memory.
# PyCUDA lets us do this for memory allocations by casting those allocations to ints.
bindings = [int(d_input), int(d_output)]

# Create a CUDA stream to run inference in.
stream = cuda.Stream()
TRT Program Flow

# Time moving the data to the GPU, running the network, and getting the results
# back to the host as part of the inference operation for this iteration.
stats.begin_iteration()
# Transfer the input data to the device.
cuda.memcpy_htod_async(d_input, batchin, stream)
# Execute the model.
context.enqueue(batch_size, bindings, stream.handle, None)
# Transfer the predictions back.
cuda.memcpy_dtoh_async(output, d_output, stream)
# Synchronize threads.
stream.synchronize()
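After stream.synchronize() returns, the host-side output buffer holds the class scores. A minimal sketch of turning them into top-1/top-5 predictions, assuming a numpy output buffer sized batch_size * number of classes as allocated earlier:

import numpy as np

scores = output.reshape(batch_size, -1)   # one row of class scores per image
for row in scores:
    top5 = np.argsort(row)[::-1][:5]      # indices of the 5 highest scores
    top1 = top5[0]
    # Compare top1 / top5 against the LMDB ground-truth label
    # to update the accuracy statistics.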
V100 Caffe VGGNet16 TensorRT 3.0.1 Inference Latency

[Chart: average latency per batch (seconds, 0–0.5) vs. batch size (1–175) for PyCaffe, FLOAT, HALF, and INT8]
V100 Caffe VGGNet16 TensorRT 3.0.1 Inference Throughput

[Chart: images/second (0–2500) vs. batch size (1–175) for PyCaffe, FLOAT, HALF, and INT8]
V100 Caffe VGGNet16 TensorRT 3.0.1 Inference Accuracy

Accuracy (%)   PyCaffe   FLOAT   HALF    INT8
Top 1          64.65     64.65   64.65   64.37
Top 5          85.77     85.77   85.77   85.49