Intel, Philips accelerate deep learning on CPUs for medical imaging use cases
Using Intel Xeon Scalable processors and the OpenVINO toolkit, Intel and Philips tested two healthcare use cases for deep learning inference models: one on X-rays of bones for bone-age-prediction modeling, the other on CT scans of lungs for lung segmentation. In these tests, Intel and Philips achieved a speed improvement of 188 times for the bone-age-prediction model and a 38 times improvement for the lung-segmentation model over the baseline measurements.

Until recently, there was one prominent hardware solution for accelerating deep learning: graphics processing units, or GPUs. By design, GPUs work well with images, but they also have inherent memory constraints that data scientists have had to work around when building some models. Central processing units, or CPUs (in this case Intel Xeon Scalable processors), don't have those same memory constraints and can accelerate complex, hybrid workloads, including the larger, memory-intensive models typically found in medical imaging. For a large subset of artificial intelligence workloads, Intel Xeon Scalable processors can meet data scientists' needs better than GPU-based systems can. As Philips found in the two recent tests, this enables the company to offer AI solutions to its customers at lower cost.

AI techniques such as object detection and segmentation can help radiologists identify issues faster and more accurately, which can translate to better prioritization of cases, better outcomes for more patients, and reduced costs for hospitals.

Deep learning inference applications typically process workloads in small batches or in a streaming manner, which means they do not exhibit large batch sizes. CPUs are a good fit for such low-batch or streaming applications. Intel Xeon Scalable processors in particular offer an affordable, flexible platform for AI models, especially in conjunction with tools like the OpenVINO toolkit, which can help deploy pre-trained models efficiently without sacrificing accuracy.
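The throughput figures in this article are reported in images per second for a streaming (batch size 1) workload. As a rough illustration of how such a number is measured, here is a minimal sketch; the `infer_one` function is a hypothetical stand-in for a compiled inference call (a real deployment would load an optimized model through a toolkit such as OpenVINO rather than use this stub):

```python
import time

def infer_one(image):
    """Hypothetical stand-in for a single-image inference call.

    Simulates a fixed per-image latency; a real pipeline would run
    an optimized model here instead.
    """
    time.sleep(0.001)
    return {"prediction": "stub"}

def streaming_throughput(images):
    """Process images one at a time (batch size 1) and return images/sec."""
    start = time.perf_counter()
    for image in images:
        infer_one(image)
    elapsed = time.perf_counter() - start
    return len(images) / elapsed

if __name__ == "__main__":
    images = [object()] * 50
    print(f"{streaming_throughput(images):.1f} images/sec")
```

Because each image is processed individually rather than in a large batch, sustained per-image latency (not batch parallelism) dominates the images-per-second result, which is why low-batch workloads map well onto CPUs.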
These tests show that healthcare organizations can implement AI workloads without expensive hardware investments. The results for both use cases surpassed expectations. The bone-age-prediction model went from an initial baseline test result of 1.42 images per second to a final tested rate of 267.1 images per second after optimizations - an increase of 188 times. The lung-segmentation model far surpassed the target of 15 images per second by improving from a baseline of 1.9 images per second to 71.7 images per second after optimizations.