4837

Gadgetron Inline AI: Effective Model Inference on the MR Scanner
Hui Xue1, Rhodri Davies2, David Hansen3, Ethan Tseng4, Marianna Fontana5, James C. Moon2, and Peter Kellman1

1National Heart, Lung and Blood Institute, National Institutes of Health, Bethesda, MD, United States, 2Barts Heart Centre, London, United Kingdom, 3Gradient Software, Skødstrup, Denmark, 4NIH, National Heart, Lung and Blood Institute, Bethesda, MD, United States, 5National Amyloidosis Centre, Royal Free Hospital, London, United Kingdom

Synopsis

We extended Gadgetron, a widely used open-source framework, to support AI inference on clinical MR scanners. Specially designed software modules (Inline AI) were added to Gadgetron, allowing AI neural network models to be loaded and applied to incoming MR data in a completely "in-line" fashion. That is, without any user interaction, results are sent back to the scanner and are available immediately after data acquisition. Two AI-based applications were developed as demonstrations: inline AI cine segmentation, and inline AI perfusion flow mapping and analysis.

Purpose

Artificial intelligence, especially deep-learning based algorithms, has the potential to significantly improve MR imaging, reconstruction and analysis [1]. Typical development of AI-based clinical imaging applications consists of two phases: training involves iteratively optimizing model parameters given a large amount of labelled data; inference applies the trained model to each incoming dataset. While training is generally conducted "off-line", AI inference "in-line" on the scanner provides results immediately or shortly after data acquisition, which improves clinical workflow. Ideally, MR data would be streamed to an AI/imaging server for image reconstruction, computation and analysis without any user interaction.

To the best of our knowledge, the scanner computing environments currently supplied by vendors are often inadequate in computing power and lack AI software. To enable AI model inference on the MR scanner, we extended Gadgetron [2, 3], an open-source software package widely used by the MR research community, by adding features to: 1) interact with mainstream AI software packages, such as TensorFlow [4] and PyTorch [5]; 2) allow flexible AI model deployment via Python modules (called "Gadgets") or embedded Python calls (Python/C++ interface). We demonstrate these new capabilities for inline AI inference with two clinical applications: (1) inline cine segmentation and (2) inline perfusion flow mapping and analysis. Both are currently deployed to hospitals for clinical validation.

Method

Two schemes for model inference were implemented in Gadgetron. Python Gadget: Gadgetron modules, called "Gadgets", can be implemented in Python (since all major AI packages support Python). AI models can be loaded during the configuration of a Python Gadget and applied repeatedly to incoming data. Python/C++ interface: the user supplies Python scripts for loading and applying AI models; these scripts are called from the Gadgetron C++ runtime through a dedicated Python/C++ interface. Python-C++ data conversion is implemented for all major MR data types, including k-space, images, ECG/respiratory waveforms, XML meta-data, and labelled contours and anatomical landmarks.
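The Python Gadget scheme described above can be sketched as follows. This is a minimal illustration of the pattern, not the actual Gadgetron API: the class and method names mirror the load-once-at-configuration, apply-per-image flow, and a simple thresholding function stands in for the trained network (a real Gadget would call e.g. tf.keras.models.load_model at configuration time).

```python
import numpy as np

class InlineAIGadget:
    """Sketch of a Gadgetron-style Python Gadget: the model is loaded once
    when the reconstruction chain is configured, then applied to every
    image that streams through the chain."""

    def __init__(self):
        self.model = None

    def process_config(self, config_xml):
        # A real Gadget would parse the XML configuration for a model path
        # and load the network here, e.g. tf.keras.models.load_model(path).
        # A per-image thresholding function stands in for the trained model.
        self.model = lambda img: (img > img.mean()).astype(np.uint8)
        return 0

    def process(self, header, image):
        # Apply the model to the incoming image; the image and the derived
        # mask would be passed downstream to the next Gadget in the chain.
        mask = self.model(image)
        return header, image, mask

# Usage: configure once, then process incoming images repeatedly.
gadget = InlineAIGadget()
gadget.process_config("<configuration/>")
hdr, img, mask = gadget.process({"slice": 0}, np.random.rand(128, 128))
```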

Both schemes were implemented in Gadgetron (3.17.0, Inline AI release, https://github.com/gadgetron/gadgetron/archive/v3.17.0.tar.gz). Two AI applications were developed and deployed as demonstrations. Inline AI - Cine: Retro-gated cine images were acquired with parallel imaging and reconstructed in Gadgetron. The resulting images were fed into a pre-trained deep-learning model [6] for myocardial segmentation. All functionality was implemented as a Python Gadget with TensorFlow. This Gadget worked together with other C++ Gadgets to reconstruct cine images along with endo- and epicardial contours that can be edited on the scanner, as shown in Fig. 1. This allows cine function metrics, such as ejection fraction, to be computed automatically from the segmentation results. Inline AI - Perfusion mapping and analysis: Our inline perfusion solution [4] was further improved with two AI models. The first AI model was trained to detect the LV blood pool in the arterial input function (AIF) image series; the detected signal was used for pixel-wise perfusion flow mapping. The second AI model was trained to segment the endo/epicardial boundaries of the myocardium and to detect the RV insertion point for all short-axis slices. The AHA Bull's-eye plot was computed on the pixel-wise flow maps. Both the segmentation and the Bull's-eye plot were sent back to the scanner without any user interaction (Fig. 2). The Python/C++ interface was used to load and apply both AI models; PyTorch [5] was used in this application.
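The Python/C++ interface scheme used for the perfusion application can be sketched as a user-supplied script exposing entry points that the C++ runtime calls. The function names below (load_model, apply_model) and the stand-in thresholding "model" are hypothetical illustrations of the shape of such a script; a real script would load a trained network, e.g. with torch.load, and images would cross the boundary as NumPy arrays via the Python-C++ data converters.

```python
import numpy as np

# Module-level handle so the model is loaded once and reused across calls.
_model = None

def load_model(model_dir, model_file):
    """Hypothetical entry point, called once from the C++ runtime when the
    reconstruction chain starts. A real script might do:
    torch.load(os.path.join(model_dir, model_file))."""
    global _model
    # Per-image thresholding stands in for the trained network.
    _model = lambda images: images > images.mean(axis=(-2, -1), keepdims=True)
    return 0

def apply_model(images):
    """Hypothetical entry point, called from C++ for each incoming image
    series. `images` arrives as a NumPy array (series, rows, cols) through
    the Python-C++ converters; the returned masks travel back the same way."""
    masks = _model(images).astype(np.float32)
    return masks
```

Keeping the loaded model in module state means the per-call cost is inference only, consistent with the one-time model loading times reported below.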

Patient studies were conducted at the Barts Heart Centre and the Royal Free Hospital, London, UK. This study was approved by the local Ethics Committees at both hospitals, and written informed consent for research was obtained from all subjects. Use of anonymized data was also approved by the NIH Office of Human Subjects Research (OHSR, Exemption #13156).

Results

Both applications were deployed to clinical MR scanners and enabled inline cine and perfusion analysis using deep-learning models. Typical model loading took ~350 ms for cine, ~100 ms for perfusion AIF detection, and ~120 ms for perfusion segmentation. Model inference for cine took ~100 ms per image on CPU and ~25 ms on GPU. Perfusion AIF detection took ~90 ms on CPU, and segmentation took ~800 ms for one whole short-axis slice. The entire process was automatic and did not require any user interaction. The applications are fully "in-line", achieving seamless integration with clinical MR scans.

Conclusion

We extended the Gadgetron framework to better support AI model inference in an in-line fashion. The resulting software framework provides the flexibility to support mainstream AI packages (TensorFlow and PyTorch) and flexible model deployment schemes (Python Gadget and Python/C++ interface). Two clinical AI applications were developed and deployed to demonstrate the technical capabilities of Inline AI Gadgetron.

Acknowledgements

No acknowledgement found.

References

[1] Ting D, et al. AI for medical imaging goes deep. Nature Medicine, 24, 2018. [2] Hansen MS, et al. Gadgetron: An Open Source Framework for Medical Image Reconstruction. MRM, 69(6), 2013. [3] Xue H, et al. Distributed MRI Reconstruction Using Gadgetron-Based Cloud Computing. MRM, 73(3), 2015. [4] Abadi M, et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. [5] Paszke A, et al. Automatic differentiation in PyTorch. NIPS, 2017. [6] Davies R, et al. Measuring myocardial performance across health and disease using contraction fraction. SCMR, 2018.

Figures



Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)