Publications

MR-blob: Coordinate-Transformed Blobs for Parallel MRI Reconstruction - ISMRM 2022 (submitted)

Score-based Generative Models for PET image reconstruction - MELBA, Machine Learning for Biomedical Imaging (accepted)

Score-based generative models have demonstrated highly promising results for medical image reconstruction tasks in magnetic resonance imaging or computed tomography. However, their application to Positron Emission Tomography (PET) is still largely unexplored. PET image reconstruction involves a variety of challenges, including Poisson noise with high variance and a wide dynamic range. To address these challenges, we propose several PET-specific adaptations of score-based generative models. The proposed framework is developed for both 2D and 3D PET. In addition, we provide an extension to guided reconstruction using magnetic resonance images. We validate the approach through extensive 2D and 3D in-silico experiments with a model trained on patient-realistic data without lesions, and evaluate on data without lesions as well as out-of-distribution data with lesions. This demonstrates the proposed method’s robustness and significant potential for improved PET reconstruction. 

{Imraj RD Singh, Alexander Denker, Riccardo Barbano}†, Željko Kereta, Bangti Jin, Kris Thielemans, Peter Maass & Simon Arridge († equal contribution)
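
To illustrate the core mechanism, the sketch below shows a single guided reverse-diffusion step in which the learned score is combined with the gradient of the PET Poisson log-likelihood, so that samples stay consistent with the measured sinogram. This is a minimal illustration, not the paper's code: score_model, forward_op and guidance_scale are placeholder names, the SDE drift and diffusion coefficients are omitted, and a plain Euler-Maruyama update is assumed.

    import torch

    def guided_reverse_step(x, t, dt, score_model, forward_op, sinogram, guidance_scale):
        # One schematic Euler-Maruyama step of a guided reverse SDE.
        x = x.detach().requires_grad_(True)
        # Poisson log-likelihood of counts y given image x:
        # sum_i [ y_i * log((Ax)_i) - (Ax)_i ], clamped for positivity.
        proj = forward_op(x).clamp_min(1e-8)
        log_lik = (sinogram * proj.log() - proj).sum()
        data_grad = torch.autograd.grad(log_lik, x)[0]
        with torch.no_grad():
            score = score_model(x, t)                   # learned prior score
            drift = score + guidance_scale * data_grad  # approximate posterior score
            x_next = x + drift * dt + (dt ** 0.5) * torch.randn_like(x)
        return x_next.clamp_min(0.0)                    # PET activity is non-negative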

Investigating Intensity Normalisation for PET Reconstruction with Supervised Deep Learning - IEEE MIC, NSS & RTSD 2023

Deep learning methods have shown great promise in the field of Positron Emission Tomography (PET) reconstruction, but the successful application of these methods depends heavily on the intensity scale of the images. Normalisation is a crucial step that aims to adjust the intensity of network inputs to make them more uniform and comparable across samples, acquisition times, and activity levels. In this work, we study the influence of different linear intensity normalisation approaches. We focus on two popular deep learning based image reconstruction methods: an unrolled algorithm (Learned Primal-Dual) and a post-processing method (OSEMConvNet). Results on the out-of-distribution test dataset demonstrate that the choice of intensity normalisation significantly impacts the generalisability of these methods. Normalisation using the mean of the acquisition data or of the corrected acquisition data led to improved peak signal-to-noise ratio (PSNR) and data consistency (Kullback-Leibler divergence, KLDIV). Evaluation of the lesion-specific metrics, contrast recovery coefficient (CRC) and standard deviation (STD), shows an increase in both. These findings highlight the importance of carefully selecting an appropriate normalisation method for supervised deep learning-based PET reconstruction applications.

Imraj RD Singh, Alexander Denker, Bangti Jin, Kris Thielemans & Simon Arridge
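
As an illustration, one of the linear normalisation strategies compared, scaling by the mean of the (optionally corrected) acquisition data, might look like the sketch below; function and variable names are illustrative, not taken from the paper's code.

    import numpy as np

    def normalise_by_acquisition_mean(network_input, acq_data, corrections=None):
        # Divide the network input by the mean of the (corrected) acquisition
        # data so that inputs are comparable across acquisition times and
        # activity levels. The scale is returned so it can be undone after
        # inference.
        data = acq_data if corrections is None else acq_data / np.clip(corrections, 1e-8, None)
        scale = float(np.mean(data))
        return network_input / scale, scale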

3D PET-DIP Reconstruction with Relative Difference Prior Using a SIRF-Based Objective - Fully3D 2023

Deep Image Prior (DIP) is an unsupervised deep learning technique that does not require ground-truth images. For the first time, 3D PET reconstruction with DIP is cast as a single optimisation via penalised maximum-likelihood estimation, with a log-likelihood data fit and an optional Relative Difference Prior term. Experimental results show that although the unpenalised DIP optimisation trajectory performs well on high-count data, it can fail to adequately resolve lesions in lower-count settings. Introducing the Relative Difference Prior into the objective function yields notable improvements along the DIP trajectory.

Imraj RD Singh, {Riccardo Barbano, Željko Kereta}†, Bangti Jin, Simon Arridge & Kris Thielemans († equal contribution)
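
The penalised maximum-likelihood objective can be sketched as follows, assuming a differentiable PET forward operator; the 2D Relative Difference Prior shown here is a simplified stand-in for the paper's 3D implementation, and all names are placeholders.

    import torch

    def rdp(x, gamma=2.0, eps=1e-8):
        # Relative Difference Prior over horizontal/vertical neighbours of a
        # 2D image (illustrative; the paper works with 3D volumes).
        dh, sh = x[:, 1:] - x[:, :-1], x[:, 1:] + x[:, :-1]
        dv, sv = x[1:, :] - x[:-1, :], x[1:, :] + x[:-1, :]
        return (dh**2 / (sh + gamma * dh.abs() + eps)).sum() \
             + (dv**2 / (sv + gamma * dv.abs() + eps)).sum()

    def dip_objective(net, z, forward_op, measured, beta):
        # Poisson negative log-likelihood of the forward-projected network
        # output, plus the optional RDP penalty weighted by beta.
        x = net(z).clamp_min(0.0)
        proj = forward_op(x).clamp_min(1e-8)
        nll = (proj - measured * proj.log()).sum()
        return nll + beta * rdp(x.squeeze())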

Deep Image Prior PET Reconstruction using a SIRF-Based Objective - IEEE MIC, NSS & RTSD 2022

Widespread adoption of deep learning in medical imaging has been hampered, in part, by a lack of integration with clinically applicable software. In this work, we establish a direct connection between an established PET reconstruction suite, SIRF, and PyTorch. This allows advanced reconstruction methodologies to be deployed on clinical data with an unsupervised deep learning approach: the Deep Image Prior (DIP). Results show quality metrics for DIP that are consistent with those of OSMAP.

{Imraj RD Singh, Riccardo Barbano}†, Robert Twyman, Željko Kereta, Bangti Jin, Simon Arridge & Kris Thielemans († equal contribution)
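
The bridge can be sketched as a custom autograd node that defers to an objective exposing value(image) and gradient(image) methods, in the style of SIRF's Poisson log-likelihood objects; the conversion details and names below are assumptions, not the released code.

    import torch

    class NegPoissonLogLik(torch.autograd.Function):
        # Wrap a SIRF-style objective (value/gradient methods) as a
        # differentiable PyTorch operation, so torch optimisers can drive it.
        @staticmethod
        def forward(ctx, x, obj_fun, template):
            image = template.clone()
            image.fill(x.detach().cpu().numpy())   # torch tensor -> SIRF image
            ctx.obj_fun, ctx.image, ctx.x = obj_fun, image, x
            # Negate: maximising the log-likelihood = minimising this loss.
            return x.new_tensor(-obj_fun.value(image))

        @staticmethod
        def backward(ctx, grad_out):
            g = ctx.obj_fun.gradient(ctx.image).as_array()  # SIRF image -> numpy
            grad = -torch.from_numpy(g).to(device=ctx.x.device, dtype=ctx.x.dtype)
            return grad_out * grad, None, None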

Magnetic Resonance Fingerprinting (MRF) accelerates quantitative magnetic resonance imaging. Reconstruction can be separated into two problems: reconstruction of a set of multi-contrast images from k-space signals, and estimation of parametric maps from that set. In this study we focus on the former problem, while leveraging dictionary matching for the estimation of parametric maps. Two sparsity-promoting regularisation strategies were investigated: contrast-wise Total Variation (TV), which encourages sparsity in each contrast image separately, and Total Nuclear Variation (TNV), which promotes joint edge sparsity across contrasts. We found improved results with the joint sparsity of TNV.

Imraj RD Singh, Olivier Jaubert, Bangti Jin, Kris Thielemans & Simon Arridge
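
The distinction between the two regularisers can be made concrete with a short sketch, assuming real-valued contrast images stacked as a (contrasts, height, width) tensor; the implementation below is illustrative rather than the study's code.

    import torch

    def tv(images):
        # Contrast-wise isotropic TV: sparsity of each contrast's gradient,
        # summed independently over contrasts.
        dx = images[:, :, 1:] - images[:, :, :-1]
        dy = images[:, 1:, :] - images[:, :-1, :]
        return (dx[:, :-1, :]**2 + dy[:, :, :-1]**2 + 1e-12).sqrt().sum()

    def tnv(images):
        # Total Nuclear Variation: at each pixel, stack the spatial gradients
        # of all contrasts into a (contrasts x 2) Jacobian and sum its
        # singular values, which couples edge locations across contrasts.
        dx = images[:, :-1, 1:] - images[:, :-1, :-1]
        dy = images[:, 1:, :-1] - images[:, :-1, :-1]
        jac = torch.stack((dx, dy), dim=-1).permute(1, 2, 0, 3)  # (H-1, W-1, C, 2)
        return torch.linalg.svdvals(jac).sum()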

When developing deep neural networks for segmenting intraoperative ultrasound images, several practical issues are encountered frequently, such as the presence of ultrasound frames that do not contain regions of interest and the high variance in ground-truth labels. In this study, we evaluate the utility of a pre-screening classification network placed before the segmentation network. Experimental results demonstrate that such a classifier, by minimising frame classification errors, directly reduces the number of false-positive and false-negative frames. Importantly, segmentation accuracy on the classifier-selected frames remains comparable to, or better than, that of standalone segmentation networks. Interestingly, the efficacy of the pre-screening classifier was affected by the sampling methods used to draw training labels from multiple observers, a seemingly independent problem. We show experimentally that a previously proposed approach, combining random sampling and consensus labels, may need to be adapted to perform well in our application. Furthermore, this work aims to share practical experience in developing a machine learning application that assists highly variable interventional imaging for prostate cancer patients, to present robust and reproducible open-source implementations, and to report a comprehensive set of results and analysis comparing these practical, yet important, options in a real-world clinical application.

{Liam F. Chalcroft, Jiongqi Qu, Sophie A. Martin, Iani JMB Gayo, Giulio V. Minore, Imraj RD Singh}†, Shaheer U. Saeed, Qianye Yang, Zachary M. C. Baum, Andre Altmann & Yipeng Hu († equal contribution)
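
The two-stage design can be summarised by the sketch below: a pre-screening classifier rejects frames predicted to contain no region of interest, and only accepted frames are segmented. Model names and the decision threshold are illustrative assumptions, not the open-source implementation.

    import torch

    def screen_and_segment(frames, classifier, segmenter, threshold=0.5):
        # Stage 1: per-frame probability that a region of interest is present.
        # Stage 2: segment only the accepted frames; rejected frames get
        # empty masks, directly controlling false-positive/negative frames.
        with torch.no_grad():
            probs = torch.sigmoid(classifier(frames)).squeeze(1)
            keep = probs >= threshold
            masks = torch.zeros_like(frames)
            if keep.any():
                masks[keep] = torch.sigmoid(segmenter(frames[keep]))
        return masks, keep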