Loubna El Gueddari^{1,2}, Emilie Chouzenoux^{3,4}, Jean-Christophe Pesquet^{3}, Alexandre Vignaud^{1}, and Philippe Ciuciu^{1,2}

Compressed sensing theory reduces lengthy acquisition time in MRI at the expense of computationally demanding iterative reconstruction. Usually, reconstruction is performed *offline* once all the data have been collected. Here, we introduce an online CS reconstruction framework that interleaves acquisition and reconstruction steps in a convex setting and permits the delivery of intermediate images on the scanner console during acquisition. In particular, the sum of acquisition and reconstruction times is reduced without compromising image quality. The gain of this strategy is shown on both retrospective Cartesian and prospective non-Cartesian under-sampled ex-vivo baboon brain data at 7T with an in-plane resolution of 400$$$\mu$$$m.

Magnetic resonance imaging has proved its versatility to probe soft tissues in the human body; in particular, it offers a large set of contrasts that bring complementary information on the underlying organ to facilitate diagnosis. However, MRI acquisition remains slow, especially in the high-resolution T2* imaging context: the use of long echo and repetition times makes both the acquisition of each shot and the interval between consecutive shots lengthy. For the last decade, compressed sensing (CS) theory^{1,2} has tackled this issue by massively under-sampling k-space data along feasible trajectories^{3}. In this setting, MR image reconstruction has become more challenging since it is iterative and computationally demanding. In this work, we propose an online strategy where acquisition and reconstruction are interleaved. This approach is based on a batch formulation of the reconstruction problem and is compatible with the Gadgetron^{4} framework.

**Problem Statement**. Let us first define the offline reconstruction problem solved in CS. Let $$$N$$$, $$$S$$$, $$$C$$$, $$$\pmb{y} \in \mathbb{C}^{CS}$$$ and $$$\pmb{x}$$$ be, respectively, the image dimension, the number of shots used to acquire the NMR signal, the number of samples per shot, the measured k-space data and the image to be recovered. $$$\pmb{T}$$$ denotes a wavelet transform^{5}, $$$\Omega$$$ the full k-space sampling support and $$$f_{\Omega}$$$ the Fourier operator sampling k-space on $$$\Omega$$$. The offline reconstruction problem reads as follows:

$$\hat{\pmb{x}} = \underset{\pmb{x} \in \mathbb{C}^N}{\text{argmin}}\left\lbrace \frac{1}{2} \|f_{\Omega}\left( \pmb{x} \right) - \pmb{y}_{\Omega}\|_\text{F}^2 + \lambda_1\, \|\pmb{T} \pmb{x}\|_1\right\rbrace$$
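For the Cartesian case, $$$f_{\Omega}$$$ reduces to a masked FFT. As a minimal sketch (assuming an orthonormal FFT convention, which the abstract does not specify), the forward operator and its adjoint, the zero-filled inverse FFT, can be written and checked as follows:

```python
import numpy as np

def f_omega(x, mask):
    """Masked, orthonormal 2D FFT: samples the k-space of x on the support mask."""
    return mask * np.fft.fft2(x, norm="ortho")

def f_omega_adj(y, mask):
    """Adjoint of f_omega: zero-filled inverse FFT of the masked k-space data."""
    return np.fft.ifft2(mask * y, norm="ortho")

# Adjoint check: <f_omega(x), y> == <x, f_omega_adj(y)> for random inputs.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
y = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
mask = rng.random((8, 8)) < 0.3
lhs = np.vdot(f_omega(x, mask), y)
rhs = np.vdot(x, f_omega_adj(y, mask))
assert np.allclose(lhs, rhs)
```

The adjoint pair is what any first-order solver of the problem above needs; the non-Cartesian case replaces the masked FFT with the NFFT and its adjoint.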

For an online implementation of CS reconstruction, we introduce $$$n_b$$$ the number of batches, $$$s_b$$$ the number of shots per batch and $$$n_j$$$ the number of iterations per batch. The k-space support of the $$$j^{th}$$$ batch is defined as $$$\Omega_j = \cup_{0 \leq i \leq j \times s_b} \Gamma_i $$$, where $$$\Gamma_i$$$ is the support of the $$$i^{th}$$$ shot. The online reconstruction problem reads as follows:
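The nested supports $$$\Omega_j$$$ can be built incrementally as shots arrive. A minimal sketch, assuming Cartesian line shots (so that $$$\Gamma_i$$$ is simply the $$$i^{th}$$$ acquired k-space line; the actual shot supports depend on the trajectory):

```python
import numpy as np

# Hypothetical acquisition order of Cartesian line shots Gamma_i.
n_lines, n_cols, s_b = 16, 16, 4
line_order = np.random.default_rng(1).permutation(n_lines)

def batch_support(j, s_b, line_order, shape):
    """Omega_j: union of the supports of all shots acquired up to batch j."""
    mask = np.zeros(shape, dtype=bool)
    for i in line_order[: j * s_b]:
        mask[i, :] = True  # Gamma_i = i-th acquired k-space line
    return mask

masks = [batch_support(j, s_b, line_order, (n_lines, n_cols))
         for j in range(1, n_lines // s_b + 1)]
# The supports are nested (Omega_j is a subset of Omega_{j+1}) and the last
# one covers every acquired line.
assert all((m1 <= m2).all() for m1, m2 in zip(masks, masks[1:]))
assert masks[-1].all()
```

The nesting is what makes warm-starting each batch from the previous solution effective: the data-fidelity term only gains new measurements.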

$$\forall j \in \mathbb{N}, 0<j \leq n_b, \; \hat{\pmb{x}}^j = \underset{\pmb{x} \in \mathbb{C}^N}{\text{argmin}}\left\lbrace \frac{1}{2} \frac{1}{\# \Omega_j} \|f_{\Omega_j}\left( \pmb{x} \right) - \pmb{y}_{\Omega_j}\|_\text{F}^2 + \lambda_2\, \|\pmb{T} \pmb{x}\|_1\right\rbrace$$

$$$\lambda_1$$$ and $$$\lambda_2$$$ are regularization parameters that control the sparsity level. The two formulations are equivalent for $$$j = n_b$$$ and $$$\lambda_2 =\lambda_1/ \#\Omega_{n_b}$$$. Each newly available batch has to be taken into account, which leads to the following constraint on the per-batch computing budget:

$$ n_{j} \times T_{it} \approx s_b \times \text{TR}$$
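This constraint fixes the per-batch iteration budget from the acquisition timing. A one-line worked example (the TR and $$$T_{it}$$$ values here are those reported in the experiments below, used purely for illustration):

```python
import math

def iterations_per_batch(s_b, TR, T_it):
    """Largest number of solver iterations fitting in one batch's acquisition time,
    i.e. the integer n_j satisfying n_j * T_it <= s_b * TR."""
    return max(1, math.floor(s_b * TR / T_it))

# Cartesian setting: TR = 0.55 s, T_it = 0.12 s.
print(iterations_per_batch(2, 0.55, 0.12))   # 9 iterations for s_b = 2
```

Larger batches allow more iterations per batch but deliver intermediate images less often, which is the trade-off explored over $$$s_b$$$ in the experiments.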

To ensure convergence, the number of iterations for the last batch is chosen large enough. In terms of optimization, we implemented a primal-dual algorithm^{6,7,11,12} summarized in **Fig.1.**

**Parameter setting.** A T2*-weighted ex-vivo baboon brain was acquired on a 7T system with an in-plane resolution of $$$400\mu m$$$, a 3mm slice thickness, a $$$\text{FOV}=20.4$$$cm and a base resolution of $$$512\times 512$$$. The acquisition parameters were set as follows: $$$\text{TR}=550$$$ms (for 11 slices), $$$\text{TE}=30$$$ms and $$$\text{FA}=25 ^\circ$$$. The decimated bi-orthogonal wavelet transform with 4 decomposition scales was used as the sparsifying transform and $$$\lambda_2$$$ was set retrospectively. The maximal number of iterations for the final batch was set to 200. All experiments were run on a machine with 128 GB of RAM and an 8-core (2.40 GHz) Intel Xeon E5-2630 v3 processor, and the code has been developed in Python using the PySAP package.

**Retrospective Cartesian sampling.** The sampling mask, composed of 187 lines of 512 samples, is illustrated in **Fig.2**. The 12 central lines were collected first, while the others were acquired in a pseudo-random order. We used the FFT, which leads to $$$T_{it} = 0.12$$$s. The batch size was tested over a discrete set of values: $$$s_b=2, 23, 46 \text{ and } 92$$$ shots/batch. The evolution of image quality was quantified using the SSIM^{8} score. The results are illustrated in **Fig.3 (a)**, and **Fig.2** shows the reconstruction at the end of acquisition, i.e. at 92s.
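A sketch of such a line mask (the 12 central lines plus pseudo-randomly picked outer lines; the actual acquisition order and the density law of the pseudo-random draw are not specified in the abstract, so a uniform draw is assumed here):

```python
import numpy as np

def cartesian_mask(n=512, n_lines=187, n_central=12, seed=0):
    """Binary Cartesian sampling mask: n_central fully sampled center lines,
    the remaining lines drawn pseudo-randomly (uniformly, as an assumption)."""
    center = np.arange(n // 2 - n_central // 2, n // 2 + n_central // 2)
    others = np.setdiff1d(np.arange(n), center)
    rng = np.random.default_rng(seed)
    picked = rng.choice(others, size=n_lines - n_central, replace=False)
    mask = np.zeros((n, n), dtype=bool)
    mask[np.concatenate([center, picked]), :] = True
    return mask

mask = cartesian_mask()
assert mask.sum() == 187 * 512   # 187 fully sampled lines of 512 samples
```

With 187 of 512 lines, this corresponds to an under-sampling factor of roughly 2.7 along the phase-encoding direction.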

**Prospective non-Cartesian sampling.** A modified 2D T2*-weighted GRE sequence was implemented based on the multi-shot Sparkling trajectories^{9} with $$$S=43$$$ and $$$C=3072$$$, an acceleration factor of 12 in time and an under-sampling factor of 2. The sequence was implemented using a golden-angle strategy (i.e. $$$\approx 130^\circ$$$ between consecutive shots) and the NFFT^{10} was used. In this setting, $$$T_{it}=0.25$$$s, hence $$$n_{j} \approx 2 s_b$$$. The sets of batch size parameters are summarized in **Table1**. The SSIM evolution is given in **Fig.3 (b)**, and **Fig.4** shows the reconstruction at the end of the acquisition (21.5s from the beginning of the acquisition). Despite the slowness of the NFFT, the gain is significant.

In this work, we proposed a framework for online reconstruction from segmented under-sampled data. We demonstrated its advantages and its application to T2*-weighted high-resolution 2D imaging on both Cartesian and non-Cartesian sampling schemes. In terms of reconstruction, the proposed method converges to the same image as the offline solution. Additionally, this approach delivers reliable intermediate MR images during acquisition. This new reconstruction framework is compatible with the Gadgetron^{4} implementation, which makes it appealing for the clinical use of CS in daily routine.

Part of this work was supported by the Agence Nationale de la Recherche of France under MAJIC (ANR-17-CE40-0004-01) project.

This research program was supported by a 2016 DRF Impulsion grant (COSMIC, P.I.: P.C.).

1. D. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.

2. M. Lustig, D. Donoho, and J. Pauly, “Sparse MRI: The application of compressed sensing for rapid MR imaging,” Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182–1195, 2007.

3. N. Chauffert, P. Ciuciu, J. Kahn, and P. Weiss, “Variable density sampling with continuous trajectories. Application to MRI,” SIAM Journal on Imaging Sciences, vol. 7, no. 4, pp. 1962–1992, Nov. 2014.

4. M. Hansen and T. Sørensen, “Gadgetron: An open source framework for medical image reconstruction,” Magnetic Resonance in Medicine, vol. 69, no. 6, pp. 1768–1776, 2013.

5. I. Daubechies, “The wavelet transform, time-frequency localization and signal analysis,” IEEE Transactions on Information Theory, vol. 36, no. 5, pp. 961–1005, 1990.

6. P. L. Combettes and J.-C. Pesquet, “Proximal splitting methods in signal processing,” in Fixed-Point Algorithms for Inverse Problems in Science and Engineering, H. H. Bauschke, R. Burachik, P. L. Combettes, V. Elser, D. R. Luke, and H. Wolkowicz, editors. Springer-Verlag, New York, pp. 185–212, 2011.

7. P. L. Combettes and J.-C. Pesquet, “Stochastic forward-backward and primal-dual approximation algorithms with application to online image restoration,” European Signal and Image Processing Conference (EUSIPCO 2016), pp. 1813–1817, Budapest, Hungary, 29 Aug.–2 Sept., 2016.

8. Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.

9. C. Lazarus, P. Weiss, N. Chauffert, F. Mauconduit, L. El Gueddari, C. Destrieux, I. Zemmoura, A. Vignaud, and P. Ciuciu, “Variable-density k-space filling curves for accelerated Magnetic Resonance Imaging,” 2018.

10. J. Keiner, S. Kunis, and D. Potts, “Using NFFT 3—a software library for various nonequispaced fast Fourier transforms,” ACM Transactions on Mathematical Software (TOMS), vol. 36, no. 4, Article 19, 2009.

11. L. Condat, “A primal–dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms,” Journal of Optimization Theory and Applications, vol. 158, no. 2, pp. 460–479, 2013.

12. B. Vũ, “A splitting algorithm for dual monotone inclusions involving cocoercive operators,” Advances in Computational Mathematics, vol. 38, no. 3, pp. 667–681, Apr 2013.

13. W. Press, S. Teukolsky, W. Vetterling, and B. Flannery, Numerical Recipes 3rd Edition: The Art of Scientific Computing, Cambridge University Press, New York, NY, USA, 3rd edition, 2007.