Jan W. Kurzawski^{1,2}, Matteo Cencini^{3}, Pedro A. Gómez^{4}, Rolf F. Schulte^{5}, Giada Fallo^{2}, Alessandra Retico^{1}, Michela Tosetti^{2,6}, Mauro Costagli^{2,6}, and Guido Buonincontri^{2}

Two-dimensional MRF is considered less sensitive to in-plane motion than conventional imaging techniques. However, challenges remain when scanning populations prone to rapid and extensive motion. Here, we suggest a two-step 3D MRF procedure that corrects for subject motion during the reconstruction. In the first step, we reconstruct the data in small segments consisting of images with equal contrast and estimate the between-segment motion. In the second step, we apply the motion correction and use the corrected images for dictionary matching. This results in higher-quality reconstructed images and more precise quantitative maps.

**3D MRF sequence**

Data were acquired on a clinical 1.5 T scanner (HDx, GE Healthcare). In our 3D-MRF acquisition we sampled random radial directions at each frame (Fig. 1A) with constant TE/TR. Each segment of the data was acquired with a pattern of flip angles (Fig. 1B). Replicating this design across the acquisition results in segments of equal contrast with the same angular sampling density in 3D k-space. Images from each segment, combined together, form a fully reconstructed image. We use a singular value decomposition to compress the acquired frames in k-space to singular values^{4}. The final images form a 128x128x128 volume of (1.5 mm)^{3} voxels.
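The SVD compression step can be sketched as follows. This is a minimal illustration, not the authors' implementation: dictionary size, rank, and array shapes are all assumptions, and random data stand in for the Bloch-simulated signals and acquired samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_atoms = 500, 1000  # time frames, dictionary entries (illustrative sizes)
dictionary = rng.standard_normal((n_t, n_atoms))  # stand-in for simulated MRF signals

# Temporal basis from the dictionary's left singular vectors
U, s, _ = np.linalg.svd(dictionary, full_matrices=False)
rank = 5                  # keep only a few singular components
Ur = U[:, :rank]          # (n_t, rank) compression basis

# Compress k-space data: project each sample's time series onto the basis,
# so reconstruction operates on `rank` singular-value k-spaces instead of n_t frames
n_samples = 2048
kspace = rng.standard_normal((n_t, n_samples)) + 1j * rng.standard_normal((n_t, n_samples))
kspace_compressed = Ur.conj().T @ kspace        # (rank, n_samples)

# Dictionary matching is performed in the same compressed domain
dict_compressed = Ur.conj().T @ dictionary      # (rank, n_atoms)
```

The key property is that matching in the low-rank subspace approximates matching on the full time series at a fraction of the memory and compute cost.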

**Simulation**

Data from a phantom were collected with a single-segment acquisition. Motion was simulated by shifting and rotating the trajectories before the reconstruction. A time course of 3D volumes was generated, consisting of the motion-free reconstruction followed by the volumes with simulated motion. The 3D motion correction algorithm implemented in AFNI was applied^{5}.
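One way to simulate rigid motion on a non-Cartesian acquisition is to transform the trajectory itself: a rotation of the object rotates the sampled k-space coordinates, and a translation multiplies the samples by a linear phase. The sketch below illustrates this principle; the function and variable names are hypothetical, not the authors' code.

```python
import numpy as np

def apply_rigid_motion(traj, data, rotation, shift_mm):
    """Simulate rigid motion on radial k-space data.

    traj:     (n_samples, 3) k-space coordinates in 1/mm
    data:     (n_samples,) complex samples
    rotation: (3, 3) rotation matrix
    shift_mm: (3,) translation in mm
    """
    traj_rot = traj @ rotation.T                          # object rotation -> rotated trajectory
    phase = np.exp(-2j * np.pi * (traj_rot @ shift_mm))   # object shift -> linear phase
    return traj_rot, data * phase

# Example: 10-degree rotation about z plus a 3 mm shift along x
theta = np.deg2rad(10.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
traj = np.random.default_rng(1).uniform(-0.5, 0.5, size=(1000, 3))
data = np.ones(1000, dtype=complex)
traj_m, data_m = apply_rigid_motion(traj, data, Rz, np.array([3.0, 0.0, 0.0]))
```

Because the transform is applied before gridding, the same reconstruction pipeline can then be run on the motion-corrupted trajectories.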

**In vivo experiment**

Two subjects participated in this study (cases A and B), each acquired twice with a 48-segment scheme. In the first acquisition, subjects were instructed to stay still, while in the second they randomly moved their heads. The processing pipeline for the data with deliberate motion was as follows. 1) Each acquisition was first reconstructed segment by segment to form a time series of 3D volumes. During reconstruction, the k-space was apodised by a 3D Gaussian (FWHM = 16 k-space points) to decrease noise and facilitate motion correction. 2) Between-segment motion was estimated from the magnitude data and the resulting transformation matrix was applied to the complex data of each segment. 3) MRF dictionary fitting was run on both the motion-corrected images and the uncorrected images with motion artifacts. Results were compared with the dataset acquired without voluntary motion.
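Step 1's Gaussian apodization can be sketched as below: each segment's 3D k-space is multiplied by a centered Gaussian window whose FWHM matches the stated 16 k-space points. The grid size follows the 128^{3} reconstruction; the function name and the assumption of an origin-centered Cartesian grid are illustrative.

```python
import numpy as np

def gaussian_apodization(kspace, fwhm=16.0):
    """Apodise a 3D k-space (origin at the array center) with a Gaussian window."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # convert FWHM to sigma
    axes = [np.arange(n) - n // 2 for n in kspace.shape]
    kz, ky, kx = np.meshgrid(*axes, indexing="ij")
    window = np.exp(-(kx**2 + ky**2 + kz**2) / (2.0 * sigma**2))
    return kspace * window

# Apply to one segment's (illustrative) 128^3 k-space volume
kspace = np.ones((128, 128, 128), dtype=complex)
kspace_apod = gaussian_apodization(kspace)
```

Apodizing in k-space is equivalent to smoothing in image space, which suppresses noise and undersampling artifacts and makes the magnitude-based registration in step 2 more robust.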

Fig. 2 demonstrates that the motion correction algorithm can correctly estimate the simulated movements (Fig. 2A), and shows the effect of motion correction applied to phantom data (Fig. 2B).

Fig. 3 shows the improvement in the in vivo data reconstructed with the motion correction algorithm: not only are the anatomical landmarks of higher quality, but the T1 maps also show greater detail. In Fig. 4 we correlate quantitative estimates between the datasets; in both cases, the corrected values are closer to the ground truth. To obtain a spatial map of the fitting error, we plot the correlation values between the dictionary and the acquired data as a map for case A (Fig. 5A). We observe that the correction increases the correlation values globally (Fig. 5), especially in areas highly affected by motion.

1. Ma, D., et al. Magnetic resonance fingerprinting. Nature, 2013. 495(7440): p. 187-92.
2. Cruz, G., et al. Rigid motion-corrected magnetic resonance fingerprinting. Magn Reson Med, 2018.
3. Yu, Z., et al. Exploring the sensitivity of magnetic resonance fingerprinting to motion. Magn Reson Imaging, 2018. 54: p. 241-248.
4. McGivney, D.F., et al. SVD compression for magnetic resonance fingerprinting in the time domain. IEEE Trans Med Imaging, 2014. 33(12): p. 2311-22.
5. Cox, R.W. AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Comput Biomed Res, 1996. 29(3): p. 162-73.

Fig. 1A: Sequence diagram for 3D radial trajectories. Spokes were acquired with randomized angles (theta, phi).

Fig. 1B: Flip angle pattern illustrating the equal-contrast segments. An inversion pulse preceded each segment.

Fig. 2A: Simulated and estimated motion parameters.

Fig. 2B: Volumes before and after applying the motion correction (images of the 1^{st} singular value). The first image is the reference; the next two volumes contain translation only, followed by two with rotation only, and the last two with both translation and rotation. Red circles highlight the effect of the applied correction.

Fig. 3: Axial slices for both subjects (A and B) and all conditions (motion-free, motion-corrupted, and motion-corrected) for the first singular value (1^{st} row) and T1 map (2^{nd} row). Subject B represents the extreme-motion case.

Fig. 4: T1 and T2 concordance correlation coefficient (CCC) plots comparing values from the motion-free datasets (x axis) with the motion-corrupted (blue) and motion-corrected (red) reconstructions (y axis). The slope corresponds to a polynomial fit with intercept at 0, shown for each comparison. Correlations were calculated over T1 values (0-3000 ms) and T2 values (0-200 ms) within the brain mask.
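The agreement metric in Fig. 4, Lin's concordance correlation coefficient, penalizes both scatter and systematic bias, unlike Pearson correlation. A minimal sketch (variable names are illustrative, not the authors' code):

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two 1D arrays."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)

val_perfect = ccc([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # identical values: perfect concordance
val_shifted = ccc([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])  # a constant bias lowers the CCC
```

Applied voxel-wise within the brain mask, CCC = 1 only when the corrected map reproduces the motion-free map exactly, which is why it is a stricter agreement measure than the fit slope alone.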

Fig. 5A: Correlation values from dictionary matching for the middle axial slice for the case A.

Fig. 5B: Histogram of the same parameter for the presented slice (red: before correction; green: after correction). Vertical lines indicate the means.