Hidenori Takeshima^{1}

The author proposes a new layer, named the aliasing layer (AL), for effectively correcting MR-specific aliasing artifacts using convolutional neural networks. In MR images acquired using parallel imaging (PI) and/or echo-planar imaging (EPI), the locations of aliasing artifacts and/or N/2 ghost artifacts can be calculated analytically. The AL preprocesses MR images by moving the calculated locations to locations accessible through the summation over all channels in a convolution layer. The experimental results demonstrate that the correction method using the proposed AL could effectively remove PI aliasing and EPI ghosting artifacts.

Convolutional neural networks (CNNs)^{1} have been widely used in various MR applications such as image reconstruction, denoising^{2} and disease classification^{3}. Under the assumption that an image has spatial locality, a CNN uses convolutions, which share local connections across the entire image, instead of full connections.

MR images acquired using parallel imaging (PI) often contain residual aliasing artifacts. Similarly, EPI images often contain residual N/2 ghosting artifacts. Previous methods for correcting PI aliasing artifacts include the variational network by Hammernik et al.^{4} and the multi-scale deep learning approach by Lee et al.^{5} In this work, the author proposes a new CNN layer, named the aliasing layer, which can significantly reduce these residual aliasing artifacts.

To correct MR-specific aliasing artifacts effectively, the author proposes a new CNN layer named the aliasing layer (AL). In regularly under-sampled PI, the locations of the aliasing artifacts can be calculated analytically from the image matrix size and the acceleration factor. Similarly, the locations of the EPI N/2 ghost artifacts are known. It is therefore sufficient to preprocess MR images by moving the calculated locations to locations accessible through the summation over all channels in a convolution layer. After this preprocessing, aliased signals are accessible as spatially local signals in other channels. As shown in Fig. 1, the AL duplicates an input signal into new channels and shifts all aliased signals to the location of the input signal in the new channels.

The AL generates images shifted by $$$N/a, 2N/a, \cdots, (a-1)N/a$$$, where $$$N$$$ is the image size in the phase-encode dimension and $$$a$$$ is the number of aliasing signals. After shifting, the AL concatenates the input and all the shifted images, so the number of output channels of the AL is the product of the number of input channels and the number of aliasing signals. For processing aliasing artifacts from PI methods such as traditional SENSE and GRAPPA, $$$a$$$ is set to the reduction factor of the PI acquisition. For processing the N/2 ghost artifact of EPI, $$$a=2$$$ is used.
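The shift-and-concatenate operation described above can be sketched in a few lines. The NumPy version below is only an illustration (the abstract does not specify a framework); channels-first arrays with the phase-encode dimension on axis 1 are an assumption.

```python
import numpy as np

def aliasing_layer(x, a):
    """Sketch of the aliasing layer (AL).

    x : array of shape (channels, N, width), N = phase-encode size.
    a : number of aliasing signals (PI reduction factor, or 2 for EPI).

    Returns an array with channels * a channels: the input plus copies
    circularly shifted by N/a, 2N/a, ..., (a-1)N/a along the
    phase-encode axis, concatenated in the channel direction.
    """
    c, n, w = x.shape
    shifted = [np.roll(x, shift=s * n // a, axis=1) for s in range(a)]
    return np.concatenate(shifted, axis=0)

# Example: a 2-channel image with N = 8 and a = 2 yields 4 channels.
x = np.arange(2 * 8 * 4, dtype=float).reshape(2, 8, 4)
y = aliasing_layer(x, a=2)
assert y.shape == (4, 8, 4)
# The shifted copy places the signal from row (r + N/2) % N at row r,
# so an aliased replica becomes spatially local in the new channel.
assert np.allclose(y[2], np.roll(x[0], 4, axis=0))
```

A convolution applied after this layer can then reach the aliased replica through its ordinary summation over input channels.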

As a proof of concept, the author implemented CNNs based on ResNet^{6}, with and without ALs, for processing reconstructed MR images. The CNN with ALs used for evaluation is shown in Fig. 2. The real and imaginary parts of the data were stored in separate channels. Each convolution layer used $$$3 \times 3$$$ convolution kernels, with $$$32$$$ output channels in the CNN with ALs and $$$32 \times a$$$ output channels in the CNN without ALs.
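The building block of the evaluated network, an AL followed by a $$$3 \times 3$$$ convolution (the formula given in the caption of Fig. 1), can be sketched as follows. This naive NumPy illustration is not the evaluated implementation; circular padding and single-image (unbatched) tensors are simplifying assumptions.

```python
import numpy as np

def conv2d_circular(x, k):
    """Naive 3x3 cross-correlation (CNN-style "convolution") with
    circular padding. x: (H, W) image, k: (3, 3) kernel."""
    y = np.zeros_like(x)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            y = y + k[di + 1, dj + 1] * np.roll(np.roll(x, -di, axis=0), -dj, axis=1)
    return y

def al_then_conv(x, kernels, a):
    """y_j = sum_i [Concat(Shift(x,0), ..., Shift(x,(a-1)N/a))]_i * k_ij.

    x       : (C, N, W) input, phase-encode size N on axis 1.
    kernels : (C * a, J, 3, 3) kernels k_ij (i: channel of the
              concatenated tensor, j: output channel).
    """
    c, n, w = x.shape
    # Aliasing layer: concatenate the input and its circular shifts.
    z = np.concatenate([np.roll(x, s * n // a, axis=1) for s in range(a)], axis=0)
    n_in, n_out = kernels.shape[:2]
    assert n_in == c * a
    # Convolution layer: sum over all (original and shifted) channels.
    y = np.zeros((n_out, n, w))
    for j in range(n_out):
        for i in range(n_in):
            y[j] += conv2d_circular(z[i], kernels[i, j])
    return y

# With a delta kernel on the shifted channel only, the output equals the
# input shifted by N/2: the aliased replica has become locally reachable.
x = np.random.default_rng(0).standard_normal((1, 8, 8))
kernels = np.zeros((2, 1, 3, 3))
kernels[1, 0, 1, 1] = 1.0  # select the center of the N/2-shifted copy
y = al_then_conv(x, kernels, a=2)
assert np.allclose(y[0], np.roll(x[0], 4, axis=0))
```

The delta-kernel check makes the mechanism concrete: a plain convolution would need a kernel of height $$$N/2$$$ to reach the aliased replica, whereas after the AL a $$$3 \times 3$$$ kernel suffices.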

For training and testing the CNNs, 5 images (4 for training, 1 for testing) shown in Fig. 3 were used. PI aliasing artifacts were simulated by regularly under-sampling k-space data to keep 30%, 40% and 50% of the original samples; the under-sampled data were Fourier transformed to obtain PI images with aliasing artifacts. In the case of EPI, even and odd encodes in k-space were shifted by $$$-0.15, -0.12, \cdots, +0.15$$$ in units of readout grid points to simulate N/2 ghost artifacts. The original fully-sampled images were used as the ground truth. The CNNs were trained by adaptive moment estimation (Adam)^{7} with a mean-squared-error loss function for 100 epochs. The parameters of Adam were $$$\alpha=0.001$$$, $$$\beta_1=0.9$$$ and $$$\beta_2=0.999$$$.
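The two artifact simulations described above can be sketched as follows. This NumPy version makes assumptions about details the abstract does not state (single-channel images, circular FFT conventions) and is intended only to illustrate the fold-over and even/odd-shift mechanisms.

```python
import numpy as np

def simulate_pi_aliasing(img, a):
    """Keep every a-th phase-encode line of k-space (regular
    under-sampling) and inverse-Fourier-transform; image replicas fold
    in at multiples of N/a. Scaled by a to keep intensities comparable."""
    k = np.fft.fft2(img)
    mask = np.zeros(img.shape[0], dtype=bool)
    mask[::a] = True
    return np.fft.ifft2(k * mask[:, None]) * a

def shift_line(line, s):
    """Fractionally shift a 1-D k-space line by s readout grid points
    via the Fourier shift theorem."""
    phase = np.exp(-2j * np.pi * np.fft.fftfreq(line.size) * s)
    return np.fft.ifft(np.fft.fft(line) * phase)

def simulate_epi_ghost(img, s):
    """Shift even and odd phase-encode lines by +s and -s readout grid
    points, the mismatch that produces an N/2 ghost after reconstruction."""
    k = np.fft.fft2(img)
    for r in range(k.shape[0]):
        k[r] = shift_line(k[r], s if r % 2 == 0 else -s)
    return np.fft.ifft2(k)

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
# For a = 2, the aliased image is the image plus its N/2-shifted replica.
aliased = simulate_pi_aliasing(img, 2)
assert np.allclose(aliased.real, img + np.roll(img, 4, axis=0))
# A zero shift leaves the EPI image unchanged (no ghost).
assert np.allclose(simulate_epi_ghost(img, 0.0), img)
```

The first assertion shows why the AL's shifts are exactly the right preprocessing: for $$$a=2$$$, the fold-over lands precisely $$$N/2$$$ away from the true signal.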

Since the same loss function was used in both evaluations, any difference in performance can be attributed to the ALs; as expected, the CNN with ALs was superior to the CNN with only convolution layers. The results imply that MR-specific aliasing artifacts can be represented as local correlations and can thus be modeled more precisely by a CNN with ALs.

While the proposed method can be adapted to other forms of MR artifacts, this work demonstrates applications to aliasing artifacts from PI and EPI. Future work includes correcting images acquired on actual MR systems and extending the method to other applications.

1. Y. LeCun et al. Gradient-based Learning Applied to Document Recognition. Proceedings of the IEEE. 1998; 86(11): 2278-2324.

2. K. Isogawa et al. Noise Level Adaptive Deep Convolutional Neural Network for Image Denoising. Proc. Intl. Soc. Mag. Reson. Med. 2018; 26: 2797.

3. G. Litjens et al. A Survey on Deep Learning in Medical Image Analysis. arXiv: 1702.05747.

4. K. Hammernik et al. Learning a Variational Network for Reconstruction of Accelerated MRI Data. Magn. Reson. Med. 2018; 79: 3055-3071.

5. D. Lee et al. Deep Artifact Learning for Compressed Sensing and Parallel MRI. arXiv: 1703.01120.

6. K. He et al. Deep Residual Learning for Image Recognition. Proc. the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770-778.

7. D. Kingma et al. Adam: A Method for Stochastic Optimization. arXiv: 1412.6980.

The aliasing layer (AL) in a CNN. The AL shifts an input image by $$$N/a, 2N/a, \cdots, (a-1)N/a$$$ ($$$N$$$: image width, $$$a$$$: number of aliasing signals) and concatenates the input image with all of its shifted copies. The AL followed by a convolution layer can be written as $$$y_j = \sum_{i} [\mathrm{Concat}(\mathrm{Shift}(x, 0), \mathrm{Shift}(x, N/a), \cdots, \mathrm{Shift}(x, (a-1)N/a))]_i * k_{ij}$$$, where $$$\mathrm{Concat}()$$$ concatenates all data in the channel direction and $$$\mathrm{Shift}(x, s)$$$ shifts $$$x$$$ circularly by $$$s$$$ in the phase-encode direction ($$$x$$$: input, $$$y$$$: output, $$$k$$$: kernel, $$$i$$$: input channel, $$$j$$$: output channel).

The denoising CNN using the proposed ALs for the demonstration. The purpose of the CNN is to remove residual aliasing artifacts that remain after PI or EPI reconstruction. The CNN consists of 6 convolution layers with 3 residual connections. The first and last layers, consisting of convolution layers with $$$1 \times 1$$$ kernels, are inserted to adjust the number of channels. Each intermediate layer consists of a convolution layer followed by a rectified linear unit (ReLU). In the CNN with ALs, an AL is inserted before each convolution layer.

The images used in the evaluation. In this figure, the following abbreviations are used: FSE (fast spin echo) and GE (gradient echo). The acquired images were resized to $$$512 \times 512$$$ for the PI simulations and $$$256 \times 256$$$ for the EPI simulations.

Training and validation errors when correcting PI-like and EPI-like artifacts. In both cases, the CNN with ALs gave lower loss values than the CNN without ALs.

Results of correcting aliased PI and EPI images using the conventional CNN and the CNN with ALs. While both methods suppressed aliasing artifacts, the CNN without ALs also suppressed actual brain structures with weak signals, treating them as aliasing. In contrast, the proposed method, using the CNN with ALs, suppressed the aliased signals selectively.