Below are highlights from Technology Research and Development (TR&D) 1 projects covering PET/MR and MRSI, including motion correction, attenuation correction, accelerated image reconstruction, and deep learning methods.

Memory consistent unsupervised off-the-shelf model adaptation for source-relaxed medical image segmentation

This article discusses a technique called unsupervised domain adaptation (UDA), which transfers knowledge learned from a labeled source domain to an unlabeled target domain without using any labeled target data. However, access to the labeled source data can be restricted due to privacy or intellectual property concerns. The authors propose a new approach called “off-the-shelf” UDA (OSUDA), which adapts an OS segmentor trained in a source domain to a target domain using a novel batch-wise normalization (BN) statistics adaptation framework, and show that it performs better than existing UDA methods that do not use source data. They evaluated their framework on both cross-modality and cross-subtype brain tumor segmentation and cardiac MR-to-CT segmentation tasks.

Abstract
Illustration of our proposed queued dynamic memory-consistent self-training strategy.

Unsupervised domain adaptation (UDA) has been a vital protocol for transferring information learned from a labeled source domain to an unlabeled, heterogeneous target domain. Although UDA models are typically trained jointly on data from both domains, access to the labeled source domain data is often restricted due to concerns over patient data privacy or intellectual property. To sidestep this, we propose “off-the-shelf (OS)” UDA (OSUDA), aimed at image segmentation, by adapting an OS segmentor trained in a source domain to a target domain in the absence of source domain data during adaptation. Toward this goal, we develop a novel batch-wise normalization (BN) statistics adaptation framework. In particular, we gradually adapt the domain-specific low-order BN statistics, e.g., mean and variance, through an exponential momentum decay strategy, while explicitly enforcing the consistency of the domain-shareable high-order BN statistics, e.g., scaling and shifting factors, via our optimization objective. We also adaptively quantify the channel-wise transferability to gauge the importance of each channel, via both low-order statistics divergence and a scaling factor. Furthermore, we incorporate unsupervised self-entropy minimization into our framework to boost performance, alongside a novel queued, memory-consistent self-training strategy that utilizes reliable pseudo-labels for stable and efficient unsupervised adaptation. We evaluated our OSUDA-based framework on both cross-modality and cross-subtype brain tumor segmentation and cardiac MR-to-CT segmentation tasks. Our experimental results showed that our memory-consistent OSUDA performs better than existing source-relaxed UDA methods and yields similar performance to UDA methods with source data.
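As a rough illustration of the low-order BN statistics adaptation described above, the sketch below updates a segmentor's BN running means and variances on unlabeled target-domain batches with an exponentially decaying momentum, leaving the learned scaling/shifting factors untouched. It assumes a PyTorch model and a target loader yielding image batches; the function name and decay schedule are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def adapt_bn_stats(model: nn.Module, target_loader, num_steps=100, m0=0.9, decay=0.96):
    """Gradually replace source-domain BN running statistics (mean/variance)
    with target-domain batch statistics using an exponentially decaying momentum.
    The learned scaling/shifting factors (weight/bias) are left untouched,
    mirroring the idea that high-order BN statistics are domain-shareable."""
    model.train()  # BN layers only update running stats in train mode
    for step, images in enumerate(target_loader):
        if step >= num_steps:
            break
        momentum = m0 * (decay ** step)  # illustrative exponential momentum decay
        for module in model.modules():
            if isinstance(module, nn.modules.batchnorm._BatchNorm):
                module.momentum = momentum
        model(images)  # forward pass updates running_mean / running_var
    model.eval()
    return model
```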

Liu X, et al. Med Image Anal. 2023;83:102641.

Manifold learning via linear tangent space alignment (LTSA) for accelerated dynamic MRI with sparse sampling

The article describes a new way to improve the quality of dynamic magnetic resonance imaging (MRI) by using mathematical modeling to reconstruct images from sparsely sampled data. The new model, called linear tangent space alignment (LTSA), exploits the low-dimensional structure of dynamic images, and was shown to outperform existing methods in numerical simulations and in vivo experiments. The authors suggest that this method could have applications in various MRI techniques, such as dynamic MRI and MR spectroscopic imaging.

Abstract
Normalized root mean square error (NRMSE) of different image reconstruction methods in the numerical simulation study. (a) Representative images during a deep inhalation. (b) The NRMSEs per frame.

The spatial resolution and temporal frame rate of dynamic magnetic resonance imaging (MRI) can be improved by reconstructing images from sparsely sampled k-space data with mathematical modeling of the underlying spatiotemporal signals. These models include sparsity models, linear subspace models, and non-linear manifold models. This work presents a novel linear tangent space alignment (LTSA) model-based framework that exploits the intrinsic low-dimensional manifold structure of dynamic images for accelerated dynamic MRI. The performance of the proposed method was evaluated and compared to state-of-the-art methods using numerical simulation studies as well as 2D and 3D in vivo cardiac imaging experiments. The proposed method achieved the best performance in image reconstruction among all the compared methods. The proposed method could prove useful for accelerating many MRI applications, including dynamic MRI, multi-parametric MRI, and MR spectroscopic imaging.
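For context, the linear subspace models mentioned above represent the dynamic image series as a low-rank (partially separable) function of space and time; a generic statement of that assumption, which the LTSA model builds on but is not reduced to, is:

```latex
% Generic low-rank / partially separable model for a dynamic image series:
% a few spatial coefficient maps u_l and temporal basis functions v_l.
X(\mathbf{r}, t) \;\approx\; \sum_{l=1}^{L} u_l(\mathbf{r})\, v_l(t)
\quad\Longleftrightarrow\quad
\operatorname{rank}(\mathbf{C}) \le L, \qquad \mathbf{C}_{ij} = X(\mathbf{r}_i, t_j).
```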

Djebra Y, et al. IEEE Trans Med Imaging. 2022. doi: 10.1109/TMI.2022.3207774.

Joint spectral quantification of MR spectroscopic imaging using linear tangent space alignment-based manifold learning

The article presents a new method to improve the accuracy of MR Spectroscopic Imaging (MRSI) spectral quantification. The proposed method uses a linear tangent space alignment (LTSA) model to represent MRSI signals, and aligns the local coordinates of the subspace model to the global coordinates of the underlying low-dimensional manifold via linear transform. The authors validated the performance of the proposed method using numerical simulation data and in vivo proton-MRSI experimental data, and showed that it achieved superior performance over existing methods in terms of noise reduction, artifact reduction, and spectral quantification accuracy.

Abstract

Purpose: To develop a manifold learning-based method that leverages the intrinsic low-dimensional structure of MR Spectroscopic Imaging (MRSI) signals for joint spectral quantification.

Methods: A linear tangent space alignment (LTSA) model was proposed to represent MRSI signals. In the proposed model, the signals of each metabolite were represented using a subspace model and the local coordinates of the subspaces were aligned to the global coordinates of the underlying low-dimensional manifold via linear transform. With the basis functions of the subspaces predetermined via quantum mechanics simulations, the global coordinates and the matrices for the local-to-global coordinate alignment were estimated by fitting the proposed LTSA model to noisy MRSI data with a spatial smoothness constraint on the global coordinates and a sparsity constraint on the matrices.
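Schematically, and only as a paraphrase of the description above (the symbols are illustrative, not the paper's notation), the fitting problem couples the predetermined metabolite bases, the local-to-global alignment matrices, and the global manifold coordinates:

```latex
% d      : noisy MRSI data,        Omega : encoding/sampling operator
% Phi_m  : predetermined basis of the m-th metabolite subspace (from QM simulations)
% A_m    : local-to-global coordinate alignment matrix for metabolite m
% Theta  : global coordinates of the low-dimensional manifold
\min_{\Theta,\,\{A_m\}}
\Big\| d - \Omega\Big( \textstyle\sum_m \Phi_m A_m \Theta \Big) \Big\|_2^2
\;+\; \lambda_1\, R_{\mathrm{smooth}}(\Theta)
\;+\; \lambda_2 \sum_m \| A_m \|_1
```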

Results: The performance of the proposed method was validated using numerical simulation data and in vivo proton-MRSI experimental data acquired on healthy volunteers at 3T. The results of the proposed method were compared with the QUEST method and the subspace-based method. In all the compared cases, the proposed method achieved superior performance over the QUEST and the subspace-based methods both qualitatively in terms of noise and artifacts in the estimated metabolite concentration maps, and quantitatively in terms of spectral quantification accuracy measured by normalized root mean square errors.

Conclusion: Joint spectral quantification using linear tangent space alignment-based manifold learning improves the accuracy of MRSI spectral quantification.

Ma C, et al. Magn Reson Med. 2022.

Deep learning-based GTV contouring modeling inter- and intra-observer variability in sarcomas

The accurate delineation of gross tumor volume (GTV) is crucial in radiation therapy planning, but it is a time-consuming and subjective process. This study proposes an automatic GTV contouring method for soft-tissue sarcomas using deep learning and considering inter- and intra-observer variability. The proposed method demonstrated the ability to predict accurate contours and can potentially improve clinical workflow.

Confidence maps from multiple contourings

Background and purpose: The delineation of the gross tumor volume (GTV) is a critical step for radiation therapy treatment planning. Delineation is a time-consuming process that is subject to inter- and intra-observer variability.

Materials and methods: In this work, we propose an automatic GTV contouring method for soft-tissue sarcomas from X-ray computed tomography (CT) images, using deep learning and integrating inter- and intra-observer variability in the learned model. Sixty-eight patients with soft tissue and bone sarcomas were considered in this evaluation, all of whom underwent pre-operative CT imaging used to perform GTV delineation. Four radiation oncologists and radiologists performed three contouring trials each for all patients. We quantify variability by defining confidence levels based on the frequency of inclusion of a given voxel into the GTV and use a deep convolutional neural network to learn GTV confidence maps.
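As a small illustration of the confidence-map construction described above, the sketch below computes voxel-wise inclusion frequencies from a stack of binary contours and a soft (continuous) Dice score between two confidence maps. The continuous Dice shown is one common soft generalization and may differ from the exact definition used in the paper; the array shapes and helper names are illustrative.

```python
import numpy as np

def confidence_map(contours):
    """contours: array of shape (n_contours, D, H, W) with binary GTV masks.
    Returns the fraction of contours that include each voxel (values in [0, 1])."""
    return np.asarray(contours, dtype=np.float32).mean(axis=0)

def continuous_dice(pred, target, eps=1e-8):
    """Soft/continuous Dice between two confidence maps with values in [0, 1]."""
    pred = np.asarray(pred, dtype=np.float32)
    target = np.asarray(target, dtype=np.float32)
    return 2.0 * np.sum(pred * target) / (np.sum(pred) + np.sum(target) + eps)

# Example: 12 contourings (4 readers x 3 trials) of a small toy volume
rng = np.random.default_rng(0)
masks = rng.random((12, 8, 64, 64)) > 0.5
conf = confidence_map(masks)                   # voxel-wise inclusion frequency
consensus = (conf >= 0.5).astype(np.float32)   # simple majority-vote reference
print(round(continuous_dice(conf, consensus), 3))
```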

Results: Results were compared to confidence maps from the four readers as well as ground-truth consensus contours established jointly by all readers. The resulting continuous Dice score between predicted and true confidence maps was 87% and the Hausdorff distance was 14 mm.

Conclusion: Results demonstrate the ability of the proposed method to predict accurate contours while utilizing variability and as such it can be used to improve clinical workflow.

Marin T, et al. Radiother Oncol. 2021;167:269-276.

Attenuation correction using deep learning and integrated UTE/multi-echo Dixon sequence: evaluation in amyloid and tau PET imaging

This study presents a new method to improve the accuracy of PET imaging for Alzheimer’s disease (AD) by correcting errors in attenuation correction (AC). The proposed method uses a combination of deep learning and an ultrashort time-to-echo/multi-echo Dixon sequence for amyloid and tau imaging. The results show that this method achieved the best performance in terms of accuracy compared to other deep learning methods and reduced errors in cortical regions.

Abstract

The averaged surface maps of SUVR relative error for different methods: Atlas-Dixon (first column), CNN-MPRAGE (second column), CNN-Dixon (third column), and CNN-mUTE (fourth column). Only the left hemisphere of the surface map is shown. The color map range is from 1 to 10% in magnitude

PET measures of amyloid and tau pathologies are powerful biomarkers for the diagnosis and monitoring of Alzheimer’s disease (AD). Because cortical regions are close to bone, the quantitation accuracy of amyloid and tau PET imaging can be significantly influenced by errors in attenuation correction (AC). We have applied our MR-based AC method, which combines deep learning with a novel ultrashort time-to-echo (UTE)/multi-echo Dixon (mUTE) sequence, for amyloid and tau imaging.

In thirty-five subjects who underwent both 11C-PiB and 18F-MK6240 scans, the proposed method was compared to the Dixon-based atlas method as well as magnetization-prepared rapid acquisition with gradient echo (MPRAGE)- or Dixon-based deep learning methods. PET error images of the standardized uptake value ratio (SUVR) were quantified through regional and surface analyses to evaluate the final AC accuracy.

The regional SUV and SUVR errors for all deep learning methods were below 2%, with the mUTE-based deep learning method performing best. The mUTE-based deep learning method resulted in the fewest surface regions with error higher than 1%, with the largest errors (> 5%) appearing near the inferior temporal and medial orbitofrontal cortices.

Gong K, et al. Eur J Nucl Med Mol Imaging. 2020 Oct 27. doi: 10.1007/s00259-020-05061-w.

MR‐based PET attenuation correction using a combined ultrashort echo time/multi‐echo Dixon acquisition

We propose a three-dimensional (3D) ultrashort echo time (UTE)/multi-echo Dixon (mUTE) sequence to acquire signals from water, fat, and short-T2 components (e.g., bone) simultaneously in a single acquisition. A physical compartmental model is used to fit the measured multi-echo MR signals to obtain fractions of water, fat, and bone components for each voxel, which are then used to estimate a continuous linear attenuation coefficient (LAC) map for PET attenuation correction. The performance of the proposed method was evaluated via phantom and in vivo human studies, and we found that the method can generate subject-specific, continuous LAC maps for PET attenuation correction in PET/MR.
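As a toy illustration of the final step described above, once per-voxel water/fat/bone fractions have been estimated from the multi-echo fit, a continuous LAC map can be formed as a fraction-weighted combination of reference 511 keV attenuation coefficients. The coefficients below are approximate literature values and the mapping is a simplification, not the paper's exact procedure.

```python
import numpy as np

# Approximate 511 keV linear attenuation coefficients (cm^-1); illustrative values.
MU_WATER, MU_FAT, MU_BONE = 0.096, 0.090, 0.170

def lac_map(f_water, f_fat, f_bone):
    """Fraction-weighted continuous LAC map from per-voxel tissue fractions.
    Fractions are assumed non-negative and summing to (at most) one."""
    return f_water * MU_WATER + f_fat * MU_FAT + f_bone * MU_BONE

# Example voxel: 70% water, 10% fat, 20% bone
print(lac_map(np.float32(0.7), np.float32(0.1), np.float32(0.2)))
```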

Pulse sequence diagram of the proposed ultrashort echo time/multi‐echo Dixon (mUTE) sequence. 
Results of positron emission tomography (PET) reconstruction from the in vivo experiment (Subject #1). PET images reconstructed using linear attenuation coefficient (LAC) maps derived from computed tomography (CT), two‐point Dixon‐based method, atlas‐based method, proposed ultrashort echo time/multi‐echo Dixon (mUTE) method with single LAC assignment of bone, and proposed mUTE method with continuous LAC assignment of bone are shown

Han PK, Horng DE, Gong K, Petibon Y, Kim K, Li Q, Johnson KA, El Fakhri G, Ouyang J, Ma C. “MR-based PET attenuation correction using a combined ultrashort echo time/multi-echo Dixon acquisition.” Med Phys. 2020 Jul;47(7):3064-3077.

Attenuation correction using 3D deep convolutional neural network for brain 18F-FDG PET/MR: Comparison with Atlas, ZTE and CT based attenuation correction.

Axial sections of ZTE MR image (A), CT that served as reference (B), AC map generated from the Atlas method (C), AC map generated from segmented ZTE (D), and AC map generated with the U-net method (E), in three different patients.

One of the main technical challenges of PET/MRI is to achieve an accurate PET attenuation correction (AC) estimation. In current systems, AC is accomplished by generating an MRI-based surrogate computed tomography (CT) image from which AC maps are derived. Nevertheless, all techniques currently implemented in clinical routine suffer from bias. We present here a convolutional neural network (CNN) that generates AC maps from Zero Echo Time (ZTE) MR images.

Seventy patients referred to our institution for an 18F-FDG PET/MR exam (SIGNA PET/MR, GE Healthcare) as part of the investigation of suspected dementia were included. Twenty-three patients were added to the manufacturer's training set and 47 were used for validation. A brain computed tomography (CT) scan, two-point LAVA-flex MRI (for atlas-based AC), and ZTE-MRI were available in all patients. Three AC methods were evaluated and compared to CT-based AC (CTAC): one based on a single head atlas, one based on ZTE segmentation, and one CNN with a 3D U-net architecture generating AC maps from ZTE MR images. The impact on brain metabolism was evaluated by combining voxel-based and region-of-interest-based analyses, with CTAC set as the reference. The U-net AC method yielded the lowest bias and the lowest inter-individual and inter-regional variability compared to PET images reconstructed with the ZTE and Atlas methods. The impact on brain metabolism was negligible, with average errors of -0.2% in most cortical regions.

These results suggest that the U-net AC is more reliable for correcting photon attenuation in brain FDG-PET/MR than atlas-AC and ZTE-AC methods.

Attenuation correction for brain PET imaging using deep neural network based on Dixon and ZTE MR images

Three views of the PET reconstruction error images (PETpseudoCT – PETCT, unit: SUV) using the Dixon-Seg method (left column), the Dixon-atlas method (middle column) and the proposed Dixon-Unet method (right column).

Positron emission tomography (PET) is a functional imaging modality widely used in neuroscience studies. To obtain meaningful quantitative results from PET images, attenuation correction is necessary during image reconstruction. For PET/MR hybrid systems, PET attenuation correction is challenging because magnetic resonance (MR) images do not reflect attenuation coefficients directly. To address this issue, we present deep neural network methods to derive continuous attenuation coefficients for brain PET imaging from MR images. With only Dixon MR images as the network input, the existing U-net structure was adopted, and analysis of forty patient data sets showed it to be superior to other Dixon-based methods.

When both Dixon and zero echo time (ZTE) images are available, we have proposed a modified U-net structure, named GroupU-net, to efficiently make use of both Dixon and ZTE information through group convolution modules when the network goes deeper.
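The sketch below illustrates the general idea of group convolutions for keeping Dixon-derived and ZTE-derived feature channels in separate processing streams before mixing them; it is a minimal PyTorch example with illustrative layer sizes, not the actual GroupU-net architecture.

```python
import torch
import torch.nn as nn

class GroupedFusionBlock(nn.Module):
    """Illustrative block: groups=2 keeps Dixon-derived and ZTE-derived channels
    in separate convolution groups, then a 1x1x1 convolution mixes them."""
    def __init__(self, channels_per_input=16):
        super().__init__()
        c = 2 * channels_per_input  # [Dixon features | ZTE features] concatenated
        self.grouped = nn.Conv3d(c, c, kernel_size=3, padding=1, groups=2)
        self.mix = nn.Conv3d(c, c, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, dixon_feat, zte_feat):
        x = torch.cat([dixon_feat, zte_feat], dim=1)
        return self.act(self.mix(self.act(self.grouped(x))))

# Example: batch of 1, 16 feature channels per input, 32^3 patch
block = GroupedFusionBlock(16)
out = block(torch.randn(1, 16, 32, 32, 32), torch.randn(1, 16, 32, 32, 32))
print(out.shape)  # torch.Size([1, 32, 32, 32, 32])
```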

Quantitative analysis based on fourteen real patient data sets demonstrates that both network approaches can perform better than the standard methods, and the proposed network structure can further reduce the PET quantification error compared to the U-net structure.

Gong K, et al. Phys Med Biol. 2018 Jun 13;63(12):125011. doi: 10.1088/1361-6560/aac763.

A deep learning approach for 18F-FDG PET attenuation correction

PET reconstruction using (a) deepAC and (b) acquired CT-based attenuation correction (CTAC) for a 59-year-old male with a brain tumor. The tumor region is indicated by a red arrow in the CTAC PET image. Low reconstructed PET error is observed using the proposed deepAC approach even in the presence of brain metastasis.

The goal of this research was to develop and evaluate the feasibility of a data-driven deep learning approach (deepAC) for positron-emission tomography (PET) image attenuation correction without anatomical imaging.

A PET attenuation correction pipeline was developed utilizing deep learning to generate continuously valued pseudo-computed tomography (CT) images from uncorrected 18F-fluorodeoxyglucose (18F-FDG) PET images. A deep convolutional encoder-decoder network was trained to identify tissue contrast in volumetric uncorrected PET images co-registered to CT data. A set of 100 retrospective 3D FDG PET head images was used to train the model. The model was evaluated in another 28 patients by comparing the generated pseudo-CT to the acquired CT using Dice coefficient and mean absolute error (MAE) and finally by comparing reconstructed PET images using the pseudo-CT and acquired CT for attenuation correction. Paired-sample t tests were used for statistical analysis to compare PET reconstruction error using deepAC with CT-based attenuation correction.

deepAC produced pseudo-CTs with Dice coefficients of 0.80 ± 0.02 for air, 0.94 ± 0.01 for soft tissue, and 0.75 ± 0.03 for bone and MAE of 111 ± 16 HU relative to the PET/CT dataset. deepAC provides quantitatively accurate 18F-FDG PET results with average errors of less than 1% in most brain regions.

We have developed an automated approach (deepAC) that allows generation of a continuously valued pseudo-CT from a single 18F-FDG non-attenuation-corrected (NAC) PET image and evaluated it in PET/CT brain imaging.

Liu F, et al. EJNMMI Phys. 2018 Nov 12;5(1):24. doi: 10.1186/s40658-018-0225-8.

Motion correction for PET data using subspace-based real-time MR imaging in simultaneous PET/MR

Image quality of positron emission tomography (PET) reconstructions is degraded by subject motion occurring during the acquisition. Magnetic resonance (MR)-based motion correction approaches have been studied for PET/MR scanners and have been successful at capturing regular motion patterns, when used in conjunction with surrogate signals (e.g. navigators) to detect motion. However, handling irregular respiratory motion and bulk motion remains challenging.

PET reconstructions for the bulk motion experiment using four different methods: reconstruction without motion correction (NMC), reconstruction from PET data corresponding to a single respiratory phase and body position (Gated), motion correction using motion estimated from the XD-GRASP MR reconstructions (MC-XDG), and the proposed motion-corrected reconstruction from low-rank MR reconstructions (MC-LR). Profile plots through the right kidney (along the orange line drawn on the NMC image) are shown in (c).

In this work, we propose an MR-based motion correction method relying on subspace-based real-time MR imaging to estimate motion fields used to correct PET reconstructions. We take advantage of the low-rank characteristics of dynamic MR images to reconstruct high-resolution MR images at high frame rates from highly undersampled k-space data. Reconstructed dynamic MR images are used to determine motion phases for PET reconstruction and to estimate phase-to-phase nonrigid motion fields able to capture complex motion patterns such as irregular respiratory and bulk motion. MR-derived binning and motion fields are used for PET reconstruction to generate motion-corrected PET images. The proposed method was evaluated on in vivo data with irregular motion patterns. MR reconstructions accurately captured motion, outperforming state-of-the-art dynamic MR reconstruction techniques. Evaluation of PET reconstructions demonstrated the benefits of the proposed method in terms of motion artifact reduction, improving the contrast-to-noise ratio by up to a factor of 3 and achieving a target-to-background ratio up to 90% higher than standard/uncorrected methods. The proposed method can improve the image quality of motion-corrected PET reconstructions in clinical applications.
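As a highly simplified illustration of the subspace (low-rank) reconstruction idea underlying the real-time MR imaging above, the sketch below fits spatial coefficient maps to retrospectively undersampled Cartesian k-t data given a temporal basis (e.g., from an SVD of training or navigator data). The published method is more sophisticated (different sampling, additional modeling); all names and the sampling setup here are illustrative.

```python
import numpy as np

def lowrank_recon(kdata, mask, V, n_iter=50, step=1.0):
    """Simplified Cartesian sketch of subspace (low-rank) dynamic MR reconstruction.
    kdata : (ny, nx, nt) undersampled k-space data (zeros where not sampled)
    mask  : (ny, nx, nt) binary sampling mask
    V     : (nt, L) temporal basis with (approximately) orthonormal columns
    Solves min_U || mask * (F{U V^H} - kdata) ||^2 by gradient descent on U."""
    ny, nx, _ = kdata.shape
    L = V.shape[1]
    U = np.zeros((ny, nx, L), dtype=complex)
    F = lambda img: np.fft.fft2(img, axes=(0, 1), norm="ortho")
    Fh = lambda ksp: np.fft.ifft2(ksp, axes=(0, 1), norm="ortho")
    for _ in range(n_iter):
        images = U @ V.conj().T             # (ny, nx, nt) dynamic image series
        resid = mask * (F(images) - kdata)  # data-consistency residual in k-space
        U -= step * (Fh(resid) @ V)         # gradient step on the spatial coefficients
    return U @ V.conj().T                   # reconstructed dynamic images
```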

Marin T, et al. Phys Med Biol. 2020;65:235022.

MR-based cardiac and respiratory motion correction of PET: application to static and dynamic cardiac 18F-FDG imaging

Short-axis and horizontal long-axis images of a late dynamic frame reconstructed with MC and NMC for subject 2. White arrows indicate locations where reconstructed wall activity is clearly higher in MC compared to NMC. Red arrows point to papillary muscles whose structure is more visible in MC images, indicating improved spatial resolution. Orange arrows indicate areas where spillover from the myocardium to the left-ventricle cavity is visibly reduced in MC images.

Motion of the myocardium deteriorates the quality and quantitative accuracy of cardiac PET images. We present a method for MR-based cardiac and respiratory motion correction of cardiac PET data and evaluate its impact on estimation of activity and kinetic parameters in human subjects.

Three healthy subjects underwent simultaneous dynamic 18F-FDG PET and MRI on a hybrid PET/MR scanner. A cardiorespiratory motion field was determined for each subject using navigator, tagging, and golden-angle radial MR acquisitions. Acquired coincidence events were binned into cardiac and respiratory phases using electrocardiogram and list-mode-driven signals, respectively. Dynamic PET images were reconstructed with MR-based motion correction (MC) and without motion correction (NMC). Parametric images of 18F-FDG consumption rates (Ki) were estimated using Patlak’s method for both MC and NMC images.

MC alleviated motion artifacts in PET images, resulting in improved spatial resolution, improved recovery of activity in the myocardium wall, and reduced spillover from the myocardium to the left-ventricle cavity. Significantly higher myocardium contrast-to-noise ratios and lower apparent wall thicknesses were obtained in MC versus NMC images. Likewise, parametric images of Ki calculated with MC data had improved spatial resolution compared to those obtained with NMC. Consistent with an increase in reconstructed activity concentration in the frames used during kinetic analyses, MC led to the estimation of higher Ki values almost everywhere in the myocardium, with up to an 18% increase (mean across subjects) in the septum as compared to NMC.
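For reference, the Patlak analysis mentioned above estimates the net influx rate Ki as the slope of a linearized plot of the tissue and plasma time-activity curves; a small sketch under standard Patlak assumptions (the variable names and the toy input function are illustrative) follows.

```python
import numpy as np

def patlak_ki(t, ct, cp, t_star=20.0):
    """Estimate the net influx rate Ki with Patlak's graphical method.
    t : frame mid-times (min), ct : tissue TAC, cp : plasma input function.
    For t >= t_star, ct/cp is approximately linear in int_0^t cp dt / cp,
    and the slope of that line is Ki."""
    t, ct, cp = (np.asarray(a, float) for a in (t, ct, cp))
    int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (cp[1:] + cp[:-1]))))
    x = int_cp / cp          # "Patlak time"
    y = ct / cp
    use = t >= t_star        # keep only the quasi-equilibrium portion
    slope, _intercept = np.polyfit(x[use], y[use], 1)
    return slope             # Ki (1/min)

# Toy example with a synthetic input function and a known Ki of 0.1 min^-1
t = np.linspace(0.25, 60, 120)
cp = 10 * np.exp(-0.1 * t) + 1.0
int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (cp[1:] + cp[:-1]))))
ct = 0.1 * int_cp + 0.05 * cp
print(round(patlak_ki(t, ct, cp), 3))  # ~0.1
```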

This study shows that MR-based motion correction of cardiac PET results in improved image quality that can benefit both static and dynamic studies.

Petibon Y, et al. Phys Med Biol. 2019 Oct 4;64(19):195009. doi: 10.1088/1361-6560/ab39c2.

Body motion detection and correction in cardiac PET: Phantom and human studies

Purpose: Patient body motion during a cardiac positron emission tomography (PET) scan can severely degrade image quality. We propose and evaluate a novel method to detect, estimate, and correct body motion in cardiac PET.

Non-motion-corrected, motion-corrected, and reference images for subjects 2 and 3 (18F-tetraphenylphosphonium) in short-axis view. Arrows indicate the delineation of the structures.

Methods: Our method consists of three key components: motion detection, motion estimation, and motion-compensated image reconstruction. For motion detection, we first divide PET list-mode data into 1-s bins and compute the center of mass (COM) of the coincidences’ distribution in each bin. We then compute the covariance matrix of the COM signals within a 25-s sliding window. The sum of the eigenvalues of the covariance matrix is used to separate the list-mode data into “static” (i.e., body motion free) and “moving” (i.e., contaminated by body motion) frames. Each moving frame is further divided into a number of evenly spaced sub-frames (referred to as “sub-moving” frames), in which motion is assumed to be negligible. For motion estimation, we first reconstruct the data in each static and sub-moving frame using a rapid back-projection technique. We then select the longest static frame as the reference frame and estimate elastic motion transformations to the reference frame from all other static and sub-moving frames using nonrigid registration. For motion-compensated image reconstruction, we reconstruct all the list-mode data into a single image volume in the reference frame by incorporating the estimated motion transformations in the PET system matrix. We evaluated the performance of our approach in both phantom and human studies.
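A minimal numpy sketch of the motion-detection step described above (COM per 1-s bin, covariance over a 25-s sliding window, sum of eigenvalues as the metric) is shown below; the threshold, the edge handling of the window, and the toy data are illustrative, not the study's implementation.

```python
import numpy as np

def motion_metric(com, window=25):
    """com: (n_bins, 3) center-of-mass trace, one 1-s bin per row.
    For each bin, returns the sum of the eigenvalues of the covariance matrix
    of the COM samples in a `window`-second sliding window (equal to its trace)."""
    n = com.shape[0]
    metric = np.zeros(n)
    half = window // 2
    for i in range(n):
        seg = com[max(0, i - half): min(n, i + half + 1)]
        cov = np.cov(seg, rowvar=False)
        metric[i] = np.sum(np.linalg.eigvalsh(cov))  # == np.trace(cov)
    return metric

def split_static_moving(metric, threshold):
    """Label each 1-s bin as static (True) or moving (False)."""
    return metric < threshold

# Toy example: mostly static COM trace with an abrupt shift halfway through
rng = np.random.default_rng(1)
com = rng.normal(scale=0.05, size=(120, 3))
com[60:] += np.array([1.0, 0.5, 0.0])   # simulated bulk body motion
m = motion_metric(com)
print(int(split_static_moving(m, threshold=0.05).sum()), "static bins")
```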

Results: Visually, the motion-corrected (MC) PET images obtained using the proposed method have better quality and fewer motion artifacts than the images reconstructed without motion correction (NMC). Quantitative analysis indicates that MC yields higher myocardium to blood pool concentration ratios. MC also yields sharper myocardium than NMC.

Conclusions: The proposed body motion correction method improves image quality of cardiac PET.

Sun T, et al. Med Phys. 2019 Nov;46(11):4898-4906. doi: 10.1002/mp.13815.

MR-based motion correction for cardiac PET parametric imaging: a simulation study.

Estimated K1 maps and line profiles. a GA, NMC, and MC K1 maps for CM and CRM as well as ST K1 map. The GA, NMC, and MC K1 maps are for one noise realization. The arrow on the ST map points to the defect. b GA, NMC, MC line profiles (for CRM) as well as ST line profile. The profiles were drawn along a line (shown on the map at the top-right corner) connecting the anterobasal and apical regions and going through the center of the defect

Background: Both cardiac and respiratory motions bias the kinetic parameters measured by dynamic PET. The aim of this study was to perform a realistic positron emission tomography-magnetic resonance (PET-MR) simulation study using 4D XCAT to evaluate the impact of MR-based motion correction on the estimation of PET myocardial kinetic parameters using PET-MR. Dynamic activity distributions were obtained based on a one-tissue compartment model with realistic kinetic parameters and an arterial input function. Realistic proton density/T1/T2 values were also defined for the MRI simulation. Two types of motion patterns, cardiac motion only (CM) and both cardiac and respiratory motions (CRM), were generated. PET sinograms were obtained by projection of the activity distributions. PET images for each time frame were obtained using static (ST), gated (GA), non-motion-corrected (NMC), and motion-corrected (MC) methods. Voxel-wise unweighted least-squares fitting of the dynamic PET data was then performed to obtain K1 values for each study. For each study, the mean and standard deviation of K1 values were computed for four regions of interest in the myocardium across 25 noise realizations.
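As a small illustration of the kinetic modeling step described above, the sketch below fits a one-tissue compartment model to a single noisy time-activity curve by unweighted least squares; the input function, frame timing, and parameter values are illustrative and not the simulation settings used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_tissue_tac(t, k1, k2, cp):
    """One-tissue compartment model: Ct(t) = K1 * exp(-k2 t) convolved with Cp(t)."""
    dt = t[1] - t[0]                          # assumes uniform sampling
    irf = k1 * np.exp(-k2 * t)
    return np.convolve(irf, cp)[: len(t)] * dt

# Synthetic example (illustrative values, not the paper's simulation settings)
t = np.linspace(0, 60, 241)                   # minutes
cp = 12 * t * np.exp(-t / 2.0)                # toy arterial input function
ct_true = one_tissue_tac(t, 0.8, 0.1, cp)
ct_noisy = ct_true + np.random.default_rng(0).normal(scale=0.05, size=t.size)

# Unweighted least-squares fit of (K1, k2) for one voxel/region
popt, _ = curve_fit(lambda tt, k1, k2: one_tissue_tac(tt, k1, k2, cp),
                    t, ct_noisy, p0=(0.5, 0.05), bounds=(0, np.inf))
print("K1 =", round(popt[0], 3), "k2 =", round(popt[1], 3))
```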

Results: Both cardiac and respiratory motions introduce blurring in the PET parametric images if the motion is not corrected. Conventional cardiac gating is limited by the high noise level in parametric images. Dual cardiac and respiratory gating further increases the noise level. In contrast to GA, the MR-based MC method reduces motion blurring in parametric images without increasing the noise level. It also improves myocardial defect delineation as compared to the NMC method. Finally, the MR-based MC method yields lower bias and variance in K1 values than NMC and GA, respectively. The reductions of K1 bias by MR-based MC are 7.7, 5.1, 15.7, and 29.9% in four selected 0.18-mL myocardial regions of interest, respectively, as compared to NMC for CRM. MR-based MC yields 85.9, 75.3, 71.8, and 95.2% less K1 standard deviation in the four regions, respectively, as compared to GA for CRM.

Conclusions: This simulation study suggests that the MR-based motion-correction method using PET-MR greatly reduces motion blurring on parametric images and yields less K1 bias without increasing noise level.

Guo R, et al. EJNMMI Phys. 2018 Feb 1;5(1):3. doi: 10.1186/s40658-017-0200-9.

Accelerated J-resolved 1H-MRSI with limited and sparse sampling of (k,t1,t2)-space.

Purpose: To accelerate the acquisition of J-resolved proton magnetic resonance spectroscopic imaging (1H-MRSI) data for high-resolution mapping of brain metabolites and neurotransmitters.

Experimental results of healthy volunteers. A, Anatomical localization image; B, representative metabolite maps of NAA, Cr, Cho, mI, Glu, Gln, and GABA in nominal spatial resolution 3.0 × 3.0 × 4.8 mm3 from the volume indicated in (A); C, white matter and gray matter metabolite concentrations of four healthy volunteers. GM, gray matter; WM, white matter; NAA, N‐acetyl aspartate; Cr, creatine; Cho, choline; mI, myo‐inositol; Glu, glutamate; Gln, glutamine; GABA, gamma‐aminobutyric acid.

Methods: The proposed method used a subspace model to represent multidimensional spatiospectral functions, which significantly reduced the number of parameters to be determined from J-resolved 1H-MRSI data. A semi-LASER-based (Localization by Adiabatic SElective Refocusing) echo-planar spectroscopic imaging (EPSI) sequence was used for data acquisition. The proposed data acquisition scheme sampled (k,t1,t2)-space with variable density, where t1 and t2 specify the J-coupling and chemical-shift encoding times, respectively. The J-coupling encoding times (i.e., echo time values) were selected based on a Cramér-Rao lower bound analysis and optimized for gamma-aminobutyric acid (GABA) detection. In image reconstruction, the parameters of the subspace-based spatiospectral model were determined by solving a constrained optimization problem.

Results: The feasibility of the proposed method was evaluated using both simulated and experimental data from a spectroscopic phantom. The phantom experimental results showed that the proposed method, with a factor of 12 acceleration in data acquisition, could determine the distribution of J-coupled molecules with the expected accuracy. An in vivo study with healthy human subjects also showed that 3D maps of brain metabolites and neurotransmitters can be obtained at a nominal spatial resolution of 3.0 × 3.0 × 4.8 mm3 from J-resolved 1H-MRSI data acquired in 19.4 min.

Conclusions: This work demonstrated the feasibility of highly accelerated J-resolved 1H-MRSI using limited and sparse sampling of (k,t1,t2)-space and subspace modeling. With further development, the proposed method may enable high-resolution mapping of brain metabolites and neurotransmitters in clinical applications.

Tang L, et al. Magn Reson Med. 2021 Jan;85(1):30-41. doi: 10.1002/mrm.28413.

A minimum-phase Shinnar-Le Roux spectral-spatial excitation RF pulse for simultaneous water and lipid suppression in 1H-MRSI of body extremities.

In vivo experiment results. a: GRE image. The representative spectra locations for the best (red dot), average (blue dot), and worst (green dot) case scenarios are indicated in the image. b: B0 inhomogeneity map. c–d: Water (c) and lipid (d) maps from the sinc pulse. e–f: Water (e) and lipid (f) maps from the SPSP pulse. Note the 20-fold lower scale compared to the results obtained using the sinc pulse.

Purpose: To develop a spectral-spatial (SPSP) excitation RF pulse for simultaneous water and lipid suppression in proton (1H) magnetic resonance spectroscopic imaging (MRSI) of body extremities.

Methods: An SPSP excitation pulse is designed to excite Creatine (Cr) and Choline (Cho) metabolite signals while suppressing the overwhelming water and lipid signals. The SPSP pulse is designed using a recently proposed multidimensional Shinnar-Le Roux (SLR) RF pulse design method. A minimum-phase spectral selectivity profile is used to minimize signal loss from T2 decay.

Results: The performance of the SPSP pulse is evaluated via Bloch equation simulations and phantom experiments. The feasibility of the proposed method is demonstrated using three-dimensional, short repetition-time, free induction decay-based 1H-MRSI in the thigh muscle at 3T.

Conclusion: The proposed SPSP excitation pulse is useful for simultaneous water and lipid suppression. The proposed method enables new applications of high-resolution 1H-MRSI in body extremities.

Han PK, et al. Magn Reson Imaging. 2018 Jan;45:18-25. doi: 10.1016/j.mri.2017.09.008.

Deep learning for lesion detection, progression, and prediction of musculoskeletal disease

Deep learning is one of the most exciting new areas in medical imaging. This review article provides a summary of the current clinical applications of deep learning for lesion detection, progression, and prediction of musculoskeletal disease on radiographs, computed tomography (CT), magnetic resonance imaging (MRI), and nuclear medicine. Deep-learning methods have shown success for estimating pediatric bone age, detecting fractures, and assessing the severity of osteoarthritis on radiographs. In particular, the high diagnostic performance of deep-learning approaches for estimating pediatric bone age and detecting fractures suggests that the new technology may soon become available for use in clinical practice. Recent studies have also documented the feasibility of using deep-learning methods for identifying a wide variety of pathologic abnormalities on CT and MRI including internal derangement, metastatic disease, infection, fractures, and joint degeneration. However, the detection of musculoskeletal disease on CT and especially MRI is challenging, as it often requires analyzing complex abnormalities on multiple slices of image datasets with different tissue contrasts. Thus, additional technical development is needed to create deep-learning methods for reliable and repeatable interpretation of musculoskeletal CT and MRI examinations. Furthermore, the diagnostic performance of all deep-learning methods for detecting and characterizing musculoskeletal disease must be evaluated in prospective studies using large image datasets acquired at different institutions with different imaging parameters and different imaging hardware before they can be implemented in clinical practice. 

Kijowski R, et al. J Magn Reson Imaging. 2020 Dec;52(6):1607-1619. doi: 10.1002/jmri.27001.

MANTIS: Model-Augmented Neural neTwork with Incoherent k-space Sampling for efficient MR parameter mapping

Purpose: To develop and evaluate a novel deep learning-based image reconstruction approach called MANTIS (Model-Augmented Neural neTwork with Incoherent k-space Sampling) for efficient MR parameter mapping.

Two representative examples demonstrating the performance of MANTIS in cartilage and meniscus lesion detection. (A) Results from a 67‐year‐old male patient with knee osteoarthritis and superficial cartilage degeneration on the medial femoral condyle and medial tibia plateau. (B) Results from a 59‐year‐old male patient with a tear of the posterior horn of the medial meniscus. MANTIS was able to reconstruct high‐quality T2 maps for unambiguous identification of cartilage and meniscus lesions at both R = 5 and R = 8

Methods: MANTIS combines end-to-end convolutional neural network (CNN) mapping, incoherent k-space undersampling, and a physical signal model in a synergistic framework. The CNN mapping directly converts a series of undersampled images into MR parameter maps using supervised training. Signal-model fidelity is enforced by adding a pathway between the undersampled k-space and the estimated parameter maps to ensure that the estimated maps produce synthesized k-space data consistent with the acquired undersampled measurements. The MANTIS framework was evaluated on T2 mapping of the knee at different acceleration rates and was compared with two other CNN mapping methods and conventional sparsity-based iterative reconstruction approaches. Global quantitative assessment and regional T2 analysis of the cartilage and meniscus were performed to demonstrate the reconstruction performance of MANTIS.
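The sketch below illustrates, in schematic PyTorch form, how a supervised parameter-map loss can be combined with a signal-model/k-space data-consistency term for T2 mapping, which is the general spirit of the pathway described above; the mono-exponential forward model, tensor shapes, and weighting are placeholders rather than the actual MANTIS implementation.

```python
import torch
import torch.nn.functional as F

def t2_signal_model(m0, t2, echo_times):
    """Mono-exponential T2 decay: S_i = M0 * exp(-TE_i / T2) for each echo."""
    te = echo_times.view(1, -1, 1, 1)                       # (1, n_echoes, 1, 1)
    return m0.unsqueeze(1) * torch.exp(-te / t2.clamp(min=1e-3).unsqueeze(1))

def mantis_style_loss(pred_maps, ref_maps, kspace, mask, echo_times, lam=0.1):
    """Schematic two-term loss: supervised map error + k-space data consistency.
    pred_maps/ref_maps: (batch, 2, H, W) with channels (M0, T2)
    kspace: (batch, n_echoes, H, W) acquired undersampled k-space (complex)
    mask:   (batch, n_echoes, H, W) binary sampling mask."""
    map_loss = F.mse_loss(pred_maps, ref_maps)
    images = t2_signal_model(pred_maps[:, 0], pred_maps[:, 1], echo_times)
    synth_k = torch.fft.fft2(images.to(torch.complex64), norm="ortho")
    dc_loss = torch.mean(torch.abs(mask * (synth_k - kspace)) ** 2)
    return map_loss + lam * dc_loss

# Toy shapes only; in practice pred_maps would come from the CNN applied to the
# zero-filled undersampled image series.
b, ne, h, w = 1, 8, 32, 32
te = torch.linspace(0.01, 0.08, ne)
maps = torch.rand(b, 2, h, w) + 0.1
k = torch.fft.fft2(t2_signal_model(maps[:, 0], maps[:, 1], te).to(torch.complex64), norm="ortho")
m = (torch.rand(b, ne, h, w) > 0.8).float()
print(mantis_style_loss(maps, maps, k, m, te).item())
```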

Results: MANTIS achieved high-quality T2 mapping at both moderate (R = 5) and high (R = 8) acceleration rates. Compared to conventional reconstruction approaches that exploited image sparsity, MANTIS yielded lower errors (normalized root mean square error of 6.1% for R = 5 and 7.1% for R = 8) and higher similarity (structural similarity index of 86.2% at R = 5 and 82.1% at R = 8) to the reference in the T2 estimation. MANTIS also achieved superior performance compared to direct CNN mapping and a 2-step CNN method.

Conclusion: The MANTIS framework, with a combination of end-to-end CNN mapping, signal model-augmented data consistency, and incoherent k-space sampling, is a promising approach for efficient and robust estimation of quantitative MR parameters.

Liu F, et al. Magn Reson Med. 2019 Jul;82(1):174-188. doi: 10.1002/mrm.27707.