Dr. Ruth Lim Receives Holman-Kaplan Award


Ruth Lim, M.D., received the Society of Nuclear Medicine and Molecular Imaging (SNMMI) New England Chapter Holman-Kaplan Memorial Lectureship and Award during the 33rd Annual SNMMI Northeast Regional Meeting, held November 8–10, 2019, in Stamford, CT. The award recognized her contributions to the SNMMI New England Chapter as Councilor from Massachusetts on the Board of Directors, lecturer at prior meetings, and former member of the Scientific Program Committee. The topic of her Holman-Kaplan Memorial Lecture was pediatric multimodality imaging of urinary tract infection, vesicoureteral reflux, and hydronephrosis. Dr. Lim is the Medical Director of the Gordon Center for Medical Imaging, Assistant Professor of Radiology at Harvard Medical School, and Secretary-Treasurer of the American Board of Nuclear Medicine.

Dr. Lim is also an assistant radiologist at the Massachusetts General Hospital.

Gordon Lecture: Deep Generative Models for Image Translation


Dr. Harry Yang is currently a Research Scientist at Facebook AI. He received his Ph.D. in computer science from the University of Southern California. His research interests are deep generative models and their computer vision applications, such as image inpainting, image translation, and human retargeting.
Below is a summary of his presentation.

In this talk, Dr. Yang addressed the problem of translating faces and bodies between different identities without paired training data; in this setting, a translation module cannot be trained directly from supervised signals. Instead, his team proposes training a conditional variational auto-encoder (CVAE) to disentangle latent factors such as identity and expression. To achieve effective disentanglement, they further use multi-view information such as keypoints and facial landmarks to train multiple CVAEs; these simplified representations of the data are more easily disentangled and can guide the disentanglement of the image itself. Experiments demonstrate the effectiveness of the method on multiple face and body datasets. They also showed that their model is a more robust image classifier and adversarial-example detector compared with traditional multi-class neural networks.
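The disentanglement described above rests on the standard (C)VAE machinery. As a rough illustration only (not Dr. Yang's implementation; all names are ours), the sketch below shows the two ingredients any such model shares: the reparameterization trick and the KL regularizer that pulls the latent code toward a standard normal prior.

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.

    This is the regularizer that pushes a (C)VAE's latent code toward a
    standard normal prior, which is what enables sampling from the latent
    space and, with conditioning, factoring identity from expression.
    """
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps so gradients can flow through mu/logvar."""
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * logvar) * eps
```

In a full CVAE, the encoder and decoder are additionally fed the conditioning signal (here, keypoints or facial landmarks), and the KL term above is traded off against a reconstruction loss.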

To scale to new identities and generate better-quality results, they further propose an alternative approach that uses self-supervised learning based on StyleGAN to factor out different attributes of face images, such as hair color, facial expression, and skin color. Using a pre-trained StyleGAN combined with iterative style inference, they can easily manipulate the facial expressions of any person, or combine the facial expressions of any two people, without training a new model for each identity involved. This is one of the first scalable, high-quality approaches for generating DeepFake data, which serves as a critical first step toward learning a more robust and general classifier against adversarial examples.
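The attribute combination StyleGAN enables can be illustrated with layer-wise style mixing, where coarse layers carry identity and pose while fine layers carry color and texture. This is a toy sketch of the general idea, not the speaker's system; shapes and names are assumptions.

```python
import numpy as np

def mix_styles(w_a, w_b, crossover):
    """Layer-wise style mixing: coarse layers (roughly identity/pose) from
    w_a, fine layers (roughly color/texture) from w_b.

    w_a, w_b: arrays of shape (num_layers, style_dim), the per-layer style
    codes fed to a StyleGAN-like synthesis network (illustrative names).
    crossover: index at which control switches from w_a to w_b.
    """
    w_mix = w_a.copy()
    w_mix[crossover:] = w_b[crossover:]
    return w_mix
```

Iterative style inference first recovers such per-layer codes for two real photographs; the mixed code is then decoded into an image combining attributes of both people.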

Gordon Lecture: Methodology Development with Carbon-11 and Fluorine-18 for PET Applications


So Jeong Lee is currently a postdoctoral fellow at the PET Center in the Department of Radiology at the University of Michigan, working with Prof. Peter J. H. Scott. She obtained her B.S. and B.E. in Chemistry and Materials Science from Stony Brook University (SUNY) in 2010, and her Ph.D. in Chemistry from Stony Brook University and BNL under Prof. Joanna S. Fowler's mentorship in 2015.
Below is a summary of her presentation.

Positron emission tomography (PET) is a functional imaging technique that is used for clinical diagnostic imaging, as well as research applications in healthcare, the pharmaceutical industry, and even plant physiology. In this presentation, Dr. Lee discussed her work to develop rapid methods for preparing [11C]auxin and [11C]indole via [11C]cyanation for PET imaging. Automation of the synthesis of both radiotracers was conducted so they could be used for plant PET imaging with the goal of understanding plant physiology and phenotyping. The talk also covered the work in her lab developing fundamental methodology for nucleophilic C-H radiofluorination reactions with Ag18F and K18F via metal catalyzed C-H activation reactions that enable the late-stage formation of C-18F bonds to prepare PET radiotracers and radioligands. Finally, their work to design an efficient automated route to produce [18F]ASEM, a PET radioligand for imaging of α7-nAChR in the human brain, to support their clinical research was covered.
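Rapid synthesis and automation matter here because of the short half-lives involved (roughly 20.4 minutes for carbon-11 and 109.8 minutes for fluorine-18). As general PET background only, not part of Dr. Lee's methods, a routine piece of radiochemistry arithmetic is correcting a measured activity back to a reference time:

```python
def decay_correct(activity, elapsed_min, half_life_min):
    """Decay-correct a measured activity back to a reference time:
    A0 = A * 2**(t / T_half).

    Typical half-lives: C-11 ~20.4 min, F-18 ~109.8 min. With C-11, an
    activity measured one half-life after end of synthesis corresponds
    to twice that activity at the reference time.
    """
    return activity * 2.0 ** (elapsed_min / half_life_min)
```

For example, 50 MBq of a carbon-11 tracer measured 20.4 minutes after the reference time corresponds to 100 MBq at that reference time, which is why multi-step C-11 syntheses must be fast and automated.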

Gordon Lecture: Deep Learning MR Reconstruction from Missing Data


Jong Chul Ye is currently a KAIST Endowed Chair Professor in the Department of Bio and Brain Engineering, and Adjunct Professor in the Department of Mathematical Sciences, at the Korea Advanced Institute of Science and Technology (KAIST), Korea. He received his B.Sc. and M.Sc. degrees from Seoul National University, Korea, and his Ph.D. from Purdue University, West Lafayette.
Below is a summary of his presentation.

Recently, deep learning approaches with various network architectures have achieved significant performance improvements over existing iterative reconstruction methods in accelerated MRI. However, it is still unclear why these deep learning architectures work for specific problems. Moreover, in contrast to the usual evolution of signal processing theory around classical theories, the link between deep learning and classical image processing approaches is not yet well understood. In this talk, Dr. Ye reviewed recent advances in deep learning approaches for accelerated MRI and their links to compressed sensing.

In particular, Dr. Ye first reviewed the variational neural network, first proposed in the MR field, and the popular feed-forward approaches using U-Net, which can remove aliasing artifacts from images corrupted by undersampling. He then reviewed several advanced approaches such as AUTOMAP, CascadeNet, KiKi-Net, and MoDL. Finally, he demonstrated that neural network approaches can be implemented directly in the k-space domain to interpolate the missing k-space data. To explore the theoretical origin of the success of neural networks for accelerated MRI, Dr. Ye reviewed some of the mathematical principles that have been proposed to explain neural networks for inverse problems, including unfolding and convolution framelets. He then introduced recent mathematical discoveries on expressivity, generalization power, and the optimization landscape that give hints toward understanding the power of AI for accelerated MRI.
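To make the k-space setting concrete, here is a minimal numpy sketch (our illustration, not code from the talk) of the accelerated-MRI forward model: retrospectively undersampling k-space and forming the zero-filled reconstruction that image-domain networks such as U-Net are trained to de-alias. A k-space-domain method would instead estimate the missing lines before the inverse FFT.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))   # stand-in for a real MR image

# "Acquire" fully sampled k-space, then keep every 4th phase-encode
# line, i.e. 4x acceleration
kspace = np.fft.fft2(image)
mask = np.zeros((64, 64))
mask[::4, :] = 1.0

# Zero-filled reconstruction: inverse FFT of masked k-space.
# The discarded lines reappear as aliasing artifacts in image space.
zero_filled = np.fft.ifft2(mask * kspace).real
```

Re-measuring the zero-filled image reproduces exactly the acquired k-space lines (data consistency), a constraint that both image-domain and k-space-domain reconstruction networks are designed to respect.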

Gordon Lecture: Machine Learning for Real-time High-quality Biomedical Imaging


Leslie Ying is currently a Professor of Biomedical Engineering and Electrical Engineering at the University at Buffalo, SUNY. She received her B.E. in Electronics Engineering from Tsinghua University, China, in 1997, and her M.S. and Ph.D. in Electrical Engineering from the University of Illinois at Urbana-Champaign in 1999 and 2003, respectively.
Below is a summary of her presentation.

Machine learning has recently attracted a lot of attention in biomedical imaging. It has shown success in biomedical image classification, but has only very recently been applied to image reconstruction, a problem with unique features. In this talk, Dr. Ying started with compressed sensing (CS), a strategy for reconstruction from sub-Nyquist sampled data, and introduced several machine-learning-based methods within the conventional CS framework. She then explained how the optimization algorithm underlying CS can be unrolled into a deep artificial neural network, such that its parameters and prior models can be learned from training samples. Finally, she presented end-to-end convolutional neural networks that are trained from data with little knowledge of the imaging system. Connections among the different networks were discussed, with their benefits and limitations highlighted. Although most examples were from MRI, the frameworks generalize to image reconstruction problems in most imaging modalities. The talk concluded with future outlooks.
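The unrolling idea can be made concrete with ISTA, a classical CS solver that is among those most often unrolled into a network. This is a minimal illustrative sketch of the principle, not any specific published architecture from the talk.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm -- the nonlinearity that appears
    at every layer when an algorithm like ISTA is unrolled."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, y, lam, step, n_iter=100):
    """Plain ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    Unrolling fixes n_iter as a network depth and learns per-layer
    versions of `step`, `lam` (and often the sparsifying transform
    itself) from training pairs, instead of hand-tuning them.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the data term, then sparsity-promoting shrinkage
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x
```

Each loop iteration becomes one "layer"; end-to-end CNN approaches go further and replace even the gradient and shrinkage structure with learned convolutions.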

Gordon Lecture: Learn Deeply to Advance Medical Imaging: Artificial Intelligence in MR and PET/MR

Dr. Fang Liu is an assistant scientist at the University of Wisconsin School of Medicine and Public Health. Dr. Liu obtained his Ph.D. in Medical Physics from the University of Wisconsin in 2015 and completed two years of postdoctoral training in the Department of Radiology. He has extensive research experience in the technical development of MR imaging, including MR pulse sequence design, image reconstruction, quantitative imaging, and image analysis.
Below is a summary of his presentation.

Medical imaging is a research field with plenty of remaining technical and clinical challenges. The recent development of artificial intelligence, particularly deep learning (DL), has demonstrated high potential to resolve such challenges. Dr. Liu presented some of his recent work on DL theory development and applications in medical imaging, and discussed its performance, strengths, and limitations. The talk gave an overview of DL in medical imaging and discussed recent DL applications that successfully translate new learning-based approaches into performance improvements in the MR and PET/MR imaging workflow. One primary aim was to draw tight connections between fundamental DL concepts and clinically relevant challenges in medical imaging. Topics covered rapid MR image acquisition, reconstruction, MR quantitative mapping, and image post-processing such as image segmentation and synthesis in MR and PET/MR, leading finally to DL-augmented disease diagnosis and prediction. The talk concluded with a discussion of open problems in DL that are particularly relevant to medical imaging, and of the potential challenges and opportunities in this emerging field.
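As one concrete example of how DL segmentation output is typically scored (a standard metric in the field, not something specific to Dr. Liu's work), the Dice coefficient measures the overlap between a predicted mask and a reference mask:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|).

    Returns 1.0 for identical masks and 0.0 for disjoint ones; `eps`
    avoids division by zero when both masks are empty.
    """
    pred, truth = np.asarray(pred).astype(bool), np.asarray(truth).astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```

Dice is commonly used both as an evaluation metric and, in differentiable "soft" form, as a training loss for segmentation networks.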

Gordon Lecture: Broadband Photon Tomography


Prof. Frederik J. Beekman, Ph.D., heads the Radiation Detection & Medical Imaging section at Delft University of Technology (TU Delft). He has co-authored over 150 journal papers and is the inventor on 31 patents. His research interests include biomedical imaging science, AI, and image-guided radio(-nuclide) therapy. He is an associate editor of several journals and the founder and CEO/CSO of MILabs, which develops and markets high-performance biomedical imaging systems.
Below is a summary of his presentation.

High Performance Integrated 4x4D PET, SPECT, Optical & X-ray Tomography

In preclinical research, scientists have dreamed of a 3D magnifying glass that would allow us to see various cell functions and structures in a single dynamic 4D scan, and to map the integrated, detailed dynamics of, e.g., contrast agents, tracers, pharmaceuticals, receptors, and indicators of therapy response in tumours. To meet these and many other imaging needs, Dr. Beekman and his lab developed the user-friendly, fully integrated VECTor-6 imaging platform (WMIC Innovation of the Year 2018), comprising:

A) SPECT resolution down to ~0.1 mm and PET resolution down to 0.55 mm, with positron-range-free PET for otherwise "difficult" isotopes like 124I, 76Br, 86Y, and 82Rb
B) concurrent sub-mm multi-tracer PET and PET-SPECT
C) sub-second dynamic PET and SPECT
D) sub-mm resolution imaging of α- and β-emitting pharmaceuticals
E) ultra-high-performance low-dose X-ray CT
F) optical tomography (Cherenkov, fluorescence, and bioluminescence)

In this presentation, this highly adaptive and versatile nuclear, optical, and structural imaging platform was explained, along with many scientific applications contributed by hundreds of users worldwide. Finally, the results of translating their nuclear imaging technologies into a <3 mm resolution clinical SPECT system (G-SPECT, WMIS Innovation of the Year 2015) were presented.

Gordon Lecture: Radiopharmaceutical Therapy: History, Current Status and Future Potential


Bennett S. Greenspan, M.D., M.S., received his M.D. degree from the University of Illinois in Chicago. He completed residencies in Diagnostic Radiology and Nuclear Medicine, and is certified in Diagnostic Radiology and Nuclear Radiology by the ABR and in Nuclear Medicine by the ABNM. He received his M.S. degree in medical physics from UCLA. Dr. Greenspan is devoted to teaching clinical nuclear medicine, as well as nuclear medicine physics and radiation safety, to nuclear medicine and radiology residents. He is also keenly interested in quality and safety in nuclear medicine.
Below is a summary of his presentation.

History: Radiopharmaceutical therapy began in 1941 with the efforts and insight of Saul Hertz, M.D., of MGH and Arthur Roberts, Ph.D., of MIT. From that beginning, I-131 has become an important agent for the treatment of benign and malignant thyroid disease. In the 1980s, two agents, Sr-89 chloride and Sm-153 EDTMP, were introduced for bone pain palliation. Somatostatin-receptor-targeted therapies were developed in the 1980s and 1990s, leading to FDA approval of Lu-177 DOTATATE in 2018. Radiolabeled antibodies were also developed from the 1970s through the 2000s, with the introduction of two agents in 2002 and 2003. Radium-223 dichloride was approved by the FDA in 2013 for treatment of castration-resistant metastatic prostate cancer.