Gordon Lecture: Adversarial Domain Adaptation – Supervised to Unsupervised

Dr. Xiaofeng Liu is a research fellow at Beth Israel Deaconess Medical Center, Harvard Medical School, and a visiting researcher at the Montreal Institute for Learning Algorithms (MILA). He works on general deep learning theory in the computer vision domain and its applications to robust inference systems and medical diagnosis. He completed a jointly supervised PhD in Electrical and Computer Engineering at the Chinese Academy of Sciences and Carnegie Mellon University. During that time, he interned at Google Research, Facebook AI Research, and Microsoft Research Asia. Before that, he received his B.Eng. in Automation (Pattern Recognition), B.A. in Communication, and a minor in Biology at the University of Science and Technology of China.
Below is a summary of his presentation.

Deep neural networks are data-hungry and rely on the assumption that training and testing data are independent and identically distributed. In reality, however, the deployment target task often differs from the training task, and collecting sufficient labeled data in the target domain is expensive or even prohibitive. Domain adaptation (DA) seeks to transfer knowledge from a labeled source domain to a related but slightly different target domain, for example when integrating data from multiple medical centers, or when facilitating MRI analysis using CT data.

Adversarial training is a predominant protocol for achieving DA, with great potential in both supervised and unsupervised settings. However, Dr. Liu’s recent theoretical analysis shows that in the unsupervised setting, adversarial training essentially aligns the marginal feature distributions of the source and target domains, but does not necessarily align the distributions class by class. In this presentation, Dr. Liu introduced recent advances in adversarial DA in both supervised and unsupervised settings, together with examples from general classification, segmentation, and biomedical applications.
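To make the adversarial DA protocol concrete, below is a minimal NumPy sketch (not Dr. Liu’s actual method) of the standard two-player setup: a linear feature extractor is trained with a reversed gradient to fool a logistic domain discriminator, which aligns the marginal feature distributions of two hypothetical Gaussian domains. All data, dimensions, and learning rates here are made-up toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy source/target inputs: same task, shifted feature distribution
# (hypothetical data standing in for, e.g., two medical centers).
n, dim, feat_dim = 200, 5, 3
Xs = rng.normal(0.0, 1.0, (n, dim))   # labeled source domain
Xt = rng.normal(1.5, 1.0, (n, dim))   # unlabeled target domain

# Linear feature extractor F(x) = x @ W and logistic domain
# discriminator D(z) = sigmoid(z @ w + b).
W = rng.normal(0, 0.1, (dim, feat_dim))
w = rng.normal(0, 0.1, feat_dim)
b = 0.0

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = source, 1 = target
lr = 0.05
for step in range(300):
    Z = np.vstack([Xs @ W, Xt @ W])
    p = sigmoid(Z @ w + b)
    g = p - y                                   # d(BCE)/d(logit)

    # Discriminator step: descend the cross-entropy so D learns
    # to tell source features from target features.
    w -= lr * (Z.T @ g) / (2 * n)
    b -= lr * g.mean()

    # Feature-extractor step with a REVERSED gradient: W ascends the
    # discriminator's loss, pushing the two marginal feature
    # distributions toward each other.
    gZ = np.outer(g, w) / (2 * n)               # d(BCE)/dZ
    gW = Xs.T @ gZ[:n] + Xt.T @ gZ[n:]
    W += lr * gW                                # '+' = gradient reversal

# If alignment works, D is near chance on the adapted features.
p = sigmoid(np.vstack([Xs @ W, Xt @ W]) @ w + b)
acc = ((p > 0.5) == y).mean()
print(f"domain discriminator accuracy: {acc:.2f}")
```

Note that nothing in this objective uses class labels: the discriminator only sees domain labels, which is precisely why, as the analysis above observes, the marginal distributions can be aligned while individual classes remain mismatched across domains.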