Learning under distribution shift using optimal transport

Abstract

Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled target domain whose data distribution is different from, but related to, that of the source, under a shared support. The most classical assumption is covariate shift, where the marginal distribution of the inputs changes but the label proportions remain equal between domains. In some application contexts, however, the target and source domains may not share the same support, or the label proportions may vary; the extreme case is when samples lying outside the source distribution are present in the target domain (known as open set domain adaptation). In this talk, we review these problems using optimal transport both as a measure of distribution mismatch and as a way to learn relationships between samples. We focus on class-conditional and/or label distribution shifts. We also consider cases where the samples from the two domains are collected under different settings.
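As a minimal illustration of optimal transport as a measure of distribution mismatch (a toy sketch, not the method presented in the talk), one can compute an empirical OT coupling between source and target samples. Assuming equal-sized samples with uniform weights, the optimal plan reduces to a one-to-one assignment, solvable exactly with the Hungarian algorithm; all data below is synthetic.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Synthetic toy data: a shifted target domain mimicking covariate shift.
rng = np.random.default_rng(0)
source = rng.normal(loc=0.0, scale=1.0, size=(5, 2))  # source-domain samples
target = rng.normal(loc=3.0, scale=1.0, size=(5, 2))  # shifted target samples

# Squared-Euclidean cost matrix between every source/target pair.
cost = ((source[:, None, :] - target[None, :, :]) ** 2).sum(axis=-1)

# With uniform weights and equal sample counts, the optimal transport plan
# is a permutation matrix, found exactly by the Hungarian algorithm.
rows, cols = linear_sum_assignment(cost)

# Average matched cost: an empirical estimate of the squared 2-Wasserstein
# distance between the two empirical distributions.
w2_squared = cost[rows, cols].mean()
print(w2_squared)
```

The learned assignment (`rows`, `cols`) also exposes sample-level correspondences between the two domains, which is the "learning relationships between samples" aspect: each source point is matched to the target point it is cheapest to transport to under the joint plan.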

Date
Mar 15, 2022 1:00 PM
Gilles Gasso
Full Professor

Research interests: AI, machine learning, optimization and image processing