About the Event
In machine learning, transfer learning refers to the problem of classifying a test data set, given one or more training data sets that are governed by different (although similar) distributions. Such problems arise in a range of applications where data distributions fluctuate because
of biological, technical, environmental, and other sources of variation. In this talk I will present a generalization error analysis and a universally consistent algorithm for transfer learning using kernel methods. To begin, I will give an overview of the use of reproducing kernel Hilbert spaces in machine learning. This work is motivated by an application to flow cytometry data analysis, and is joint work with Gilles Blanchard, Gyemin Lee, and Lloyd Stoolman.
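To make the setup concrete, here is a minimal sketch (not the speaker's algorithm) of the transfer-learning scenario described above: a kernel ridge regression classifier is fit in an RKHS on a source training sample, then evaluated on a target sample drawn from a similar but shifted distribution. The Gaussian kernel, the class means, and the regularization parameter are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(X, Y, bandwidth=1.0):
    # Gram matrix of the Gaussian (RBF) kernel k(x, y) = exp(-||x - y||^2 / (2 h^2)),
    # which induces a reproducing kernel Hilbert space.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * bandwidth ** 2))

# Source (training) distribution: two well-separated Gaussian classes.
n = 100
X_train = np.vstack([rng.normal(-1.0, 0.5, (n, 2)), rng.normal(1.0, 0.5, (n, 2))])
y_train = np.concatenate([-np.ones(n), np.ones(n)])

# Target (test) distribution: same classes, but with slightly shifted means,
# mimicking the "different although similar" distributions of transfer learning.
X_test = np.vstack([rng.normal(-0.8, 0.5, (n, 2)), rng.normal(1.2, 0.5, (n, 2))])
y_test = np.concatenate([-np.ones(n), np.ones(n)])

# Kernel ridge regression in the RKHS: solve (K + lam * I) alpha = y.
K = gaussian_kernel(X_train, X_train)
lam = 1e-2
alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)

# Predict on the shifted target sample and measure accuracy.
scores = gaussian_kernel(X_test, X_train) @ alpha
accuracy = float(np.mean(np.sign(scores) == y_test))
print(accuracy)
```

Because the shift between source and target distributions is small here, the classifier trained only on source data still performs well; the analysis presented in the talk concerns guarantees for precisely this kind of setting.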