Modern machine learning applications are typically approached by gathering a massive data set of labeled samples and then applying standard supervised training methods to learn a classifier. However, the sample sizes required by this approach pose a significant burden, particularly in low-budget settings such as small startups or individual practitioners. Many solutions have been developed to reduce this data requirement by altering the learning context or approach, including settings such as transfer, semi-supervised, active, and teacher-guided learning. In some cases, corresponding theories have been developed describing conditions under which these approaches can significantly reduce the amount of data required for learning compared to traditional supervised learning. At the same time, there have been advances in our understanding of situations where simple supervised learning can exploit additional structure to reduce the required sample size, such as sparsity, margin conditions, favorable distributions, and other such properties. This workshop is intended to bring together experts developing theories relevant to all of these subjects for presentations and discussion.
Xiaojin (Jerry) Zhu
Department of Computer Sciences, University of Wisconsin-Madison
Talk: How Fast can a Learner Learn under an Optimal Teacher?
Samory Kpotufe
Department of Statistics, Columbia University
Talk: Measuring Transferability: some recent insights
Chicheng Zhang
Microsoft Research NYC
Talk: Efficient active learning of sparse halfspaces
Time and Place:
4:00 pm – 6:30 pm, March 24, 2019.
Hyatt Centric Chicago Magnificent Mile,
Chicago, IL 60611, USA.
Workshop organizer: Steve Hanneke, TTI Chicago
This workshop takes place on the last day of the 30th International Conference on Algorithmic Learning Theory (ALT), held March 22–24, 2019, at the Hyatt Centric Chicago Magnificent Mile in Chicago. All conference attendees are invited to attend the workshop.