
[nl-uiuc] Reminder: AIIS talk by Maxim Raginsky at 2 pm


  • From: "Samdani, Rajhans" <rsamdan2 AT illinois.edu>
  • To: nl-uiuc <nl-uiuc AT cs.uiuc.edu>, aivr <aivr AT cs.uiuc.edu>, vision <vision AT cs.uiuc.edu>, eyal <eyal AT cs.uiuc.edu>, aiis <aiis AT cs.uiuc.edu>, aistudents <aistudents AT cs.uiuc.edu>, "Girju, Corina R" <girju AT illinois.edu>, "Blake, Catherine" <clblake AT illinois.edu>, "Efron, Miles James" <mefron AT illinois.edu>
  • Subject: [nl-uiuc] Reminder: AIIS talk by Maxim Raginsky at 2 pm
  • Date: Fri, 20 Apr 2012 16:40:49 +0000
  • Accept-language: en-US
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/nl-uiuc>
  • List-id: Natural language research announcements <nl-uiuc.cs.uiuc.edu>

Dear all,

This is a gentle reminder for today's AIIS talk by Prof. Maxim Raginsky
(http://www.ece.illinois.edu/directory/profile.asp?maxim) from the ECE
department. Details follow.

When: Friday, April 20, 2 pm.

Where: Room 3405, Siebel Center.

Title:
Fundamental Limits of Passive and Active Learning


Abstract:
Statistical learning theory is concerned with making accurate predictions on
the basis of past observations. One of the main characteristics of any
learning problem is its sample complexity: the minimum number of observations
needed to ensure a given prediction accuracy at a given confidence level. For
the most part, the focus has been on passive learning, in which the learning
agent receives independent training samples. However, recently there has been
increasing interest in active learning, in which past observations are used
to control the process of gathering future observations. The main question is
whether active learning is strictly more powerful than its passive
counterpart. One way to answer this question is to compare the sample
complexities of passive and active learning at the same accuracy and
confidence levels.
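For concreteness, the sample complexity of binary classification is often
formalized along the following lines (a textbook-style definition; the exact
setting of the talk may differ):

  \[
    n(\varepsilon,\delta) \;=\; \min\Bigl\{\, n \;:\; \exists\, \hat{h}_n
    \ \text{such that}\ \Pr\bigl[\, L(\hat{h}_n) - \inf_{h \in \mathcal{H}} L(h)
    > \varepsilon \,\bigr] \le \delta \,\Bigr\},
  \]

where \(L(h) = \Pr[h(X) \neq Y]\) is the misclassification risk and
\(\mathcal{H}\) is the hypothesis class. Passive and active learning differ
only in how the \(n\) labeled observations are obtained.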

In this talk, based on joint work with Sasha Rakhlin (Department of
Statistics, University of Pennsylvania), I will present a new unified
approach to deriving tight lower bounds on the sample complexity of both
passive and active learning in the setting of binary classification. This
approach is fundamentally rooted in information theory, in particular, the
simple but powerful data processing inequality for the f-divergence. I will
give a high-level overview of the proof technique and discuss the connections
between active learning and hypothesis testing with feedback.
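For reference, the f-divergence and its data processing inequality are
standardly stated as follows (the precise variant used in the talk may
differ). For a convex function \(f\) with \(f(1) = 0\),

  \[
    D_f(P \,\|\, Q) \;=\; \int f\!\left(\frac{dP}{dQ}\right) dQ,
  \]

and for any Markov kernel (channel) \(K\),

  \[
    D_f(PK \,\|\, QK) \;\le\; D_f(P \,\|\, Q).
  \]

Lower bounds on sample complexity typically exploit this by viewing the
learning algorithm itself as a channel: no processing of the data can
increase the divergence between the output distributions induced by two
nearby problem instances.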

Hoping to see y'all,
Rajhans




