

[nl-uiuc] AIIS talk by Amir Globerson on Oct 1


  • From: Rajhans Samdani <rsamdan2 AT illinois.edu>
  • To: nl-uiuc AT cs.uiuc.edu, aivr AT cs.uiuc.edu, dais AT cs.uiuc.edu, cogcomp AT cs.uiuc.edu, vision AT cs.uiuc.edu, eyal AT cs.uiuc.edu, aiis AT cs.uiuc.edu, aistudents AT cs.uiuc.edu, "Girju, Corina R" <girju AT illinois.edu>
  • Subject: [nl-uiuc] AIIS talk by Amir Globerson on Oct 1
  • Date: Mon, 27 Sep 2010 12:42:06 -0500 (CDT)
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/nl-uiuc>
  • List-id: Natural language research announcements <nl-uiuc.cs.uiuc.edu>

Hi all,

This week in AIIS, we are hosting Amir Globerson
(http://www.cs.huji.ac.il/~gamir/) from Hebrew University. The title and
abstract of his talk, scheduled for 2 pm on Friday, Oct 1 in 3405, follow
below. Please contact me if you wish to arrange a meeting with Amir on
Friday afternoon.

Title: Learning with Approximate Inference - From LP Relaxations to
Pseudo-Max Approaches

Abstract:
Supervised learning problems often involve the prediction of complex
structured labels, such as sequences (e.g., POS tagging) or trees (e.g.,
dependency parsing). To achieve high accuracy in these tasks, one is often
interested in introducing complex dependencies between label parts. However,
this can result in prediction problems that are NP-hard. A natural approach
in these cases is to use tractable approximations of the prediction problem.
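
For context, the standard structured prediction setup described here (generic
notation, not taken from the talk) predicts a joint label by maximizing a
linear score that decomposes over label parts:

\[
  \hat{y}(x) \;=\; \arg\max_{y \in \mathcal{Y}} \; w^\top \phi(x, y)
             \;=\; \arg\max_{y} \; \sum_{c} w^\top \phi_c(x, y_c),
\]

where the parts y_c may overlap; once the dependencies between parts become
rich enough, this maximization is NP-hard in general.
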
In this talk I will present our recent work on using approximate inference
for structured prediction tasks. I will describe linear programming (LP)
relaxations for the problem, and show highly scalable algorithms for learning
using these relaxations. I will next introduce a simpler approach, called
‘pseudo-max’ learning, and show that it is consistent for separable problems
under certain conditions and has empirical performance similar to that of LP
relaxations.
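
As a rough sketch of the two approximations mentioned above (the notation and
details are my own paraphrase, not the speaker's): the LP relaxation replaces
the discrete maximization with a linear program over locally consistent
pseudo-marginals,

\[
  \max_{\mu \in \mathcal{M}_L} \; \sum_{c} \sum_{y_c} \mu_c(y_c)\, w^\top \phi_c(x, y_c),
\]

where \mathcal{M}_L denotes the local marginal polytope, while pseudo-max
learning enforces margin constraints only against labelings that differ from
the gold labeling in a single coordinate,

\[
  w^\top \phi(x_i, y_i) \;\ge\; w^\top \phi(x_i, y) + \Delta(y_i, y)
  \quad \text{for all } y \text{ differing from } y_i \text{ in one coordinate},
\]

which avoids solving a full inference problem during training.
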
I will conclude by addressing the problem of finding the K best solutions in
such models, and show a new class of relaxations that has theoretical
guarantees and works well in practice.
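
For reference, the K-best problem mentioned here can be stated generically
(the particular relaxations are the subject of the talk) as finding the K
highest-scoring labelings:

\[
  y^{(k)} \;=\; \arg\max_{y \,\notin\, \{y^{(1)},\ldots,y^{(k-1)}\}} \; w^\top \phi(x, y),
  \qquad k = 1, \ldots, K.
\]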

Based on joint work with: Ofer Meshi, David Sontag, Menachem Fromer and Tommi
Jaakkola


Hoping to see you there.
Best,
Rajhans


Rajhans Samdani,
Graduate Student,
Dept. of Computer Science,
University of Illinois at Urbana-Champaign.





