

[nl-uiuc] AIIS talk by Amir Globerson at 2pm


  • From: Rajhans Samdani <rsamdan2 AT illinois.edu>
  • To: theorycs AT cs.uiuc.edu, nl-uiuc AT cs.uiuc.edu, aivr AT cs.uiuc.edu, dais AT cs.uiuc.edu, cogcomp AT cs.uiuc.edu, vision AT cs.uiuc.edu, eyal AT cs.uiuc.edu, aiis AT cs.uiuc.edu, aistudents AT cs.uiuc.edu, "Girju, Corina R" <girju AT illinois.edu>
  • Subject: [nl-uiuc] AIIS talk by Amir Globerson at 2pm
  • Date: Fri, 1 Oct 2010 12:34:35 -0500 (CDT)
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/nl-uiuc>
  • List-id: Natural language research announcements <nl-uiuc.cs.uiuc.edu>

Hi all,

Just a gentle reminder regarding today's talk by Amir Globerson
(http://www.cs.huji.ac.il/~gamir/) from Hebrew University. The talk is in
3405 SC at 2 pm. The title and abstract of his talk follow.

Title: Learning with Approximate Inference - From LP Relaxations to
Pseudo-Max Approaches

Abstract:
Supervised learning problems often involve the prediction of complex
structured labels, such as sequences (e.g., POS tagging) or trees (e.g.,
dependency parsing). To achieve high accuracy in these tasks, one is often
interested in introducing complex dependencies between label parts. However,
this can result in prediction problems that are NP-hard. A natural approach
in these cases is to use tractable approximations of the prediction problem.

In this talk I will present our recent work on using approximate inference
for structured prediction tasks. I will describe linear programming (LP)
relaxations for the problem, and show highly scalable algorithms for learning
using these relaxations. I will next introduce a simpler approach, called
'pseudo-max' learning, and show that it is consistent for separable problems
under certain conditions, and has empirical performance similar to that of LP
relaxations.

I will conclude by addressing the problem of finding the K best solutions in
such problems, and show a new class of relaxations that has theoretical
guarantees and works well in practice.

Based on joint work with Ofer Meshi, David Sontag, Menachem Fromer, and
Tommi Jaakkola.
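For those unfamiliar with the LP-relaxation idea mentioned in the abstract, here is a minimal illustrative sketch (not from the talk itself): MAP prediction in a pairwise model is written as a linear program over "local marginal" variables, with normalization and marginalization constraints. The model below is a made-up toy with two binary variables and one edge; on a tree like this, the relaxation is known to be tight, so the LP optimum is integral.

```python
import numpy as np
from scipy.optimize import linprog

# Toy model (invented for illustration): two binary variables x1, x2, one edge.
theta1 = np.array([0.0, 1.0])      # unary scores for x1; prefers x1 = 1
theta2 = np.array([0.5, 0.0])      # unary scores for x2; mildly prefers x2 = 0
theta12 = np.array([[2.0, 0.0],    # pairwise scores; rewards agreement
                    [0.0, 2.0]])

# LP variables, in order: mu1(0), mu1(1), mu2(0), mu2(1),
#                         mu12(00), mu12(01), mu12(10), mu12(11)
c = -np.concatenate([theta1, theta2, theta12.ravel()])  # linprog minimizes

# Equality constraints: each mu_i sums to 1, and the edge marginals
# mu12 are consistent with the node marginals (the "local polytope").
A_eq = np.array([
    [1,  1,  0,  0, 0, 0, 0, 0],   # mu1(0) + mu1(1) = 1
    [0,  0,  1,  1, 0, 0, 0, 0],   # mu2(0) + mu2(1) = 1
    [-1, 0,  0,  0, 1, 1, 0, 0],   # sum_b mu12(0,b) = mu1(0)
    [0, -1,  0,  0, 0, 0, 1, 1],   # sum_b mu12(1,b) = mu1(1)
    [0,  0, -1,  0, 1, 0, 1, 0],   # sum_a mu12(a,0) = mu2(0)
    [0,  0,  0, -1, 0, 1, 0, 1],   # sum_a mu12(a,1) = mu2(1)
])
b_eq = np.array([1, 1, 0, 0, 0, 0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8)
mu1 = res.x[:2]
map_value = -res.fun
print("LP optimum:", map_value)       # score of the best labeling, x1 = x2 = 1
print("mu1:", np.round(mu1, 3))       # integral here, since one edge is a tree
```

On loopy graphs the same LP can return fractional solutions, which is where the rounding and learning questions addressed in the talk come in.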


Hoping to see you there.
Best,
Rajhans


Rajhans Samdani,
Graduate Student,
Dept. of Computer Science,
University of Illinois at Urbana-Champaign.




