[nl-uiuc] Reminder: AIIS talk at 2 pm by Pannaga Shivaswamy


  • From: Rajhans Samdani <rsamdan2 AT illinois.edu>
  • To: nl-uiuc AT cs.uiuc.edu, aivr AT cs.uiuc.edu, dais AT cs.uiuc.edu, cogcomp AT cs.uiuc.edu, vision AT cs.uiuc.edu, eyal AT cs.uiuc.edu, aiis AT cs.uiuc.edu, aistudents AT cs.uiuc.edu, "Girju, Corina R" <girju AT illinois.edu>
  • Cc: Pannaga Shivaswamy <pannaga.datta AT gmail.com>
  • Subject: [nl-uiuc] Reminder: AIIS talk at 2 pm by Pannaga Shivaswamy
  • Date: Fri, 24 Sep 2010 12:02:20 -0500 (CDT)
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/nl-uiuc>
  • List-id: Natural language research announcements <nl-uiuc.cs.uiuc.edu>

Hi all,

Just a gentle reminder of this week's AIIS talk by Pannaga Shivaswamy. The talk will be at 2 pm in 3405 SC. The title and abstract are below.

Title: Large Relative Margin and Applications

Abstract:
Over the last decade or so, machine learning algorithms such as support vector machines and boosting have become extremely popular. The core idea in these and other related algorithms is the notion of a large margin. Simply put, the idea is to geometrically separate two classes with a large separation between them; such a separator is then used to predict the class of an unseen test example. These methods have been extremely successful in practice and form a significant portion of the machine learning literature. Several theoretical results motivate such algorithms, and a closer look at them reveals that the generalization ability of these methods is strongly linked not only to the margin but also to some measure of the spread of the data. Yet the algorithms themselves only maximize the margin, completely ignoring the spread information.

This talk focuses on addressing this problem: novel formulations are proposed that take into consideration not only the margin but also the spread of the data. In particular, the relative margin machine, a strict generalization of the well-known support vector machine, is proposed. Further, generalization bounds are derived for relative margin machines using a novel method of landmark examples. The idea of relative margin is fairly general; its potential is demonstrated by proposing formulations for structured prediction problems as well as for a transductive setup using the graph Laplacian. Finally, a boosting algorithm incorporating both the margin information and the spread information is derived as well, motivated by recent empirical Bernstein bounds. All the proposed variants of the relative margin algorithms are easy to implement, efficiently solvable, and typically show significant improvements over their large margin counterparts on real-world datasets. (Joint work with Dr. Tony Jebara.)
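For anyone who wants a concrete picture before the talk, here is a minimal sketch of the relative margin idea (my own illustration, not the speaker's code): a standard soft-margin SVM with an extra constraint bounding the spread of the projections w.x + b by a hyperparameter B. The formulation, the function name, and the toy data are assumptions for illustration; it uses the cvxpy solver.

import cvxpy as cp
import numpy as np

def relative_margin_machine(X, y, C=1.0, B=2.0):
    # Train a linear RMM-style classifier; labels y must be in {-1, +1}.
    n, d = X.shape
    w = cp.Variable(d)
    b = cp.Variable()
    xi = cp.Variable(n)                  # slack variables
    proj = X @ w + b                     # projections of the data
    constraints = [
        cp.multiply(y, proj) >= 1 - xi,  # usual large-margin constraints
        xi >= 0,
        cp.abs(proj) <= B,               # the "relative" part: cap the spread
    ]
    objective = cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi))
    cp.Problem(objective, constraints).solve()
    return w.value, b.value

# Toy usage: two Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, (20, 2)), rng.normal(2.0, 1.0, (20, 2))])
y = np.hstack([-np.ones(20), np.ones(20)])
w, b = relative_margin_machine(X, y)
print("w =", w, " b =", b)

With B large the spread constraint is inactive and this reduces to the ordinary soft-margin SVM; shrinking B trades margin against the spread of the projections, which is exactly the quantity the abstract says plain large-margin methods ignore.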

Hope to see you all there.
Best,
Rajhans


Rajhans Samdani,
Graduate Student,
Dept. of Computer Science,
University of Illinois at Urbana-Champaign.


