
[nl-uiuc] Reminder: AIIS Talk today at 2 pm by Kristen Grauman


  • From: "Samdani, Rajhans" <rsamdan2 AT illinois.edu>
  • To: "aiis AT cs.uiuc.edu" <aiis AT cs.uiuc.edu>, "aistudents AT cs.uiuc.edu" <aistudents AT cs.uiuc.edu>, "nl-uiuc AT cs.uiuc.edu" <nl-uiuc AT cs.uiuc.edu>, "vision AT cs.uiuc.edu" <vision AT cs.uiuc.edu>, "aivr AT cs.uiuc.edu" <aivr AT cs.uiuc.edu>, "eyal AT cs.uiuc.edu" <eyal AT cs.uiuc.edu>
  • Subject: [nl-uiuc] Reminder: AIIS Talk today at 2 pm by Kristen Grauman
  • Date: Fri, 2 Dec 2011 17:00:03 +0000
  • Accept-language: en-US
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/nl-uiuc>
  • List-id: Natural language research announcements <nl-uiuc.cs.uiuc.edu>

Hi all!


This is a gentle reminder for today's talk by Prof. Kristen Grauman
(http://www.cs.utexas.edu/~grauman/) in **2405 SC**.

Following are the details of her talk:

When: Friday, Dec 2, 2 pm.

Where: 2405 Siebel Center (note: it's not 3405, the usual room, but 2405)

Title:
Capturing Human Insight for Visual Learning

Abstract:
How should visual learning algorithms exploit human knowledge? Existing
approaches allow only a narrow channel of input to the system, typically
treating human annotators as “label machines” who provide category names for
image or video exemplars. While there is no question that human
understanding of visual content is much richer than mere labels, the
challenge is how to elicit that insight in ways that learning algorithms can
readily exploit.

We propose to widen the channel of communication to visual recognition
systems beyond traditional labels. I will present new techniques that allow
a human annotator to teach the system more fully---using either descriptive
relative comparisons (e.g., “bears are furrier than rats”), explanations
behind the label he assigns to an exemplar (e.g., “this region is too round
for it to be a chair”), or even through implicit cues about relative object
importance that are revealed in the way an annotator tags an image. In
developing these techniques, we introduce a novel approach to model relative
visual attributes using learned ranking functions, which generalizes previous
strategies restricted to categorical properties. In addition, we investigate
new cross-modal textual/visual representations that can capture what human
viewers find most noteworthy in images. Through results on challenging image
recognition and retrieval tasks, I will demonstrate the clear advantage of
incorporating such richer forms of input for visual learning.
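(As a rough illustration of the "learned ranking functions" mentioned above:
relative attribute models are commonly trained with a pairwise-ranking
objective, which can be reduced to binary classification on feature
differences. The sketch below shows that standard RankSVM-style reduction on
synthetic toy data; every name and number in it is an illustrative
placeholder, and the formulation presented in the talk may differ, e.g. in
how it handles pairs judged equally strong.)

import numpy as np
from sklearn.svm import LinearSVC

# Toy setup (all synthetic): feature vectors for "images" plus ordered
# pairs (i, j) meaning image i shows MORE of some attribute than image j
# (e.g., "bears are furrier than rats").
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))        # 100 images, 16-dim features
true_w = rng.normal(size=16)          # hidden attribute direction
scores = X @ true_w
pairs = [(i, j) for i in range(100) for j in range(100)
         if scores[i] > scores[j] + 1.0][:300]  # ordered comparisons

# Pairwise ranking reduces to binary classification on differences:
# we want w . (x_i - x_j) > 0 for every ordered pair (i, j).
diffs = np.array([X[i] - X[j] for i, j in pairs])
X_train = np.vstack([diffs, -diffs])  # both orientations -> two classes
y_train = np.hstack([np.ones(len(diffs)), -np.ones(len(diffs))])

ranker = LinearSVC(C=1.0, fit_intercept=False).fit(X_train, y_train)
w = ranker.coef_.ravel()

# The learned w ranks unseen images by attribute strength:
held_out = rng.normal(size=(5, 16))
print(np.argsort(-(held_out @ w)))    # indices from most to least

(The reduction works because the ranking constraint w . (x_i - x_j) > 0 is
exactly a linear separability constraint on the difference vector, so an
off-the-shelf linear SVM can fit the ranker.)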

This talk describes work with Sung Ju Hwang, Devi Parikh, and Jeff Donahue.

BIO:
Kristen Grauman is a Clare Boothe Luce Assistant Professor in the Department
of Computer Science at the University of Texas at Austin. Her research in
computer vision and machine learning focuses on visual search and object
recognition. Before joining UT-Austin in 2007, she received her Ph.D. in the
EECS department at MIT, in the Computer Science and Artificial Intelligence
Laboratory. She is a Microsoft Research New Faculty Fellow, and a recipient
of an NSF CAREER award and the Howes Scholar Award in Computational Science.
Grauman and her collaborators were awarded the CVPR Best Student Paper Award
in 2008 for work on hashing algorithms for large-scale image retrieval, and
the Marr Prize at ICCV in 2011 for work on modeling relative visual
attributes.

See you there!
Rajhans
_______________________________________________
aiis mailing list
aiis AT cs.uiuc.edu
http://lists.cs.uiuc.edu/mailman/listinfo/aiis



