
nl-uiuc AT lists.cs.illinois.edu

Subject: Natural language research announcements


[nl-uiuc] Reminder: AIIS talk Today @ 2pm by Micah Hodosh


  • From: Yonatan Bisk <bisk1 AT illinois.edu>
  • To: "nl-uiuc AT cs.uiuc.edu" <nl-uiuc AT cs.uiuc.edu>, "aivr AT cs.uiuc.edu" <aivr AT cs.uiuc.edu>, "vision AT cs.uiuc.edu" <vision AT cs.uiuc.edu>, "eyal AT cs.uiuc.edu" <eyal AT cs.uiuc.edu>, "aiis AT cs.uiuc.edu" <aiis AT cs.uiuc.edu>, "aistudents AT cs.uiuc.edu" <aistudents AT cs.uiuc.edu>, Girju, Corina R <girju AT illinois.edu>, Blake, Catherine <clblake AT illinois.edu>, Efron, Miles James <mefron AT illinois.edu>, Lee, Soo Min <lee203 AT illinois.edu>
  • Subject: [nl-uiuc] Reminder: AIIS talk Today @ 2pm by Micah Hodosh
  • Date: Fri, 4 May 2012 11:43:52 -0500
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/nl-uiuc>
  • List-id: Natural language research announcements <nl-uiuc.cs.uiuc.edu>

When: Today @ 2pm
Where: 3405 SC
Speaker: Micah Hodosh

Title: 

Semantic text kernels for sentence-based image retrieval and annotation

Abstract:

When someone is asked to succinctly describe what is depicted in a photograph, the description will not simply list everything shown in the image. Moreover, two people will typically not produce the same description for a photo: they will mention different things and use different phrasing. Current approaches to sentence-based image annotation use detectors to map images to an explicit representation of objects, scenes and events, and cannot be applied directly to the converse task of sentence-based image retrieval. By contrast, in this work we model the association between photographs and their English-language sentence descriptions by inducing a common semantic space suitable for both image annotation and image retrieval. We use Kernel Canonical Correlation Analysis (KCCA) to induce this shared space of images and sentences, and we investigate which text representations are most appropriate for the task. Although we rely only on low-level image features, images appear near sentences that describe them well: for over a quarter of unseen test images, the closest sentence among a large pool of unseen candidates describes them perfectly or with only minor errors.
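For readers unfamiliar with the core technique, the KCCA step mentioned in the abstract can be sketched as follows. This is a generic regularized KCCA solver in NumPy, not the speaker's implementation; the linear kernels, regularization value, and toy data in the demo are illustrative assumptions only.

```python
import numpy as np

def center_kernel(K):
    """Double-center a kernel matrix (zero-mean features in feature space)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kcca(Kx, Ky, reg=1e-3):
    """Regularized kernel CCA in the dual.

    Kx, Ky: n x n kernel matrices over the two views (e.g. images, sentences).
    Returns squared canonical correlations (descending) and the dual
    weight vectors alpha for the first view.
    """
    n = Kx.shape[0]
    Kx, Ky = center_kernel(Kx), center_kernel(Ky)
    I = np.eye(n)
    # Standard regularized eigenproblem:
    #   (Kx + reg*I)^-1 Ky (Ky + reg*I)^-1 Kx  alpha = rho^2 alpha
    M = np.linalg.solve(Kx + reg * I, Ky) @ np.linalg.solve(Ky + reg * I, Kx)
    eigvals, eigvecs = np.linalg.eig(M)
    order = np.argsort(-eigvals.real)
    return eigvals.real[order], eigvecs.real[:, order]

# Toy demo with linear kernels: two views of the same latent data.
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 5))
Y = X @ rng.standard_normal((5, 4))   # second view is a linear function of X
rho2, alphas = kcca(X @ X.T, Y @ Y.T)
print(round(rho2[0], 3))              # top squared correlation should be near 1
```

A test image or sentence is then projected into the shared space by evaluating its kernel against the training items and applying the learned dual weights; retrieval and annotation both reduce to nearest-neighbor search in that space.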

* presenting joint work with Peter Young *

Bio:

Micah Hodosh is a student of Julia Hockenmaier.

