

[nl-uiuc] AIIS: Carl Doersch - Dec 4 @ 2pm


  • From: Daniel Khashabi <khashab2 AT illinois.edu>
  • To: "Hoiem, Derek W" <dhoiem AT illinois.edu>, Carl Doersch <carl.doersch AT gmail.com>, nl-uiuc <nl-uiuc AT cs.uiuc.edu>, AIVR <aivr AT cs.uiuc.edu>, Vision List <vision AT cs.uiuc.edu>, <aiis AT cs.uiuc.edu>, <aistudents AT cs.uiuc.edu>, "Girju, Corina R" <girju AT illinois.edu>, Catherine Blake <clblake AT illinois.edu>, "Efron, Miles James" <mefron AT illinois.edu>, Jana Diesner <jdiesner AT illinois.edu>, "Raginsky, Maxim" <maxim AT illinois.edu>, "Schwartz, Lane Oscar Bingaman" <lanes AT illinois.edu>, Ranjitha Kumar <ranjitha AT illinois.edu>, Tandy Warnow <warnow AT illinois.edu>
  • Subject: [nl-uiuc] AIIS: Carl Doersch - Dec 4 @ 2pm
  • Date: Sun, 30 Nov 2014 15:18:43 -0600
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/nl-uiuc/>
  • List-id: Natural language research announcements <nl-uiuc.cs.uiuc.edu>

When: Thursday, Dec 4 @ 2pm
Where: 3405 SC
Speaker: Carl Doersch

Title: Mining visual representations from unlabeled and weakly-labeled image collections

Abstract:
Data mining--i.e. finding repeated, informative patterns in large datasets--has proven extremely difficult for visual data.  A key issue is the lack of a reliable way to tell whether two images or image patches depict the same thing. In this talk, I'll cover two algorithms for clustering together visually coherent sets of image patches, in both weakly-supervised and fully unsupervised settings, and show how the resulting clusters provide powerful image representations. 

Our first work proposes discriminative mode seeking, an extension of Mean Shift to weakly-labeled data. Instead of finding the local maxima of a density, we exploit a weak label to partition the data into two sets and find the maxima of the density ratio. Given a dataset with weak labels such as scene categories, these 'discriminative modes' correspond to remarkably meaningful visual patterns, including objects and object parts.  Using these discriminative patches as an image representation, we obtain state-of-the-art results on a challenging indoor scene classification benchmark.
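
As a rough illustration of the density-ratio idea (a toy sketch only, not the authors' implementation), the following Python snippet climbs the ratio of two Gaussian kernel density estimates by numerical gradient ascent. The function names, feature dimension, bandwidth, and step size are all arbitrary assumptions made for the example.

import numpy as np

def kde(x, data, bandwidth):
    # Gaussian kernel density estimate of point x over the rows of `data`.
    sq_dists = np.sum((data - x) ** 2, axis=1)
    return np.mean(np.exp(-sq_dists / (2.0 * bandwidth ** 2)))

def density_ratio_ascent(x, positives, negatives, bandwidth=1.0,
                         step=0.1, iters=100, eps=1e-8, h=1e-3):
    # Climb log(p_pos(x) / p_neg(x)) by numerical gradient ascent; a simple
    # stand-in for a mean-shift-style update on a density ratio.
    x = x.astype(float)
    for _ in range(iters):
        f0 = (np.log(kde(x, positives, bandwidth) + eps)
              - np.log(kde(x, negatives, bandwidth) + eps))
        grad = np.zeros_like(x)
        for d in range(x.size):
            xp = x.copy()
            xp[d] += h
            fd = (np.log(kde(xp, positives, bandwidth) + eps)
                  - np.log(kde(xp, negatives, bandwidth) + eps))
            grad[d] = (fd - f0) / h
        x = x + step * grad
    return x

# Toy usage: 'positive' patch features cluster near (2, 2), negatives near (0, 0);
# starting between them, the iterate drifts toward the positive-dominated region.
rng = np.random.default_rng(0)
pos = rng.normal(loc=2.0, scale=0.5, size=(200, 2))
neg = rng.normal(loc=0.0, scale=0.5, size=(200, 2))
print(density_ratio_ascent(np.array([1.0, 1.0]), pos, neg))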

In the second part of the talk, I will discuss how we can extend this formulation to a fully unsupervised setting. Instead of using weak labels as supervision, we use the ability of an object patch to predict the rest of the object (its context) as a supervisory signal to help discover visually consistent object clusters. The proposed method outperforms previous unsupervised as well as weakly-supervised object discovery approaches, and can discover objects even within extremely difficult datasets intended for benchmarking fully supervised object detection algorithms (e.g., PASCAL VOC).
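
To make the "context as supervision" idea concrete, here is a small toy score (an illustration under assumptions, not the proposed method): within a candidate cluster, fit a least-squares map from patch features to context features and measure how much of the context variance it explains; a coherent cluster should score higher than a random grouping. The feature shapes and data are made up.

import numpy as np

def context_predictability(patch_feats, context_feats):
    # Fit a linear map (with bias) from patch features to context features
    # and return the fraction of context variance explained (an R^2-style score).
    X = np.hstack([patch_feats, np.ones((len(patch_feats), 1))])
    W, *_ = np.linalg.lstsq(X, context_feats, rcond=None)
    residual = context_feats - X @ W
    ss_res = np.sum(residual ** 2)
    ss_tot = np.sum((context_feats - context_feats.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy usage: a 'coherent' cluster whose context is a function of the patch
# scores near 1, while a cluster paired with random context scores near 0.
rng = np.random.default_rng(1)
patches = rng.normal(size=(100, 8))
coherent = patches @ rng.normal(size=(8, 4)) + 0.1 * rng.normal(size=(100, 4))
random_ctx = rng.normal(size=(100, 4))
print(context_predictability(patches, coherent))    # high (close to 1)
print(context_predictability(patches, random_ctx))  # low (close to 0)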

Bio:
Carl Doersch is a PhD student in the Machine Learning Department at Carnegie Mellon University, co-advised by Alexei Efros (UC Berkeley) and Abhinav Gupta (CMU Robotics Institute). His research interests are in computer vision and machine learning, focusing on learning useful image representations while minimizing reliance on expensive human annotations. Carl graduated with a BS in Computer Science from Carnegie Mellon in 2010. He was awarded the NDSEG fellowship in 2011 and the Google Fellowship in computer vision in 2014. More information can be found at his website: http://www.cs.cmu.edu/~cdoersch/
