
nl-uiuc AT lists.cs.illinois.edu

Subject: Natural language research announcements


[nl-uiuc] AIIS: Sobhan Naderi Parizi - Friday Oct 23 @ 2pm


  • From: Alice Lai <aylai2 AT illinois.edu>
  • To: <nl-uiuc AT cs.uiuc.edu>, <vision AT cs.uiuc.edu>, "aiis AT cs.uiuc.edu" <aiis AT cs.uiuc.edu>
  • Subject: [nl-uiuc] AIIS: Sobhan Naderi Parizi - Friday Oct 23 @ 2pm
  • Date: Fri, 9 Oct 2015 18:38:23 +0000

Please email Alice Lai (aylai2 AT illinois.edu) or Daniel Khashabi (khashab2 AT illinois.edu) your availability if you are interested in a meeting.

When: Friday Oct 23 @ 2pm
Where: SC 4403
Speaker: Sobhan Naderi Parizi (PhD student at Brown University)

Title: Modeling and Optimization of Classifiers in Absence of Supervision

Abstract:
We live in a world where we have access to vast amounts of multimodal data, effectively unlimited for many practical purposes. Despite this, building systems that can learn from such data to reach human-level understanding is a mission yet to be accomplished. One big issue is the absence of supervision associated with the data and the lack of methods that can train good models without relying on accurate and extensive annotation.

My current research is focused on learning from weakly supervised data. We discover shared patterns (or parts) automatically from large collections of images using limited supervision. Learning part-based models is often viewed as a two-stage problem: first, a collection of informative parts is discovered using heuristics that promote part distinctiveness and diversity, and then classifiers are trained on the vector of part responses. We unify the two stages and learn the image classifiers and a set of shared parts jointly. We also introduce the notion of "negative parts", intended as parts that are negatively correlated with one or more classes. For example, a saddle detector is a positive/negative part for the horse/cow class. More generally, we propose a generalization of the family of latent variable models that can capture counter-evidence for the presence of a class, and show that augmenting conventional part-based models with negative parts improves their performance considerably.
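As a rough illustration (not the authors' code, and all names and numbers below are hypothetical), a part-based score with negative parts can be sketched as follows: each part filter's responses are max-pooled over image locations, and the pooled responses are combined with per-class weights, where a negative weight lets a part's presence count as evidence against the class (the saddle detector for the cow class above).

```python
import numpy as np

# Hypothetical sketch of a part-based classifier with "negative parts".
# Each row of part_responses holds one part filter's response at every
# candidate image location (random numbers here stand in for real
# filter responses).
rng = np.random.default_rng(0)
num_parts, num_locations = 5, 100
part_responses = rng.standard_normal((num_parts, num_locations))

# Max-pool each part over locations: the latent part placement is the
# best-scoring location for that part.
pooled = part_responses.max(axis=1)            # shape: (num_parts,)

# Per-class part weights. Negative entries are "negative parts": their
# presence lowers the class score (e.g. a saddle for the "cow" class).
weights = np.array([1.2, 0.7, 0.4, -0.9, -0.3])

score = float(weights @ pooled)
```

Joint training, as described above, would learn the part filters and the weights together rather than fixing the parts in a separate discovery stage.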

We also developed a new optimization procedure that extends methods such as Expectation Maximization (EM) and the Concave-Convex Procedure (CCCP), both of which are widely used for weakly supervised training. I will describe ways in which this new procedure, dubbed Generalized Majorization Minimization (G-MM), exhibits greater flexibility in optimizing non-convex functions and incorporating application-specific biases into the learning process, while being notably less sensitive to initialization than methods such as EM and CCCP.
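For intuition, here is a minimal sketch of plain majorization-minimization, the family that EM and CCCP instantiate and that G-MM generalizes: each step minimizes a surrogate that upper-bounds the objective and is tight (here, up to a tiny epsilon) at the current iterate, which guarantees monotone descent. The objective and majorizer below are illustrative choices, not from the talk.

```python
# Illustrative majorization-minimization (MM) on a simple non-smooth
# objective: f(x) = |x - 3| + 0.5 * x^2, whose minimum is at x = 1.
def f(x):
    return abs(x - 3.0) + 0.5 * x * x

def mm_step(x, eps=1e-8):
    # Quadratic majorizer of |u|: for any c > 0, |u| <= u^2/(2c) + c/2
    # (by AM-GM), with equality at |u| = c. Taking c = |x - 3| + eps
    # makes the bound (nearly) tight at the current iterate.
    u = x - 3.0
    a = 1.0 / (2.0 * (abs(u) + eps))
    # Surrogate: a*(x-3)^2 + const + 0.5*x^2, a quadratic whose
    # minimizer has the closed form x = 6a / (2a + 1).
    return (6.0 * a) / (2.0 * a + 1.0)

x = 0.0
for _ in range(50):
    x_new = mm_step(x)
    assert f(x_new) <= f(x) + 1e-9   # MM guarantees monotone descent
    x = x_new
```

EM and CCCP follow this same template with particular surrogates (the expected complete-data log-likelihood, and a linearization of the concave term, respectively); as described above, G-MM relaxes how the surrogate is chosen.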

Bio:
I am a PhD candidate in the School of Engineering at Brown University, where I am advised by Prof. Pedro Felzenszwalb. Before this, I completed an MSc degree in Computer Science at the University of Chicago, an MSc degree in Systems, Control, & Robotics at the Royal Institute of Technology (Sweden), and a BSc degree in Computer Engineering at Tehran Polytechnic (Iran). I am interested in developing new learning frameworks for various vision tasks, with a special focus on reducing dependence on supervision and hand-engineered interventions. In a broader sense, I am passionately interested in understanding what it means to "learn" from "data", what we can learn without supervision, and what signal unsupervised learning should be driven by. I also wonder whether supervision may sometimes slow down learning or limit the level of understanding of the learner. These are all questions I am seeking to answer in my research.



