
nl-uiuc AT lists.cs.illinois.edu

Subject: Natural language research announcements


[nl-uiuc] AIIS Reminder: Today @ 10am - Ben Taskar: Determinantal Point Processes in Machine Learning


  • From: Yonatan Bisk <bisk1 AT illinois.edu>
  • To: nl-uiuc <nl-uiuc AT cs.uiuc.edu>, AIVR <aivr AT cs.uiuc.edu>, Vision List <vision AT cs.uiuc.edu>, Eyal Amir <eyal AT cs.uiuc.edu>, aiis AT cs.uiuc.edu, aistudents AT cs.uiuc.edu, "Girju, Corina R" <girju AT illinois.edu>, Catherine Blake <clblake AT illinois.edu>, "Efron, Miles James" <mefron AT illinois.edu>, "Lee, Soo Min" <lee203 AT illinois.edu>, Jana Diesner <jdiesner AT illinois.edu>, "Raginsky, Maxim" <maxim AT illinois.edu>
  • Subject: [nl-uiuc] AIIS Reminder: Today @ 10am - Ben Taskar: Determinantal Point Processes in Machine Learning
  • Date: Fri, 4 Oct 2013 09:10:00 -0500
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/nl-uiuc/>
  • List-id: Natural language research announcements <nl-uiuc.cs.uiuc.edu>

When: Today @ 10am - (in 1 hour)
Where: 3403 SC
Speaker: Ben Taskar ( http://homes.cs.washington.edu/~taskar/ )


Title: Determinantal Point Processes in Machine Learning

Many real-world problems involve negative interactions: we might want search results to be diverse, sentences in a summary to cover distinct aspects of the subject, or objects in an image to occupy different regions of space. However, traditional structured probabilistic models tend to deal poorly with these kinds of situations; Markov random fields, for example, become intractable even to approximate. Determinantal point processes (DPPs), which arise in random matrix theory and quantum physics, behave in a complementary fashion: while they cannot encode positive interactions, they define expressive models of negative correlations that come with surprising and exact algorithms for many types of inference, including conditioning, marginalization, and sampling. I'll present our recent work on a novel factorization and dual representation of DPPs that enables efficient and exact inference for exponentially-sized structured sets. We develop an exact inference algorithm for DPPs conditioned on subset size and derive efficient parameter estimation for DPPs from several types of observations, as well as approximation algorithms for large-scale non-linear DPPs. I'll illustrate the advantages of DPPs on several natural language and computer vision tasks: document summarization, image search, and multi-person pose estimation.
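To make the "exact inference" claim concrete: an L-ensemble DPP over n items assigns P(Y) ∝ det(L_Y) for a PSD kernel L, with normalizer det(L + I), and marginals come from K = L(L + I)^{-1} via P(A ⊆ Y) = det(K_A). The following sketch illustrates this on a hypothetical three-item kernel (the matrix B and the example are illustrative assumptions, not taken from the talk):

```python
import numpy as np

# Toy similarity features: items 0 and 1 are nearly identical,
# item 2 is distinct. L = B B^T is PSD by construction.
B = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.0, 1.0]])
L = B @ B.T

# Normalizer: the sum of det(L_Y) over all subsets Y equals det(L + I).
Z = np.linalg.det(L + np.eye(3))

def subset_prob(Y):
    """Exact probability that the DPP draws exactly subset Y."""
    LY = L[np.ix_(Y, Y)]          # principal submatrix indexed by Y
    return np.linalg.det(LY) / Z  # det of an empty matrix is 1

# Marginal kernel: P(A is a subset of the sample) = det(K_A).
K = L @ np.linalg.inv(L + np.eye(3))

# Negative correlation: the similar items 0 and 1 co-occur less often
# than independence would predict, since det(K_{01}) = p0*p1 - K[0,1]^2.
p0 = K[0, 0]
p1 = K[1, 1]
p01 = np.linalg.det(K[np.ix_([0, 1], [0, 1])])
assert p01 < p0 * p1
```

Every quantity here is computed in closed form from determinants, which is exactly the tractability the abstract contrasts with Markov random fields; the cited work extends such computations to exponentially large structured ground sets.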

Joint work with Alex Kulesza, Jennifer Gillenwater, Raja Affandi, and Emily Fox.

Bio:
Ben Taskar received his bachelor's and doctoral degrees in Computer Science from Stanford University. After a postdoc at the University of California at Berkeley, he joined the faculty at the University of Pennsylvania in 2007. He joined the University of Washington Computer Science and Engineering Department in the spring of 2013. His research interests include machine learning, natural language processing, and computer vision. He has been awarded the Sloan Research Fellowship and the NSF CAREER Award, and was selected for the Young Investigator Program by the Office of Naval Research and the DARPA Computer Science Study Group. His work on structured prediction has received best paper awards at the NIPS and EMNLP conferences.


- Yonatan -



