
nl-uiuc AT lists.cs.illinois.edu

Subject: Natural language research announcements


[nl-uiuc] AIIS talk this Friday by Kilian Weinberger @ 2pm.


  • From: Yonatan Bisk <bisk1 AT illinois.edu>
  • To: nl-uiuc AT cs.uiuc.edu, aivr AT cs.uiuc.edu, vision AT cs.uiuc.edu, eyal AT cs.uiuc.edu, aiis AT cs.uiuc.edu, aistudents AT cs.uiuc.edu, Girju, Corina R <girju AT illinois.edu>, Catherine Blake <clblake AT illinois.edu>, Efron, Miles James <mefron AT illinois.edu>, Lee, Soo Min <lee203 AT illinois.edu>
  • Subject: [nl-uiuc] AIIS talk this Friday by Kilian Weinberger @ 2pm.
  • Date: Tue, 24 Apr 2012 15:20:22 -0500
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/nl-uiuc>
  • List-id: Natural language research announcements <nl-uiuc.cs.uiuc.edu>

-- Please email ( rsamdan2 AT uiuc.edu or bisk1 AT illinois.edu ) with your availability if you are interested in a meeting. --

When: This Friday @ 2pm - 4/27/2012
Where: 3405 SC
Speaker: Kilian Weinberger ( http://www.cse.wustl.edu/~kilian/ )

Title:   mSDA: A fast, easy-to-use, universal algorithm to improve bag-of-words features

Abstract:

Machine learning algorithms rely heavily on the representation of the data they are presented with. In particular, text documents (and often images) are traditionally expressed as bag-of-words feature vectors (e.g., as tf-idf).
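
As a concrete illustration, here is a minimal Python sketch of turning raw documents into tf-idf bag-of-words vectors using scikit-learn (the toy documents are made up for the example):

    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "the movie was great",
        "the movie was terrible",
    ]
    vectorizer = TfidfVectorizer()       # tokenizes and computes tf-idf weights
    X = vectorizer.fit_transform(docs)   # sparse (n_docs x vocab_size) matrix
    print(vectorizer.get_feature_names_out())
    print(X.toarray())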

Recently, Glorot et al. showed that stacked denoising autoencoders (SDAs), a family of deep learning models, can learn representations that are far superior to bag-of-words variants. Unfortunately, training SDAs often requires a prohibitive amount of computation time and is non-trivial for non-experts.

In this work, we show that with a few modifications of the SDA model, we can relax the optimization over the hidden weights into convex optimization problems with closed-form solutions. Further, we show that the expected value of the hidden weights after infinitely many training iterations can also be computed in closed form. The resulting transformation (which we call marginalized SDA, or mSDA) can be computed in no more than 20 lines of straightforward Matlab code and requires no prior expertise in machine learning.
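
For intuition, here is a rough numpy reconstruction of the closed-form solution for a single marginalized denoising layer, based on the description above. This is an illustrative sketch, not the authors' Matlab code; the function name, the small ridge term, and the tanh squashing are assumptions drawn from the paper's general setup:

    import numpy as np

    def mda_layer(X, p):
        # X: d x n feature matrix (e.g. tf-idf), p: probability of zeroing a feature.
        d, n = X.shape
        Xb = np.vstack([X, np.ones((1, n))])     # append a constant bias row
        q = np.append(np.full(d, 1.0 - p), 1.0)  # survival probs; bias never corrupted
        S = Xb @ Xb.T                            # scatter matrix of the clean inputs
        Q = S * np.outer(q, q)                   # E[x_tilde x_tilde^T] under corruption
        np.fill_diagonal(Q, q * np.diag(S))      # a feature co-occurs with itself w.p. q_i, not q_i^2
        P = S * q                                # E[x x_tilde^T]; scales column j by q_j
        reg = 1e-5 * np.eye(d + 1)               # small ridge term (assumed) for stability
        W = np.linalg.solve(Q + reg, P[:d].T).T  # closed-form least-squares mapping
        return np.tanh(W @ Xb)                   # nonlinearity after the linear map

    # Stacking: feed each layer's output into the next; the paper describes
    # concatenating the layer outputs with the raw input as the final representation.

The key point is that the expectation over infinitely many corrupted copies of the data is taken analytically, so no sampling or gradient descent is needed; each layer reduces to one linear solve.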

The representations learned with mSDA behave similarly to those obtained with SDA, but the training time is reduced by several orders of magnitude. For example, mSDA matches the world record on the Amazon transfer learning benchmark, yet the training time shrinks from several days to a few minutes.


Bio:

Kilian Q. Weinberger is an Assistant Professor in the Department of Computer Science & Engineering at Washington University in St. Louis. He received his Ph.D. in Machine Learning from the University of Pennsylvania under the supervision of Lawrence Saul. Prior to this, he obtained his undergraduate degree in Mathematics and Computer Science at the University of Oxford.

During his career he has won several best paper awards at ICML, CVPR, and AISTATS. In 2012 he was awarded the NSF CAREER award.


Kilian Weinberger's research is in and around machine learning. In particular, he focuses on high-dimensional data analysis, feature and metric learning, machine-learned web-search ranking, transfer and multi-task learning, and brain decoding.


-- Yonatan --



