[nl-uiuc] REMINDER: AIIS talk by Kilian Weinberger right now!


  • From: "Samdani, Rajhans" <rsamdan2 AT illinois.edu>
  • To: nl-uiuc <nl-uiuc AT cs.uiuc.edu>, aivr <aivr AT cs.uiuc.edu>, vision <vision AT cs.uiuc.edu>, eyal <eyal AT cs.uiuc.edu>, aiis <aiis AT cs.uiuc.edu>, aistudents <aistudents AT cs.uiuc.edu>, "Girju, Corina R" <girju AT illinois.edu>, "Blake, Catherine" <clblake AT illinois.edu>, "Efron, Miles James" <mefron AT illinois.edu>
  • Subject: [nl-uiuc] REMINDER: AIIS talk by Kilian Weinberger right now!
  • Date: Fri, 27 Apr 2012 18:59:45 +0000

Reminder for the AIIS talk starting right now!!

When: This Friday @ 2pm - 4/27/2012

Where: 3405 SC

Speaker: Kilian Weinberger (http://www.cse.wustl.edu/~kilian/)

Title: mSDA: A fast, easy-to-use, universal algorithm to improve
bag-of-words features

Abstract:

Machine learning algorithms rely heavily on the representation of the data
they are presented with. In particular, text documents (and often images) are
traditionally expressed as bag-of-words feature vectors (e.g., tf-idf).
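
For anyone new to the representation: a bag-of-words vector simply counts how
often each vocabulary term occurs in a document, and tf-idf re-weights those
counts to discount ubiquitous words. A minimal Python sketch using
scikit-learn, purely illustrative and not part of the talk:

    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ["the cat sat on the mat",
            "the dog chased the cat"]
    vec = TfidfVectorizer()              # bag-of-words with tf-idf weighting
    X = vec.fit_transform(docs)          # sparse (n_docs, n_terms) matrix
    print(vec.get_feature_names_out())   # which term each column represents
    print(X.toarray())                   # one tf-idf feature vector per document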

Recently, Glorot et al. showed that stacked denoising autoencoders (SDAs), a
deep learning algorithm, can learn representations far superior to
bag-of-words variants. Unfortunately, training SDAs often requires a
prohibitive amount of computation time and is non-trivial for non-experts.

In this work, we show that with a few modifications to the SDA model, we can
relax the optimization over the hidden weights into convex optimization
problems with closed-form solutions. Further, we show that the expected value
of the hidden weights after infinitely many training iterations can also be
computed in closed form. The resulting transformation (which we call
marginalized SDA, or mSDA) can be implemented in no more than 20 lines of
straightforward MATLAB code and requires no prior expertise in machine
learning.
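
For the curious, below is a rough numpy transcription of the closed-form idea
sketched in this abstract. Assuming dropout-style corruption that zeroes each
input feature independently with probability p, the expected correlations of
corrupted inputs have simple closed forms, and the reconstruction weights
follow from one linear solve. All names are mine, and this is an illustrative
sketch rather than the speaker's actual MATLAB implementation:

    import numpy as np

    def mda(X, p):
        # One marginalized denoising layer. X is (d, n): one feature
        # column per document; p is the corruption probability.
        d, n = X.shape
        Xb = np.vstack([X, np.ones((1, n))])  # constant bias row
        q = np.full(d + 1, 1.0 - p)           # per-feature survival probability
        q[-1] = 1.0                           # the bias is never corrupted
        S = Xb @ Xb.T                         # scatter matrix of the clean data
        Q = S * np.outer(q, q)                # E[corrupt * corrupt^T], off-diagonal
        np.fill_diagonal(Q, q * np.diag(S))   # a feature co-occurs with itself w.p. q
        P = S * q[np.newaxis, :]              # E[clean * corrupt^T]
        # Expected reconstruction weights W = P Q^{-1}; the small ridge
        # term keeps the solve numerically stable.
        W = np.linalg.solve(Q + 1e-5 * np.eye(d + 1), P[:d].T).T
        return np.tanh(W @ Xb)                # nonlinear hidden representation

    def msda(X, p, layers=3):
        # Stack layers and concatenate all representations, following the
        # usual stacked-autoencoder recipe (the exact stacking used in the
        # talk is an assumption here).
        reps = [X]
        for _ in range(layers):
            reps.append(mda(reps[-1], p))
        return np.vstack(reps)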

The representations learned with mSDA behave similarly to those obtained with
SDA, but the training time is reduced by several orders of magnitude. For
example, mSDA matches the world record on the Amazon transfer-learning
benchmark, while the training time shrinks from several days to a few
minutes.

Bio:

Kilian Q. Weinberger is an Assistant Professor in the Department of Computer
Science & Engineering at Washington University in St. Louis. He received his
Ph.D. in Machine Learning from the University of Pennsylvania under the
supervision of Lawrence Saul. Prior to this, he obtained his undergraduate
degree in Mathematics and Computer Science at the University of Oxford.

During his career he has won several best paper awards at ICML, CVPR, and
AISTATS. In 2012 he received the NSF CAREER award.

Kilian Weinberger's research is in and around Machine Learning. In
particular, he focuses on high-dimensional data analysis, feature and metric
learning, machine-learned web-search ranking, transfer and multi-task
learning, as well as brain decoding.

-- Yonatan --