[nl-uiuc] AIIS talks: Todd Zickler (Nov 17) and Jeff Siskind (Nov 19)


  • From: Rajhans Samdani <rsamdan2 AT illinois.edu>
  • To: nl-uiuc AT cs.uiuc.edu, aivr AT cs.uiuc.edu, dais AT cs.uiuc.edu, cogcomp AT cs.uiuc.edu, vision AT cs.uiuc.edu, eyal AT cs.uiuc.edu, aiis AT cs.uiuc.edu, aistudents AT cs.uiuc.edu, "Girju, Corina R" <girju AT illinois.edu>
  • Subject: [nl-uiuc] AIIS talks: Todd Zickler (Nov 17) and Jeff Siskind (Nov 19)
  • Date: Mon, 15 Nov 2010 02:11:03 -0600 (CST)
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/nl-uiuc>
  • List-id: Natural language research announcements <nl-uiuc.cs.uiuc.edu>

Hi all,

We have two AIIS seminar talks lined up for the next seven days. Please see below for the details of both. If you have any questions, concerns, etc., please contact Yonatan Bisk (bisk1 AT illinois.edu), who is in charge of these seminars.

First Talk:
-------------

When: Nov 17, Wednesday, 2 pm.

Where: 3403 SC.

Speaker: Prof. Todd Zickler (http://www.eecs.harvard.edu/~zickler)

Title: Inferring Shape and Materials under Real-World Lighting

Abstract:
A vision system is tasked with inferring the observable properties of a scene---shape, materials, and so on---from one or more of its images. The task is made hard by the fact that the mapping from scene properties to images is many-to-one: for any given image, there are infinitely many scenes that could explain it.

A viable approach for dealing with this ambiguity is designing systems that combine prior visual experience with loose, redundant constraints induced by texture, shading, and various other aspects of optical stimulation. The basic idea is that each cue reduces the set of interpretations in some way, and by combining them, systems will be better equipped to sift through the infinite set of possibilities and arrive at a reasonable result.

To succeed with this approach, we need to understand the various ways that shape and materials are encoded in image data, and in this talk I will describe two that remain poorly understood. Each of these exists in the presence of complex "real-world" lighting, and for each I will summarize our recent progress in: 1) characterizing the constraints induced on a scene, and 2) creating algorithms for inference.

Bio:
Todd Zickler received the B.Eng. degree in honours electrical engineering from McGill University in 1996 and the Ph.D. degree in electrical engineering from Yale University in 2004 under the direction of Peter Belhumeur. He joined the School of Engineering and Applied Sciences, Harvard University, as an Assistant Professor in 2004 and was appointed John L. Loeb Associate Professor of the Natural Sciences in 2008. He is the Director of the Harvard Computer Vision Laboratory and a member of the Harvard Graphics, Vision and Interaction Group. His research is focused on modeling the interaction between light and materials, and on developing systems to extract scene information from visual data. His work is motivated by applications in face, object, and scene recognition; image-based rendering; image retrieval; image and video compression; robotics; and human-computer interfaces. Dr. Zickler is a recipient of the National Science Foundation CAREER Award and a Research Fellowship from the Alfred P. Sloan Foundation. His research is funded by the National Science Foundation, the Army Research Office, and the Office of Naval Research. More information can be found on his website: http://www.eecs.harvard.edu/~zickler.

Second Talk:
-------------------

When: Nov 19, Friday, 2 pm.

Where: 3405 SC.

Speaker: Prof. Jeff Siskind (https://engineering.purdue.edu/~qobi/)

Title: Investigating Embodied Intelligence via Assembly Imitation and Learning to Play Board Games

Abstract:
My research group is engaged in a concerted effort to ground the semantics of natural language in computer vision and robotic manipulation. We have designed a novel custom robotic platform to support this effort and built three copies of this platform to allow investigation of linguistic communication between robotic and human agents. We do this in the context of two tasks. In the first, one robot builds an assembly out of Lincoln Logs while a second robot observes that activity and communicates those observations, in natural language, to a third robot that must replicate the assembly. In the second, two robots play a board game while a third robot---one that does not know the game rules---observes the play and must infer those rules. These tasks are specifically designed to support investigation into integrating vision, robotics, natural language, learning, and planning with common semantic representations and stochastic inference mechanisms. This allows filling in missing information from multiple modalities. When the vision system cannot fully determine the Lincoln Log assembly structure due to occlusion, it can ask questions in natural language or integrate information from multiple views with different camera poses or taken at different assembly stages. Likewise, when there are insufficient training examples to learn the game rules, the learner can ask questions that can be answered either linguistically or by robotic demonstration. I will discuss the common stochastic inference mechanism, built on top of a novel probabilistic programming language augmented with automatic differentiation to support maximum-likelihood estimation of rich, complex models, and how this software architecture, together with our hardware platform and rich integrated tasks, reflects our vision for investigating embodied intelligence.

Joint work with Andrei Barbu, Seongwoon Ko, Siddharth Narayanaswamy, and Brian Thomas.

Bio:
Jeffrey M. Siskind received the B.A. degree in computer science from the Technion, Israel Institute of Technology, Haifa, in 1979, the S.M. degree in computer science from the Massachusetts Institute of Technology (M.I.T.), Cambridge, in 1989, and the Ph.D. degree in computer science from M.I.T. in 1992. He did a postdoctoral fellowship at the University of Pennsylvania Institute for Research in Cognitive Science from 1992 to 1993. He was an assistant professor in the University of Toronto Department of Computer Science from 1993 to 1995, a senior lecturer in the Technion Department of Electrical Engineering in 1996, a visiting assistant professor in the University of Vermont Department of Computer Science and Electrical Engineering from 1996 to 1997, and a research scientist at NEC Research Institute, Inc. from 1997 to 2001. He joined the Purdue University School of Electrical and Computer Engineering in 2002, where he is currently an associate professor. His research interests include machine vision, artificial intelligence, cognitive science, computational linguistics, child language acquisition, and programming languages and compilers.


Thanks!
Best,
Rajhans


Rajhans Samdani,
Graduate Student,
Dept. of Computer Science,
University of Illinois at Urbana-Champaign.

