nl-uiuc AT lists.cs.illinois.edu
Subject: Natural language research announcements
List archive
- From: Yonatan Bisk <bisk1 AT illinois.edu>
- To: nl-uiuc AT cs.uiuc.edu, aivr AT cs.uiuc.edu, dais AT cs.uiuc.edu, cogcomp AT cs.uiuc.edu, vision AT cs.uiuc.edu, aiis AT cs.uiuc.edu, aistudents AT cs.uiuc.edu, "Girju, Corina R" <girju AT illinois.edu>, Eyal Amir <eyal AT cs.uiuc.edu>
- Subject: [nl-uiuc] Reminder: AIIS talk: Jonathan Berant ( _Today_ @ 2pm )
- Date: Fri, 21 Jan 2011 09:39:18 -0600
- List-archive: <http://lists.cs.uiuc.edu/pipermail/nl-uiuc>
- List-id: Natural language research announcements <nl-uiuc.cs.uiuc.edu>
When: _Today_, Friday January 21 @ 2pm
Where: 3405 SC
Speaker: Jonathan Berant ( http://www.cs.tau.ac.il/~jonatha6/ )
Title: Global Learning of Entailment Graphs
Abstract: One of the key challenges in developing natural language
understanding applications such as Question Answering, Information
Retrieval, or Information Extraction is overcoming the variability of
semantic expression, namely the fact that the same meaning can be
expressed in natural language by many phrases. In this work, we
address a crucial component of this problem: learning inference rules
or entailment rules between natural language predicates, such as “X
buy from Y --> Y sell to X”.
Previous work has focused on estimating each entailment rule
independently of others, but clearly there are interactions between
different entailment rules. We address this issue by modelling the
problem of learning entailment rules as a graph learning problem
(termed “entailment graphs”), and attempt to learn graphs that are
“coherent” in the sense that they obey certain global properties. We
formulate the problem as an Integer Linear Program (ILP) and introduce
two algorithms that scale the use of ILP solvers to larger entailment
graphs. We learn entailment graphs in two scenarios: (1) where one of
the arguments is instantiated (X increase asthma symptoms --> X
affects asthma), and (2) where the arguments are typed (X_country
conquer Y_city --> X_country invade Y_city), and show an improvement in performance
over previous state-of-the-art algorithms. We also show that our
scaling techniques increase the recall of the algorithm without
harming precision.
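The global-learning idea in the abstract can be illustrated with a toy sketch. This is not the paper's implementation (which formulates the problem as an ILP and introduces scaling algorithms); it is a brute-force stand-in with hypothetical predicates and local scores, showing what it means to pick the edge set that maximizes local entailment scores subject to the global transitivity ("coherence") constraint:

```python
# Illustrative sketch only: global learning of a tiny entailment graph.
# Local scores s(i, j) estimate whether predicate i entails predicate j;
# we search for the edge set maximizing total score subject to transitivity
# (i -> j and j -> k imply i -> k), the coherence property the ILP enforces.
from itertools import combinations

# Hypothetical predicates and local entailment scores (positive = "likely entails").
predicates = ["X buy from Y", "X acquire from Y", "X purchase from Y"]
scores = {
    (0, 1): 0.8, (1, 0): 0.7,
    (0, 2): 0.6, (2, 0): 0.5,
    (1, 2): -0.4, (2, 1): 0.3,
}

def is_transitive(edges):
    """True if the edge set is transitively closed (ignoring self-loops)."""
    edge_set = set(edges)
    return all((i, k) in edge_set
               for (i, j) in edge_set
               for (j2, k) in edge_set
               if j == j2 and i != k)

def best_entailment_graph(scores):
    """Brute-force the score-maximizing transitive edge set.
    Exponential in the number of candidate edges; the paper's ILP
    formulation and scaling algorithms handle realistic graph sizes."""
    candidates = list(scores)
    best_edges, best_score = frozenset(), 0.0
    for r in range(len(candidates) + 1):
        for subset in combinations(candidates, r):
            if is_transitive(subset):
                total = sum(scores[e] for e in subset)
                if total > best_score:
                    best_edges, best_score = frozenset(subset), total
    return best_edges, best_score
```

In this toy instance, taking every positively scored edge violates transitivity (e.g. "acquire --> buy" and "buy --> purchase" force "acquire --> purchase"), so the globally best graph accepts one negatively scored edge to stay coherent — the kind of interaction between rules that purely local estimation misses.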
This work is based on the paper "Global Learning of Focused Entailment Graphs":
http://www.cs.tau.ac.il/~jonatha6/homepage_files/publications/ACL10.pdf
and on recently-submitted work performed at The University of
Washington. This is joint work with Ido Dagan and Jacob Goldberger.
Bio: Jonathan Berant is a PhD student at Tel Aviv University,
working in Bar-Ilan University's NLP group under the supervision of
Ido Dagan and Jacob Goldberger.