[nl-uiuc] Upcoming talk at the AIIS seminar (this Thursday).


  • From: "Alexandre Klementiev" <klementi AT uiuc.edu>
  • To: nl-uiuc AT cs.uiuc.edu, aivr AT cs.uiuc.edu, dais AT cs.uiuc.edu, cogcomp AT cs.uiuc.edu, vision AT cs.uiuc.edu, krr-group AT cs.uiuc.edu, group AT vision2.ai.uiuc.edu
  • Subject: [nl-uiuc] Upcoming talk at the AIIS seminar (this Thursday).
  • Date: Tue, 29 Apr 2008 13:09:36 -0500
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/nl-uiuc>
  • List-id: Natural language research announcements <nl-uiuc.cs.uiuc.edu>

Dear faculty and students,

Dr. Xavier Carreras will give a talk (details below) at the AIIS seminar this Thursday. There will be a student meeting with Xavier at 1pm in Siebel 3403; if you would like to meet with him personally, please let me know.

Thank you,
Alex


Title: Full Syntactic Parsing with Dynamic Programming and the Perceptron
Speaker: Xavier Carreras, MIT
Date: May 1, 4:00pm
Location: Siebel 3405


Abstract:

I will describe an efficient, expressive model for natural language parsing that uses the perceptron algorithm in conjunction with dynamic programming search. An advantage of dynamic programming approaches is that a very large set of possible parse structures can be considered for a given sentence. Moreover, the discriminative nature of our model allows a great deal of flexibility in representing candidate parse trees for a sentence, including PCFG-based features, dependency relations, and surface features of the sentence.
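To make the combination concrete, here is a minimal sketch (not the speakers' actual system) of a structured perceptron update wired to a pluggable dynamic-programming decoder. The `decode` function, the toy `(rule, head_index)` parse representation, and the feature map are all illustrative assumptions:

```python
# Hypothetical sketch: structured perceptron with a DP decoder.
# `decode(sentence, weights)` is assumed to run a dynamic-programming
# search (e.g. CKY) and return the highest-scoring parse under the
# current weights. A "parse" here is a toy list of (rule, head_index)
# pairs; real systems use richer structures.
from collections import defaultdict

def features(sentence, parse):
    # Toy surface features: one count per (rule, head word) pair.
    feats = defaultdict(int)
    for rule, head in parse:
        feats[(rule, sentence[head])] += 1
    return feats

def perceptron_update(weights, sentence, gold_parse, decode):
    """One perceptron step: decode, then reward gold features and
    penalize predicted features when the decoder is wrong."""
    predicted = decode(sentence, weights)
    if predicted != gold_parse:
        for f, v in features(sentence, gold_parse).items():
            weights[f] += v
        for f, v in features(sentence, predicted).items():
            weights[f] -= v
    return weights
```

The update touches only the features of the gold and predicted parses, so its cost is dominated by the decoder call — which is the efficiency issue the abstract turns to next.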

A critical problem when training a discriminative model for a task on the scale of natural language parsing is the efficiency of the parsing algorithm involved. Discriminative training algorithms, such as the averaged perceptron, require repeatedly parsing the training set under the current model. But factors that directly affect the efficiency of the algorithms, such as sentence length and the number of syntactic labels in the parses, are usually very large in real parsing tasks, making exact inference virtually intractable. As a result, despite the potential advantages of discriminative parsing methods, there has been little previous work on full constituent parsing without the use of fairly severe restrictions.
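The training loop below is a hedged sketch of why decoder speed dominates: each epoch of averaged-perceptron training re-parses every training sentence under the current model. The `decode` and `features` callables are assumptions supplied by the caller, not part of the talk's system:

```python
# Illustrative averaged-perceptron training loop (after Collins, 2002).
# Every example is re-decoded each epoch, so total cost is roughly
# epochs * |training set| * (cost of one parse).
from collections import defaultdict

def train_averaged(examples, decode, features, epochs=3):
    """examples: nonempty list of (sentence, gold) pairs.
    Returns weights averaged over all intermediate weight vectors,
    a standard stabilization trick for the perceptron."""
    weights = defaultdict(float)
    totals = defaultdict(float)   # running sum of weights for averaging
    count = 0
    for _ in range(epochs):
        for sentence, gold in examples:
            predicted = decode(sentence, weights)   # the expensive step
            if predicted != gold:
                for f, v in features(sentence, gold).items():
                    weights[f] += v
                for f, v in features(sentence, predicted).items():
                    weights[f] -= v
            for f, v in weights.items():
                totals[f] += v
            count += 1
    return {f: v / count for f, v in totals.items()}
```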

I will show that training models for full constituent parsing is possible, using a formalism closely related to Tree Adjoining Grammars that can be parsed efficiently with dependency parsing algorithms. Key to the efficiency of our approach is a lower-order model that prunes the space of full parse structures and performs remarkably well. Experiments on parsing the Penn WSJ treebank show that the model gives state-of-the-art accuracy.
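The pruning idea can be sketched in a few lines. This is an assumed coarse-to-fine scheme, not the talk's actual pruner: a cheap `coarse_score` ranks candidate chart items, and only items within a beam of the best survive for the full model to score:

```python
# Hypothetical coarse-to-fine pruning: a cheap lower-order model scores
# every candidate item once, and items falling more than `threshold`
# below the best coarse score are discarded before the expensive full
# model considers the remaining space.
def prune_items(items, coarse_score, threshold):
    scores = {item: coarse_score(item) for item in items}
    best = max(scores.values())
    return [item for item in items if scores[item] >= best - threshold]
```

The threshold trades speed against the risk of pruning away the true best parse; a pruner that "performs remarkably well" keeps that risk small while shrinking the search space drastically.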

Joint work with Michael Collins and Terry Koo.

Bio:

Xavier Carreras is a postdoctoral fellow at MIT CSAIL. He studied computer science at the Technical University of Catalonia in Barcelona, where he received his PhD in Artificial Intelligence in 2005. His research interests are in natural language processing and machine learning, and he has worked on syntactic parsing, semantic role labeling, and named entity extraction, among other tasks.

