Neural Networks for NLP

Graduate Course, Carnegie Mellon University, Language Technologies Institute, 2017

I was a teaching assistant for the class “Neural Networks for NLP” (CMU 11-747). Working with Prof. Graham Neubig, I helped design part of the course syllabus, quizzes, and projects, and I was also responsible for grading the quizzes and projects.

Lecture

I gave a lecture on “Coreference and Discourse Parsing”. After introducing the basic modeling variants for coreference, I presented recent neural-network improvements to entity coreference and the reasons behind them. I then covered neural approaches to discourse parsing, and also mentioned work that lies between coreference and discourse parsing, such as script modeling. A small illustrative sketch of one modeling variant follows below.
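For illustration, here is a minimal sketch of the mention-pair variant of neural coreference scoring, one of the basic modeling variants the lecture covered: a small feed-forward network scores whether a candidate antecedent and a mention corefer. The class name, dimensions, and choice of PyTorch are my own assumptions here, not the course's reference implementation.

    import torch
    import torch.nn as nn

    class MentionPairScorer(nn.Module):
        """Hypothetical mention-pair scorer: higher score = more likely coreferent."""

        def __init__(self, mention_dim: int, hidden_dim: int = 128):
            super().__init__()
            # Input features: antecedent embedding, mention embedding, and
            # their element-wise product, concatenated into one vector.
            self.ffnn = nn.Sequential(
                nn.Linear(3 * mention_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, 1),
            )

        def forward(self, antecedent: torch.Tensor, mention: torch.Tensor) -> torch.Tensor:
            features = torch.cat([antecedent, mention, antecedent * mention], dim=-1)
            return self.ffnn(features).squeeze(-1)

    # Usage: score one candidate antecedent against a mention (random embeddings here).
    scorer = MentionPairScorer(mention_dim=64)
    score = scorer(torch.randn(64), torch.randn(64))

In a full coreference model, each mention is scored against all preceding candidate antecedents (plus a "no antecedent" option) and the scores are normalized into a distribution; the sketch above shows only the pairwise scoring step.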

Lecture Video

Course Description

Neural networks provide powerful new tools for modeling language, and have been used both to improve the state of the art in a number of tasks and to tackle new problems that were previously difficult to address. This class will start with a brief overview of neural networks, then spend the majority of the class demonstrating how to apply neural networks to natural language problems. Each section will introduce a particular problem or phenomenon in natural language, describe why it is difficult to model, and demonstrate several models that were designed to tackle this problem. In the process of doing so, the class will cover different techniques that are useful in creating neural network models, including handling variably sized and structured sentences, efficient handling of large data, semi-supervised and unsupervised learning, structured prediction, and multilingual modeling. A small sketch of one such technique appears below.
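As a concrete example of one technique named above, handling variably sized sentences, here is a minimal sketch of padding a batch of sentences to a common length and masking the padded positions so they do not affect the computation. The shapes, the use of PyTorch, and the masked-mean example are assumptions for illustration, not course material.

    import torch
    from torch.nn.utils.rnn import pad_sequence

    # Three sentences of different lengths, each token a 32-dim embedding.
    sentences = [torch.randn(5, 32), torch.randn(9, 32), torch.randn(3, 32)]

    # Pad to the longest sentence in the batch: shape (3, 9, 32), zero-padded.
    batch = pad_sequence(sentences, batch_first=True)
    lengths = torch.tensor([s.size(0) for s in sentences])

    # Boolean mask, True at real tokens and False at padding: shape (3, 9).
    mask = torch.arange(batch.size(1))[None, :] < lengths[:, None]

    # Example use: a mean over tokens that ignores padded positions.
    summed = (batch * mask.unsqueeze(-1).float()).sum(dim=1)
    mean = summed / lengths.unsqueeze(-1).float()

The mask generalizes beyond the mean shown here; for instance, attention weights at masked positions can be set to a large negative value before the softmax so padding receives no attention.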