Deep Learning Natural Language Processing (DNLP)

(The course design is adapted from Stanford's NLP with Deep Learning course, with some modifications.)

Gary Geunbae Lee, Eng 2-211, gblee@postech.ac.kr, 279-2254

 

1.     Course objectives

 

This course will cover cutting-edge research in deep learning for natural language processing. Through lectures, students will learn the skills needed to design, implement, and understand their own neural network models for a range of NLP problems, including word embedding and contextual word embedding, text classification, syntactic parsing, recurrent language modeling, machine translation, question answering, natural language generation, dialog systems, and multi-task deep learning models.

 

2.     Course prerequisites

 

No required prerequisites.

 

3.     Grading

 

Midterm: 35%
Final: 35%
Programming term project: 30%

 

4.     Texts or references

 

Dan Jurafsky and James H. Martin. Speech and Language Processing (3rd ed. draft)

Jacob Eisenstein. Natural Language Processing

Yoav Goldberg. A Primer on Neural Network Models for Natural Language Processing

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning

Michael A. Nielsen. Neural Networks and Deep Learning

Eugene Charniak. Introduction to Deep Learning

 

 

5.     Others

 

    Instruction language: English

    Programming term project: design and implement your own deep neural network-based model for a typical NLP task (see the illustrative sketch below).
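
    As a rough, hypothetical illustration only (not a required template or the official project specification), a term project might start from a minimal PyTorch text classifier like the sketch below. The model class, the toy data, and all hyperparameters are invented placeholders.

        # Hypothetical starting-point sketch; all names, data, and
        # hyperparameters here are invented for illustration only.
        import torch
        import torch.nn as nn

        VOCAB_SIZE, EMBED_DIM, NUM_CLASSES = 100, 32, 2

        # Toy corpus: (token-id sequence, class label) pairs. A real
        # project would use a tokenizer and a benchmark dataset.
        train_data = [
            (torch.tensor([4, 7, 12]), torch.tensor(1)),
            (torch.tensor([9, 3, 3, 20]), torch.tensor(0)),
        ]

        class BagOfEmbeddingsClassifier(nn.Module):
            """Averages word embeddings, then applies a linear classifier."""
            def __init__(self, vocab_size, embed_dim, num_classes):
                super().__init__()
                self.embedding = nn.Embedding(vocab_size, embed_dim)
                self.classifier = nn.Linear(embed_dim, num_classes)

            def forward(self, token_ids):
                # (seq_len,) ids -> (seq_len, embed_dim) embeddings,
                # mean-pooled into one vector, then scored per class.
                return self.classifier(self.embedding(token_ids).mean(dim=0))

        model = BagOfEmbeddingsClassifier(VOCAB_SIZE, EMBED_DIM, NUM_CLASSES)
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        # Standard training loop: forward pass, loss, backward pass, update.
        for token_ids, label in train_data:
            optimizer.zero_grad()
            logits = model(token_ids)
            loss = loss_fn(logits.unsqueeze(0), label.unsqueeze(0))
            loss.backward()
            optimizer.step()

    A full project would replace the toy data with a standard benchmark dataset, batch and pad the inputs, and report results on held-out test data.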

 

6.     Course schedule

 

DNLP overview

Word embedding

Text classification / backpropagation (BP)

Neural dependency parsing

Neural language modeling (RNN)

Neural machine translation (seq-to-seq)

Some NLP project data

Question answering

ConvNets for NLP

Sub-word modeling

Contextual word embedding (BERT/Transformer)

NLG (dialog, summarization)

Coreference resolution

Multi-task DNLP

Tree RNN

Future of deep learning NLP