Document-level Models (11/29/2018)
This class will feature a discussion of document-level models. There will be no quiz, but feel free to take a look at the following references.
 - Reference: Achieving Human Parity on Automatic Chinese to English News Translation
 - Reference: Has Machine Translation Achieved Human Parity? A Case for Document-level Evaluation
 - Reference: RNN Language Models (Mikolov et al. 2010)
 - Reference: Larger Context RNNLMs (Mikolov and Zweig 2012)
 - Reference: Document Context Neural Language Models (Ji et al. 2015)
 - Reference: Larger Context Language Modeling (Wang and Cho 2016)
 - Reference: Recurrent Self-attentional Language Models
 - Reference: Cache-based Document-level Statistical MT
 - Reference: Document-wide Decoding for Statistical MT
 - Reference: Exploiting Cross-sentence Context for Neural Machine Translation
 - Reference: Learning to Remember Translation History with a Continuous Cache
 - Reference: Document Context Neural Machine Translation with Memory Networks
 - Reference: Does Neural Machine Translation Benefit from Larger Context?
 - Reference: Evaluating Discourse Phenomena in Neural Machine Translation
 - Reference: Neural Machine Translation with Extended Context
 - Reference: A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues
 - Reference: Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models
 - Reference: A Large-Scale Test Set for the Evaluation of Context-Aware Pronoun Translation in Neural Machine Translation
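Several of the references above (e.g., the cache-based document-level SMT and continuous-cache papers) share a common core idea: interpolate a static language model with a dynamic, document-level cache of recently seen words, so that vocabulary repeated within a document becomes more probable. A minimal sketch of that interpolation, assuming a simple count-based base model (all names and the interpolation weight are illustrative, not taken from any specific paper):

```python
from collections import Counter

def cache_lm_prob(word, history, base_counts, lam=0.3):
    """Illustrative cache-interpolated unigram probability.

    P(w) = (1 - lam) * P_base(w) + lam * P_cache(w),
    where P_cache is the relative frequency of w among the words
    seen so far in the current document (`history`).
    """
    total = sum(base_counts.values())
    p_base = base_counts.get(word, 0) / total
    # Cache distribution: empirical frequency over the document so far.
    cache = Counter(history)
    p_cache = cache[word] / len(history) if history else 0.0
    return (1 - lam) * p_base + lam * p_cache

# Toy usage: "cat" and "dog" are equally rare in the base model,
# but "cat" has appeared earlier in the document, so the cache
# boosts its probability above "dog".
base = Counter({"the": 100, "cat": 1, "dog": 1})
print(cache_lm_prob("cat", ["cat", "cat"], base))
print(cache_lm_prob("dog", ["cat", "cat"], base))
```

The neural variants discussed in class replace the count-based cache with a continuous memory (e.g., key-value pairs of hidden states), but the interpolation structure is analogous.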