I have been a Research Scientist at Facebook AI Research since 2015. Previously, I was a postdoc at Microsoft Research in Redmond, where I worked on improving machine translation with deep learning. I earned my Ph.D. at the University of Edinburgh for work on syntactic parsing with approximate inference, which led to the best reported results on CCGbank.

News

  • Our paper "Generating Text from Structured Data with Application to the Biography Domain" was accepted at EMNLP 2016.
  • We released the code for our paper "Sequence Level Training with Recurrent Neural Networks".

Papers

Neural Network-based Word Alignment through Score Aggregation
Joel Legrand, Michael Auli, and Ronan Collobert. In Proceedings of WMT, 2016.
Strategies for Training Large Vocabulary Neural Language Models
Wenlin Chen, David Grangier, and Michael Auli. In Proceedings of ACL, 2016.
Expected F-Measure Training for Shift-Reduce Parsing with Recurrent Neural Networks
Wenduan Xu, Michael Auli, and Stephen Clark. In Proceedings of NAACL, 2016.
Abstractive Sentence Summarization with Attentive Recurrent Neural Networks
Sumit Chopra, Michael Auli, and Alexander M. Rush. In Proceedings of NAACL, 2016.
Sequence Level Training with Recurrent Neural Networks
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. In Proceedings of ICLR, 2016.
Code
CCG Supertagging with a Recurrent Neural Network
Wenduan Xu, Michael Auli, and Stephen Clark. In Proceedings of ACL, 2015.
deltaBLEU: A Discriminative Metric for Generation Tasks with Intrinsically Diverse Targets
Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, and Bill Dolan. In Proceedings of ACL, 2015.
Learning Translation Models from Monolingual Continuous Representations
Kai Zhao, Hany Hassan, and Michael Auli. In Proceedings of NAACL, 2015.
A Neural Network Approach to Context-Sensitive Generation of Conversational Responses
Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Meg Mitchell, Jianfeng Gao, Bill Dolan, and Jian-Yun Nie. In Proceedings of NAACL, 2015.
Large Scale Expected BLEU Training of Phrase-based Reordering Models
Michael Auli, Michel Galley, and Jianfeng Gao. In Proceedings of EMNLP, 2014.
Decoder Integration and Expected BLEU Training for Recurrent Neural Network Language Models
Michael Auli and Jianfeng Gao. In Proceedings of ACL, 2014.
Minimum Translation Modeling with Recurrent Neural Networks
Yuening Hu, Michael Auli, Qin Gao, and Jianfeng Gao. In Proceedings of EACL, 2014.
Joint Language and Translation Modeling with Recurrent Neural Networks
Michael Auli, Michel Galley, Chris Quirk, and Geoffrey Zweig. In Proceedings of EMNLP, 2013.
Integrated Supertagging and Parsing
Michael Auli. Ph.D. Thesis, University of Edinburgh, 2012.
Training a Log-Linear Parser with Loss Functions via Softmax-Margin
Michael Auli and Adam Lopez. In Proceedings of EMNLP, 2011.
A Comparison of Loopy Belief Propagation and Dual Decomposition for Integrated CCG Supertagging and Parsing
Michael Auli and Adam Lopez. In Proceedings of ACL, 2011.
Efficient CCG Parsing: A* versus Adaptive Supertagging
Michael Auli and Adam Lopez. In Proceedings of ACL, 2011.
CCG-based Models for Statistical Machine Translation
Michael Auli. First-Year Ph.D. Report, University of Edinburgh, 2009.
A Systematic Analysis of Translation Model Search Spaces
Michael Auli, Adam Lopez, Hieu Hoang, and Philipp Koehn. In Proceedings of the Fourth Workshop on Statistical Machine Translation, 2009.

Talks

Learning to translate with neural networks
Talks at Facebook, Google, Amazon, and the University of Washington, 2014.
Integrated Parsing and Tagging
Talks at Carnegie Mellon University, Johns Hopkins University, BBN Technologies, IBM Research, and Microsoft Research, 2011.