
Papers

Scaling Speech Technology to 1,000+ Languages
Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli. In JMLR, 2024.
Abstract Blog Code
Efficient Self-supervised Learning with Contextualized Target Representations for Vision, Speech and Language
Alexei Baevski, Arun Babu, Wei-Ning Hsu, Michael Auli. In Proc. of ICML, 2023.
Abstract Blog Code
data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language
Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli. In arXiv, 2022.
Abstract Blog Code
XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale
Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. In arXiv, 2021.
Abstract Blog Code
Unsupervised Speech Recognition
Alexei Baevski, Wei-Ning Hsu, Alexis Conneau, Michael Auli. In Proc. of NeurIPS, 2021.
Abstract Blog Code
Beyond English-Centric Multilingual Machine Translation
Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli*, Armand Joulin*. In JMLR, 2020.
Abstract Blog Code
wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations
Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. In Proc. of NeurIPS, 2020.
Abstract Blog Code
Unsupervised Cross-lingual Representation Learning for Speech Recognition
Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli. In Proc. of Interspeech, 2021.
Abstract
Robust and On-the-fly Dataset Denoising for Image Classification
Jiaming Song, Lunjia Hu, Yann Dauphin, Michael Auli, Tengyu Ma. In Proc. of ECCV, 2020.
Abstract
Effectiveness of self-supervised pre-training for speech recognition
Alexei Baevski, Michael Auli, Abdelrahman Mohamed. In Proc. of ICASSP, 2020.
Abstract
Depth-Adaptive Transformer
Maha Elbayad, Jiatao Gu, Edouard Grave, Michael Auli. In Proc. of ICLR, 2020.
Abstract
On The Evaluation of Machine Translation Systems Trained With Back-Translation
Sergey Edunov, Myle Ott, Marc'Aurelio Ranzato, Michael Auli. In Proc. of ACL, 2020.
Abstract
vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations
Alexei Baevski, Steffen Schneider, Michael Auli. In Proc. of ICLR, 2020.
Abstract
The Source-Target Domain Mismatch Problem in Machine Translation
Jiajun Shen, Peng-Jen Chen, Matt Le, Junxian He, Jiatao Gu, Myle Ott, Michael Auli, Marc'Aurelio Ranzato. In arXiv, 2019.
Abstract
Simple and Effective Noisy Channel Modeling for Neural Machine Translation
Kyra Yee, Nathan Ng, Yann N Dauphin, Michael Auli. In Proc. of EMNLP, 2019.
Abstract Code
ELI5: Long Form Question Answering
Angela Fan, Yacine Jernite, Ethan Perez, Jason Weston, Michael Auli. In Proc. of ACL, 2019.
Abstract Data Blog
Facebook FAIR's WMT19 News Translation Task Submission
Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, Sergey Edunov. In Proc. of WMT, 2019.
Abstract Blog Code
wav2vec: Unsupervised Pre-training for Speech Recognition
Steffen Schneider, Alexei Baevski, Ronan Collobert, Michael Auli. In Proc. of Interspeech, 2019.
Abstract Blog Code
fairseq: A Fast, Extensible Toolkit for Sequence Modeling
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli. In Proc. of NAACL, Demonstrations, 2019.
Abstract Code
GLOSS: Generative Latent Optimization of Sentence Representations
Sidak Pal Singh, Angela Fan, Michael Auli. In Proc. of WMT, 2019.
Abstract
Pre-trained Language Model Representations for Language Generation
Sergey Edunov, Alexei Baevski, Michael Auli. In Proc. of NAACL, 2019.
Abstract Code
Cloze-driven Pretraining of Self-attention Networks
Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, Michael Auli. In arXiv, 2019.
Abstract
Mixture Models for Diverse Machine Translation: Tricks of the Trade
Tianxiao Shen, Myle Ott, Michael Auli, Marc'Aurelio Ranzato. In Proc. of ICML, 2019.
Abstract Code
Modeling Human Motion with Quaternion-based Neural Networks
Dario Pavllo, Christoph Feichtenhofer, Michael Auli, David Grangier. In IJCV, 2019.
Abstract
Pay Less Attention with Lightweight and Dynamic Convolutions
Felix Wu, Angela Fan, Alexei Baevski, Yann N Dauphin, Michael Auli. In Proc. of ICLR, 2019.
Abstract Code
Wizard of Wikipedia: Knowledge-Powered Conversational Agents
Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, Jason Weston. In Proc. of ICLR, 2019.
Abstract
Adaptive Input Representations for Neural Language Modeling
Alexei Baevski, Michael Auli. In Proc. of ICLR, 2019.
Abstract Code
3D human pose estimation in video with temporal convolutions and semi-supervised training
Dario Pavllo, Christoph Feichtenhofer, David Grangier, Michael Auli. In Proc. of CVPR, 2019.
Abstract Code
Understanding Back-Translation at Scale
Sergey Edunov, Myle Ott, David Grangier, Michael Auli. In Proc. of EMNLP, 2018.
Abstract Code
Scaling Neural Machine Translation
Myle Ott, Sergey Edunov, David Grangier, Michael Auli. In Proc. of WMT, 2018.
Abstract Code
QuaterNet: A Quaternion-based Recurrent Model for Human Motion
Dario Pavllo, David Grangier, Michael Auli. In Proc. of BMVC, 2018.
Abstract Code
Analyzing Uncertainty in Neural Machine Translation
Myle Ott, Michael Auli, David Grangier, Marc'Aurelio Ranzato. In Proc. of ICML, 2018.
Abstract
Classical Structured Prediction Losses for Sequence to Sequence Learning
Sergey Edunov, Myle Ott, Michael Auli, David Grangier, Marc'Aurelio Ranzato. In Proc. of NAACL, 2018.
Abstract Code
Controllable Abstractive Summarization
Angela Fan, David Grangier, Michael Auli. In arXiv:1711.05217, 2017.
Abstract
QuickEdit: Editing Text & Translations via Simple Delete Actions
David Grangier, Michael Auli. In Proc. of NAACL, 2018.
Abstract
Convolutional Sequence to Sequence Learning
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin. In Proc. of ICML, 2017.
Abstract Blog Code
Language Modeling with Gated Convolutional Networks
Yann N. Dauphin, Angela Fan, Michael Auli and David Grangier. In Proc. of ICML, 2017.
Abstract
A Convolutional Encoder Model for Neural Machine Translation
Jonas Gehring, Michael Auli, David Grangier, Yann N. Dauphin. In Proc. of ACL, 2017.
Abstract
Iterative Refinement for Machine Translation
Roman Novak, Michael Auli, David Grangier. In arXiv:1610.06602, 2016.
Abstract
Vocabulary Selection Strategies for Neural Machine Translation
Gurvan L'Hostis, David Grangier, Michael Auli. In arXiv:1610.00072, 2016.
Abstract
Neural Text Generation from Structured Data with Application to the Biography Domain
Remi Lebret, David Grangier, and Michael Auli. In Proc. of EMNLP, 2016.
Abstract Data
Neural Network-based Word Alignment through Score Aggregation
Joel Legrand, Michael Auli, and Ronan Collobert. In Proc. of WMT, 2016.
Abstract
Strategies for Training Large Vocabulary Neural Language Models
Wenlin Chen, David Grangier, Michael Auli. In Proc. of ACL, 2016.
Abstract
Expected F-Measure Training for Shift-Reduce Parsing with Recurrent Neural Networks
Wenduan Xu, Michael Auli, and Stephen Clark. In Proc. of NAACL, 2016.
Abstract
Abstractive Sentence Summarization with Attentive Recurrent Neural Networks
Sumit Chopra, Michael Auli, and Alexander M. Rush. In Proc. of NAACL, 2016.
Abstract
Sequence Level Training with Recurrent Neural Networks
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. In Proc. of ICLR, 2016.
Abstract Code
CCG Supertagging with a Recurrent Neural Network
Wenduan Xu, Michael Auli, and Stephen Clark. In Proc. of ACL, 2015.
Abstract
deltaBLEU: A Discriminative Metric for Generation Tasks with Intrinsically Diverse Targets
Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao and Bill Dolan. In Proc. of ACL, 2015.
Abstract
Learning Translation Models from Monolingual Continuous Representations
Kai Zhao, Hany Hassan, and Michael Auli. In Proc. of NAACL, 2015.
Abstract
A Neural Network Approach to Context-Sensitive Generation of Conversational Responses
Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Meg Mitchell, Jianfeng Gao, Bill Dolan, and Jian-Yun Nie. In Proc. of NAACL, 2015.
Abstract
Large Scale Expected BLEU Training of Phrase-based Reordering Models
Michael Auli, Michel Galley, and Jianfeng Gao. In Proc. of EMNLP, 2014.
Abstract
Decoder Integration and Expected BLEU Training for Recurrent Neural Network Language Models
Michael Auli and Jianfeng Gao. In Proc. of ACL, 2014.
Abstract
Minimum Translation Modeling with Recurrent Neural Networks
Yuening Hu, Michael Auli, Qin Gao, and Jianfeng Gao. In Proc. of EACL, 2014.
Abstract
Joint Language and Translation Modeling with Recurrent Neural Networks
Michael Auli, Michel Galley, Chris Quirk, and Geoffrey Zweig. In Proc. of EMNLP, 2013.
Abstract
Integrated Supertagging and Parsing
Ph.D. Thesis, University of Edinburgh. 2012.
Abstract
Training a Log-Linear Parser with Loss Functions via Softmax-Margin
Michael Auli and Adam Lopez. In Proc. of EMNLP, 2011.
Abstract
A Comparison of Loopy Belief Propagation and Dual Decomposition for Integrated CCG Supertagging and Parsing
Michael Auli and Adam Lopez. In Proc. of ACL, 2011.
Abstract
Efficient CCG Parsing: A* versus Adaptive Supertagging
Michael Auli and Adam Lopez. In Proc. of ACL, 2011.
Abstract
CCG-based Models for Statistical Machine Translation
First-Year Ph.D. Report, University of Edinburgh. 2009.
Abstract
A Systematic Analysis of Translation Model Search Spaces
Michael Auli, Adam Lopez, Hieu Hoang, and Philipp Koehn. In Proc. of WMT, 2009.
Abstract

Talks

Feel free to use my talk slides, but please acknowledge the source if you do.

Unified self-supervised learning for speech, vision and NLP
Talk at the ECCV Workshop on Perception.
wav2vec: Self-supervised learning of speech representations
Talk at MIT, CMU, and the University of Edinburgh, Spring 2021.
Efficient Sequence Modeling
Talk at WNGT'19, Stanford, Berkeley, Nov 2019.
Sequence to Sequence Learning: Fast Training and Inference with Gated Convolutions
Talk at Johns Hopkins University, Oct 2017.
Learning to translate with neural networks
Talk at Facebook, Google, Amazon and the University of Washington, 2014.
Integrated Parsing and Tagging
Talk at Carnegie Mellon University, Johns Hopkins University, BBN Technologies, IBM Research and Microsoft Research, 2011.