Alexei Baevski
Facebook AI Research
Verified email at fb.com
Title · Cited by · Year
fairseq: A fast, extensible toolkit for sequence modeling
M Ott, S Edunov, A Baevski, A Fan, S Gross, N Ng, D Grangier, M Auli
arXiv preprint arXiv:1904.01038, 2019
Cited by 106 · 2019
Pay less attention with lightweight and dynamic convolutions
F Wu, A Fan, A Baevski, YN Dauphin, M Auli
arXiv preprint arXiv:1901.10430, 2019
Cited by 75 · 2019
Adaptive input representations for neural language modeling
A Baevski, M Auli
arXiv preprint arXiv:1809.10853, 2018
Cited by 54 · 2018
Cloze-driven pretraining of self-attention networks
A Baevski, S Edunov, Y Liu, L Zettlemoyer, M Auli
arXiv preprint arXiv:1903.07785, 2019
Cited by 25 · 2019
Pre-trained language model representations for language generation
S Edunov, A Baevski, M Auli
arXiv preprint arXiv:1903.09722, 2019
Cited by 20 · 2019
wav2vec: Unsupervised pre-training for speech recognition
S Schneider, A Baevski, R Collobert, M Auli
arXiv preprint arXiv:1904.05862, 2019
Cited by 19 · 2019
Facebook FAIR's WMT19 News Translation Task Submission
N Ng, K Yee, A Baevski, M Ott, M Auli, S Edunov
arXiv preprint arXiv:1907.06616, 2019
Cited by 7 · 2019
vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations
A Baevski, S Schneider, M Auli
arXiv preprint arXiv:1910.05453, 2019
Cited by 4 · 2019
Effectiveness of self-supervised pre-training for speech recognition
A Baevski, M Auli, A Mohamed
arXiv preprint arXiv:1911.03912, 2019
Year: 2019
Articles 1–9