Alexei Baevski
Facebook AI Research
fairseq: A fast, extensible toolkit for sequence modeling
M Ott, S Edunov, A Baevski, A Fan, S Gross, N Ng, D Grangier, M Auli
arXiv preprint arXiv:1904.01038, 2019
Pay less attention with lightweight and dynamic convolutions
F Wu, A Fan, A Baevski, YN Dauphin, M Auli
arXiv preprint arXiv:1901.10430, 2019
wav2vec: Unsupervised pre-training for speech recognition
S Schneider, A Baevski, R Collobert, M Auli
arXiv preprint arXiv:1904.05862, 2019
Adaptive input representations for neural language modeling
A Baevski, M Auli
arXiv preprint arXiv:1809.10853, 2018
Cloze-driven pretraining of self-attention networks
A Baevski, S Edunov, Y Liu, L Zettlemoyer, M Auli
arXiv preprint arXiv:1903.07785, 2019
Facebook FAIR's WMT19 News Translation Task Submission
N Ng, K Yee, A Baevski, M Ott, M Auli, S Edunov
arXiv preprint arXiv:1907.06616, 2019
wav2vec 2.0: A framework for self-supervised learning of speech representations
A Baevski, H Zhou, A Mohamed, M Auli
arXiv preprint arXiv:2006.11477, 2020
vq-wav2vec: Self-supervised learning of discrete speech representations
A Baevski, S Schneider, M Auli
arXiv preprint arXiv:1910.05453, 2019
Pre-trained language model representations for language generation
S Edunov, A Baevski, M Auli
arXiv preprint arXiv:1903.09722, 2019
Effectiveness of self-supervised pre-training for speech recognition
A Baevski, M Auli, A Mohamed
arXiv preprint arXiv:1911.03912, 2019
Unsupervised cross-lingual representation learning for speech recognition
A Conneau, A Baevski, R Collobert, A Mohamed, M Auli
arXiv preprint arXiv:2006.13979, 2020
Self-training and Pre-training are Complementary for Speech Recognition
Q Xu, A Baevski, T Likhomanenko, P Tomasello, A Conneau, R Collobert, ...
arXiv preprint arXiv:2010.11430, 2020
The Zero Resource Speech Benchmark 2021: Metrics and baselines for unsupervised spoken language modeling
TA Nguyen, M de Seyssel, P Rozé, M Rivière, E Kharitonov, A Baevski, ...
arXiv preprint arXiv:2011.11588, 2020
Generative spoken language modeling from raw audio
K Lakhotia, E Kharitonov, WN Hsu, Y Adi, A Polyak, B Bolte, TA Nguyen, ...
arXiv preprint arXiv:2102.01192, 2021
Multilingual Speech Translation with Efficient Finetuning of Pretrained Models
X Li, C Wang, Y Tang, C Tran, Y Tang, J Pino, A Baevski, A Conneau, ...
arXiv preprint arXiv:2010.12829, 2020
Reservoir Transformer
S Shen, A Baevski, AS Morcos, K Keutzer, M Auli, D Kiela
arXiv preprint arXiv:2012.15045, 2020
Large-Scale Self-and Semi-Supervised Learning for Speech Translation
C Wang, A Wu, J Pino, A Baevski, M Auli, A Conneau
arXiv preprint arXiv:2104.06678, 2021
Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training
WN Hsu, A Sriram, A Baevski, T Likhomanenko, Q Xu, V Pratap, J Kahn, ...
arXiv preprint arXiv:2104.01027, 2021
A Comparison of Discrete Latent Variable Models for Speech Representation Learning
H Zhou, A Baevski, M Auli
arXiv preprint arXiv:2010.14230, 2020