Raphael Tang
Title · Cited by · Year
DocBERT: BERT for Document Classification
A Adhikari, A Ram, R Tang, J Lin
arXiv preprint arXiv:1904.08398, 2019
202 · 2019
Distilling Task-Specific Knowledge from BERT into Simple Neural Networks
R Tang*, Y Lu*, L Liu*, L Mou, O Vechtomova, J Lin
arXiv preprint arXiv:1903.12136, 2019
201 · 2019
Deep Residual Learning for Small-Footprint Keyword Spotting
R Tang, J Lin
2018 IEEE International Conference on Acoustics, Speech and Signal …, 2018
138 · 2018
DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference
J Xin, R Tang, J Lee, Y Yu, J Lin
arXiv preprint arXiv:2004.12993, 2020
72 · 2020
Rethinking Complex Neural Network Architectures for Document Classification
A Adhikari*, A Ram*, R Tang, J Lin
Proceedings of the 2019 Conference of the North American Chapter of the …, 2019
60 · 2019
Rapidly Bootstrapping a Question Answering Dataset for COVID-19
R Tang, R Nogueira, E Zhang, N Gupta, P Cam, K Cho, J Lin
arXiv preprint arXiv:2004.11339, 2020
46 · 2020
Covidex: Neural Ranking Models and Keyword Search Infrastructure for the COVID-19 Open Research Dataset
E Zhang, N Gupta, R Tang, X Han, R Pradeep, K Lu, Y Zhang, R Nogueira, ...
arXiv preprint arXiv:2007.07846, 2020
39 · 2020
An Experimental Analysis of the Power Consumption of Convolutional Neural Networks for Keyword Spotting
R Tang, W Wang, Z Tu, J Lin
2018 IEEE International Conference on Acoustics, Speech and Signal …, 2018
33 · 2018
Honk: A PyTorch Reimplementation of Convolutional Neural Networks for Keyword Spotting
R Tang, J Lin
arXiv preprint arXiv:1710.06554, 2017
30 · 2017
What Would Elsa Do? Freezing Layers During Transformer Fine-Tuning
J Lee, R Tang, J Lin
arXiv preprint arXiv:1911.03090, 2019
20* · 2019
Natural Language Generation for Effective Knowledge Distillation
R Tang, Y Lu, J Lin
Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource …, 2019
18 · 2019
FLOPs as a Direct Optimization Objective for Learning Sparse Neural Networks
R Tang, A Adhikari, J Lin
arXiv preprint arXiv:1811.03060, 2018
13 · 2018
Exploring the limits of simple learners in knowledge distillation for document classification with DocBERT
A Adhikari, A Ram, R Tang, WL Hamilton, J Lin
Proceedings of the 5th Workshop on Representation Learning for NLP, 72-77, 2020
9 · 2020
BERxiT: Early Exiting for BERT with Better Fine-Tuning and Extension to Regression
J Xin, R Tang, Y Yu, J Lin
Proceedings of the 16th Conference of the European Chapter of the …, 2021
7 · 2021
Incorporating Contextual and Syntactic Structures Improves Semantic Similarity Modeling
L Liu, W Yang, J Rao, R Tang, J Lin
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
6 · 2019
Yelling at Your TV: An Analysis of Speech Recognition Errors and Subsequent User Behavior on Entertainment Systems
R Tang, F Ture, J Lin
Proceedings of the 42nd Annual International ACM SIGIR Conference on …, 2019
6 · 2019
Howl: A Deployed, Open-Source Wake Word Detection System
R Tang*, J Lee*, A Razi, J Cambre, I Bicking, J Kaye, J Lin
arXiv preprint arXiv:2008.09606, 2020
5 · 2020
Inserting Information Bottlenecks for Attribution in Transformers
Z Jiang, R Tang, J Xin, J Lin
arXiv preprint arXiv:2012.13838, 2020
4 · 2020
Progress and Tradeoffs in Neural Language Models
R Tang, J Lin
arXiv preprint arXiv:1811.00942, 2018
4 · 2018
Adaptive Pruning of Neural Language Models for Mobile Devices
R Tang, J Lin
arXiv preprint arXiv:1809.10282, 2018
4 · 2018
Articles 1–20