Hung-yi Lee
Verified email at ntu.edu.tw - Homepage
Title
Cited by
Year
Temporal pattern attention for multivariate time series forecasting
SY Shih, FK Sun, H Lee
Machine Learning 108 (8), 1421-1441, 2019
269 · 2019
Mockingjay: Unsupervised speech representation learning with deep bidirectional transformer encoders
AT Liu, S Yang, PH Chi, P Hsu, H Lee
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
198 · 2020
Audio word2vec: Unsupervised learning of audio segment representations using sequence-to-sequence autoencoder
YA Chung, CC Wu, CH Shen, HY Lee, LS Lee
arXiv preprint arXiv:1603.00982, 2016
167 · 2016
Tera: Self-supervised learning of transformer encoder representation for speech
AT Liu, SW Li, H Lee
IEEE/ACM Transactions on Audio, Speech, and Language Processing 29, 2351-2366, 2021
147 · 2021
Superb: Speech processing universal performance benchmark
S Yang, PH Chi, YS Chuang, CIJ Lai, K Lakhotia, YY Lin, AT Liu, J Shi, ...
arXiv preprint arXiv:2105.01051, 2021
119 · 2021
One-shot voice conversion by separating speaker and content representations with instance normalization
J Chou, C Yeh, H Lee
arXiv preprint arXiv:1904.05742, 2019
119 · 2019
Multi-target voice conversion without parallel data by adversarially learning disentangled audio representations
J Chou, C Yeh, H Lee, L Lee
arXiv preprint arXiv:1804.02812, 2018
116 · 2018
Spoken content retrieval—beyond cascading speech recognition with text retrieval
L Lee, J Glass, H Lee, C Chan
IEEE/ACM Transactions on Audio, Speech, and Language Processing 23 (9), 1389 …, 2015
102 · 2015
Audio albert: A lite bert for self-supervised learning of audio representation
PH Chi, PH Chung, TH Wu, CC Hsieh, YH Chen, SW Li, H Lee
2021 IEEE Spoken Language Technology Workshop (SLT), 344-350, 2021
84 · 2021
Tree transformer: Integrating tree structures into self-attention
YS Wang, HY Lee, YN Chen
arXiv preprint arXiv:1909.06639, 2019
81 · 2019
Supervised and unsupervised transfer learning for question answering
YA Chung, HY Lee, J Glass
arXiv preprint arXiv:1711.05345, 2017
77 · 2017
Learning Chinese word representations from glyphs of characters
TR Su, HY Lee
arXiv preprint arXiv:1708.04755, 2017
77 · 2017
Lamol: Language modeling for lifelong language learning
FK Sun, CH Ho, HY Lee
arXiv preprint arXiv:1909.03329, 2019
63 · 2019
Neural attention models for sequence classification: Analysis and application to key term extraction and dialogue act detection
S Shen, H Lee
arXiv preprint arXiv:1604.00077, 2016
61 · 2016
Meta learning for end-to-end low-resource speech recognition
JY Hsu, YJ Chen, H Lee
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
58 · 2020
End-to-end text-to-speech for low-resource languages by cross-lingual transfer learning
T Tu, YJ Chen, C Yeh, HY Lee
arXiv preprint arXiv:1904.06508, 2019
56 · 2019
Dykgchat: Benchmarking dialogue generation grounding on dynamic knowledge graphs
YL Tuan, YN Chen, H Lee
arXiv preprint arXiv:1910.00610, 2019
52 · 2019
Spoken SQuAD: A study of mitigating the impact of speech recognition errors on listening comprehension
CH Li, SL Wu, CL Liu, H Lee
arXiv preprint arXiv:1804.00320, 2018
51 · 2018
Learning to encode text as human-readable summaries using generative adversarial networks
YS Wang, HY Lee
arXiv preprint arXiv:1810.02851, 2018
49 · 2018
Towards machine comprehension of spoken content: Initial TOEFL listening comprehension test by machine
BH Tseng, SS Shen, HY Lee, LS Lee
arXiv preprint arXiv:1608.06378, 2016
48 · 2016
Articles 1–20