Hung-yi Lee
Title · Cited by · Year
Temporal pattern attention for multivariate time series forecasting
SY Shih, FK Sun, H Lee
Machine Learning 108, 1421-1441, 2019
Cited by 648 · 2019
SUPERB: Speech processing universal performance benchmark
S Yang, PH Chi, YS Chuang, CIJ Lai, K Lakhotia, YY Lin, AT Liu, J Shi, ...
arXiv preprint arXiv:2105.01051, 2021
Cited by 642 · 2021
Mockingjay: Unsupervised speech representation learning with deep bidirectional transformer encoders
AT Liu, S Yang, PH Chi, P Hsu, H Lee
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
Cited by 385 · 2020
TERA: Self-supervised learning of transformer encoder representation for speech
AT Liu, SW Li, H Lee
IEEE/ACM Transactions on Audio, Speech, and Language Processing 29, 2351-2366, 2021
Cited by 340 · 2021
One-shot voice conversion by separating speaker and content representations with instance normalization
J Chou, C Yeh, H Lee
arXiv preprint arXiv:1904.05742, 2019
Cited by 234 · 2019
Self-supervised speech representation learning: A review
A Mohamed, H Lee, L Borgholt, JD Havtorn, J Edin, C Igel, K Kirchhoff, ...
IEEE Journal of Selected Topics in Signal Processing 16 (6), 1179-1210, 2022
Cited by 215 · 2022
Audio word2vec: Unsupervised learning of audio segment representations using sequence-to-sequence autoencoder
YA Chung, CC Wu, CH Shen, HY Lee, LS Lee
arXiv preprint arXiv:1603.00982, 2016
Cited by 208 · 2016
LAMOL: Language modeling for lifelong language learning
FK Sun, CH Ho, HY Lee
arXiv preprint arXiv:1909.03329, 2019
Cited by 177 · 2019
Audio ALBERT: A lite BERT for self-supervised learning of audio representation
PH Chi, PH Chung, TH Wu, CC Hsieh, YH Chen, SW Li, H Lee
2021 IEEE Spoken Language Technology Workshop (SLT), 344-350, 2021
Cited by 159 · 2021
Can large language models be an alternative to human evaluations?
CH Chiang, H Lee
arXiv preprint arXiv:2305.01937, 2023
Cited by 154 · 2023
Tree transformer: Integrating tree structures into self-attention
YS Wang, HY Lee, YN Chen
arXiv preprint arXiv:1909.06639, 2019
Cited by 152 · 2019
Multi-target voice conversion without parallel data by adversarially learning disentangled audio representations
J Chou, C Yeh, H Lee, L Lee
arXiv preprint arXiv:1804.02812, 2018
Cited by 150 · 2018
SpeechBERT: Cross-modal pre-trained language model for end-to-end spoken question answering
YS Chuang, CL Liu, HY Lee
Cited by 120* · 2019
DistilHuBERT: Speech representation learning by layer-wise distillation of hidden-unit BERT
HJ Chang, S Yang, H Lee
ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and …, 2022
Cited by 117 · 2022
Spoken content retrieval—beyond cascading speech recognition with text retrieval
L Lee, J Glass, H Lee, C Chan
IEEE/ACM Transactions on Audio, Speech, and Language Processing 23 (9), 1389 …, 2015
Cited by 115 · 2015
Supervised and unsupervised transfer learning for question answering
YA Chung, HY Lee, J Glass
arXiv preprint arXiv:1711.05345, 2017
Cited by 103 · 2017
Meta learning for end-to-end low-resource speech recognition
JY Hsu, YJ Chen, H Lee
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
Cited by 97 · 2020
Learning Chinese word representations from glyphs of characters
TR Su, HY Lee
arXiv preprint arXiv:1708.04755, 2017
Cited by 96 · 2017
VQVC+: One-shot voice conversion by vector quantization and U-Net architecture
DY Wu, YH Chen, HY Lee
arXiv preprint arXiv:2006.04154, 2020
Cited by 92 · 2020
End-to-end text-to-speech for low-resource languages by cross-lingual transfer learning
T Tu, YJ Chen, C Yeh, HY Lee
arXiv preprint arXiv:1904.06508, 2019
Cited by 91 · 2019
Articles 1–20