Zhilin Yang
Title · Cited by · Year
Xlnet: Generalized autoregressive pretraining for language understanding
Z Yang, Z Dai, Y Yang, J Carbonell, RR Salakhutdinov, QV Le
Advances in neural information processing systems 32, 2019
Cited by 7295 · 2019
Transformer-xl: Attentive language models beyond a fixed-length context
Z Dai, Z Yang, Y Yang, J Carbonell, QV Le, R Salakhutdinov
arXiv preprint arXiv:1901.02860, 2019
Cited by 2931 · 2019
Revisiting semi-supervised learning with graph embeddings
Z Yang, W Cohen, R Salakhudinov
International conference on machine learning, 40-48, 2016
Cited by 1591 · 2016
HotpotQA: A dataset for diverse, explainable multi-hop question answering
Z Yang, P Qi, S Zhang, Y Bengio, WW Cohen, R Salakhutdinov, ...
arXiv preprint arXiv:1809.09600, 2018
Cited by 1203 · 2018
Multi-task cross-lingual sequence tagging from scratch
Z Yang, R Salakhutdinov, W Cohen
arXiv preprint arXiv:1603.06270, 2016
Cited by 577* · 2016
Good semi-supervised learning that requires a bad gan
Z Dai, Z Yang, F Yang, WW Cohen, RR Salakhutdinov
Advances in neural information processing systems 30, 2017
Cited by 493 · 2017
GPT understands, too
X Liu, Y Zheng, Z Du, M Ding, Y Qian, Z Yang, J Tang
arXiv preprint arXiv:2103.10385, 2021
Cited by 478* · 2021
Differentiable learning of logical rules for knowledge base reasoning
F Yang, Z Yang, WW Cohen
Advances in neural information processing systems 30, 2017
Cited by 478 · 2017
Gated-Attention Readers for Text Comprehension
B Dhingra, H Liu, Z Yang, WW Cohen, R Salakhutdinov
arXiv preprint arXiv:1606.01549, 2016
Cited by 434 · 2016
Review networks for caption generation
Z Yang, Y Yuan, Y Wu, WW Cohen, RR Salakhutdinov
Advances in neural information processing systems 29, 2016
Cited by 365* · 2016
Breaking the softmax bottleneck: A high-rank RNN language model
Z Yang, Z Dai, R Salakhutdinov, WW Cohen
arXiv preprint arXiv:1711.03953, 2017
Cited by 348 · 2017
Cosnet: Connecting heterogeneous social networks with local and global consistency
Y Zhang, J Tang, Z Yang, J Pei, PS Yu
Proceedings of the 21st ACM SIGKDD international conference on knowledge …, 2015
Cited by 319 · 2015
P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks
X Liu, K Ji, Y Fu, WL Tam, Z Du, Z Yang, J Tang
arXiv preprint arXiv:2110.07602, 2021
Cited by 227* · 2021
Neural cross-lingual named entity recognition with minimal resources
J Xie, Z Yang, G Neubig, NA Smith, J Carbonell
arXiv preprint arXiv:1808.09861, 2018
Cited by 165 · 2018
Semi-supervised QA with generative domain-adaptive nets
Z Yang, J Hu, R Salakhutdinov, WW Cohen
arXiv preprint arXiv:1702.02206, 2017
Cited by 163 · 2017
Linguistic knowledge as memory for recurrent neural networks
B Dhingra, Z Yang, WW Cohen, R Salakhutdinov
arXiv preprint arXiv:1703.02620, 2017
Cited by 133* · 2017
Words or characters? fine-grained gating for reading comprehension
Z Yang, B Dhingra, Y Yuan, J Hu, WW Cohen, R Salakhutdinov
arXiv preprint arXiv:1611.01724, 2016
Cited by 94 · 2016
GLM: General language model pretraining with autoregressive blank infilling
Z Du, Y Qian, X Liu, M Ding, J Qiu, Z Yang, J Tang
Proceedings of the 60th Annual Meeting of the Association for Computational …, 2022
Cited by 83* · 2022
Transformer-xl: Attentive language models beyond a fixed-length context. arXiv 2019
Z Dai, Z Yang, Y Yang, J Carbonell, QV Le, R Salakhutdinov
arXiv preprint arXiv:1901.02860, 2019
Cited by 81
Xlnet: Generalized autoregressive pretraining for language understanding. arXiv 2019
Z Yang, Z Dai, Y Yang, J Carbonell, R Salakhutdinov, QV Le
arXiv preprint arXiv:1906.08237, 2019
Cited by 68 · 2019
Articles 1–20