Yuan Yao
Postdoc Research Fellow, National University of Singapore
Verified email at mails.tsinghua.edu.cn · Homepage
Title · Cited by · Year
FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation
X Han, H Zhu, P Yu, Z Wang, Y Yao, Z Liu, M Sun
Proceedings of the 2018 Conference on Empirical Methods in Natural Language …, 2018
Cited by 613 · 2018
Pre-trained models: Past, present and future
X Han, Z Zhang, N Ding, Y Gu, X Liu, Y Huo, J Qiu, Y Yao, A Zhang, ...
AI Open 2, 225-250, 2021
Cited by 593 · 2021
DocRED: A large-scale document-level relation extraction dataset
Y Yao*, D Ye*, P Li, X Han, Y Lin, Z Liu, Z Liu, L Huang, J Zhou, M Sun
Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019
Cited by 480 · 2019
CPT: Colorful prompt tuning for pre-trained vision-language models
Y Yao*, A Zhang*, Z Zhang, Z Liu, TS Chua, M Sun
arXiv preprint arXiv:2109.11797, 2021
Cited by 193 · 2021
OpenNRE: An open and extensible toolkit for neural relation extraction
X Han, T Gao, Y Yao, D Ye, Z Liu, M Sun
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
Cited by 174 · 2019
Onion: A simple and effective defense against textual backdoor attacks
F Qi, Y Chen, M Li, Y Yao, Z Liu, M Sun
Proceedings of the 2021 Conference on Empirical Methods in Natural Language …, 2020
Cited by 155 · 2020
Turn the combination lock: Learnable textual backdoor attacks via word substitution
F Qi*, Y Yao*, S Xu*, Z Liu, M Sun
Proceedings of the 59th Annual Meeting of the Association for Computational …, 2021
Cited by 98 · 2021
A deep-learning system bridging molecule structure and biomedical text with comprehension comparable to human professionals
Z Zeng*, Y Yao*, Z Liu, M Sun
Nature Communications 13 (1), 1-11, 2022
Cited by 76 · 2022
CPM-2: Large-scale cost-effective pre-trained language models
Z Zhang, Y Gu, X Han, S Chen, C Xiao, Z Sun, Y Yao, F Qi, J Guan, P Ke, ...
AI Open 2, 216-224, 2021
Cited by 75 · 2021
Open relation extraction: Relational knowledge transfer from supervised data to unsupervised data
R Wu*, Y Yao*, X Han, R Xie, Z Liu, F Lin, L Lin, M Sun
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
Cited by 66 · 2019
Fine-Grained Scene Graph Generation with Data Transfer
A Zhang*, Y Yao*, Q Chen, W Ji, Z Liu, M Sun, TS Chua
Proceedings of the 2022 European Conference on Computer Vision, 2022
Cited by 60 · 2022
KoLA: Carefully benchmarking world knowledge of large language models
J Yu, X Wang, S Tu, S Cao, D Zhang-Li, X Lv, H Peng, Z Yao, X Zhang, ...
ICLR 2024, 2023
Cited by 53 · 2023
Knowledge transfer via pre-training for recommendation: A review and prospect
Z Zeng, C Xiao, Y Yao, R Xie, Z Liu, F Lin, L Lin, M Sun
Frontiers in Big Data 4, 602071, 2021
Cited by 41 · 2021
PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models
Y Yao*, Q Chen*, A Zhang, W Ji, Z Liu, TS Chua, M Sun
Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022
Cited by 40 · 2022
Visual distant supervision for scene graph generation
Y Yao*, A Zhang*, X Han, M Li, C Weber, Z Liu, S Wermter, M Sun
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2021
Cited by 40 · 2021
Meta-Information Guided Meta-Learning for Few-Shot Relation Classification
B Dong*, Y Yao*, R Xie, T Gao, X Han, Z Liu, F Lin, L Lin, M Sun
Proceedings of the 28th International Conference on Computational …, 2020
Cited by 37 · 2020
Transfer visual prompt generator across LLMs
A Zhang, H Fei, Y Yao, W Ji, L Li, Z Liu, TS Chua
NeurIPS 2023, 2023
Cited by 35 · 2023
Denoising relation extraction from document-level distant supervision
C Xiao, Y Yao, R Xie, X Han, Z Liu, M Sun, F Lin, L Lin
Proceedings of the 2020 Conference on Empirical Methods in Natural Language …, 2020
Cited by 33 · 2020
Uprec: User-aware pre-training for recommender systems
C Xiao, R Xie, Y Yao, Z Liu, M Sun, X Zhang, L Lin
arXiv preprint arXiv:2102.10989, 2021
Cited by 32 · 2021
Open hierarchical relation extraction
K Zhang*, Y Yao*, R Xie, X Han, Z Liu, F Lin, L Lin, M Sun
Proceedings of the 2021 Conference of the North American Chapter of the …, 2021
Cited by 29 · 2021
Articles 1–20