Hang Yan
Computer Science, Fudan University
Verified email at fudan.edu.cn
Title · Cited by · Year
FLAT: Chinese NER using flat-lattice transformer
X Li, H Yan, X Qiu, X Huang
arXiv preprint arXiv:2004.11795, 2020
Cited by 395 · 2020
TENER: Adapting transformer encoder for named entity recognition
H Yan, B Deng, X Li, X Qiu
arXiv preprint arXiv:1911.04474, 2019
Cited by 339 · 2019
A unified generative framework for various NER subtasks
H Yan, T Gui, J Dai, Q Guo, Z Zhang, X Qiu
arXiv preprint arXiv:2106.01223, 2021
Cited by 229 · 2021
A unified generative framework for aspect-based sentiment analysis
H Yan, J Dai, X Qiu, Z Zhang
arXiv preprint arXiv:2106.04300, 2021
Cited by 202 · 2021
Does syntax matter? A strong baseline for aspect-based sentiment analysis with RoBERTa
J Dai, H Yan, T Sun, P Liu, X Qiu
arXiv preprint arXiv:2104.04986, 2021
Cited by 134 · 2021
Learning sparse sharing architectures for multiple tasks
T Sun, Y Shao, X Li, P Liu, H Yan, X Qiu, X Huang
Proceedings of the AAAI conference on artificial intelligence 34 (05), 8936-8943, 2020
Cited by 127 · 2020
CPT: A pre-trained unbalanced transformer for both Chinese language understanding and generation
Y Shao, Z Geng, Y Liu, J Dai, H Yan, F Yang, L Zhe, H Bao, X Qiu
arXiv preprint arXiv:2109.05729, 2021
Cited by 123 · 2021
Contrast and generation make BART a good dialogue emotion recognizer
S Li, H Yan, X Qiu
Proceedings of the AAAI conference on artificial intelligence 36 (10), 11002 …, 2022
Cited by 51 · 2022
InternLM-XComposer: A vision-language large model for advanced text-image comprehension and composition
P Zhang, X Dong, B Wang, Y Cao, C Xu, L Ouyang, Z Zhao, S Ding, S Zhang, ...
arXiv preprint arXiv:2309.15112, 2023
Cited by 46 · 2023
Unified demonstration retriever for in-context learning
X Li, K Lv, H Yan, T Lin, W Zhu, Y Ni, G Xie, X Wang, X Qiu
arXiv preprint arXiv:2305.04320, 2023
Cited by 37 · 2023
A graph-based model for joint Chinese word segmentation and dependency parsing
H Yan, X Qiu, X Huang
Transactions of the Association for Computational Linguistics 8, 78-92, 2020
Cited by 37* · 2020
A concise model for multi-criteria Chinese word segmentation with transformer encoder
X Qiu, H Pei, H Yan, X Huang
arXiv preprint arXiv:1906.12035, 2019
Cited by 37 · 2019
SpellBERT: A lightweight pretrained model for Chinese spelling check
T Ji, H Yan, X Qiu
Proceedings of the 2021 conference on empirical methods in natural language …, 2021
Cited by 34 · 2021
Secrets of RLHF in large language models part I: PPO
R Zheng, S Dou, S Gao, Y Hua, W Shen, B Wang, Y Liu, S Jin, Q Liu, ...
arXiv preprint arXiv:2307.04964, 2023
Cited by 29 · 2023
CodeIE: Large code generation models are better few-shot information extractors
P Li, T Sun, Q Tang, H Yan, Y Wu, X Huang, X Qiu
arXiv preprint arXiv:2305.05711, 2023
Cited by 26 · 2023
Accelerating BERT inference for sequence labeling via early-exit
X Li, Y Shao, T Sun, H Yan, X Qiu, X Huang
arXiv preprint arXiv:2105.13878, 2021
Cited by 26 · 2021
An embarrassingly easy but strong baseline for nested named entity recognition
H Yan, Y Sun, X Li, X Qiu
arXiv preprint arXiv:2208.04534, 2022
Cited by 20 · 2022
BERT for monolingual and cross-lingual reverse dictionary
H Yan, X Li, X Qiu
arXiv preprint arXiv:2009.14790, 2020
Cited by 20 · 2020
Gaussian word embedding with a Wasserstein distance loss
C Sun, H Yan, X Qiu, X Huang
arXiv preprint arXiv:1808.07016, 2018
Cited by 18 · 2018
fastHan: A BERT-based multi-task toolkit for Chinese NLP
Z Geng, H Yan, X Qiu, X Huang
arXiv preprint arXiv:2009.08633, 2020
Cited by 17 · 2020
Articles 1–20