Chang Liu
Title
Cited by
Year
Multi-granularity structural knowledge distillation for language model compression
C Liu, C Tao, J Feng, D Zhao
Proceedings of the 60th Annual Meeting of the Association for Computational …, 2022
32 · 2022
Learning to respond with stickers: A framework of unifying multi-modality in multi-turn dialog
S Gao, X Chen, C Liu, L Liu, D Zhao, R Yan
Proceedings of the Web Conference 2020, 1138-1148, 2020
25 · 2020
Dialogue history matters! personalized response selection in multi-turn retrieval-based chatbots
J Li, C Liu, C Tao, Z Chan, D Zhao, M Zhang, R Yan
ACM Transactions on Information Systems (TOIS) 39 (4), 1-25, 2021
24 · 2021
Cross-lingual low-resource set-to-description retrieval for global e-commerce
J Li, C Liu, J Wang, L Bing, H Li, X Liu, D Zhao, R Yan
Proceedings of the AAAI Conference on Artificial Intelligence 34 (05), 8212-8219, 2020
11 · 2020
CORE: Cooperative Training of Retriever-Reranker for Effective Dialogue Response Selection
C Tao, J Feng, T Shen, C Liu, J Li, X Geng, D Jiang
Proceedings of the 61st Annual Meeting of the Association for Computational …, 2023
8* · 2023
ProphetChat: Enhancing Dialogue Generation with Simulation of Future Conversation
C Liu, X Tan, C Tao, Z Fu, D Zhao, TY Liu, R Yan
Proceedings of the 60th Annual Meeting of the Association for Computational …, 2022
7 · 2022
More than Classification: A Unified Framework for Event Temporal Relation Extraction
Q Huang, Y Hu, S Zhu, Y Feng, C Liu, D Zhao
arXiv preprint arXiv:2305.17607, 2023
6 · 2023
Rethinking Task-Specific Knowledge Distillation: Contextualized Corpus as Better Textbook
C Liu, C Tao, J Liang, T Shen, J Feng, Q Huang, D Zhao
Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022
6 · 2022
How to Represent Context Better? An Empirical Study on Context Modeling for Multi-turn Response Selection
J Feng, C Tao, C Liu, R Yan, D Zhao
Findings of the Association for Computational Linguistics: EMNLP 2022, 7285-7298, 2022
3 · 2022
SMASH: Improving SMAll Language Models’ Few-SHot Ability with Prompt-Based Distillation
Y Wang, C Liu, K Chen, X Wang, D Zhao
Findings of the Association for Computational Linguistics: EMNLP 2022, 6608-6619, 2022
2 · 2022
Reciprocal Learning of Knowledge Retriever and Response Ranker for Knowledge-Grounded Conversations
J Feng, C Tao, Z Li, C Liu, T Shen, D Zhao
Proceedings of the 29th International Conference on Computational …, 2022
2 · 2022
Attend, Select and Eliminate: Accelerating Multi-turn Response Selection with Dual-attention-based Content Elimination
J Liang, C Liu, C Tao, J Feng, D Zhao
Findings of the Association for Computational Linguistics: ACL 2023, 6758-6770, 2023
1 · 2023
Adam: Dense Retrieval Distillation with Adaptive Dark Examples
C Liu, C Tao, X Geng, T Shen, D Zhao, C Xu, B Jiao, D Jiang
arXiv preprint arXiv:2212.10192, 2022
1 · 2022
BioGen: Generating Biography Summary under Table Guidance on Wikipedia
S Gao, X Chen, C Liu, D Zhao, R Yan
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 …, 2021
1 · 2021
Length-Adaptive Distillation: Customizing Small Language Model for Dynamic Token Pruning
C Liu, C Tao, J Liang, J Feng, T Shen, Q Huang, D Zhao
Findings of the Association for Computational Linguistics: EMNLP 2023, 4452-4463, 2023
2023
Dimension-Prompts Boost Commonsense Consolidation
J Feng, C Tao, T Shen, C Liu, D Zhao
Proceedings of the 46th International ACM SIGIR Conference on Research and …, 2023
2023
Articles 1–16