CodeBERT: A pre-trained model for programming and natural languages Z Feng, D Guo, D Tang, N Duan, X Feng, M Gong, L Shou, B Qin, T Liu, ... arXiv preprint arXiv:2002.08155, 2020 | 2706 | 2020 |
GraphCodeBERT: Pre-training code representations with data flow D Guo, S Ren, S Lu, Z Feng, D Tang, S Liu, L Zhou, N Duan, ... arXiv preprint arXiv:2009.08366, 2020 | 1195* | 2020 |
CodeXGLUE: A machine learning benchmark dataset for code understanding and generation S Lu, D Guo, S Ren, J Huang, A Svyatkovskiy, A Blanco, C Clement, ... arXiv preprint arXiv:2102.04664, 2021 | 1093* | 2021 |
UniXcoder: Unified cross-modal pre-training for code representation D Guo, S Lu, N Duan, Y Wang, M Zhou, J Yin arXiv preprint arXiv:2203.03850, 2022 | 533 | 2022 |
CodeBLEU: A method for automatic evaluation of code synthesis S Ren, D Guo, S Lu, L Zhou, S Liu, D Tang, N Sundaresan, M Zhou, ... arXiv preprint arXiv:2009.10297, 2020 | 418 | 2020 |
DeepSeek-Coder: When the Large Language Model Meets Programming--The Rise of Code Intelligence D Guo, Q Zhu, D Yang, Z Xie, K Dong, W Zhang, G Chen, X Bi, Y Wu, ... arXiv preprint arXiv:2401.14196, 2024 | 350 | 2024 |
Baize: An open-source chat model with parameter-efficient tuning on self-chat data C Xu, D Guo, N Duan, J McAuley arXiv preprint arXiv:2304.01196, 2023 | 267 | 2023 |
Graph-based reasoning over heterogeneous external knowledge for commonsense question answering S Lv, D Guo, J Xu, D Tang, N Duan, M Gong, L Shou, D Jiang, G Cao, ... Proceedings of the AAAI conference on artificial intelligence 34 (05), 8449-8456, 2020 | 219 | 2020 |
Automating code review activities by large-scale pre-training Z Li, S Lu, D Guo, N Duan, S Jannu, G Jenks, D Majumder, J Green, ... Proceedings of the 30th ACM Joint European Software Engineering Conference …, 2022 | 167* | 2022 |
Dialog-to-Action: Conversational question answering over a large-scale knowledge base D Guo, D Tang, N Duan, M Zhou, J Yin Advances in neural information processing systems 31, 2018 | 144 | 2018 |
DeepSeekMath: Pushing the limits of mathematical reasoning in open language models Z Shao, P Wang, Q Zhu, R Xu, J Song, X Bi, H Zhang, M Zhang, YK Li, ... arXiv preprint arXiv:2402.03300, 2024 | 121 | 2024 |
ReACC: A retrieval-augmented code completion framework S Lu, N Duan, H Han, D Guo, S Hwang, A Svyatkovskiy arXiv preprint arXiv:2203.07722, 2022 | 114 | 2022 |
Multi-task learning for conversational question answering over a large-scale knowledge base T Shen, X Geng, T Qin, D Guo, D Tang, N Duan, G Long, D Jiang arXiv preprint arXiv:1910.05069, 2019 | 99 | 2019 |
DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence Q Zhu, D Guo, Z Shao, D Yang, P Wang, R Xu, Y Wu, Y Li, H Gao, S Ma, ... arXiv preprint arXiv:2406.11931, 2024 | 78* | 2024 |
Coupling retrieval and meta-learning for context-dependent semantic parsing D Guo, D Tang, N Duan, M Zhou, J Yin arXiv preprint arXiv:1906.07108, 2019 | 61 | 2019 |
Question generation from SQL queries improves neural semantic parsing D Guo, Y Sun, D Tang, N Duan, J Yin, H Chi, J Cao, P Chen, M Zhou arXiv preprint arXiv:1808.06304, 2018 | 60 | 2018 |
DeepSeek LLM: Scaling open-source language models with longtermism X Bi, D Chen, G Chen, S Chen, D Dai, C Deng, H Ding, K Dong, Q Du, ... arXiv preprint arXiv:2401.02954, 2024 | 56 | 2024 |
Learning to complete code with sketches D Guo, A Svyatkovskiy, J Yin, N Duan, M Brockschmidt, M Allamanis arXiv preprint arXiv:2106.10158, 2021 | 54 | 2021 |
DeepSeek-V2: A strong, economical, and efficient mixture-of-experts language model A Liu, B Feng, B Wang, B Wang, B Liu, C Zhao, C Deng, C Ruan, D Dai, ... arXiv preprint arXiv:2405.04434, 2024 | 49 | 2024 |