Jingfei Du
Meta AI Research
Verified email at fb.com
Title
Cited by
Year
Roberta: A robustly optimized bert pretraining approach
Y Liu, M Ott, N Goyal, J Du, M Joshi, D Chen, O Levy, M Lewis, ...
arXiv preprint arXiv:1907.11692, 2019
Cited by 15206*, 2019
Supervised contrastive learning for pre-trained language model fine-tuning
B Gunel, J Du, A Conneau, V Stoyanov
arXiv preprint arXiv:2011.01403, 2020
Cited by 243, 2020
Pretrained encyclopedia: Weakly supervised knowledge-pretrained language model
W Xiong, J Du, WY Wang, V Stoyanov
arXiv preprint arXiv:1912.09637, 2019
Cited by 169, 2019
Self-training improves pre-training for natural language understanding
J Du, E Grave, B Gunel, V Chaudhary, O Celebi, M Auli, V Stoyanov, ...
arXiv preprint arXiv:2010.02194, 2020
Cited by 109, 2020
Box office prediction based on microblog
J Du, H Xu, X Huang
Expert Systems with Applications 41 (4), 1680-1689, 2014
Cited by 107, 2014
Pretrained language models for biomedical and clinical tasks: understanding and extending the state-of-the-art
P Lewis, M Ott, J Du, V Stoyanov
Proceedings of the 3rd Clinical Natural Language Processing Workshop, 146-157, 2020
Cited by 101, 2020
Larger-scale transformers for multilingual masked language modeling
N Goyal, J Du, M Ott, G Anantharaman, A Conneau
arXiv preprint arXiv:2105.00572, 2021
Cited by 31, 2021
Answering complex open-domain questions with multi-hop dense retrieval
W Xiong, XL Li, S Iyer, J Du, P Lewis, WY Wang, Y Mehdad, W Yih, ...
arXiv preprint arXiv:2009.12756, 2020
Cited by 29, 2020
Knowledge-augmented language model and its application to unsupervised named-entity recognition
A Liu, J Du, V Stoyanov
arXiv preprint arXiv:1904.04458, 2019
Cited by 22, 2019
Few-shot learning with multilingual language models
XV Lin, T Mihaylov, M Artetxe, T Wang, S Chen, D Simig, M Ott, N Goyal, ...
arXiv preprint arXiv:2112.10668, 2021
Cited by 18, 2021
Efficient large scale language modeling with mixtures of experts
M Artetxe, S Bhosale, N Goyal, T Mihaylov, M Ott, S Shleifer, XV Lin, J Du, ...
arXiv preprint arXiv:2112.10684, 2021
Cited by 16, 2021
General purpose text embeddings from pre-trained language models for scalable inference
J Du, M Ott, H Li, X Zhou, V Stoyanov
arXiv preprint arXiv:2004.14287, 2020
Cited by 8, 2020
Multi-objective optimization for overlapping community detection
J Du, J Lai, C Shi
Advanced Data Mining and Applications: 9th International Conference, ADMA …, 2013
Cited by 6, 2013
Document Entity Linking on Online Social Networks
X Yan, B Xue, J Shankar, RK Shenoy, J Du, MJ Dousti, VS Stoyanov
US Patent App. 16/112,477, 2020
Cited by 4, 2020
SpeechMatrix: A Large-Scale Mined Corpus of Multilingual Speech-to-Speech Translations
PA Duquenne, H Gong, N Dong, J Du, A Lee, V Goswani, C Wang, J Pino, ...
arXiv preprint arXiv:2211.04508, 2022
Cited by 2, 2022
Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models
M Xia, M Artetxe, J Du, D Chen, V Stoyanov
arXiv preprint arXiv:2205.15223, 2022
Cited by 1, 2022
On the Role of Bidirectionality in Language Model Pre-Training
M Artetxe, J Du, N Goyal, L Zettlemoyer, V Stoyanov
arXiv preprint arXiv:2205.11726, 2022
Cited by 1, 2022
Perceptron-based tagging of query boundaries for Chinese query segmentation
J Du, Y Song, CH Li
Proceedings of the 23rd International Conference on World Wide Web, 257-258, 2014
Cited by 1, 2014
Coarse-to-Fine Contrastive Learning in Image-Text-Graph Space for Improved Vision-Language Compositionality
H Singh, P Zhang, Q Wang, M Wang, W Xiong, J Du, Y Chen
arXiv preprint arXiv:2305.13812, 2023
2023
Few-shot Learning with Multilingual Generative Language Models
XV Lin, T Mihaylov, M Artetxe, T Wang, S Chen, D Simig, M Ott, N Goyal, ...
Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022
2022