ELECTRA: Pre-training text encoders as discriminators rather than generators. K Clark, MT Luong, QV Le, CD Manning. arXiv preprint arXiv:2003.10555, 2020. Cited by 2469.
What does BERT look at? An analysis of BERT's attention. K Clark, U Khandelwal, O Levy, CD Manning. arXiv preprint arXiv:1906.04341, 2019. Cited by 1193.
Deep reinforcement learning for mention-ranking coreference models. K Clark, CD Manning. arXiv preprint arXiv:1609.08667, 2016. Cited by 439.
Semi-Supervised Sequence Modeling with Cross-View Training. K Clark, MT Luong, CD Manning, QV Le. arXiv preprint arXiv:1809.08370, 2018. Cited by 399.
Improving coreference resolution by learning entity-level distributed representations. K Clark, CD Manning. arXiv preprint arXiv:1606.01323, 2016. Cited by 387.
Inducing domain-specific sentiment lexicons from unlabeled corpora. WL Hamilton, K Clark, J Leskovec, D Jurafsky. Proceedings of the conference on empirical methods in natural language …, 2016. Cited by 381.
Large-scale analysis of counseling conversations: An application of natural language processing to mental health. T Althoff, K Clark, J Leskovec. Transactions of the Association for Computational Linguistics 4, 463-476, 2016. Cited by 259.
Entity-centric coreference resolution with model stacking. K Clark, CD Manning. Proceedings of the 53rd Annual Meeting of the Association for Computational …, 2015. Cited by 245.
Emergent linguistic structure in artificial neural networks trained by self-supervision. CD Manning, K Clark, J Hewitt, U Khandelwal, O Levy. Proceedings of the National Academy of Sciences 117 (48), 30046-30054, 2020. Cited by 204.
BAM! Born-again multi-task networks for natural language understanding. K Clark, MT Luong, U Khandelwal, CD Manning, QV Le. arXiv preprint arXiv:1907.04829, 2019. Cited by 179.
Sample efficient text summarization using a single pre-trained transformer. U Khandelwal, K Clark, D Jurafsky, L Kaiser. arXiv preprint arXiv:1905.08836, 2019. Cited by 69.
RevMiner: An extractive interface for navigating reviews on a smartphone. J Huang, O Etzioni, L Zettlemoyer, K Clark, C Lee. Proceedings of the 25th annual ACM symposium on User interface software and …, 2012. Cited by 67.
Pre-training transformers as energy-based cloze models. K Clark, MT Luong, QV Le, CD Manning. arXiv preprint arXiv:2012.08561, 2020. Cited by 51.
Stanford at TAC KBP 2017: Building a Trilingual Relational Knowledge Graph. AT Chaganty, A Paranjape, J Bolton, M Lamm, J Lei, A See, K Clark, … TAC, 2017. Cited by 5.
Text-to-image diffusion models are zero-shot classifiers. K Clark, P Jaini. arXiv preprint arXiv:2303.15233, 2023. Cited by 3.
Meta-Learning Fast Weight Language Models. K Clark, K Guu, MW Chang, P Pasupat, G Hinton, M Norouzi. arXiv preprint arXiv:2212.02475, 2022. Cited by 1.
Contrastive pre-training for language tasks. TM Luong, QV Le, KS Clark. US Patent 11,449,684, 2022. Cited by 1.
Sensory Fusion and Intent Recognition for Accurate Gesture Recognition in Virtual Environments. S Simmons, K Clark, A Tavakkoli, D Loffredo. Advances in Visual Computing: 13th International Symposium, ISVC 2018, Las …, 2018. Cited by 1.
Towards Expert-Level Medical Question Answering with Large Language Models. K Singhal, T Tu, J Gottweis, R Sayres, E Wulczyn, L Hou, K Clark, S Pfohl, … arXiv preprint arXiv:2305.09617, 2023.