Universal adversarial triggers for attacking and analyzing NLP. E Wallace, S Feng, N Kandpal, M Gardner, S Singh. arXiv preprint arXiv:1908.07125, 2019. Cited by 630.
Deduplicating training data mitigates privacy risks in language models. N Kandpal, E Wallace, C Raffel. International Conference on Machine Learning, 10697-10707, 2022. Cited by 89.
Large language models struggle to learn long-tail knowledge. N Kandpal, H Deng, A Roberts, E Wallace, C Raffel. International Conference on Machine Learning, 15696-15707, 2023. Cited by 61.
Music enhancement via image translation and vocoding. N Kandpal, O Nieto, Z Jin. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and …, 2022. Cited by 12.
Backdoor attacks for in-context learning with language models. N Kandpal, M Jagielski, F Tramèr, N Carlini. arXiv preprint arXiv:2307.14692, 2023. Cited by 10.
User inference attacks on large language models. N Kandpal, K Pillutla, A Oprea, P Kairouz, CA Choquette-Choo, Z Xu. arXiv preprint arXiv:2310.09266, 2023. Cited by 2.
Git-Theta: A Git Extension for Collaborative Development of Machine Learning Models. N Kandpal, B Lester, M Muqeeth, A Mascarenhas, M Evans, V Baskaran, ... arXiv preprint arXiv:2306.04529, 2023. Cited by 1.