Ximing Lu
Verified email at cs.washington.edu
Title
Cited by
Year
Merlot: Multimodal neural script knowledge models
R Zellers, X Lu, J Hessel, Y Yu, JS Park, J Cao, A Farhadi, Y Choi
Advances in Neural Information Processing Systems 34, 23634-23651, 2021
295 · 2021
DExperts: Decoding-time controlled text generation with experts and anti-experts
A Liu, M Sap, X Lu, S Swayamdipta, C Bhagavatula, NA Smith, Y Choi
arXiv preprint arXiv:2105.03023, 2021
202 · 2021
Symbolic knowledge distillation: from general language models to commonsense models
P West, C Bhagavatula, J Hessel, JD Hwang, L Jiang, RL Bras, X Lu, ...
arXiv preprint arXiv:2110.07178, 2021
190 · 2021
Merlot reserve: Neural script knowledge through vision and language and sound
R Zellers, J Lu, X Lu, Y Yu, Y Zhao, M Salehi, A Kusupati, J Hessel, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
180 · 2022
Generated knowledge prompting for commonsense reasoning
J Liu, A Liu, X Lu, S Welleck, P West, RL Bras, Y Choi, H Hajishirzi
arXiv preprint arXiv:2110.08387, 2021
153 · 2021
Neurologic decoding: (un)supervised neural text generation with predicate logic constraints
X Lu, P West, R Zellers, RL Bras, C Bhagavatula, Y Choi
Proceedings of the 2021 Conference of the North American Chapter of the …, 2021
105 · 2021
Neurologic A*esque decoding: Constrained text generation with lookahead heuristics
X Lu, S Welleck, P West, L Jiang, J Kasai, D Khashabi, RL Bras, L Qin, ...
arXiv preprint arXiv:2112.08726, 2021
104 · 2021
Faith and fate: Limits of transformers on compositionality
N Dziri, X Lu, M Sclar, XL Li, L Jiang, BY Lin, S Welleck, P West, ...
Advances in Neural Information Processing Systems 36, 2024
98 · 2024
Quark: Controllable text generation with reinforced unlearning
X Lu, S Welleck, J Hessel, L Jiang, L Qin, P West, P Ammanabrolu, Y Choi
Advances in neural information processing systems 35, 27591-27609, 2022
92 · 2022
Generating sequences by learning to self-correct
S Welleck, X Lu, P West, F Brahman, T Shen, D Khashabi, Y Choi
arXiv preprint arXiv:2211.00053, 2022
71* · 2022
Prosocialdialog: A prosocial backbone for conversational agents
H Kim, Y Yu, L Jiang, X Lu, D Khashabi, G Kim, Y Choi, M Sap
arXiv preprint arXiv:2205.12688, 2022
57 · 2022
Soda: Million-scale dialogue distillation with social commonsense contextualization
H Kim, J Hessel, L Jiang, P West, X Lu, Y Yu, P Zhou, RL Bras, M Alikhani, ...
arXiv preprint arXiv:2212.10465, 2022
56 · 2022
Rainier: Reinforced knowledge introspector for commonsense question answering
J Liu, S Hallinan, X Lu, P He, S Welleck, H Hajishirzi, Y Choi
arXiv preprint arXiv:2210.03078, 2022
33 · 2022
End-to-End diagnosis of breast biopsy images with transformers
S Mehta, X Lu, W Wu, D Weaver, H Hajishirzi, JG Elmore, LG Shapiro
Medical image analysis 79, 102466, 2022
28* · 2022
Naturalprover: Grounded mathematical proof generation with language models
S Welleck, J Liu, X Lu, H Hajishirzi, Y Choi
Advances in Neural Information Processing Systems 35, 4913-4927, 2022
26 · 2022
Analyzing commonsense emergence in few-shot knowledge models
J Da, RL Bras, X Lu, Y Choi, A Bosselut
arXiv preprint arXiv:2101.00297, 2021
23* · 2021
Fusing Pre-Trained Language Models With Multimodal Prompts Through Reinforcement Learning
Y Yu, J Chung, H Yun, J Hessel, JS Park, X Lu, R Zellers, P Ammanabrolu, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
21* · 2023
Connecting the dots between audio and text without parallel data through visual knowledge transfer
Y Zhao, J Hessel, Y Yu, X Lu, R Zellers, Y Choi
arXiv preprint arXiv:2112.08995, 2021
20 · 2021
I2d2: Inductive knowledge distillation with neurologic and self-imitation
C Bhagavatula, JD Hwang, D Downey, RL Bras, X Lu, L Qin, K Sakaguchi, ...
arXiv preprint arXiv:2212.09246, 2022
18 · 2022
Reflective decoding: Beyond unidirectional generation with off-the-shelf language models
P West, X Lu, A Holtzman, C Bhagavatula, J Hwang, Y Choi
arXiv preprint arXiv:2010.08566, 2020
14 · 2020
Articles 1–20