Haokun Liu
Verified email at cs.unc.edu - Homepage
Title
Cited by
Year
BLiMP: The benchmark of linguistic minimal pairs for English
A Warstadt, A Parrish, H Liu, A Mohananey, W Peng, SF Wang, ...
Transactions of the Association for Computational Linguistics 8, 377-392, 2020
Cited by 269 · 2020
Intermediate-task transfer learning with pretrained models for natural language understanding: When and why does it work?
Y Pruksachatkun, J Phang, H Liu, PM Htut, X Zhang, RY Pang, C Vania, ...
arXiv preprint arXiv:2005.00628, 2020
Cited by 243 · 2020
Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning
H Liu, D Tam, M Muqeeth, J Mohta, T Huang, M Bansal, CA Raffel
Advances in Neural Information Processing Systems 35, 1950-1965, 2022
Cited by 227 · 2022
Investigating BERT's knowledge of language: Five analysis methods with NPIs
A Warstadt, Y Cao, I Grosu, W Peng, H Blix, Y Nie, A Alsop, S Bordia, ...
arXiv preprint arXiv:1909.02597, 2019
Cited by 112 · 2019
Learning which features matter: RoBERTa acquires a preference for linguistic generalizations (eventually)
A Warstadt, Y Zhang, HS Li, H Liu, SR Bowman
arXiv preprint arXiv:2010.05358, 2020
Cited by 100 · 2020
jiant: A software toolkit for research on general-purpose text understanding models
Y Pruksachatkun, P Yeres, H Liu, J Phang, PM Htut, A Wang, I Tenney, ...
arXiv preprint arXiv:2003.02249, 2020
Cited by 85* · 2020
English intermediate-task training improves zero-shot cross-lingual transfer too
J Phang, I Calixto, PM Htut, Y Pruksachatkun, H Liu, C Vania, K Kann, ...
arXiv preprint arXiv:2005.13013, 2020
Cited by 66 · 2020
Counterfactually-augmented SNLI training data does not yield better generalization than unaugmented data
W Huang, H Liu, SR Bowman
arXiv preprint arXiv:2010.04762, 2020
Cited by 32 · 2020
Comparing test sets with item response theory
C Vania, PM Htut, W Huang, D Mungra, RY Pang, J Phang, H Liu, K Cho, ...
arXiv preprint arXiv:2106.00840, 2021
Cited by 18 · 2021
Fine-tuned transformers show clusters of similar representations across layers
J Phang, H Liu, SR Bowman
arXiv preprint arXiv:2109.08406, 2021
Cited by 11 · 2021
Precise task formalization matters in Winograd schema evaluations
H Liu, W Huang, DA Mungra, SR Bowman
arXiv preprint arXiv:2010.04043, 2020
Cited by 11 · 2020
Memd: A diversity-promoting learning framework for short-text conversation
M Zou, X Li, H Liu, ZH Deng
Proceedings of the 27th International Conference on Computational …, 2018
Cited by 5 · 2018
Retrieving Relevant and Diverse Image from Social Media Images.
X Chen, H Liu, ZH Deng, Y Yang
MediaEval, 2015
Cited by 3 · 2015
Git-Theta: A Git Extension for Collaborative Development of Machine Learning Models
N Kandpal, B Lester, M Muqeeth, A Mascarenhas, M Evans, V Baskaran, ...
arXiv preprint arXiv:2306.04529, 2023
Cited by 1 · 2023
Soft Merging of Experts with Adaptive Routing
M Muqeeth, H Liu, C Raffel
arXiv preprint arXiv:2306.03745, 2023
Cited by 1 · 2023
Models with Conditional Computation Learn Suboptimal Solutions
M Mohammed, H Liu, C Raffel
I Can't Believe It's Not Better Workshop: Understanding Deep Learning …, 2022
Cited by 1 · 2022
Articles 1–16