Ananya Harsh Jha
Allen Institute for AI
Verified email at allenai.org
Title · Cited by · Year
Disentangling factors of variation with cycle-consistent variational auto-encoders
AH Jha, S Anand, M Singh, VSR Veeravasarapu
Proceedings of the European Conference on Computer Vision (ECCV), 805-820, 2018
Cited by 144 · 2018
TorchMetrics - Measuring Reproducibility in PyTorch
N Detlefsen, J Borovec, J Schock, A Jha, T Koker, L Di Liello
Cited by 79* · 2022
Olmo: Accelerating the science of language models
D Groeneveld, I Beltagy, P Walsh, A Bhagia, R Kinney, O Tafjord, AH Jha, ...
arXiv preprint arXiv:2402.00838, 2024
Cited by 7 · 2024
AASAE: Augmentation-Augmented Stochastic Autoencoders
W Falcon, AH Jha, T Koker, K Cho
arXiv preprint arXiv:2107.12329, 2021
Cited by 6* · 2021
Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research
L Soldaini, R Kinney, A Bhagia, D Schwenk, D Atkinson, R Authur, ...
arXiv preprint arXiv:2402.00159, 2024
Cited by 4 · 2024
Large Language Model Distillation Doesn't Need a Teacher
AH Jha, D Groeneveld, E Strubell, I Beltagy
arXiv preprint arXiv:2305.14864, 2023
Cited by 3 · 2023
Paloma: A Benchmark for Evaluating Language Model Fit
I Magnusson, A Bhagia, V Hofmann, L Soldaini, AH Jha, O Tafjord, ...
arXiv preprint arXiv:2312.10523, 2023
2023
Robust Tooling and New Resources for Large Language Model Evaluation via Catwalk
K Richardson, I Magnusson, O Tafjord, A Bhagia, I Beltagy, A Cohan, ...
Articles 1–8