Armen Aghajanyan
Facebook AI Research
Verified email at fb.com
Title
Cited by
Year
VideoCLIP: Contrastive pre-training for zero-shot video-text understanding
H Xu, G Ghosh, PY Huang, D Okhonko, A Aghajanyan, F Metze, ...
arXiv preprint arXiv:2109.14084, 2021
Cited by 344 · 2021
InCoder: A generative model for code infilling and synthesis
D Fried, A Aghajanyan, J Lin, S Wang, E Wallace, F Shi, R Zhong, W Yih, ...
arXiv preprint arXiv:2204.05999, 2022
Cited by 313 · 2022
Intrinsic dimensionality explains the effectiveness of language model fine-tuning
A Aghajanyan, L Zettlemoyer, S Gupta
arXiv preprint arXiv:2012.13255, 2020
Cited by 252 · 2020
Muppet: Massive multi-task representations with pre-finetuning
A Aghajanyan, A Gupta, A Shrivastava, X Chen, L Zettlemoyer, S Gupta
arXiv preprint arXiv:2101.11038, 2021
Cited by 229 · 2021
Better fine-tuning by reducing representational collapse
A Aghajanyan, A Shrivastava, A Gupta, N Goyal, L Zettlemoyer, S Gupta
arXiv preprint arXiv:2008.03156, 2020
Cited by 202 · 2020
Pre-training via paraphrasing
M Lewis, M Ghazvininejad, G Ghosh, A Aghajanyan, S Wang, ...
Advances in Neural Information Processing Systems 33, 18470-18481, 2020
Cited by 137 · 2020
Memorization without overfitting: Analyzing the training dynamics of large language models
K Tirumala, A Markosyan, L Zettlemoyer, A Aghajanyan
Advances in Neural Information Processing Systems 35, 38274-38290, 2022
Cited by 116 · 2022
CM3: A causal masked multimodal model of the internet
A Aghajanyan, B Huang, C Ross, V Karpukhin, H Xu, N Goyal, D Okhonko, ...
arXiv preprint arXiv:2201.07520, 2022
Cited by 98 · 2022
Improving passage retrieval with zero-shot question generation
DS Sachan, M Lewis, M Joshi, A Aghajanyan, W Yih, J Pineau, ...
arXiv preprint arXiv:2204.07496, 2022
Cited by 61 · 2022
HTLM: Hyper-text pre-training and prompting of language models
A Aghajanyan, D Okhonko, M Lewis, M Joshi, H Xu, G Ghosh
International Conference on Learning Representations, 2022
Cited by 56* · 2022
Conversational semantic parsing
A Aghajanyan, J Maillard, A Shrivastava, K Diedrick, M Haeger, H Li, ...
arXiv preprint arXiv:2009.13655, 2020
Cited by 50 · 2020
Scaling autoregressive multi-modal models: Pretraining and instruction tuning
L Yu, B Shi, R Pasunuru, B Muller, O Golovneva, T Wang, A Babu, B Tang, ...
arXiv preprint arXiv:2309.02591, 2023
Cited by 48* · 2023
Retrieval-augmented multimodal language modeling
M Yasunaga, A Aghajanyan, W Shi, R James, J Leskovec, P Liang, ...
arXiv preprint arXiv:2211.12561, 2022
Cited by 35 · 2022
Scaling laws for generative mixed-modal language models
A Aghajanyan, L Yu, A Conneau, WN Hsu, K Hambardzumyan, S Zhang, ...
arXiv preprint arXiv:2301.03728, 2023
Cited by 33 · 2023
Megabyte: Predicting million-byte sequences with multiscale transformers
L Yu, D Simig, C Flaherty, A Aghajanyan, L Zettlemoyer, M Lewis
Advances in Neural Information Processing Systems 36, 2024
Cited by 31 · 2024
Semantic representations using structural ontology for assistant systems
A Aghajanyan, S Gupta, B Moran, TF Levin, CANSH Nakatsu, D Difranco, ...
US Patent 11,651,449, 2023
Cited by 29 · 2023
On-device convolutional neural network models for assistant systems
A Aly, A Babu, A Aghajanyan
US Patent 11,314,941, 2022
Cited by 29 · 2022
Non-autoregressive semantic parsing for compositional task-oriented dialog
A Babu, A Shrivastava, A Aghajanyan, A Aly, A Fan, M Ghazvininejad
arXiv preprint arXiv:2104.04923, 2021
Cited by 23 · 2021
SoftTarget regularization: An effective technique to reduce over-fitting in neural networks
A Aghajanyan
2017 3rd IEEE International Conference on Cybernetics (CYBCONF), 1-5, 2017
Cited by 20 · 2017
RetroNLU: Retrieval augmented task-oriented semantic parsing
V Gupta, A Shrivastava, A Sagar, A Aghajanyan, D Savenkov
arXiv preprint arXiv:2109.10410, 2021
Cited by 18 · 2021
Articles 1–20