Andrew Dai
Title · Cited by · Year
Generating sentences from a continuous space
SR Bowman, L Vilnis, O Vinyals, AM Dai, R Jozefowicz, S Bengio
Proceedings of the 20th SIGNLL Conference on Computational Natural Language …, 2016
Cited by 2464 · 2016
Scalable and accurate deep learning with electronic health records
A Rajkomar, E Oren, K Chen, AM Dai, N Hajaj, M Hardt, PJ Liu, X Liu, ...
NPJ digital medicine 1 (1), 18, 2018
Cited by 1783 · 2018
Natural questions: a benchmark for question answering research
T Kwiatkowski, J Palomaki, O Redfield, M Collins, A Parikh, C Alberti, ...
Transactions of the Association for Computational Linguistics 7, 453-466, 2019
Cited by 1584 · 2019
PaLM: Scaling language modeling with pathways
A Chowdhery, S Narang, J Devlin, M Bosma, G Mishra, A Roberts, ...
arXiv preprint arXiv:2204.02311, 2022
Cited by 1519 · 2022
Semi-supervised sequence learning
AM Dai, QV Le
Advances in neural information processing systems 28, 2015
Cited by 1455 · 2015
Adversarial Training Methods for Semi-Supervised Text Classification
T Miyato, AM Dai, I Goodfellow
Proceedings of the International Conference on Learning Representations, 2017
Cited by 1104 · 2017
Finetuned language models are zero-shot learners
J Wei, M Bosma, VY Zhao, K Guu, AW Yu, B Lester, N Du, AM Dai, QV Le
arXiv preprint arXiv:2109.01652, 2021
Cited by 880 · 2021
Music transformer
CZA Huang, A Vaswani, J Uszkoreit, N Shazeer, I Simon, C Hawthorne, ...
arXiv preprint arXiv:1809.04281, 2018
Cited by 676 · 2018
MaskGAN: Better text generation via filling in the ______
W Fedus, I Goodfellow, AM Dai
arXiv preprint arXiv:1801.07736, 2018
Cited by 554 · 2018
Document embedding with paragraph vectors
AM Dai, C Olah, QV Le
NIPS 2014 Deep learning workshop, 2015
Cited by 524 · 2015
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, E Li, X Wang, ...
arXiv preprint arXiv:2210.11416, 2022
Cited by 504 · 2022
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Cited by 327 · 2022
Many paths to equilibrium: GANs do not need to decrease a divergence at every step
W Fedus, M Rosca, B Lakshminarayanan, AM Dai, S Mohamed, ...
arXiv preprint arXiv:1710.08446, 2017
Cited by 237 · 2017
Who said what: Modeling individual labelers improves classification
M Guan, V Gulshan, A Dai, G Hinton
Proceedings of the AAAI conference on artificial intelligence 32 (1), 2018
Cited by 203 · 2018
Learning longer-term dependencies in rnns with auxiliary losses
T Trinh, A Dai, T Luong, Q Le
International Conference on Machine Learning, 4965-4974, 2018
Cited by 194 · 2018
Gmail smart compose: Real-time assisted writing
MX Chen, BN Lee, G Bansal, Y Cao, S Zhang, J Lu, J Tsay, Y Wang, ...
Proceedings of the 25th ACM SIGKDD International Conference on Knowledge …, 2019
Cited by 186 · 2019
GLaM: Efficient scaling of language models with mixture-of-experts
N Du, Y Huang, AM Dai, S Tong, D Lepikhin, Y Xu, M Krikun, Y Zhou, ...
International Conference on Machine Learning, 5547-5569, 2022
Cited by 175 · 2022
Learning the graphical structure of electronic health records with graph convolutional transformer
E Choi, Z Xu, Y Li, M Dusenberry, G Flores, E Xue, A Dai
Proceedings of the AAAI conference on artificial intelligence 34 (01), 606-613, 2020
Cited by 157 · 2020
Wearable sensors for Parkinson’s disease: which data are worth collecting for training symptom detection models
L Lonini, A Dai, N Shawen, T Simuni, C Poon, L Shimanovich, ...
NPJ digital medicine 1 (1), 64, 2018
Cited by 146 · 2018
Embedding text in hyperbolic spaces
B Dhingra, CJ Shallue, M Norouzi, AM Dai, GE Dahl
arXiv preprint arXiv:1806.04313, 2018
Cited by 145 · 2018