Yu Bai
Research Scientist, Salesforce Research
Verified email at salesforce.com - Homepage
Title
Cited by
Year
The landscape of empirical risk for nonconvex losses
S Mei, Y Bai, A Montanari
The Annals of Statistics 46 (6A), 2747-2774, 2018
347 · 2018
Provable self-play algorithms for competitive reinforcement learning
Y Bai, C Jin
International conference on machine learning, 551-560, 2020
162 · 2020
Policy finetuning: Bridging sample-efficient offline and online reinforcement learning
T Xie, N Jiang, H Wang, C Xiong, Y Bai
Advances in neural information processing systems 34, 27395-27407, 2021
141 · 2021
A sharp analysis of model-based reinforcement learning with self-play
Q Liu, T Yu, Y Bai, C Jin
International Conference on Machine Learning, 7001-7010, 2021
136 · 2021
Near-Optimal Reinforcement Learning with Self-Play
Y Bai, C Jin, T Yu
Advances in Neural Information Processing Systems, 2020
130 · 2020
ProxQuant: Quantized neural networks via proximal operators
Y Bai, YX Wang, E Liberty
International Conference on Learning Representations (ICLR) 2019, 2018
120 · 2018
Beyond linearization: On quadratic and higher-order approximation of wide neural networks
Y Bai, JD Lee
International Conference on Learning Representations (ICLR) 2020, 2019
116 · 2019
Provably Efficient Q-Learning with Low Switching Cost
Y Bai, T Xie, N Jiang, YX Wang
Advances in Neural Information Processing Systems, 2019
98 · 2019
When can we learn general-sum Markov games with a large number of players sample-efficiently?
Z Song, S Mei, Y Bai
International Conference on Learning Representations (ICLR) 2022, 2021
86 · 2021
Near-optimal provable uniform convergence in offline policy evaluation for reinforcement learning
M Yin, Y Bai, YX Wang
International Conference on Artificial Intelligence and Statistics, 1567-1575, 2021
85* · 2021
Approximability of discriminators implies diversity in GANs
Y Bai, T Ma, A Risteski
International Conference on Learning Representations (ICLR) 2019, 2018
84 · 2018
Near-optimal offline reinforcement learning via double variance reduction
M Yin, Y Bai, YX Wang
Advances in neural information processing systems 34, 7677-7688, 2021
69 · 2021
How important is the train-validation split in meta-learning?
Y Bai, M Chen, P Zhou, T Zhao, J Lee, S Kakade, H Wang, C Xiong
International Conference on Machine Learning, 543-553, 2021
69 · 2021
Sample-efficient learning of Stackelberg equilibria in general-sum games
Y Bai, C Jin, H Wang, C Xiong
Advances in Neural Information Processing Systems 34, 25799-25811, 2021
65 · 2021
Transformers as statisticians: Provable in-context learning with in-context algorithm selection
Y Bai, F Chen, H Wang, C Xiong, S Mei
Advances in neural information processing systems 36, 2024
58 · 2024
Subgradient descent learns orthogonal dictionaries
Y Bai, Q Jiang, J Sun
International Conference on Learning Representations (ICLR) 2019, 2018
58 · 2018
Towards understanding hierarchical learning: Benefits of neural representations
M Chen, Y Bai, JD Lee, T Zhao, H Wang, C Xiong, R Socher
Advances in Neural Information Processing Systems, 2020
50 · 2020
The role of coverage in online reinforcement learning
T Xie, DJ Foster, Y Bai, N Jiang, SM Kakade
arXiv preprint arXiv:2210.04157, 2022
43 · 2022
Don't Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification
Y Bai, S Mei, H Wang, C Xiong
International Conference on Machine Learning, 566-576, 2021
41 · 2021
Unified algorithms for RL with decision-estimation coefficients: No-regret, PAC, and reward-free learning
F Chen, S Mei, Y Bai
arXiv preprint arXiv:2209.11745, 2022
30 · 2022
Articles 1–20