Follow
Ming Yin
Verified email at princeton.edu - Homepage
Title
Cited by
Year
Near-optimal provable uniform convergence in offline policy evaluation for reinforcement learning
M Yin, Y Bai, YX Wang
(AISTATS) International Conference on Artificial Intelligence and Statistics …, 2021
72*
2021
Asymptotically efficient off-policy evaluation for tabular reinforcement learning
M Yin, YX Wang
(AISTATS) International Conference on Artificial Intelligence and Statistics …, 2020
62
2020
Near-optimal offline reinforcement learning via double variance reduction
M Yin, Y Bai, YX Wang
(NeurIPS) Advances in neural information processing systems 34, 7677-7688, 2021
58
2021
Towards instance-optimal offline reinforcement learning with pessimism
M Yin, YX Wang
(NeurIPS) Advances in neural information processing systems 34, 4065-4078, 2021
54
2021
Near-optimal offline reinforcement learning with linear representation: Leveraging variance information with pessimism
M Yin, Y Duan, M Wang, YX Wang
(ICLR) International Conference on Learning Representations, 2022
45
2022
Optimal uniform OPE and model-based offline reinforcement learning in time-homogeneous, reward-free and task-agnostic settings
M Yin, YX Wang
(NeurIPS) Advances in Neural Information Processing Systems, 2021
20
2021
Sample-efficient reinforcement learning with loglog(T) switching cost
D Qiao, M Yin, M Min, YX Wang
(ICML) International Conference on Machine Learning, 18031-18061, 2022
15
2022
Offline reinforcement learning with differentiable function approximation is provably efficient
M Yin, M Wang, YX Wang
(ICLR) International Conference on Learning Representations, 2023
8
2023
On Instance-Dependent Bounds for Offline Reinforcement Learning with Linear Function Approximation
T Nguyen-Tang, M Yin, S Gupta, S Venkatesh, R Arora
(AAAI) AAAI Conference on Artificial Intelligence, 2023
6
2023
Offline Stochastic Shortest Path: Learning, Evaluation and Towards Optimality
M Yin, W Chen, M Wang, YX Wang
(UAI) The 38th Conference on Uncertainty in Artificial Intelligence, 2022
3
2022
TheoremQA: A Theorem-driven Question Answering dataset
W Chen, M Yin, M Ku, E Wan, X Ma, J Xu, T Xia, X Wang, P Lu
arXiv preprint arXiv:2305.12524, 2023
2
2023
Logarithmic Switching Cost in Reinforcement Learning beyond Linear MDPs
D Qiao, M Yin, YX Wang
arXiv preprint arXiv:2302.12456, 2023
2
2023
Non-stationary Reinforcement Learning under General Function Approximation
S Feng, M Yin, R Huang, YX Wang, J Yang, Y Liang
(ICML) International Conference on Machine Learning, 2023
1
2023
No-Regret Linear Bandits beyond Realizability
C Liu, M Yin, YX Wang
(UAI) The 39th Conference on Uncertainty in Artificial Intelligence, 2023
1
2023
Why Quantization Improves Generalization: NTK of Binary Weight Neural Networks
K Zhang, M Yin, YX Wang
arXiv preprint arXiv:2206.05916, 2022
1
2022
Model-Free Algorithm with Improved Sample Efficiency for Zero-Sum Markov Games
S Feng, M Yin, YX Wang, J Yang, Y Liang
arXiv preprint arXiv:2308.08858, 2023
2023
Offline Policy Evaluation for Reinforcement Learning with Adaptively Collected Data
S Madhow, D Xiao, M Yin, YX Wang
arXiv preprint arXiv:2306.14063, 2023
2023
Offline Reinforcement Learning with Closed-form Policy Improvement Operators
J Li, E Zhang, M Yin, Q Bai, YX Wang, WY Wang
(ICML) International Conference on Machine Learning, 2023
2023
Offline Stochastic Shortest Path: Learning, Evaluation and Towards Optimality (Supplementary material)
M Yin, W Chen, M Wang, YX Wang
(UAI) Supplementary material, 2022
2022
Articles 1–19