Xiao Wang
Title
Cited by
Year
First-order methods almost always avoid saddle points: The case of vanishing step-sizes
I Panageas, G Piliouras, X Wang
Advances in Neural Information Processing Systems, 6474-6483, 2019
Cited by 10 · 2019
Last iterate convergence in no-regret learning: constrained min-max optimization for convex-concave landscapes
Q Lei, SG Nagarajan, I Panageas, X Wang
arXiv preprint arXiv:2002.06768, 2020
Cited by 6 · 2020
Depth-Width Trade-offs for ReLU Networks via Sharkovsky's Theorem
V Chatziafratis, SG Nagarajan, I Panageas, X Wang
arXiv preprint arXiv:1912.04378, 2019
Cited by 6 · 2019
Multiplicative weights updates as a distributed constrained optimization algorithm: Convergence to second-order stationary points almost always
I Panageas, G Piliouras, X Wang
International Conference on Machine Learning, 4961-4969, 2019
Cited by 3 · 2019
Convergence to Second-Order Stationarity for Non-negative Matrix Factorization: Provably and Concurrently
I Panageas, S Skoulakis, A Varvitsiotis, X Wang
Articles 1–6