Xiao Wang
Title
Cited by
Year
First-order methods almost always avoid saddle points: The case of vanishing step-sizes
I Panageas, G Piliouras, X Wang
arXiv preprint arXiv:1906.07772, 2019
12 2019
Depth-Width Trade-offs for ReLU Networks via Sharkovsky's Theorem
V Chatziafratis, SG Nagarajan, I Panageas, X Wang
arXiv preprint arXiv:1912.04378, 2019
9 2019
Last iterate convergence in no-regret learning: constrained min-max optimization for convex-concave landscapes
Q Lei, SG Nagarajan, I Panageas
International Conference on Artificial Intelligence and Statistics, 1441-1449, 2021
7 2021
Multiplicative weights updates as a distributed constrained optimization algorithm: Convergence to second-order stationary points almost always
I Panageas, G Piliouras, X Wang
International Conference on Machine Learning, 4961-4969, 2019
7 2019
Convergence to Second-Order Stationarity for Non-negative Matrix Factorization: Provably and Concurrently
I Panageas, S Skoulakis, A Varvitsiotis, X Wang
Articles 1–6