First-order methods almost always avoid saddle points: The case of vanishing step-sizes. I Panageas, G Piliouras, X Wang. Advances in Neural Information Processing Systems, 6474-6483, 2019. Cited by 10.
Last iterate convergence in no-regret learning: constrained min-max optimization for convex-concave landscapes. Q Lei, SG Nagarajan, I Panageas, X Wang. arXiv preprint arXiv:2002.06768, 2020. Cited by 6.
Depth-Width Trade-offs for ReLU Networks via Sharkovsky's Theorem. V Chatziafratis, SG Nagarajan, I Panageas, X Wang. arXiv preprint arXiv:1912.04378, 2019. Cited by 6.
Multiplicative weights updates as a distributed constrained optimization algorithm: Convergence to second-order stationary points almost always. I Panageas, G Piliouras, X Wang. International Conference on Machine Learning, 4961-4969, 2019. Cited by 3.
Convergence to Second-Order Stationarity for Non-negative Matrix Factorization: Provably and Concurrently. I Panageas, S Skoulakis, A Varvitsiotis, X Wang.