Peter Richtarik
Professor, KAUST
Verified email at kaust.edu.sa - Homepage
Title · Cited by · Year
Federated learning: Strategies for improving communication efficiency
J Konečný, HB McMahan, FX Yu, P Richtárik, AT Suresh, D Bacon
arXiv preprint arXiv:1610.05492, 2016
Cited by 3694 · 2016
Federated optimization: Distributed machine learning for on-device intelligence
J Konečný, HB McMahan, D Ramage, P Richtárik
arXiv preprint arXiv:1610.02527, 2016
Cited by 1532 · 2016
Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
P Richtarik, M Takáč
Mathematical Programming 144 (2), 1-38, 2014
Cited by 809 · 2014
Generalized power method for sparse principal component analysis
M Journee, Y Nesterov, P Richtárik, R Sepulchre
Journal of Machine Learning Research 11, 517-553, 2010
Cited by 672 · 2010
Parallel coordinate descent methods for big data optimization
P Richtárik, M Takáč
Mathematical Programming 156 (1), 433-484, 2016
Cited by 521 · 2016
Accelerated, parallel and proximal coordinate descent
O Fercoq, P Richtárik
SIAM Journal on Optimization 25 (4), 1997-2023, 2015
Cited by 389 · 2015
Mini-batch semi-stochastic gradient descent in the proximal setting
J Konečný, J Liu, P Richtárik, M Takáč
IEEE Journal of Selected Topics in Signal Processing 10 (2), 242-255, 2016
Cited by 303 · 2016
SGD: General Analysis and Improved Rates
RM Gower, N Loizou, X Qian, A Sailanbayev, E Shulgin, P Richtarik
ICML 2019, 2019
Cited by 298 · 2019
Tighter theory for local SGD on identical and heterogeneous data
A Khaled, K Mishchenko, P Richtárik
The 23rd International Conference on Artificial Intelligence and Statistics, 2020
Cited by 292 · 2020
Randomized iterative methods for linear systems
RM Gower, P Richtárik
SIAM Journal on Matrix Analysis and Applications 36 (4), 1660-1690, 2015
Cited by 265 · 2015
Federated learning of a mixture of global and local models
F Hanzely, P Richtárik
arXiv preprint arXiv:2002.05516, 2020
Cited by 260 · 2020
Semi-stochastic gradient descent methods
J Konečný, P Richtárik
Frontiers in Applied Mathematics and Statistics 3:9, 2017
Cited by 248* · 2017
Distributed coordinate descent method for learning with big data
P Richtárik, M Takáč
Journal of Machine Learning Research 17 (75), 1-25, 2016
Cited by 245 · 2016
Scaling distributed machine learning with in-network aggregation
A Sapio, M Canini, CY Ho, J Nelson, P Kalnis, C Kim, A Krishnamurthy, ...
arXiv preprint arXiv:1903.06701, 2019
Cited by 243 · 2019
Mini-batch primal and dual methods for SVMs
M Takáč, A Bijral, P Richtárik, N Srebro
Proceedings of the 30th Int. Conf. on Machine Learning, PMLR 28 (3), 1022-1030, 2013
Cited by 204* · 2013
Adding vs. averaging in distributed primal-dual optimization
C Ma, V Smith, M Jaggi, MI Jordan, P Richtárik, M Takáč
Proceedings of the 32nd Int. Conf. on Machine Learning, PMLR 37, 1973-1982, 2015
Cited by 190 · 2015
Even faster accelerated coordinate descent using non-uniform sampling
Z Allen-Zhu, Z Qu, P Richtarik, Y Yuan
Proceedings of The 33rd Int. Conf. on Machine Learning, PMLR 48, 1110-1119, 2016
Cited by 187 · 2016
A field guide to federated optimization
J Wang, Z Charles, Z Xu, G Joshi, HB McMahan, M Al-Shedivat, G Andrew, ...
arXiv preprint arXiv:2107.06917, 2021
Cited by 186 · 2021
Distributed optimization with arbitrary local solvers
C Ma, J Konečný, M Jaggi, V Smith, MI Jordan, P Richtárik, M Takáč
Optimization Methods and Software 32 (4), 813-848, 2017
Cited by 184 · 2017
SGD and Hogwild! Convergence Without the Bounded Gradients Assumption
LM Nguyen, PH Nguyen, M van Dijk, P Richtárik, K Scheinberg, M Takáč
Proceedings of the 35th Int. Conf. on Machine Learning, PMLR 80, 3750-3758, 2018
Cited by 183 · 2018
Articles 1–20