Peter Richtárik
ORCID: 0000-0003-4380-5848
Affiliations:
- King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
- University of Edinburgh, UK (former)
- Moscow Institute of Physics and Technology (MIPT), Dolgoprudny, Russia (former)
- Cornell University, Ithaca, NY, USA (former, PhD 2007)
According to our database, Peter Richtárik authored at least 244 papers between 2010 and 2024.
Online presence:
- zbmath.org
- linkedin.com
- kaust.edu.sa
- twitter.com
- orcid.org
- csauthors.net
Bibliography
2024
SIAM J. Math. Data Sci., March, 2024
Trans. Mach. Learn. Res., 2024
Randomized Asymmetric Chain of LoRA: The First Meaningful Theoretical Framework for Low-Rank Adaptation.
CoRR, 2024
Methods for Convex $(L_0, L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity.
CoRR, 2024
Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning.
CoRR, 2024
Prune at the Clients, Not the Server: Accelerated Sparse Training in Federated Learning.
CoRR, 2024
SPAM: Stochastic Proximal Point Method with Momentum Variance Reduction for Non-convex Cross-Device Federated Learning.
CoRR, 2024
MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence.
CoRR, 2024
Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations.
CoRR, 2024
CoRR, 2024
FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models.
CoRR, 2024
Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction.
CoRR, 2024
LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression.
CoRR, 2024
Improving the Worst-Case Bidirectional Communication Complexity for Nonconvex Distributed Optimization under Function Similarity.
CoRR, 2024
Shadowheart SGD: Distributed Asynchronous SGD with Optimal Time Complexity Under Arbitrary Computation and Communication Heterogeneity.
CoRR, 2024
Proceedings of the Forty-first International Conference on Machine Learning, 2024
High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise.
Proceedings of the Forty-first International Conference on Machine Learning, 2024
FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity.
Proceedings of the Twelfth International Conference on Learning Representations, 2024
Proceedings of the Twelfth International Conference on Learning Representations, 2024
Det-CGD: Compressed Gradient Descent with Matrix Stepsizes for Non-Convex Optimization.
Proceedings of the Twelfth International Conference on Learning Representations, 2024
Understanding Progressive Training Through the Framework of Randomized Coordinate Descent.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2024
Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2024
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024
2023
Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization.
J. Optim. Theory Appl., November, 2023
Stochastic distributed learning with gradient quantization and double-variance reduction.
Optim. Methods Softw., January, 2023
Sharper Rates and Flexible Framework for Nonconvex SGD with Client and Data Sampling.
Trans. Mach. Learn. Res., 2023
Trans. Mach. Learn. Res., 2023
Trans. Mach. Learn. Res., 2023
Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation.
Trans. Mach. Learn. Res., 2023
Trans. Mach. Learn. Res., 2023
Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences.
CoRR, 2023
CoRR, 2023
Global-QSGD: Practical Floatless Quantization for Distributed Learning with Theoretical Guarantees.
CoRR, 2023
Explicit Personalization and Local Training: Double Communication Acceleration in Federated Learning.
CoRR, 2023
CoRR, 2023
TAMUNA: Accelerated Federated Learning with Local Training and Partial Participation.
CoRR, 2023
CoRR, 2023
Proceedings of the Uncertainty in Artificial Intelligence, 2023
A Computation and Communication Efficient Method for Distributed Nonconvex Problems in the Partial Participation Setting.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023
Optimal Time Complexities of Parallel Stochastic Optimization Methods Under a Fixed Computation Model.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023
2Direction: Theoretically Faster Distributed Training with Bidirectional Communication Compression.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023
High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance.
Proceedings of the International Conference on Machine Learning, 2023
EF21-P and Friends: Improved Theoretical Communication Complexity for Distributed Optimization with Bidirectional Compression.
Proceedings of the International Conference on Machine Learning, 2023
DASHA: Distributed Nonconvex Optimization with Communication Compression and Optimal Oracle Complexity.
Proceedings of the Eleventh International Conference on Learning Representations, 2023
Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top.
Proceedings of the Eleventh International Conference on Learning Representations, 2023
Proceedings of the Eleventh International Conference on Learning Representations, 2023
Proceedings of the 4th International Workshop on Distributed Machine Learning, 2023
Server-Side Stepsizes and Sampling Without Replacement Provably Help in Federated Optimization.
Proceedings of the 4th International Workshop on Distributed Machine Learning, 2023
Proceedings of the 4th International Workshop on Distributed Machine Learning, 2023
Convergence of Stein Variational Gradient Descent under a Weaker Smoothness Condition.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023
Catalyst Acceleration of Error Compensated Methods Leads to Better Communication Complexity.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023
2022
Trans. Mach. Learn. Res., 2022
SIAM J. Math. Data Sci., 2022
Optim. Methods Softw., 2022
J. Optim. Theory Appl., 2022
CoRR, 2022
GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity.
CoRR, 2022
Provably Doubly Accelerated Federated Learning: The First Theoretically Successful Combination of Local Training and Compressed Communication.
CoRR, 2022
Communication Acceleration of Local Gradient Methods via an Accelerated Primal-Dual Algorithm with Inexact Prox.
CoRR, 2022
A Note on the Convergence of Mirrored Stein Variational Gradient Descent under $(L_0, L_1)$-Smoothness Condition.
CoRR, 2022
CoRR, 2022
DASHA: Distributed Nonconvex Optimization with Communication Compression, Optimal Oracle Complexity, and No Client Synchronization.
CoRR, 2022
Proceedings of the Uncertainty in Artificial Intelligence, 2022
BEER: Fast $O(1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
Theoretically Better and Numerically Faster Distributed Optimization with Smoothness-Aware Quantization Techniques.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
Communication Acceleration of Local Gradient Methods via an Accelerated Primal-Dual Algorithm with an Inexact Prox.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
Accelerated Primal-Dual Gradient Method for Smooth and Convex-Concave Saddle-Point Problems with Bilinear Coupling.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
A Damped Newton Method Achieves Global $\mathcal O \left(\frac{1}{k^2}\right)$ and Local Quadratic Convergence Rate.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
Proceedings of the Mathematical and Scientific Machine Learning, 2022
Proceedings of the Mathematical and Scientific Machine Learning, 2022
A Convergence Theory for SVGD in the Population Limit under Talagrand's Inequality T1.
Proceedings of the International Conference on Machine Learning, 2022
Proceedings of the International Conference on Machine Learning, 2022
3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation.
Proceedings of the International Conference on Machine Learning, 2022
ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally!
Proceedings of the International Conference on Machine Learning, 2022
Proceedings of the International Conference on Machine Learning, 2022
Proceedings of the Tenth International Conference on Learning Representations, 2022
Proceedings of the Tenth International Conference on Learning Representations, 2022
Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information.
Proceedings of the Tenth International Conference on Learning Representations, 2022
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022
Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022
FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022
2021
Revisiting Randomized Gossip Algorithms: General Framework, Convergence Rates and Novel Block and Accelerated Protocols.
IEEE Trans. Inf. Theory, 2021
Math. Program., 2021
EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback.
CoRR, 2021
FedPAGE: A Fast Local Stochastic Gradient Method for Communication-Efficient Federated Learning.
CoRR, 2021
Complexity Analysis of Stein Variational Gradient Descent Under Talagrand's Inequality T1.
CoRR, 2021
ZeroSARAH: Efficient Nonconvex Finite-Sum Optimization with Zero Full Gradient Computation.
CoRR, 2021
Accelerated Bregman proximal gradient methods for relatively smooth convex optimization.
Comput. Optim. Appl., 2021
Proceedings of the 18th USENIX Symposium on Networked Systems Design and Implementation, 2021
Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021
CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021
Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021
Proceedings of the 38th International Conference on Machine Learning, 2021
PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization.
Proceedings of the 38th International Conference on Machine Learning, 2021
Proceedings of the 38th International Conference on Machine Learning, 2021
Proceedings of the 38th International Conference on Machine Learning, 2021
Proceedings of the 38th International Conference on Machine Learning, 2021
A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning.
Proceedings of the 9th International Conference on Learning Representations, 2021
Proceedings of the DistributedML '21: Proceedings of the 2nd ACM International Workshop on Distributed Machine Learning, 2021
A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free!
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021
2020
Best Pair Formulation & Accelerated Scheme for Non-Convex Principal Component Pursuit.
IEEE Trans. Signal Process., 2020
SIAM J. Sci. Comput., 2020
SIAM J. Matrix Anal. Appl., 2020
SIAM J. Optim., 2020
A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization.
CoRR, 2020
On the Convergence Analysis of Asynchronous SGD for Solving Consistent Linear Systems.
CoRR, 2020
Uncertainty Principle for Communication Compression in Distributed and Federated Learning and the Search for an Optimal Compressor.
CoRR, 2020
Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods.
Comput. Optim. Appl., 2020
Proceedings of the Thirty-Sixth Conference on Uncertainty in Artificial Intelligence, 2020
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020
Optimal and Practical Algorithms for Smooth and Strongly Convex Decentralized Optimization.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020
Proceedings of the 37th International Conference on Machine Learning, 2020
Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization.
Proceedings of the 37th International Conference on Machine Learning, 2020
Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems.
Proceedings of the 37th International Conference on Machine Learning, 2020
Proceedings of the 37th International Conference on Machine Learning, 2020
Proceedings of the 8th International Conference on Learning Representations, 2020
Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop.
Proceedings of the Algorithmic Learning Theory, 2020
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020
A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020
A Stochastic Derivative-Free Optimization Method with Importance Sampling: Theory and Learning to Control.
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020
2019
Randomized Projection Methods for Convex Feasibility: Conditioning and Convergence Rates.
SIAM J. Optim., 2019
J. Mach. Learn. Res., 2019
CoRR, 2019
One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods.
CoRR, 2019
CoRR, 2019
Proceedings of the IEEE Winter Conference on Applications of Computer Vision, 2019
Proceedings of the 24th International Symposium on Vision, Modeling, and Visualization, 2019
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019
Proceedings of the 36th International Conference on Machine Learning, 2019
Proceedings of the 36th International Conference on Machine Learning, 2019
Proceedings of the 36th International Conference on Machine Learning, 2019
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019
Accelerated Coordinate Descent with Arbitrary Sampling and Best Rates for Minibatches.
Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, 2019
Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, 2019
2018
Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications.
SIAM J. Optim., 2018
Frontiers Appl. Math. Stat., 2018
Proceedings of the Machine Learning and Knowledge Discovery in Databases, 2018
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018
Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018
Proceedings of the 35th International Conference on Machine Learning, 2018
Proceedings of the 35th International Conference on Machine Learning, 2018
Proceedings of the Algorithmic Learning Theory, 2018
Proceedings of the 56th Annual Allerton Conference on Communication, Control, and Computing, 2018
2017
SIAM J. Matrix Anal. Appl., 2017
Linearly convergent stochastic heavy ball method for minimizing generalization error.
CoRR, 2017
A Batch-Incremental Video Background Estimation Model Using Weighted Low-Rank Approximation of Matrices.
Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops, 2017
2016
Optimization in High Dimensions via Accelerated, Parallel, and Proximal Coordinate Descent.
SIAM Rev., 2016
Optim. Methods Softw., 2016
Optim. Methods Softw., 2016
Optim. Lett., 2016
IEEE J. Sel. Top. Signal Process., 2016
J. Optim. Theory Appl., 2016
J. Mach. Learn. Res., 2016
CoRR, 2016
Proceedings of the 33rd International Conference on Machine Learning, 2016
Proceedings of the 33rd International Conference on Machine Learning, 2016
Proceedings of the 33rd International Conference on Machine Learning, 2016
Proceedings of the 2016 IEEE Global Conference on Signal and Information Processing, 2016
2015
Optim. Methods Softw., 2015
CoRR, 2015
Proceedings of the Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, 2015
Proceedings of the 32nd International Conference on Machine Learning, 2015
Proceedings of the 32nd International Conference on Machine Learning, 2015
2014
Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function.
Math. Program., 2014
CoRR, 2014
Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing, 2014
2013
CoRR, 2013
Proceedings of the 30th International Conference on Machine Learning, 2013
2012
J. Optim. Theory Appl., 2012
Alternating Maximization: Unifying Framework for 8 Sparse PCA Formulations and Efficient Parallel Codes.
CoRR, 2012
Optimal diagnostic tests for sporadic Creutzfeldt-Jakob disease based on support vector machine classification of RT-QuIC data.
CoRR, 2012
2011
Efficient Serial and Parallel Coordinate Descent Methods for Huge-Scale Truss Topology Design.
Operations Research Proceedings 2011: Selected Papers of the International Conference on Operations Research (OR 2011), 2011
2010
J. Mach. Learn. Res., 2010