Eduard Gorbunov
ORCID: 0000-0002-3370-4130
Affiliations:
- Moscow Institute of Physics and Technology (MIPT), Russia (PhD)
According to our database, Eduard Gorbunov authored at least 55 papers between 2018 and 2024.
Online presence:
- on twitter.com
- on orcid.org (0000-0002-3370-4130)
- on github.com
- on csauthors.net
Bibliography
2024
High-Probability Complexity Bounds for Non-smooth Stochastic Convex Optimization with Heavy-Tailed Noise.
J. Optim. Theory Appl., December 2024
Implicitly normalized forecaster with clipping for linear and non-linear heavy-tailed multi-armed bandits.
Comput. Manag. Sci., June 2024
Methods with Local Steps and Random Reshuffling for Generally Smooth Non-Convex Federated Optimization.
CoRR, 2024
Initialization using Update Approximation is a Silver Bullet for Extremely Efficient Low-Rank Fine-Tuning.
CoRR, 2024
Error Feedback under (L₀,L₁)-Smoothness: Normalization and Momentum.
CoRR, 2024
Methods for Convex (L₀,L₁)-Smooth Optimization: Clipping, Acceleration, and Adaptivity.
CoRR, 2024
High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise.
Proceedings of the Forty-first International Conference on Machine Learning, 2024
Low-Resource Machine Translation through the Lens of Personalized Federated Learning.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2024, 2024
Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2024
2023
Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences.
CoRR, 2023
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023
Accelerated Zeroth-order Method for Non-Smooth Stochastic Convex Optimization Problem with Infinite Variance.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023
Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023
High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance.
Proceedings of the International Conference on Machine Learning, 2023
Convergence of Proximal Point and Extragradient-Based Methods Beyond Monotonicity: the Case of Negative Comonotonicity.
Proceedings of the International Conference on Machine Learning, 2023
Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top.
Proceedings of the Eleventh International Conference on Learning Representations, 2023
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023
2022
SIAM J. Optim., 2022
Smooth Monotone Stochastic Variational Inequalities and Saddle Point Problems - Survey.
CoRR, 2022
Last-Iterate Convergence of Optimistic Gradient Method for Monotone Variational Inequalities.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation.
Proceedings of the International Conference on Machine Learning, 2022
Extragradient Method: O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities and Connections With Cocoercivity.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022
2021
An accelerated directional derivative method for smooth stochastic convex optimization.
Eur. J. Oper. Res., 2021
Distributed and Stochastic Optimization Methods with Gradient Compression and Local Steps.
CoRR, 2021
EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback.
CoRR, 2021
Near-Optimal High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise.
CoRR, 2021
Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021
Proceedings of the 38th International Conference on Machine Learning, 2021
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021
2020
SIAM J. Optim., 2020
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020
Proceedings of the 8th International Conference on Learning Representations, 2020
A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020
2019
Accelerated Gradient-Free Optimization Methods with a Non-Euclidean Proximal Operator.
Autom. Remote. Control., 2019
Proceedings of the Conference on Learning Theory, 2019
Near Optimal Methods for Minimizing Convex Functions with Lipschitz p-th Derivatives.
Proceedings of the Conference on Learning Theory, 2019
On Primal and Dual Approaches for Distributed Stochastic Convex Optimization over Networks.
Proceedings of the 58th IEEE Conference on Decision and Control, 2019
2018
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018