Grigory Malinovsky

ORCID: 0000-0001-6428-1866

According to our database, Grigory Malinovsky authored at least 21 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Randomized Asymmetric Chain of LoRA: The First Meaningful Theoretical Framework for Low-Rank Adaptation.
CoRR, 2024

MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence.
CoRR, 2024

Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction.
CoRR, 2024

Minibatch Stochastic Three Points Method for Unconstrained Smooth Minimization.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
MAST: Model-Agnostic Sparsified Training.
CoRR, 2023

Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences.
CoRR, 2023

Improving Accelerated Federated Learning with Compression and Importance Sampling.
CoRR, 2023

TAMUNA: Accelerated Federated Learning with Local Training and Partial Participation.
CoRR, 2023

Federated Learning with Regularized Client Participation.
CoRR, 2023

Random Reshuffling with Variance Reduction: New Analysis and Better Rates.
Proceedings of the Uncertainty in Artificial Intelligence, 2023

A Guide Through the Zoo of Biased SGD.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Server-Side Stepsizes and Sampling Without Replacement Provably Help in Federated Optimization.
Proceedings of the 4th International Workshop on Distributed Machine Learning, 2023

Can 5th Generation Local Training Methods Support Client Sampling? Yes!
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023

2022
An Optimal Algorithm for Strongly Convex Min-min Optimization.
CoRR, 2022

Can 5th Generation Local Training Methods Support Client Sampling? Yes!
CoRR, 2022

Federated Optimization Algorithms with Random Reshuffling and Gradient Compression.
CoRR, 2022

Federated Random Reshuffling with Compression and Variance Reduction.
CoRR, 2022

Variance Reduced ProxSkip: Algorithm, Theory and Application to Federated Learning.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally!
Proceedings of the International Conference on Machine Learning, 2022

2020
Distributed Proximal Splitting Algorithms with Rates and Acceleration.
CoRR, 2020

From Local SGD to Local Fixed-Point Methods for Federated Learning.
Proceedings of the 37th International Conference on Machine Learning, 2020
