Dmitriy Drusvyatskiy
Orcid: 0000-0001-5245-0458
According to our database, Dmitriy Drusvyatskiy authored at least 60 papers between 2011 and 2024.
Bibliography
2024
Stochastic algorithms with geometric step decay converge linearly on sharp functions.
Math. Program., September, 2024
Stochastic Approximation with Decision-Dependent Distributions: Asymptotic Normality and Optimality.
J. Mach. Learn. Res., 2024
Gradient descent with adaptive stepsize converges (nearly) linearly under fourth-order growth.
CoRR, 2024
2023
Math. Oper. Res., May, 2023
J. Mach. Learn. Res., 2023
Aiming towards the minimizers: fast convergence of SGD for overparametrized problems.
CoRR, 2023
Aiming towards the minimizers: fast convergence of SGD for overparametrized problems.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023
2022
SIAM J. Optim., September, 2022
Math. Oper. Res., 2022
Found. Comput. Math., 2022
A gradient sampling method with complexity guarantees for Lipschitz functions in high and low dimensions.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
Improved Rates for Derivative Free Gradient Play in Strongly Monotone Games.
Proceedings of the 61st IEEE Conference on Decision and Control, 2022
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022
2021
Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria.
Math. Program., 2021
J. Mach. Learn. Res., 2021
Low-Rank Matrix Recovery with Composite Optimization: Good Conditioning and Rapid Convergence.
Found. Comput. Math., 2021
Subgradient methods near active manifolds: saddle point avoidance, local convergence, and asymptotic normality.
CoRR, 2021
Stochastic optimization under time drift: iterate averaging, step decay, and high probability guarantees.
CoRR, 2021
Stochastic optimization under time drift: iterate averaging, step-decay schedules, and high probability guarantees.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021
2020
Found. Comput. Math., 2020
Proceedings of the Conference on Learning Theory, 2020
2019
Math. Program., 2019
Proceedings of the 36th International Conference on Machine Learning, 2019
2018
IEEE Trans. Autom. Control., 2018
SIAM J. Optim., 2018
Math. Oper. Res., 2018
Uniform Graphical Convergence of Subgradients in Nonconvex Optimization and Learning.
CoRR, 2018
Stochastic subgradient method converges at the rate O(k^{-1/4}) on weakly convex functions.
CoRR, 2018
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2018
2017
Noisy Euclidean Distance Realization: Robust Facial Reduction and the Pareto Frontier.
SIAM J. Optim., 2017
Math. Program., 2017
2016
2015
SIAM J. Matrix Anal. Appl., 2015
SIAM J. Optim., 2015
Math. Program., 2015
Math. Program., 2015
Math. Oper. Res., 2015
Found. Comput. Math., 2015
2014
2013
Tilt Stability, Uniform Quadratic Growth, and Strong Metric Regularity of the Subdifferential.
SIAM J. Optim., 2013
2011