Robert M. Gower

ORCID: 0000-0002-2268-9780

Affiliations:
  • Simons Foundation, Flatiron Institute, New York, NY, USA
  • Télécom Paris, Institut Polytechnique de Paris, France
  • University of Edinburgh, School of Mathematics, Edinburgh, UK (PhD 2016)
  • State University of Campinas, Department of Applied Mathematics, Campinas, Brazil


According to our database, Robert M. Gower authored at least 53 papers between 2012 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Correction: A Bregman-Kaczmarz method for nonlinear systems of equations.
Comput. Optim. Appl., July, 2024

A Bregman-Kaczmarz method for nonlinear systems of equations.
Comput. Optim. Appl., April, 2024

Enhancing Policy Gradient with the Polyak Step-Size Adaption.
CoRR, 2024

Directional Smoothness and Gradient Methods: Convergence and Adaptivity.
CoRR, 2024

Level Set Teleportation: An Optimization Perspective.
CoRR, 2024

SGD with Clipping is Secretly Estimating the Median Gradient.
CoRR, 2024

MoMo: Momentum Models for Adaptive Learning Rates.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Batch and match: black-box variational inference with a score-based divergence.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Improving Convergence and Generalization Using Parameter Symmetries.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

2023
Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization.
J. Optim. Theory Appl., November, 2023

A Stochastic Proximal Polyak Step Size.
Trans. Mach. Learn. Res., 2023

SANIA: Polyak-type Optimization Framework Leads to Scale Invariant Stochastic Algorithms.
CoRR, 2023

Function Value Learning: Adaptive Learning Rates Based on the Polyak Stepsize and Function Splitting in ERM.
CoRR, 2023

Variational Inference with Gaussian Score Matching.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Provable convergence guarantees for black-box variational inference.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

A Model-Based Method for Minimizing CVaR and Beyond.
Proceedings of the International Conference on Machine Learning, 2023

Linear Convergence of Natural Policy Gradient Methods with Log-Linear Policies.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

SP2: A Second Order Stochastic Polyak Method.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

2022
RidgeSketch: A Fast Sketching Based Solver for Large Scale Ridge Regression.
SIAM J. Matrix Anal. Appl., September, 2022

A Statistical Linear Precoding Scheme Based on Random Iterative Method for Massive MIMO Systems.
IEEE Trans. Wirel. Commun., 2022

Randomized Iterative Methods for Low-Complexity Large-Scale MIMO Detection.
IEEE Trans. Signal Process., 2022

Sketched Newton-Raphson.
SIAM J. Optim., 2022

Cutting Some Slack for SGD with Adaptive Polyak Stepsizes.
CoRR, 2022

A general sample complexity analysis of vanilla policy gradient.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022

SAN: Stochastic Average Newton Algorithm for Minimizing Finite Sums.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022

2021
On Adaptive Sketch-and-Project for Solving Linear Systems.
SIAM J. Matrix Anal. Appl., 2021

Stochastic quasi-gradient methods: variance reduction via Jacobian sketching.
Math. Program., 2021

Stochastic Polyak Stepsize with a Moving Target.
CoRR, 2021

Almost sure convergence rates for Stochastic Gradient Descent and Stochastic Heavy Ball.
Proceedings of the Conference on Learning Theory, 2021

SGD for Structured Nonconvex Functions: Learning Rates, Minibatching and Interpolation.
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021

The Power of Factorial Powers: New Parameter settings for (Stochastic) Optimization.
Proceedings of the Asian Conference on Machine Learning, 2021

2020
Variance-Reduced Methods for Machine Learning.
Proc. IEEE, 2020

On the convergence of the Stochastic Heavy Ball Method.
CoRR, 2020

Factorial Powers for Stochastic Optimization.
CoRR, 2020

Fast Linear Convergence of Randomized BFGS.
CoRR, 2020

2019
Adaptive Sketch-and-Project Methods for Solving Linear Systems.
CoRR, 2019

SGD: General Analysis and Improved Rates.
CoRR, 2019

Towards closing the gap between the theory and practice of SVRG.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

RSN: Randomized Subspace Newton.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

SGD with Arbitrary Sampling: General Analysis and Improved Rates.
Proceedings of the 36th International Conference on Machine Learning, 2019

Optimal Mini-Batch and Step Sizes for SAGA.
Proceedings of the 36th International Conference on Machine Learning, 2019

2018
Greedy stochastic algorithms for entropy-regularized optimal transport problems.
CoRR, 2018

Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

Tracking the gradients using the Hessian: A new look at variance reducing stochastic methods.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2018

Stochastic algorithms for entropy-regularized optimal transport problems.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2018

2017
Randomized Quasi-Newton Updates Are Linearly Convergent Matrix Inversion Algorithms.
SIAM J. Matrix Anal. Appl., 2017

2016
Higher-order reverse automatic differentiation with emphasis on the third-order.
Math. Program., 2016

Stochastic Block BFGS: Squeezing More Curvature out of Data.
Proceedings of the 33rd International Conference on Machine Learning, 2016

2015
Randomized Iterative Methods for Linear Systems.
SIAM J. Matrix Anal. Appl., 2015

Stochastic Dual Ascent for Solving Linear Systems.
CoRR, 2015

2014
Computing the sparsity pattern of Hessians using automatic differentiation.
ACM Trans. Math. Softw., 2014

Action constrained quasi-Newton methods.
CoRR, 2014

2012
A new framework for the computation of Hessians.
Optim. Methods Softw., 2012
