Courtney Paquette

According to our database, Courtney Paquette authored at least 15 papers between 2018 and 2024.

Bibliography

2024
4+3 Phases of Compute-Optimal Neural Scaling Laws.
CoRR, 2024

Mirror Descent Algorithms with Nearly Dimension-Independent Rates for Differentially-Private Stochastic Saddle-Point Problems.
CoRR, 2024

Implicit Diffusion: Efficient Optimization through Stochastic Sampling.
CoRR, 2024

Mirror Descent Algorithms with Nearly Dimension-Independent Rates for Differentially-Private Stochastic Saddle-Point Problems (extended abstract).
Proceedings of the Thirty-Seventh Annual Conference on Learning Theory, 2024

2023
Halting Time is Predictable for Large Models: A Universality Property and Average-Case Analysis.
Found. Comput. Math., April, 2023

Hitting the High-Dimensional Notes: An ODE for SGD learning dynamics on GLMs and multi-index models.
CoRR, 2023

2022
Implicit Regularization or Implicit Conditioning? Exact Risk Trajectories of SGD in High Dimensions.
Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022

Trajectory of Mini-Batch Momentum: Batch Size Saturation and Convergence in High Dimensions.
Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022

Only tails matter: Average-Case Universality and Robustness in the Convex Regime.
Proceedings of the International Conference on Machine Learning, 2022

2021
Dynamics of Stochastic Momentum Methods on Large-scale, Quadratic Models.
Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021

SGD in the Large: Average-case Analysis, Asymptotics, and Stepsize Criticality.
Proceedings of the Conference on Learning Theory, 2021

2020
A Stochastic Line Search Method with Expected Complexity Analysis.
SIAM J. Optim., 2020

2019
Efficiency of minimizing compositions of convex functions and smooth maps.
Math. Program., 2019

2018
Subgradient Methods for Sharp Weakly Convex Functions.
J. Optim. Theory Appl., 2018

Catalyst for Gradient-based Nonconvex Optimization.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2018
