Brian Bullins

According to our database, Brian Bullins authored at least 30 papers between 2016 and 2024.

Bibliography

2024
Convex optimization with p-norm oracles.
CoRR, 2024

Faster Acceleration for Steepest Descent.
CoRR, 2024

Local Composite Saddle Point Optimization.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

2023
Federated Composite Saddle Point Optimization.
CoRR, 2023

Beyond first-order methods for non-convex non-concave min-max optimization.
CoRR, 2023

Competitive Gradient Optimization.
Proceedings of the International Conference on Machine Learning, 2023

Variance-Reduced Conservative Policy Iteration.
Proceedings of the International Conference on Algorithmic Learning Theory, 2023

2022
Higher-Order Methods for Convex-Concave Min-Max Optimization and Monotone Variational Inequalities.
SIAM J. Optim., September 2022

Optimal Methods for Higher-Order Smooth Monotone Variational Inequalities.
CoRR, 2022

Towards Optimal Communication Complexity in Distributed Non-Convex Optimization.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication (Extended Abstract).
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, 2022

2021
Adaptive regularization with cubics on manifolds.
Math. Program., 2021

A Stochastic Newton Algorithm for Distributed Convex Optimization.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Unifying Width-Reduced Methods for Quasi-Self-Concordant Optimization.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Almost-Linear-Time Weighted 𝓁_p-Norm Solvers in Slightly Dense Graphs via Sparsification.
Proceedings of the 48th International Colloquium on Automata, Languages, and Programming, 2021

The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication.
Proceedings of the Conference on Learning Theory, 2021

2020
Is Local SGD Better than Minibatch SGD?
Proceedings of the 37th International Conference on Machine Learning, 2020

Highly smooth minimization of non-smooth problems.
Proceedings of the Conference on Learning Theory, 2020

2019
Efficient Higher-Order Optimization for Machine Learning.
PhD thesis, 2019

Higher-Order Accelerated Methods for Faster Non-Smooth Optimization.
CoRR, 2019

Online Control with Adversarial Disturbances.
Proceedings of the 36th International Conference on Machine Learning, 2019

Efficient Full-Matrix Adaptive Regularization.
Proceedings of the 36th International Conference on Machine Learning, 2019

Generalize Across Tasks: Efficient Algorithms for Linear Representation Learning.
Proceedings of the International Conference on Algorithmic Learning Theory, 2019

2018
The Case for Full-Matrix Adaptive Regularization.
CoRR, 2018

Not-So-Random Features.
Proceedings of the 6th International Conference on Learning Representations, 2018

2017
Second-Order Stochastic Optimization for Machine Learning in Linear Time.
J. Mach. Learn. Res., 2017

Finding approximate local minima faster than gradient descent.
Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, 2017

2016
Finding Approximate Local Minima for Nonconvex Optimization in Linear Time.
CoRR, 2016

Second Order Stochastic Optimization in Linear Time.
CoRR, 2016

The Limits of Learning with Missing Data.
Proceedings of the Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, 2016
