Haihao Lu

ORCID: 0000-0002-5217-1894

According to our database, Haihao Lu authored at least 37 papers between 2017 and 2024.

Bibliography

2024
Infeasibility Detection with Primal-Dual Hybrid Gradient for Large-Scale Linear Programming.
SIAM J. Optim., March, 2024

A J-symmetric quasi-Newton method for minimax problems.
Math. Program., March, 2024

On the Linear Convergence of Extragradient Methods for Nonconvex-Nonconcave Minimax Problems.
INFORMS J. Optim., January, 2024

Auto-bidding and Auctions in Online Advertising: A Survey.
CoRR, 2024

A Field Guide for Pacing Budget and ROS Constraints.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

2023
The Best of Many Worlds: Dual Mirror Descent for Online Allocation Problems.
Oper. Res., January, 2023

The landscape of the proximal point method for nonconvex-nonconcave minimax optimization.
Math. Program., 2023

Faster first-order primal-dual methods for linear programming using restarts and sharpness.
Math. Program., 2023

Joint Feedback Loop for Spend and Return-On-Spend Constraints.
CoRR, 2023

Online Ad Procurement in Non-stationary Autobidding Worlds.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

2022
Frank-Wolfe Methods with an Unbounded Feasible Region and Applications to Structured Learning.
SIAM J. Optim., December, 2022

An O(s<sup>r</sup>)-resolution ODE framework for understanding discrete-time algorithms and applications to the linear convergence of minimax problems.
Math. Program., 2022

From Online Optimization to PID Controllers: Mirror Descent with Momentum.
CoRR, 2022

A J-Symmetric Quasi-Newton Method for Minimax Problems.
CoRR, 2022

Limiting Behaviors of Nonconvex-Nonconcave Minimax Optimization via Continuous-Time Systems.
Proceedings of the International Conference on Algorithmic Learning Theory, 2022

2021
Generalized stochastic Frank-Wolfe algorithm with stochastic "substitute" gradient for structured convex optimization.
Math. Program., 2021

Linear Convergence of Stochastic Primal Dual Methods for Linear Programming Using Variance Reduction and Restarts.
CoRR, 2021

Practical Large-Scale Linear Programming using Primal-Dual Hybrid Gradient.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Regularized Online Allocation Problems: Fairness and Beyond.
Proceedings of the 38th International Conference on Machine Learning, 2021

2020
Randomized Gradient Boosting Machine.
SIAM J. Optim., 2020

The Landscape of Nonconvex-Nonconcave Minimax Optimization.
CoRR, 2020

Contextual Reserve Price Optimization in Auctions.
CoRR, 2020

An O(s<sup>r</sup>)-Resolution ODE Framework for Discrete-Time Optimization Algorithms and Applications to Convex-Concave Saddle-Point Problems.
CoRR, 2020

Contextual Reserve Price Optimization in Auctions via Mixed Integer Programming.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Dual Mirror Descent for Online Allocation Problems.
Proceedings of the 37th International Conference on Machine Learning, 2020

Accelerating Gradient Boosting Machines.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

Ordered SGD: A New Stochastic Optimization Framework for Empirical Risk Minimization.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

2019
"Relative Continuity" for Non-Lipschitz Nonsmooth Convex Optimization Using Stochastic (or Deterministic) Mirror Descent.
INFORMS J. Optim., October, 2019

A Stochastic First-Order Method for Ordered Empirical Risk Minimization.
CoRR, 2019

Accelerating Gradient Boosting Machine.
CoRR, 2019

2018
Relatively Smooth Convex Optimization by First-Order Methods, and Applications.
SIAM J. Optim., 2018

New computational guarantees for solving convex optimization problems with first order methods, via a function growth condition measure.
Math. Program., 2018

Near-Optimal Online Knapsack Strategy for Real-Time Bidding in Internet Advertising.
CoRR, 2018

Approximate Leave-One-Out for High-Dimensional Non-Differentiable Learning Problems.
CoRR, 2018

Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions.
Proceedings of the 35th International Conference on Machine Learning, 2018

Accelerating Greedy Coordinate Descent Methods.
Proceedings of the 35th International Conference on Machine Learning, 2018

2017
Depth Creates No Bad Local Minima.
CoRR, 2017

