Qihang Lin

ORCID: 0000-0003-2943-3267

According to our database, Qihang Lin authored at least 74 papers between 2010 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
FedPAE: Peer-Adaptive Ensemble Learning for Asynchronous and Model-Heterogeneous Federated Learning.
CoRR, 2024

Model Developmental Safety: A Safety-Centric Method and Applications in Vision-Language Models.
CoRR, 2024

Multi-Output Distributional Fairness via Post-Processing.
CoRR, 2024

Provable Optimization for Adversarial Fair Self-supervised Contrastive Learning.
CoRR, 2024

2023
Reducing the Complexity of Two Classes of Optimization Problems by Inexact Accelerated Proximal Gradient Method.
SIAM J. Optim., March, 2023

First-order Methods for Affinely Constrained Composite Non-convex Non-smooth Problems: Lower Complexity Bound and Near-optimal Methods.
CoRR, 2023

Single-Loop Switching Subgradient Methods for Non-Smooth Weakly Convex Optimization with Non-Smooth Convex Constraints.
CoRR, 2023

Oracle Complexity of Single-Loop Switching Subgradient Methods for Non-Smooth Weakly Convex Functional Constrained Optimization.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Modulating functionally-distinct vagus nerve fibers using microelectrodes and kilohertz frequency electrical stimulation.
Proceedings of the 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society, 2023

Stochastic Methods for AUC Optimization subject to AUC-based Fairness Constraints.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023

2022
Distributionally Robust Optimization with Confidence Bands for Probability Density Functions.
INFORMS J. Optim., January, 2022

Weakly-convex-concave min-max optimization: provable algorithms and applications in machine learning.
Optim. Methods Softw., 2022

Federated Learning on Adaptively Weighted Nodes by Bilevel Optimization.
CoRR, 2022

Inexact accelerated proximal gradient method with line search and reduced complexity for affine-constrained and bilinear saddle-point structured convex problems.
CoRR, 2022

Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization.
Comput. Optim. Appl., 2022

Large-scale Optimization of Partial AUC in a Range of False Positive Rates.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

ProtoX: Explaining a Reinforcement Learning Agent via Prototyping.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

2021
Hybrid Predictive Models: When an Interpretable Model Collaborates with a Black-box Model.
J. Mach. Learn. Res., 2021

First-order Convergence Theory for Weakly-Convex-Weakly-Concave Min-max Problems.
J. Mach. Learn. Res., 2021

2020
Comparison-Based Algorithms for One-Dimensional Stochastic Convex Optimization.
INFORMS J. Optim., January, 2020

High-dimensional model recovery from random sketched data by exploring intrinsic sparsity.
Mach. Learn., 2020

Revisiting Approximate Linear Programming: Constraint-Violation Learning with Applications to Inventory Control and Energy Storage.
Manag. Sci., 2020

A Data Efficient and Feasible Level Set Method for Stochastic Convex Optimization with Expectation Constraints.
J. Mach. Learn. Res., 2020

Sharp Analysis of Epoch Stochastic Gradient Descent Ascent Methods for Min-Max Optimization.
CoRR, 2020

Self-guided Approximate Linear Programs.
CoRR, 2020

Optimal Epoch Stochastic Gradient Descent Ascent Methods for Min-Max Optimization.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Bayesian Decision Process for Budget-efficient Crowdsourced Clustering.
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020

Transparency Promotion with Model-Agnostic Linear Competitors.
Proceedings of the 37th International Conference on Machine Learning, 2020

Quadratically Regularized Subgradient Methods for Weakly Convex Optimization with Weakly Convex Constraints.
Proceedings of the 37th International Conference on Machine Learning, 2020

A Computational Model of Functionally-distinct Cervical Vagus Nerve Fibers.
Proceedings of the 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society, 2020

2019
DSCOVR: Randomized Primal-Dual Block Coordinate Algorithms for Asynchronous Distributed Optimization.
J. Mach. Learn. Res., 2019

Model-Agnostic Linear Competitors - When Interpretable Models Compete and Collaborate with Black-Box Models.
CoRR, 2019

Inexact Proximal-Point Penalty Methods for Non-Convex Optimization with Non-Convex Constraints.
CoRR, 2019

Hybrid Predictive Model: When an Interpretable Model Collaborates with a Black-box Model.
CoRR, 2019

Stochastic Primal-Dual Algorithms with Faster Convergence than O(1/√T) for Problems without Bilinear Structure.
CoRR, 2019

Stochastic Optimization for DC Functions and Non-smooth Non-convex Regularizers with Non-asymptotic Convergence.
Proceedings of the 36th International Conference on Machine Learning, 2019

2018
A Level-Set Method for Convex Optimization with a Feasible Solution Path.
SIAM J. Optim., 2018

RSG: Beating Subgradient Method without Smoothness and Strong Convexity.
J. Mach. Learn. Res., 2018

Non-Convex Min-Max Optimization: Provable Algorithms and Applications in Machine Learning.
CoRR, 2018

Prophit: Causal inverse classification for multiple continuously valued treatment policies.
CoRR, 2018

A Unified Analysis of Stochastic Momentum Methods for Deep Learning.
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 2018

Level-Set Methods for Finite-Sum Constrained Convex Optimization.
Proceedings of the 35th International Conference on Machine Learning, 2018

2017
Distributed Stochastic Variance Reduced Gradient Methods by Sampling Extra Data with Replacement.
J. Mach. Learn. Res., 2017

Normalized Gradient with Adaptive Stepsize Method for Deep Neural Network Training.
CoRR, 2017

Generalized Inverse Classification.
Proceedings of the 2017 SIAM International Conference on Data Mining, 2017

Adaptive SVRG Methods under Error Bound Conditions with Unknown Growth Parameter.
Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 2017

ADMM without a Fixed Penalty Parameter: Faster Convergence with New Adaptive Penalization.
Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 2017

A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates.
Proceedings of the 34th International Conference on Machine Learning, 2017

Stochastic Convex Optimization: Faster Local Growth Implies Faster Global Convergence.
Proceedings of the 34th International Conference on Machine Learning, 2017

A Budget-Constrained Inverse Classification Framework for Smooth Classifiers.
Proceedings of the 2017 IEEE International Conference on Data Mining Workshops, 2017

2016
On Data Preconditioning for Regularized Loss Minimization.
Mach. Learn., 2016

Bayesian Decision Process for Cost-Efficient Dynamic Ranking via Crowdsourcing.
J. Mach. Learn. Res., 2016

Accelerate Stochastic Subgradient Method by Leveraging Local Error Bound.
CoRR, 2016

Optimal Stochastic Strongly Convex Optimization with a Logarithmic Number of Projections.
Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence, 2016

Homotopy Smoothing for Non-Smooth Problems with Lower Complexity than O(1/ε).
Proceedings of the Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, 2016

2015
An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization.
SIAM J. Optim., 2015

A trade execution model under a composite dynamic coherent risk measure.
Oper. Res. Lett., 2015

Statistical decision making for optimal budget allocation in crowd labeling.
J. Mach. Learn. Res., 2015

Doubly Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization with Factorized Data.
CoRR, 2015

Fast Sparse Least-Squares Regression with Non-Asymptotic Guarantees.
CoRR, 2015

Distributed Stochastic Variance Reduced Gradient Methods.
CoRR, 2015

An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization.
Comput. Optim. Appl., 2015

Big Data Analytics: Optimization and Randomization.
Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015

2014
A smoothing stochastic gradient method for composite optimization.
Optim. Methods Softw., 2014

A sparsity preserving stochastic gradient methods for sparse regression.
Comput. Optim. Appl., 2014

An Accelerated Proximal Coordinate Gradient Method.
Proceedings of the Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, 2014

2013
Optimistic Knowledge Gradient Policy for Optimal Budget Allocation in Crowdsourcing.
Proceedings of the 30th International Conference on Machine Learning, 2013

2012
Optimal Regularized Dual Averaging Methods for Stochastic Optimization.
Proceedings of the Advances in Neural Information Processing Systems 25: Annual Conference on Neural Information Processing Systems 2012, 2012

2011
Smoothing Proximal Gradient Method for General Structured Sparse Learning.
Proceedings of the UAI 2011, 2011

Sparse Latent Semantic Analysis.
Proceedings of the Eleventh SIAM International Conference on Data Mining, 2011

2010
A Smoothing Stochastic Gradient Method for Composite Optimization.
CoRR, 2010

An Efficient Proximal-Gradient Method for Single and Multi-task Regression with Structured Sparsity.
CoRR, 2010

Graph-Structured Multi-task Regression and an Efficient Optimization Method for General Fused Lasso.
CoRR, 2010

Learning Preferences with Millions of Parameters by Enforcing Sparsity.
Proceedings of the ICDM 2010, 2010
