Jialin Liu

ORCID: 0000-0002-0861-1856

Affiliations:
  • Alibaba US, Bellevue, WA, USA
  • University of California Los Angeles, Department of Mathematics, CA, USA (PhD 2020)
  • Tsinghua University, National Laboratory for Information Science and Technology, Beijing, China


According to our database, Jialin Liu authored at least 23 papers between 2015 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Towards Robustness and Efficiency of Coherence-Guided Complex Convolutional Sparse Coding for Interferometric Phase Restoration.
IEEE Trans. Computational Imaging, 2024

Learning to optimize: A tutorial for continuous and mixed-integer optimization.
CoRR, 2024

Rethinking the Capacity of Graph Neural Networks for Branching Strategy.
CoRR, 2024

2023
DIG-MILP: a Deep Instance Generator for Mixed-Integer Linear Programming with Feasibility Guarantee.
CoRR, 2023

Towards Constituting Mathematical Structures for Learning to Optimize.
Proceedings of the International Conference on Machine Learning, 2023

On Representing Mixed-Integer Linear Programs by Graph Neural Networks.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

On Representing Linear Programs by Graph Neural Networks.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

2022
Coherence-Guided Complex Convolutional Sparse Coding for Interferometric Phase Restoration.
IEEE Trans. Geosci. Remote. Sens., 2022

Learning to Optimize: A Primer and A Benchmark.
J. Mach. Learn. Res., 2022

On Representing Mixed-Integer Linear Programs by Graph Neural Networks.
CoRR, 2022

On Representing Linear Programs by Graph Neural Networks.
CoRR, 2022

2021
Learning Convolutional Sparse Coding on Complex Domain for Interferometric Phase Restoration.
IEEE Trans. Neural Networks Learn. Syst., 2021

Multilevel Optimal Transport: A Fast Approximation of Wasserstein-1 Distances.
SIAM J. Sci. Comput., 2021

Hyperparameter Tuning is All You Need for LISTA.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Learned Robust PCA: A Scalable Deep Unfolding Approach for High-Dimensional Outlier Detection.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Learning A Minimax Optimizer: A Pilot Study.
Proceedings of the 9th International Conference on Learning Representations, 2021

2019
Plug-and-Play Methods Provably Converge with Properly Trained Denoisers.
Proceedings of the 36th International Conference on Machine Learning, 2019

ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA.
Proceedings of the 7th International Conference on Learning Representations, 2019

2018
First- and Second-Order Methods for Online Convolutional Dictionary Learning.
SIAM J. Imaging Sci., 2018

Theoretical Linear Convergence of Unfolded ISTA and Its Practical Weights and Thresholds.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

2017
Online convolutional dictionary learning.
Proceedings of the 2017 IEEE International Conference on Image Processing, 2017

2015
Random Multi-Constraint Projection: Stochastic Gradient Methods for Convex Optimization with Many Constraints.
CoRR, 2015

Averaging random projection: A fast online solution for large-scale constrained stochastic optimization.
Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing, 2015
