Shaobo Lin

ORCID: 0000-0001-5122-9153

According to our database, Shaobo Lin authored at least 89 papers between 2010 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Sketching with Spherical Designs for Noisy Data Fitting on Spheres.
SIAM J. Sci. Comput., February, 2024

Kernel Interpolation of High Dimensional Scattered Data.
SIAM J. Numer. Anal., 2024

Weighted Spectral Filters for Kernel Interpolation on Spheres: Estimates of Prediction Accuracy for Noisy Data.
SIAM J. Imaging Sci., 2024

Component-based Sketching for Deep ReLU Nets.
CoRR, 2024

Lepskii Principle for Distributed Kernel Ridge Regression.
CoRR, 2024

Integral Operator Approaches for Scattered Data Fitting on Spheres.
CoRR, 2024

2023
Construction of Deep ReLU Nets for Spatially Sparse Learning.
IEEE Trans. Neural Networks Learn. Syst., October, 2023

Adaptive Parameter Selection for Kernel Ridge Regression.
CoRR, 2023

Lifting the Veil: Unlocking the Power of Depth in Q-learning.
CoRR, 2023

Distributed Uncertainty Quantification of Kernel Interpolation on Spheres.
CoRR, 2023

Adaptive Distributed Kernel Ridge Regression: A Feasible Distributed Learning Scheme for Data Silos.
CoRR, 2023

Optimal Approximation and Learning Rates for Deep Convolutional Neural Networks.
CoRR, 2023

Deep Convolutional Neural Networks with Zero-Padding: Feature Extraction and Learning.
CoRR, 2023

Kernel-Based Distributed Q-Learning: A Scalable Reinforcement Learning Approach for Dynamic Treatment Regimes.
CoRR, 2023

Explore the Power of Dropout on Few-shot Learning.
CoRR, 2023

An Effective Crop-Paste Pipeline for Few-shot Object Detection.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Explore the Power of Synthetic Data on Few-shot Object Detection.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
Realization of Spatial Sparseness by Deep ReLU Nets With Massive Data.
IEEE Trans. Neural Networks Learn. Syst., 2022

Distributed Learning With Dependent Samples.
IEEE Trans. Inf. Theory, 2022

Universal Consistency of Deep Convolutional Neural Networks.
IEEE Trans. Inf. Theory, 2022

Learning With Selected Features.
IEEE Trans. Cybern., 2022

Depth Selection for Deep ReLU Nets in Feature Extraction and Generalization.
IEEE Trans. Pattern Anal. Mach. Intell., 2022

Fully corrective gradient boosting with squared hinge: Fast learning rates and early stopping.
Neural Networks, 2022

Nyström Regularization for Time Series Forecasting.
J. Mach. Learn. Res., 2022

Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications.
INFORMS J. Comput., 2022

A Unified Framework with Meta-dropout for Few-shot Learning.
CoRR, 2022

Three-stage Training Pipeline with Patch Random Drop for Few-shot Object Detection.
Proceedings of the Computer Vision - ACCV 2022, 2022

2021
Random Sketching for Neural Networks With ReLU.
IEEE Trans. Neural Networks Learn. Syst., 2021

Deep Neural Network Based Vehicle and Pedestrian Detection for Autonomous Driving: A Survey.
IEEE Trans. Intell. Transp. Syst., 2021

Distributed Filtered Hyperinterpolation for Noisy Data on the Sphere.
SIAM J. Numer. Anal., 2021

On ADMM in Deep Learning: Convergence and Saturation-Avoidance.
J. Mach. Learn. Res., 2021

Radial Basis Function Approximation with Distributively Stored Data on Spheres.
CoRR, 2021

Generalization Performance of Empirical Risk Minimization on Over-parameterized Deep ReLU Nets.
CoRR, 2021

2020
Realizing Data Features by Deep Nets.
IEEE Trans. Neural Networks Learn. Syst., 2020

Learning Through Deterministic Assignment of Hidden Parameters.
IEEE Trans. Cybern., 2020

Distributed Kernel Ridge Regression with Communications.
J. Mach. Learn. Res., 2020

Kernel-based L_2-Boosting with Structure Constraints.
CoRR, 2020

Distributed Learning with Dependent Samples.
CoRR, 2020

2019
Rescaled Boosting in Classification.
IEEE Trans. Neural Networks Learn. Syst., 2019

Generalization and Expressivity for Deep Nets.
IEEE Trans. Neural Networks Learn. Syst., 2019

Unified Low-Rank Matrix Estimate via Penalized Matrix Least Squares Approximation.
IEEE Trans. Neural Networks Learn. Syst., 2019

Constructive Neural Network Learning.
IEEE Trans. Cybern., 2019

Fast Learning With Polynomial Kernels.
IEEE Trans. Cybern., 2019

Boosted Kernel Ridge Regression: Optimal Learning Rates and Early Stopping.
J. Mach. Learn. Res., 2019

Nonparametric regression using needlet kernels for spherical data.
J. Complex., 2019

Deep Net Tree Structure for Balance of Capacity and Approximation Ability.
Frontiers Appl. Math. Stat., 2019

Fast Polynomial Kernel Classification for Massive Data.
CoRR, 2019

Deep Neural Networks for Rotation-Invariance Approximation and Learning.
CoRR, 2019

A Convergence Analysis of Nonlinearly Constrained ADMM in Deep Learning.
CoRR, 2019

Global Convergence of Block Coordinate Descent in Deep Learning.
Proceedings of the 36th International Conference on Machine Learning, 2019

High-Resolution Driving Scene Synthesis Using Stacked Conditional GANs and Spectral Normalization.
Proceedings of the IEEE International Conference on Multimedia and Expo, 2019

2018
Greedy Criterion in Orthogonal Greedy Learning.
IEEE Trans. Cybern., 2018

Corrigendum to "GAITA: A Gauss-Seidel iterative thresholding algorithm for l_q regularized least squares regression" [J. Comput. Appl. Math. 319 (2017) 220-235].
J. Comput. Appl. Math., 2018

Construction of Neural Networks for Realization of Localized Deep Learning.
Frontiers Appl. Math. Stat., 2018

Block Coordinate Descent for Deep Learning: Unified Convergence Guarantees.
CoRR, 2018

Generalization Bounds for Regularized Pairwise Learning.
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 2018

2017
Shrinkage Degree in L_2-Rescale Boosting for Regression.
IEEE Trans. Neural Networks Learn. Syst., 2017

Limitations of shallow nets approximation.
Neural Networks, 2017

Learning Rates for Classification with Gaussian Kernels.
Neural Comput., 2017

Distributed Learning with Regularized Least Squares.
J. Mach. Learn. Res., 2017

Distributed Semi-supervised Learning with Kernel Ridge Regression.
J. Mach. Learn. Res., 2017

GAITA: A Gauss-Seidel iterative thresholding algorithm for l_q regularized least squares regression.
J. Comput. Appl. Math., 2017

2016
Sparse Regularization: Convergence Of Iterative Jumping Thresholding Algorithm.
IEEE Trans. Signal Process., 2016

Learning and approximation capabilities of orthogonal super greedy algorithm.
Knowl. Based Syst., 2016

Linear and nonlinear approximation of spherical radial basis function networks.
J. Complex., 2016

Simultaneous approximation by spherical neural networks.
Neurocomputing, 2016

Greedy Criterion in Orthogonal Greedy Learning.
CoRR, 2016

Divide and Conquer Local Average Regression.
CoRR, 2016

Learning capability of the truncated greedy algorithm.
Sci. China Inf. Sci., 2016

2015
Is Extreme Learning Machine Feasible? A Theoretical Assessment (Part I).
IEEE Trans. Neural Networks Learn. Syst., 2015

Is Extreme Learning Machine Feasible? A Theoretical Assessment (Part II).
IEEE Trans. Neural Networks Learn. Syst., 2015

Error Estimate for Spherical Neural Networks Interpolation.
Neural Process. Lett., 2015

Jackson-type inequalities for spherical neural networks with doubling weights.
Neural Networks, 2015

A Gauss-Seidel Iterative Thresholding Algorithm for l_q Regularized Least Squares Regression.
CoRR, 2015

Shrinkage degree in L_2-re-scale boosting for regression.
CoRR, 2015

Re-scale boosting for regression and classification.
CoRR, 2015

2014
L_1/2 Regularization: Convergence of Iterative Half Thresholding Algorithm.
IEEE Trans. Signal Process., 2014

Sparse solution of underdetermined linear equations via adaptively iterative thresholding.
Signal Process., 2014

Learning Rates of l^q Coefficient Regularization Learning with Gaussian Kernel.
Neural Comput., 2014

Almost optimal estimates for approximation and learning by radial basis function networks.
Mach. Learn., 2014

Greedy metrics in orthogonal greedy learning.
CoRR, 2014

Learning and approximation capability of orthogonal super greedy algorithm.
CoRR, 2014

2013
Learning Capability of Relaxed Greedy Algorithms.
IEEE Trans. Neural Networks Learn. Syst., 2013

Learning rates of l^q coefficient regularization learning with Gaussian kernel.
CoRR, 2013

Approximation by neural networks with scattered data.
Appl. Math. Comput., 2013

2012
A general radial quasi-interpolation operator on the sphere.
J. Approx. Theory, 2012

2011
Essential rate for approximation by spherical neural networks.
Neural Networks, 2011

2010
Constructive approximate interpolation by neural networks in the metric space.
Math. Comput. Model., 2010

Approximation capability of interpolation neural networks.
Neurocomputing, 2010

