Shuai Zhang

Affiliations:
  • Rensselaer Polytechnic Institute, Department of Electrical, Computer, and Systems Engineering, Troy, NY, USA


According to our database, Shuai Zhang authored at least 18 papers between 2017 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.


Bibliography

2024
How does promoting the minority fraction affect generalization? A theoretical study of the one-hidden-layer neural network on group imbalance.
CoRR, 2024

Learning on Transformers is Provable Low-Rank and Sparse: A One-layer Analysis.
Proceedings of the 13th IEEE Sensor Array and Multichannel Signal Processing Workshop, 2024

SF-DQN: Provable Knowledge Transfer using Successor Feature for Deep Reinforcement Learning.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

2023
On the Convergence and Sample Complexity Analysis of Deep Q-Networks with ε-Greedy Exploration.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks.
Proceedings of the International Conference on Machine Learning, 2023

Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

2022
How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis.
CoRR, 2022

How unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Learning and generalization of one-hidden-layer neural networks, going beyond standard Gaussian data.
Proceedings of the 56th Annual Conference on Information Sciences and Systems, 2022

2021
Improved Linear Convergence of Training CNNs With Generalizability Guarantees: A One-Hidden-Layer Case.
IEEE Trans. Neural Networks Learn. Syst., 2021

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks.
CoRR, 2021

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

2020
Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case.
Proceedings of the 37th International Conference on Machine Learning, 2020

Guaranteed Convergence of Training Convolutional Neural Networks via Accelerated Gradient Descent.
Proceedings of the 54th Annual Conference on Information Sciences and Systems, 2020

2019
Correction of Corrupted Columns Through Fast Robust Hankel Matrix Completion.
IEEE Trans. Signal Process., 2019

2018
Multichannel Hankel Matrix Completion Through Nonconvex Optimization.
IEEE J. Sel. Top. Signal Process., 2018

Correction of Simultaneous Bad Measurements by Exploiting the Low-rank Hankel Structure.
Proceedings of the 2018 IEEE International Symposium on Information Theory, 2018

2017
Multi-Channel Missing Data Recovery by Exploiting the Low-Rank Hankel Structures.
Proceedings of the 2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, 2017
