Haibo Yang
ORCID: 0000-0002-3245-2728
Affiliations:
- Rochester Institute of Technology, NY, USA
- Ohio State University, Columbus, OH, USA (former)
- Iowa State University, Ames, IA, USA (former)
According to our database, Haibo Yang authored at least 18 papers between 2019 and 2024.
Timeline: number of publications per year, 2019-2024.
Bibliography
2024
Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning?
Proceedings of the Twenty-fifth International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, 2024
Proceedings of the 1st ACM Workshop on Large AI Systems and Models with Privacy and Safety Analysis, 2024
Finite-Time Convergence and Sample Complexity of Actor-Critic Multi-Objective Reinforcement Learning.
Proceedings of the Forty-first International Conference on Machine Learning, 2024
Understanding Server-Assisted Federated Learning in the Presence of Incomplete Client Participation.
Proceedings of the Forty-first International Conference on Machine Learning, 2024
2023
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023
2022
SAGDA: Achieving $\mathcal{O}(\epsilon^{-2})$ Communication Complexity in Federated Min-Max Learning.
CoRR, 2022
CHARLES: Channel-Quality-Adaptive Over-the-Air Federated Learning over Wireless Networks.
Proceedings of the 23rd IEEE International Workshop on Signal Processing Advances in Wireless Communications, 2022
Taming Fat-Tailed ("Heavier-Tailed" with Potentially Infinite Variance) Noise in Federated Learning.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
SAGDA: Achieving $\mathcal{O}(\epsilon^{-2})$ Communication Complexity in Federated Min-Max Learning.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
NET-FLEET: achieving linear convergence speedup for fully decentralized federated learning with heterogeneous data.
Proceedings of the MobiHoc '22: The Twenty-third International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, Seoul, Republic of Korea, October 17, 2022
Proceedings of the IEEE International Symposium on Information Theory, 2022
Proceedings of the International Conference on Machine Learning, 2022
Decentralized Learning for Overparameterized Problems: A Multi-Agent Kernel Approximation Approach.
Proceedings of the Tenth International Conference on Learning Representations, 2022
2021
CFedAvg: Achieving Efficient Communication and Fast Convergence in Non-IID Federated Learning.
Proceedings of the 19th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks, 2021
STEM: A Stochastic Two-Sided Momentum Algorithm Achieving Near-Optimal Sample and Communication Complexities for Federated Learning.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021
Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning.
Proceedings of the 9th International Conference on Learning Representations, 2021
2020
Adaptive Multi-Hierarchical signSGD for Communication-Efficient Distributed Optimization.
Proceedings of the 21st IEEE International Workshop on Signal Processing Advances in Wireless Communications, 2020
2019
Byzantine-Resilient Stochastic Gradient Descent for Distributed Learning: A Lipschitz-Inspired Coordinate-wise Median Approach.
Proceedings of the 58th IEEE Conference on Decision and Control, 2019