Shengwei Li
ORCID: 0000-0002-7419-1511
According to our database, Shengwei Li authored at least 20 papers between 2009 and 2024.
Bibliography
2024
IEEE Trans. Parallel Distributed Syst., August, 2024
IEEE Trans. Parallel Distributed Syst., April, 2024
End-To-End Control of a Quadrotor Using Gaussian Ensemble Model-Based Reinforcement Learning.
Proceedings of the Intelligence Science V - 6th IFIP TC 12 International Conference, 2024
2023
Merak: An Efficient Distributed DNN Training Framework With Automated 3D Parallelism for Giant Foundation Models.
IEEE Trans. Parallel Distributed Syst., May, 2023
CoRR, 2023
Automated Tensor Model Parallelism with Overlapped Communication for Efficient Foundation Model Training.
CoRR, 2023
Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, 2023
Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, 2023
Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, 2023
Communication Analysis for Multidimensional Parallel Training of Large-scale DNN Models.
Proceedings of the IEEE International Conference on High Performance Computing & Communications, 2023
Prophet: Fine-grained Load Balancing for Parallel Training of Large-scale MoE Models.
Proceedings of the IEEE International Conference on Cluster Computing, 2023
Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023
Proceedings of the 23rd IEEE/ACIS International Conference on Computer and Information Science, 2023
2022
EmbRace: Accelerating Sparse Communication for Distributed Training of Deep Neural Networks.
Proceedings of the 51st International Conference on Parallel Processing, 2022
AutoPipe: A Fast Pipeline Parallelism Approach with Balanced Partitioning and Micro-batch Slicing.
Proceedings of the IEEE International Conference on Cluster Computing, 2022
HPH: Hybrid Parallelism on Heterogeneous Clusters for Accelerating Large-scale DNNs Training.
Proceedings of the IEEE International Conference on Cluster Computing, 2022
2021
EmbRace: Accelerating Sparse Communication for Distributed Training of NLP Neural Networks.
CoRR, 2021
Hippie: A Data-Paralleled Pipeline Approach to Improve Memory-Efficiency and Scalability for Large DNN Training.
Proceedings of the 50th International Conference on Parallel Processing, 2021
Proceedings of the IEEE International Conference on Cluster Computing, 2021
2009
Proceedings of the 2009 IEEE International Conference on Granular Computing, 2009