Shenggui Li

Orcid: 0000-0003-2037-2496

According to our database, Shenggui Li authored at least 14 papers between 2021 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
GliDe with a CaPE: A Low-Hassle Method to Accelerate Speculative Decoding.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

2023
Parallel Training of Pre-Trained Models via Chunk-Based Dynamic Memory Management.
IEEE Trans. Parallel Distributed Syst., 2023

Colossal-Auto: Unified Automation of Parallelization and Activation Checkpoint for Large-scale Models.
CoRR, 2023

Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training.
Proceedings of the 52nd International Conference on Parallel Processing, 2023

Sequence Parallelism: Long Sequence Training from System Perspective.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
Critique of "MemXCT: Memory-Centric X-Ray CT Reconstruction With Massive Parallelization" by SCC Team From Nanyang Technological University.
IEEE Trans. Parallel Distributed Syst., 2022

Elixir: Train a Large Language Model on a Small GPU Cluster.
CoRR, 2022

EnergonAI: An Inference System for 10-100 Billion Parameter Transformer Models.
CoRR, 2022

A Frequency-aware Software Cache for Large Recommendation System Embeddings.
CoRR, 2022

Sky Computing: Accelerating Geo-distributed Computing in Federated Learning.
CoRR, 2022

2021
PatrickStar: Parallel Training of Pre-trained Models via a Chunk-based Memory Management.
CoRR, 2021

Sequence Parallelism: Making 4D Parallelism Possible.
CoRR, 2021

An Efficient 2D Method for Training Super-Large Deep Learning Models.
CoRR, 2021

Online evolutionary batch size orchestration for scheduling deep learning workloads in GPU clusters.
Proceedings of the International Conference for High Performance Computing, 2021
