SpecServe: Efficient and SLO-Aware Large Language Model Serving with Adaptive Speculative Decoding.
CoRR, March 2025
λScale: Enabling Fast Scaling for Serverless Large Language Model Inference.
CoRR, February 2025
FaaSTube: Optimizing GPU-oriented Data Transfer for Serverless Computing.
CoRR, 2024
CaraServe: CPU-Assisted and Rank-Aware LoRA Serving for Generative LLM Inference.
CoRR, 2024
FaaSwap: SLO-Aware, GPU-Efficient Serverless Inference via Model Swapping.
CoRR, 2023
Following the Data, Not the Function: Rethinking Function Orchestration in Serverless Computing.
Proceedings of the 20th USENIX Symposium on Networked Systems Design and Implementation, 2023
Enabling Cost-Effective, SLO-Aware Machine Learning Inference Serving on Public Cloud.
IEEE Trans. Cloud Comput., 2022
Restructuring Serverless Computing with Data-Centric Function Orchestration.
CoRR, 2021
CrystalPerf: Learning to Characterize the Performance of Dataflow Computation through Code Analysis.
Proceedings of the 2021 USENIX Annual Technical Conference, 2021
Gillis: Serving Large Neural Networks in Serverless Functions with Automatic Model Partitioning.
Proceedings of the 41st IEEE International Conference on Distributed Computing Systems, 2021
RepBun: Load-Balanced, Shuffle-Free Cluster Caching for Structured Data.
Proceedings of the 39th IEEE Conference on Computer Communications, 2020
MArk: Exploiting Cloud Services for Cost-Effective, SLO-Aware Machine Learning Inference Serving.
Proceedings of the 2019 USENIX Annual Technical Conference, 2019
Continuum: A Platform for Cost-Aware, Low-Latency Continual Learning.
Proceedings of the ACM Symposium on Cloud Computing, 2018