2025
SpecServe: Efficient and SLO-Aware Large Language Model Serving with Adaptive Speculative Decoding.
CoRR, March 2025

λScale: Enabling Fast Scaling for Serverless Large Language Model Inference.
CoRR, February 2025

2024
FaaSTube: Optimizing GPU-oriented Data Transfer for Serverless Computing.
CoRR, 2024

CaraServe: CPU-Assisted and Rank-Aware LoRA Serving for Generative LLM Inference.
CoRR, 2024

2023
FaaSwap: SLO-Aware, GPU-Efficient Serverless Inference via Model Swapping.
CoRR, 2023

Following the Data, Not the Function: Rethinking Function Orchestration in Serverless Computing.
Proceedings of the 20th USENIX Symposium on Networked Systems Design and Implementation, 2023

2022
Enabling Cost-Effective, SLO-Aware Machine Learning Inference Serving on Public Cloud.
IEEE Transactions on Cloud Computing, 2022

2021
Restructuring Serverless Computing with Data-Centric Function Orchestration.
CoRR, 2021

CrystalPerf: Learning to Characterize the Performance of Dataflow Computation through Code Analysis.
Proceedings of the 2021 USENIX Annual Technical Conference, 2021

Gillis: Serving Large Neural Networks in Serverless Functions with Automatic Model Partitioning.
Proceedings of the 41st IEEE International Conference on Distributed Computing Systems, 2021

2020
RepBun: Load-Balanced, Shuffle-Free Cluster Caching for Structured Data.
Proceedings of the 39th IEEE Conference on Computer Communications, 2020

2019
MArk: Exploiting Cloud Services for Cost-Effective, SLO-Aware Machine Learning Inference Serving.
Proceedings of the 2019 USENIX Annual Technical Conference, 2019

2018
Continuum: A Platform for Cost-Aware, Low-Latency Continual Learning.
Proceedings of the ACM Symposium on Cloud Computing, 2018