Federated learning with superquantile aggregation for heterogeneous data.
Mach. Learn., 2024
Fine-Tuning Large Language Models with User-Level Differential Privacy.
CoRR, 2024
Efficient and Near-Optimal Noise Generation for Streaming Differential Privacy.
CoRR, 2024
Distributionally Robust Optimization with Bias and Variance Reduction.
Proceedings of the Twelfth International Conference on Learning Representations, 2024
Correlated Noise Provably Beats Independent Noise for Differentially Private Learning.
Proceedings of the Twelfth International Conference on Learning Representations, 2024
Efficient and Near-Optimal Noise Generation for Streaming Differential Privacy.
Proceedings of the 65th IEEE Annual Symposium on Foundations of Computer Science, 2024
User Inference Attacks on Large Language Models.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024
MAUVE Scores for Generative Models: Theory and Practice.
J. Mach. Learn. Res., 2023
Modified Gauss-Newton Algorithms under Noise.
Proceedings of the IEEE Statistical Signal Processing Workshop, 2023
Unleashing the Power of Randomization in Auditing Differentially Private ML.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023
Towards Federated Foundation Models: Scalable Dataset Pipelines for Group-Structured Learning.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023
Stochastic Optimization for Spectral Risk Measures.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023
Influence Diagnostics under Self-concordance.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023
From Enormous Structured Models to On-device Federated Learning: Robustness, Heterogeneity and Optimization.
PhD thesis, 2022
Robust Aggregation for Federated Learning.
IEEE Trans. Signal Process., 2022
Statistical and Computational Guarantees for Influence Diagnostics.
CoRR, 2022
Federated Learning with Partial Model Personalization.
Proceedings of the International Conference on Machine Learning, 2022
Federated Learning with Heterogeneous Data: A Superquantile Optimization Approach.
CoRR, 2021
Divergence Frontiers for Generative Models: Sample Complexity, Quantization Level, and Frontier Integral.
CoRR, 2021
MAUVE: Human-Machine Divergence Curves for Evaluating Open-Ended Text Generation.
CoRR, 2021
MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021
Divergence Frontiers for Generative Models: Sample Complexity, Quantization Effects, and Frontier Integrals.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021
LLC: Accurate, Multi-purpose Learnt Low-dimensional Binary Codes.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021
A Superquantile Approach to Federated Learning with Heterogeneous Devices.
Proceedings of the 55th Annual Conference on Information Sciences and Systems, 2021
Device Heterogeneity in Federated Learning: A Superquantile Approach.
CoRR, 2020