Christopher A. Choquette-Choo

According to our database, Christopher A. Choquette-Choo authored at least 44 papers between 2019 and 2024.

Bibliography

2024
Near Exact Privacy Amplification for Matrix Mechanisms.
CoRR, 2024

The Last Iterate Advantage: Empirical Auditing and Principled Heuristic Analysis of Differentially Private SGD.
CoRR, 2024

Gemma 2: Improving Open Language Models at a Practical Size.
CoRR, 2024

Recite, Reconstruct, Recollect: Memorization in LMs as a Multifaceted Phenomenon.
CoRR, 2024

CodeGemma: Open Code Models Based on Gemma.
CoRR, 2024

Optimal Rates for DP-SCO with a Single Epoch and Large Batches.
CoRR, 2024

Phantom: General Trigger Attacks on Retrieval Augmented Language Generation.
CoRR, 2024

Gemma: Open Models Based on Gemini Research and Technology.
CoRR, 2024

Privacy Side Channels in Machine Learning Systems.
Proceedings of the 33rd USENIX Security Symposium, 2024

Poisoning Web-Scale Training Datasets is Practical.
Proceedings of the IEEE Symposium on Security and Privacy, 2024

Auditing Private Prediction.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Teach LLMs to Phish: Stealing Private Information from Language Models.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Privacy Amplification for Matrix Mechanisms.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Correlated Noise Provably Beats Independent Noise for Differentially Private Learning.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

User Inference Attacks on Large Language Models.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

2023
Private Multi-Winner Voting for Machine Learning.
Proc. Priv. Enhancing Technol., January, 2023

Scalable Extraction of Training Data from (Production) Language Models.
CoRR, 2023

Report of the 1st Workshop on Generative AI and Law.
CoRR, 2023

Robust and Actively Secure Serverless Collaborative Learning.
CoRR, 2023

MADLAD-400: A Multilingual And Document-Level Large Audited Dataset.
CoRR, 2023

Are aligned neural networks adversarially aligned?
CoRR, 2023

(Amplified) Banded Matrix Factorization: A unified approach to private training.
CoRR, 2023

PaLM 2 Technical Report.
CoRR, 2023

Students Parrot Their Teachers: Membership Inference on Model Distillation.
CoRR, 2023

Students Parrot Their Teachers: Membership Inference on Model Distillation.
Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023

Robust and Actively Secure Serverless Collaborative Learning.
Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023

(Amplified) Banded Matrix Factorization: A unified approach to private training.
Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023

Are aligned neural networks adversarially aligned?
Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023

Preventing Generation of Verbatim Memorization in Language Models Gives a False Sense of Privacy.
Proceedings of the 16th International Natural Language Generation Conference, 2023

Private Federated Learning with Autotuned Compression.
Proceedings of the International Conference on Machine Learning, 2023

Multi-Epoch Matrix Factorization Mechanisms for Private Machine Learning.
Proceedings of the International Conference on Machine Learning, 2023

Proof-of-Learning is Currently More Broken Than You Think.
Proceedings of the 8th IEEE European Symposium on Security and Privacy, 2023

Federated Learning of Gboard Language Models with Differential Privacy.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics: Industry Track, 2023

2022
Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy.
CoRR, 2022

Fine-Tuning with Differential Privacy Necessitates an Additional Hyperparameter Search.
CoRR, 2022

On the Fundamental Limits of Formally (Dis)Proving Robustness in Proof-of-Learning.
CoRR, 2022

The Fundamental Price of Secure Aggregation in Differentially Private Federated Learning.
Proceedings of the International Conference on Machine Learning, 2022

2021
Entangled Watermarks as a Defense against Model Extraction.
Proceedings of the 30th USENIX Security Symposium, 2021

Proof-of-Learning: Definitions and Practice.
Proceedings of the 42nd IEEE Symposium on Security and Privacy, 2021

Machine Unlearning.
Proceedings of the 42nd IEEE Symposium on Security and Privacy, 2021

Label-Only Membership Inference Attacks.
Proceedings of the 38th International Conference on Machine Learning, 2021

CaPC Learning: Confidential and Private Collaborative Learning.
Proceedings of the 9th International Conference on Learning Representations, 2021

2020
Entangled Watermarks as a Defense against Model Extraction.
CoRR, 2020

2019
A Multi-label, Dual-Output Deep Neural Network for Automated Bug Triaging.
Proceedings of the 18th IEEE International Conference on Machine Learning and Applications, 2019
