Matthew Jagielski

According to our database, Matthew Jagielski authored at least 52 papers between 2018 and 2024.

Bibliography

2024
The Last Iterate Advantage: Empirical Auditing and Principled Heuristic Analysis of Differentially Private SGD.
CoRR, 2024

UnUnlearning: Unlearning is not sufficient for content regulation in advanced generative AI.
CoRR, 2024

Beyond the Mean: Differentially Private Prototypes for Private Transfer Learning.
CoRR, 2024

Phantom: General Trigger Attacks on Retrieval Augmented Language Generation.
CoRR, 2024

Privacy Side Channels in Machine Learning Systems.
Proceedings of the 33rd USENIX Security Symposium, 2024

Poisoning Web-Scale Training Datasets is Practical.
Proceedings of the IEEE Symposium on Security and Privacy, 2024

Synthetic Query Generation for Privacy-Preserving Deep Retrieval Systems using Differentially Private Language Models.
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), 2024

Auditing Private Prediction.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Stealing part of a production language model.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Noise Masking Attacks and Defenses for Pretrained Speech Models.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2024

2023
How to Combine Membership-Inference Attacks on Multiple Updated Machine Learning Models.
Proc. Priv. Enhancing Technol., July 2023

Scalable Extraction of Training Data from (Production) Language Models.
CoRR, 2023

Backdoor Attacks for In-Context Learning with Language Models.
CoRR, 2023

Are aligned neural networks adversarially aligned?
CoRR, 2023

A Note On Interpreting Canary Exposure.
CoRR, 2023

Privacy-Preserving Recommender Systems with Synthetic Query Generation using Differentially Private Large Language Models.
CoRR, 2023

Challenges towards the Next Frontier in Privacy.
CoRR, 2023

Students Parrot Their Teachers: Membership Inference on Model Distillation.
CoRR, 2023

Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators.
CoRR, 2023

Tight Auditing of Differentially Private Machine Learning.
Proceedings of the 32nd USENIX Security Symposium, 2023

Extracting Training Data from Diffusion Models.
Proceedings of the 32nd USENIX Security Symposium, 2023

SNAP: Efficient Extraction of Private Properties with Poisoning.
Proceedings of the 44th IEEE Symposium on Security and Privacy, 2023

SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning.
Proceedings of the 2023 IEEE Conference on Secure and Trustworthy Machine Learning, 2023

Counterfactual Memorization in Neural Language Models.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Students Parrot Their Teachers: Membership Inference on Model Distillation.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Are aligned neural networks adversarially aligned?
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Privacy Auditing with One (1) Training Run.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Preventing Generation of Verbatim Memorization in Language Models Gives a False Sense of Privacy.
Proceedings of the 16th International Natural Language Generation Conference, 2023

Measuring Forgetting of Memorized Training Examples.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Quantifying Memorization Across Neural Language Models.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

2022
SafeNet: Mitigating Data Poisoning Attacks on Private Machine Learning.
IACR Cryptol. ePrint Arch., 2022

Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy.
CoRR, 2022

How to Combine Membership-Inference Attacks on Multiple Updated Models.
CoRR, 2022

Debugging Differential Privacy: A Case Study for Privacy Auditing.
CoRR, 2022

The Privacy Onion Effect: Memorization is Relative.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Subverting Fair Image Search with Generative Adversarial Perturbations.
Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22), 2022

Network-Level Adversaries in Federated Learning.
Proceedings of the 10th IEEE Conference on Communications and Network Security, 2022

Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets.
Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022

2021
Secure Communication Channel Establishment: TLS 1.3 (over TCP Fast Open) versus QUIC.
J. Cryptol., 2021

Extracting Training Data from Large Language Models.
Proceedings of the 30th USENIX Security Symposium, 2021

Subpopulation Data Poisoning Attacks.
Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security (CCS '21), 2021

2020
High Accuracy and High Fidelity Extraction of Neural Networks.
Proceedings of the 29th USENIX Security Symposium, 2020

Auditing Differentially Private Machine Learning: How Private is Private SGD?
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Cryptanalytic Extraction of Neural Network Models.
Proceedings of the Advances in Cryptology - CRYPTO 2020, 2020

2019
Secure Communication Channel Establishment: TLS 1.3 (over TCP Fast Open) vs. QUIC.
IACR Cryptol. ePrint Arch., 2019

High-Fidelity Extraction of Neural Network Models.
CoRR, 2019

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks.
Proceedings of the 28th USENIX Security Symposium, 2019

Differentially Private Fair Learning.
Proceedings of the 36th International Conference on Machine Learning, 2019

2018
On the Intriguing Connections of Regularization, Input Gradients and Transferability of Evasion and Poisoning Attacks.
CoRR, 2018

Threat Detection for Collaborative Adaptive Cruise Control in Connected Cars.
Proceedings of the 11th ACM Conference on Security & Privacy in Wireless and Mobile Networks, 2018

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning.
Proceedings of the 2018 IEEE Symposium on Security and Privacy, 2018

Network and system level security in connected vehicle applications.
Proceedings of the International Conference on Computer-Aided Design, 2018
