Saeed Mahloujifar

Orcid: 0000-0001-6586-8378

According to our database, Saeed Mahloujifar authored at least 57 papers between 2017 and 2024.

Bibliography

2024
Aligning LLMs to Be Robust Against Prompt Injection.
CoRR, 2024

Guarantees of confidentiality via Hammersley-Chapman-Robbins bounds.
CoRR, 2024

Privacy Amplification for the Gaussian Mechanism via Bounded Support.
CoRR, 2024

Private Fine-tuning of Large Language Models with Zeroth-order Optimization.
CoRR, 2024

Horus: Granular In-Network Task Scheduler for Cloud Datacenters.
Proceedings of the 21st USENIX Symposium on Networked Systems Design and Implementation, 2024

A New Linear Scaling Rule for Private Adaptive Hyperparameter Optimization.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

2023
Experimenting with Zero-Knowledge Proofs of Training.
IACR Cryptol. ePrint Arch., 2023

Publicly Detectable Watermarking for Language Models.
IACR Cryptol. ePrint Arch., 2023

A Randomized Approach for Tight Privacy Accounting.
CoRR, 2023

Towards A Proactive ML Approach for Detecting Backdoor Poison Samples.
Proceedings of the 32nd USENIX Security Symposium, 2023

ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking.
Proceedings of the 44th IEEE Symposium on Security and Privacy, 2023

A Randomized Approach to Tight Privacy Accounting.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Bounding training data reconstruction in DP-SGD.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Uncovering Adversarial Risks of Test-Time Adaptation.
Proceedings of the International Conference on Machine Learning, 2023

Effectively Using Public Data in Privacy Preserving Machine Learning.
Proceedings of the International Conference on Machine Learning, 2023

MultiRobustBench: Benchmarking Robustness Against Multiple Attacks.
Proceedings of the International Conference on Machine Learning, 2023

Revisiting the Assumption of Latent Separability for Backdoor Defenses.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

2022
Machine Learning with Differentially Private Labels: Mechanisms and Frameworks.
Proc. Priv. Enhancing Technol., 2022

DP-RAFT: A Differentially Private Recipe for Accelerated Fine-Tuning.
CoRR, 2022

Overparameterized (robust) models from computational constraints.
CoRR, 2022

Fight Poison with Poison: Detecting Backdoor Poison Samples via Decoupling Benign Correlations.
CoRR, 2022

Circumventing Backdoor Defenses That Are Based on Latent Separability.
CoRR, 2022

Optimal Membership Inference Bounds for Adaptive Composition of Sampled Gaussian Mechanisms.
CoRR, 2022

Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture.
Proceedings of the 31st USENIX Security Symposium, 2022

PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier.
Proceedings of the 31st USENIX Security Symposium, 2022

Parameterizing Activation Functions for Adversarial Robustness.
Proceedings of the 43rd IEEE Symposium on Security and Privacy, 2022

Rényi Differential Privacy of Propose-Test-Release and Applications to Private and Robust Machine Learning.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Overparameterization from Computational Constraints.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Formulating Robustness Against Unforeseen Attacks.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?
Proceedings of the Tenth International Conference on Learning Representations, 2022

Just Rotate it: Deploying Backdoor Attacks via Rotation Transformation.
Proceedings of the 15th ACM Workshop on Artificial Intelligence and Security, 2022

SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022

2021
Polynomial-time targeted attacks on coin tossing for any number of corruptions.
IACR Cryptol. ePrint Arch., 2021

Property Inference from Poisoning.
IACR Cryptol. ePrint Arch., 2021

NeuraCrypt is not private.
CoRR, 2021

Membership Inference on Word Embedding and Beyond.
CoRR, 2021

Improving Adversarial Robustness Using Proxy Distributions.
CoRR, 2021

Is Private Learning Possible with Instance Encoding?
Proceedings of the 42nd IEEE Symposium on Security and Privacy, 2021

A Separation Result Between Data-oblivious and Data-aware Poisoning Attacks.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Model-Targeted Poisoning Attacks with Provable Convergence.
Proceedings of the 38th International Conference on Machine Learning, 2021

2020
An Attack on InstaHide: Is Private Learning Possible with Instance Encoding?
CoRR, 2020

Model-Targeted Poisoning Attacks: Provable Convergence and Certified Bounds.
CoRR, 2020

Obliviousness Makes Poisoning Adversaries Weaker.
CoRR, 2020

Learning under p-tampering poisoning attacks.
Ann. Math. Artif. Intell., 2020

Computational Concentration of Measure: Optimal Bounds, Reductions, and More.
Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, 2020

Lower Bounds for Adversarially Robust PAC Learning.
Proceedings of the International Symposium on Artificial Intelligence and Mathematics, 2020

Lower Bounds for Adversarially Robust PAC Learning under Evasion and Hybrid Attacks.
Proceedings of the 19th IEEE International Conference on Machine Learning and Applications, 2020

Adversarially Robust Learning Could Leverage Computational Hardness.
Proceedings of the Algorithmic Learning Theory, 2020

2019
Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Data Poisoning Attacks in Multi-Party Learning.
Proceedings of the 36th International Conference on Machine Learning, 2019

Can Adversarially Robust Learning Leverage Computational Hardness?
Proceedings of the Algorithmic Learning Theory, 2019

The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure.
Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, 2019

2018
Multi-party Poisoning through Generalized p-Tampering.
IACR Cryptol. ePrint Arch., 2018

Can Adversarially Robust Learning Leverage Computational Hardness?
CoRR, 2018

Adversarial Risk and Robustness: General Definitions and Implications for the Uniform Distribution.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

Learning under p-Tampering Attacks.
Proceedings of the Algorithmic Learning Theory, 2018

2017
Blockwise p-Tampering Attacks on Cryptographic Primitives, Extractors, and Learners.
IACR Cryptol. ePrint Arch., 2017

