Position Paper: Beyond Robustness Against Single Attack Types.
CoRR, 2024
Larimar: Large Language Models with Episodic Memory Control.
CoRR, 2024
PatchCURE: Improving Certifiable Robustness, Model Utility, and Computation Efficiency of Adversarial Patch Defenses.
Proceedings of the 33rd USENIX Security Symposium, 2024
Larimar: Large Language Models with Episodic Memory Control.
Proceedings of the Forty-first International Conference on Machine Learning, 2024
Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker.
CoRR, 2023
Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023
MultiRobustBench: Benchmarking Robustness Against Multiple Attacks.
Proceedings of the International Conference on Machine Learning, 2023
Parameterizing Activation Functions for Adversarial Robustness.
Proceedings of the 43rd IEEE Symposium on Security and Privacy, 2022
Formulating Robustness Against Unforeseen Attacks.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?
Proceedings of the Tenth International Conference on Learning Representations, 2022
Improving Adversarial Robustness Using Proxy Distributions.
CoRR, 2021
Neural Networks with Recurrent Generative Feedback.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020
Out-of-Distribution Detection Using Neural Rendering Generative Models.
CoRR, 2019