Ambra Demontis
Orcid: 0000-0001-9318-6913
According to our database, Ambra Demontis authored at least 49 papers between 2015 and 2025.
Collaborative distances:
Bibliography
2025
Neurocomputing, 2025
2024
Computer, March, 2024
Adversarial Pruning: A Survey and Benchmark of Pruning Methods for Adversarial Robustness.
CoRR, 2024
Over-parameterization and Adversarial Robustness in Neural Networks: An Overview and Empirical Analysis.
CoRR, 2024
2023
Inf. Sci., December, 2023
ImageNet-Patch: A dataset for benchmarking machine learning robustness against adversarial patches.
Pattern Recognit., 2023
Inf. Sci., 2023
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning.
ACM Comput. Surv., 2023
BAARD: Blocking Adversarial Examples by Testing for Applicability, Reliability and Decidability.
Proceedings of the Advances in Knowledge Discovery and Data Mining, 2023
Proceedings of the Italia Intelligenza Artificiale, 2023
Proceedings of the Italia Intelligenza Artificiale, 2023
Proceedings of the International Conference on Machine Learning and Cybernetics, 2023
Proceedings of the International Conference on Machine Learning and Cybernetics, 2023
Proceedings of the Image Analysis and Processing - ICIAP 2023, 2023
Proceedings of the 31st European Symposium on Artificial Neural Networks, 2023
Towards Machine Learning Models that We Can Trust: Testing, Improving, and Explaining Robustness.
Proceedings of the 31st European Symposium on Artificial Neural Networks, 2023
2022
A Hybrid Training-Time and Run-Time Defense Against Adversarial Attacks in Modulation Classification.
IEEE Wirel. Commun. Lett., 2022
IEEE Trans. Pattern Anal. Mach. Intell., 2022
Do gradient-based explanations tell anything about adversarial robustness to Android malware?
Int. J. Mach. Learn. Cybern., 2022
CoRR, 2022
Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022
2021
Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples.
CoRR, 2021
CoRR, 2021
Intriguing Usage of Applicability Domain: Lessons from Cheminformatics Applied to Adversarial Learning.
CoRR, 2021
The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?
Proceedings of the International Joint Conference on Neural Networks, 2021
Proceedings of the AISec@CCS 2021: Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security, 2021
2020
CoRR, 2020
Comput. Secur., 2020
Proceedings of the CCS '20: 2020 ACM SIGSAC Conference on Computer and Communications Security, 2020
2019
IEEE Trans. Dependable Secur. Comput., 2019
Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks.
Proceedings of the 28th USENIX Security Symposium, 2019
2018
On the Intriguing Connections of Regularization, Input Gradients and Transferability of Evasion and Poisoning Attacks.
CoRR, 2018
Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables.
Proceedings of the 26th European Signal Processing Conference, 2018
2017
Proceedings of the First Italian Conference on Cybersecurity (ITASEC17), 2017
Is Deep Learning Safe for Robot Vision? Adversarial Examples Against the iCub Humanoid.
Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops, 2017
Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 2017
2016
Proceedings of the Structural, Syntactic, and Statistical Pattern Recognition, 2016
Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security, 2016
2015
Proceedings of the Image Analysis and Processing - ICIAP 2015, 2015