Antonio Emanuele Cinà

Orcid: 0000-0003-3807-6417

According to our database, Antonio Emanuele Cinà authored at least 21 papers between 2021 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Timeline: publications per year, 2021–2025.

Bibliography

2025
Energy-latency attacks via sponge poisoning.
Inf. Sci., 2025

2024
Machine Learning Security Against Data Poisoning: Are We There Yet?
Computer, March, 2024

Pirates of Charity: Exploring Donation-based Abuses in Social Media Platforms.
CoRR, 2024

Robust image classification with multi-modal large language models.
CoRR, 2024

On the Robustness of Adversarial Training Against Uncertainty Attacks.
CoRR, 2024

Sonic: Fast and Transferable Data Poisoning on Clustering Algorithms.
CoRR, 2024

Over-parameterization and Adversarial Robustness in Neural Networks: An Overview and Empirical Analysis.
CoRR, 2024

AttackBench: Evaluating Gradient-based Attacks for Adversarial Examples.
CoRR, 2024

σ-zero: Gradient-based Optimization of ℓ₀-norm Adversarial Examples.
CoRR, 2024

The Imitation Game: Exploring Brand Impersonation Attacks on Social Media Platforms.
Proceedings of the 33rd USENIX Security Symposium, 2024

Conning the Crypto Conman: End-to-End Analysis of Cryptocurrency-based Technical Support Scams.
Proceedings of the IEEE Symposium on Security and Privacy, 2024

Understanding XAI Through the Philosopher's Lens: A Historical Perspective.
Proceedings of the 27th European Conference on Artificial Intelligence (ECAI 2024), 2024

2023
Hardening RGB-D object recognition systems against adversarial patch attacks.
Inf. Sci., December, 2023

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning.
ACM Comput. Surv., 2023

Vector Flows and the Capacity of a Discrete Memoryless Channel.
CoRR, 2023

Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training.
Proceedings of the Image Analysis and Processing - ICIAP 2023, 2023

On the Limitations of Model Stealing with Uncertainty Quantification Models.
Proceedings of the 31st European Symposium on Artificial Neural Networks, 2023

2022
Security of Machine Learning (Dagstuhl Seminar 22281).
Dagstuhl Reports, July, 2022

A black-box adversarial attack for poisoning clustering.
Pattern Recognit., 2022

2021
Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions.
CoRR, 2021

The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?
Proceedings of the International Joint Conference on Neural Networks, 2021
