Ahmed Salem

Affiliations:
  • CISPA Helmholtz Center for Information Security, Saarbrücken, Germany


According to our database, Ahmed Salem authored at least 30 papers between 2018 and 2024.

Bibliography

2024
Permissive Information-Flow Analysis for Large Language Models.
CoRR, 2024

Vera Verto: Multimodal Hijacking Attack.
CoRR, 2024

Breaking Agents: Compromising Autonomous LLM Agents Through Malfunction Amplification.
CoRR, 2024

SOS! Soft Prompt Attack Against Open-Source Large Language Models.
CoRR, 2024

Dataset and Lessons Learned from the 2024 SaTML LLM Capture-the-Flag Competition.
CoRR, 2024

Are you still on track!? Catching LLM Task Drift with Activations.
CoRR, 2024

Detection and Attribution of Models Trained on Generated Data.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024

2023
Maatphor: Automated Variant Analysis for Prompt Injection Attacks.
CoRR, 2023

Rethinking Privacy in Machine Learning Pipelines from an Information Flow Control Perspective.
CoRR, 2023

Comprehensive Assessment of Toxicity in ChatGPT.
CoRR, 2023

Last One Standing: A Comparative Analysis of Security and Privacy of Soft Prompt Tuning, LoRA, and In-Context Learning.
CoRR, 2023

Two-in-One: A Model Hijacking Attack Against Text Generation Models.
Proceedings of the 32nd USENIX Security Symposium, 2023

UnGANable: Defending Against GAN-based Face Manipulation.
Proceedings of the 32nd USENIX Security Symposium, 2023

SoK: Let the Privacy Games Begin! A Unified Treatment of Data Inference Privacy in Machine Learning.
Proceedings of the 44th IEEE Symposium on Security and Privacy, 2023

Analyzing Leakage of Personally Identifiable Information in Language Models.
Proceedings of the 44th IEEE Symposium on Security and Privacy, 2023

Bayesian Estimation of Differential Privacy.
Proceedings of the International Conference on Machine Learning, 2023

2022
Adversarial inference and manipulation of machine learning models.
PhD thesis, 2022

ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models.
Proceedings of the 31st USENIX Security Symposium, 2022

Get a Model! Model Hijacking Attack Against Machine Learning Models.
Proceedings of the 29th Annual Network and Distributed System Security Symposium, 2022

Dynamic Backdoor Attacks Against Machine Learning Models.
Proceedings of the 7th IEEE European Symposium on Security and Privacy, 2022

2021
MLCapsule: Guarded Offline Deployment of Machine Learning as a Service.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2021

BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements.
Proceedings of the 37th Annual Computer Security Applications Conference (ACSAC), 2021

2020
Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks.
CoRR, 2020

BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models.
CoRR, 2020

BadNL: Backdoor Attacks Against NLP Models.
CoRR, 2020

Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning.
Proceedings of the 29th USENIX Security Symposium, 2020

2019
Privacy-Preserving Similar Patient Queries for Combined Biomedical Data.
Proceedings on Privacy Enhancing Technologies, 2019

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models.
Proceedings of the 26th Annual Network and Distributed System Security Symposium, 2019

MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples.
Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019

2018
ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models.
CoRR, 2018
