Guangyu Shen

According to our database, Guangyu Shen authored at least 37 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
ASPIRER: Bypassing System Prompts With Permutation-based Backdoors in LLMs.
CoRR, 2024

Rapid Optimization for Jailbreaking LLMs via Subconscious Exploitation and Echopraxia.
CoRR, 2024

Opening A Pandora's Box: Things You Should Know in the Era of Custom GPTs.
CoRR, 2024

Rethinking the Invisible Protection against Unauthorized Image Usage in Stable Diffusion.
Proceedings of the 33rd USENIX Security Symposium, 2024

On Large Language Models' Resilience to Coercive Interrogation.
Proceedings of the IEEE Symposium on Security and Privacy, 2024

Exploring the Orthogonality and Linearity of Backdoor Attacks.
Proceedings of the IEEE Symposium on Security and Privacy, 2024

Distribution Preserving Backdoor Attack in Self-supervised Learning.
Proceedings of the IEEE Symposium on Security and Privacy, 2024

OdScan: Backdoor Scanning for Object Detection Models.
Proceedings of the IEEE Symposium on Security and Privacy, 2024

UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening.
Proceedings of the Computer Vision - ECCV 2024, 2024

Lotus: Evasive and Resilient Backdoor Attacks through Sub-Partitioning.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
Make Them Spill the Beans! Coercive Knowledge Extraction from (Production) LLMs.
CoRR, 2023

Hard-label Black-box Universal Adversarial Patch Attack.
Proceedings of the 32nd USENIX Security Symposium, 2023

PELICAN: Exploiting Backdoors of Naturally Trained Deep Learning Models In Binary Code Analysis.
Proceedings of the 32nd USENIX Security Symposium, 2023

ImU: Physical Impersonating Attack for Face Recognition System with Natural Style Changes.
Proceedings of the 44th IEEE Symposium on Security and Privacy, 2023

ParaFuzz: An Interpretability-Driven Technique for Detecting Poisoned Samples in NLP.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Django: Detecting Trojans in Object Detection Models via Gaussian Focus Calibration.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense.
Proceedings of the 30th Annual Network and Distributed System Security Symposium, 2023

Improving Binary Code Similarity Transformer Models by Semantics-Driven Instruction Deemphasis.
Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, 2023

FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

MEDIC: Remove Model Backdoors via Importance Driven Cloning.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Detecting Backdoors in Pre-trained Encoders.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
Backdoor Vulnerabilities in Normally Trained Deep Learning Models.
CoRR, 2022

DECK: Model Hardening for Defending Pervasive Backdoors.
CoRR, 2022

Constrained Optimization with Dynamic Bound-scaling for Effective NLP Backdoor Defense.
CoRR, 2022

Model Orthogonalization: Class Distance Hardening in Neural Networks for Better Security.
Proceedings of the 43rd IEEE Symposium on Security and Privacy, 2022

Piccolo: Exposing Complex Backdoors in NLP Transformer Models.
Proceedings of the 43rd IEEE Symposium on Security and Privacy, 2022

MIRROR: Model Inversion for Deep Learning Network with High Fidelity.
Proceedings of the 29th Annual Network and Distributed System Security Symposium, 2022

Constrained Optimization with Dynamic Bound-scaling for Effective NLP Backdoor Defense.
Proceedings of the International Conference on Machine Learning, 2022

Better Trigger Inversion Optimization in Backdoor Scanning.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022

Complex Backdoor Detection by Symmetric Feature Differencing.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022

2021
EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry.
CoRR, 2021

Backdoor Scanning for Deep Neural Networks through K-Arm Optimization.
Proceedings of the 38th International Conference on Machine Learning, 2021

2020
PENet: Object Detection using Points Estimation in Aerial Images.
CoRR, 2020

2019
Unrestricted Adversarial Attacks for Semantic Segmentation.
CoRR, 2019

2018
Multi-modal brain tumor image segmentation based on SDAE.
Int. J. Imaging Syst. Technol., 2018

Brain Tumor Segmentation Using Concurrent Fully Convolutional Networks and Conditional Random Fields.
Proceedings of the 3rd International Conference on Multimedia and Image Processing, 2018
