Han Xu

ORCID: 0000-0002-4016-6748

Affiliations:
  • Michigan State University, USA


According to our database, Han Xu authored at least 40 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Mitigating the Privacy Issues in Retrieval-Augmented Generation (RAG) via Pure Synthetic Data.
CoRR, 2024

Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis.
CoRR, 2024

Unveiling and Mitigating Memorization in Text-to-image Diffusion Models through Cross Attention.
CoRR, 2024

Copyright Protection in Generative AI: A Technical Perspective.
CoRR, 2024

Data Poisoning for In-context Learning.
CoRR, 2024

Neural Style Protection: Counteracting Unauthorized Neural Style Transfer.
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2024

Sharpness-Aware Data Poisoning Attack.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG).
Proceedings of the Findings of the Association for Computational Linguistics, 2024

Exploring Memorization in Fine-tuned Language Models.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
A Robust Semantics-based Watermark for Large Language Model against Paraphrasing.
CoRR, 2023

Confidence-driven Sampling for Backdoor Attacks.
CoRR, 2023

FT-Shield: A Watermark Against Unauthorized Fine-tuning in Text-to-Image Diffusion Models.
CoRR, 2023

On the Generalization of Training-based ChatGPT Detection Methods.
CoRR, 2023

DiffusionShield: A Watermark for Copyright Protection against Generative Diffusion Models.
CoRR, 2023

Sharpness-Aware Data Poisoning Attack.
CoRR, 2023

How does the Memorization of Neural Networks Impact Adversarial Robust Models?
Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2023

Probabilistic Categorical Adversarial Attack and Adversarial Training.
Proceedings of the International Conference on Machine Learning, 2023

Transferable Unlearnable Examples.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Jointly Attacking Graph Neural Network and its Explanations.
Proceedings of the 39th IEEE International Conference on Data Engineering, 2023

2022
Towards Fair Classification against Poisoning Attacks.
CoRR, 2022

Probabilistic Categorical Adversarial Attack & Adversarial Training.
CoRR, 2022

A Comprehensive Survey on Trustworthy Recommender Systems.
CoRR, 2022

Defense Against Gradient Leakage Attacks via Learning to Obscure Data.
CoRR, 2022

Enhancing Adversarial Training with Feature Separability.
CoRR, 2022

Doctoral Consortium of WSDM'22: Exploring the Bias of Adversarial Defenses.
Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining (WSDM '22), 2022

Towards Adversarial Learning: From Evasion Attacks to Poisoning Attacks.
Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '22), 2022

Imbalanced Adversarial Training with Reweighting.
Proceedings of the IEEE International Conference on Data Mining, 2022

2021
Towards the Memorization Effect of Neural Networks in Adversarial Training.
CoRR, 2021

Yet Meta Learning Can Adapt Fast, it Can Also Break Easily.
Proceedings of the 2021 SIAM International Conference on Data Mining, 2021

Graph Neural Networks with Adaptive Residual.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Adversarial Robustness in Deep Learning: From Practices to Theories.
Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '21), 2021

To be Robust or to be Fair: Towards Fairness in Adversarial Training.
Proceedings of the 38th International Conference on Machine Learning, 2021

DeepRobust: a Platform for Adversarial Attacks and Defenses.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

2020
Adversarial Attacks and Defenses on Graphs.
SIGKDD Explor., 2020

Adversarial Attacks and Defenses in Images, Graphs and Text: A Review.
Int. J. Autom. Comput., 2020

To be Robust or to be Fair: Towards Fairness in Adversarial Training.
CoRR, 2020

DeepRobust: A PyTorch Library for Adversarial Attacks and Defenses.
CoRR, 2020

Adversarial Attacks and Defenses on Graphs: A Review and Empirical Study.
CoRR, 2020

Deep Adversarial Canonical Correlation Analysis.
Proceedings of the 2020 SIAM International Conference on Data Mining, 2020

Adversarial Attacks and Defenses: Frontiers, Advances and Practice.
Proceedings of the 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '20), 2020
