Xilie Xu

ORCID: 0000-0001-9200-6589

According to our database, Xilie Xu authored at least 15 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Timeline: publication counts per year, 2020–2024.

Bibliography

2024
Technical Report for ICML 2024 TiFA Workshop MLLM Attack Challenge: Suffix Injection and Projected Gradient Descent Can Easily Fool An MLLM.
CoRR, 2024

Privacy-Preserving Low-Rank Adaptation for Latent Diffusion Models.
CoRR, 2024

Perplexity-aware Correction for Robust Alignment with Noisy Preferences.
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

AutoLoRa: An Automated Robust Fine-Tuning Framework.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

An LLM can Fool Itself: A Prompt-Based Adversarial Attack.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

2023
Decision Boundary-Aware Data Augmentation for Adversarial Training.
IEEE Trans. Dependable Secur. Comput., 2023

AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework.
CoRR, 2023

Efficient Adversarial Contrastive Learning via Robustness-Aware Coreset Selection.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

2022
NoiLIn: Improving adversarial training and correcting stereotype of noisy labels.
Trans. Mach. Learn. Res., 2022

Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests.
CoRR, 2022

Adversarial Attack and Defense for Non-Parametric Two-Sample Tests.
Proceedings of the International Conference on Machine Learning, 2022

2021
NoiLIn: Do Noisy Labels Always Hurt Adversarial Training?
CoRR, 2021

Guided Interpolation for Adversarial Training.
CoRR, 2021

2020
Attacks Which Do Not Kill Training Make Adversarial Learning Stronger.
Proceedings of the 37th International Conference on Machine Learning, 2020
