Jie Ren

Affiliations:
  • Michigan State University, Department of Computer Science and Engineering, East Lansing, MI, USA


According to our database, Jie Ren authored at least 21 papers between 2022 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Six-CD: Benchmarking Concept Removals for Benign Text-to-image Diffusion Models.
CoRR, 2024

Mitigating the Privacy Issues in Retrieval-Augmented Generation (RAG) via Pure Synthetic Data.
CoRR, 2024

EnTruth: Enhancing the Traceability of Unauthorized Dataset Usage in Text-to-image Diffusion Models with Minimal and Robust Alterations.
CoRR, 2024

Unveiling and Mitigating Memorization in Text-to-image Diffusion Models through Cross Attention.
CoRR, 2024

Copyright Protection in Generative AI: A Technical Perspective.
CoRR, 2024

Superiority of Multi-Head Attention in In-Context Linear Regression.
CoRR, 2024

Neural Style Protection: Counteracting Unauthorized Neural Style Transfer.
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2024

A Robust Semantics-based Watermark for Large Language Model against Paraphrasing.
Findings of the Association for Computational Linguistics: NAACL 2024, 2024

Sharpness-Aware Data Poisoning Attack.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG).
Findings of the Association for Computational Linguistics, 2024

Exploring Memorization in Fine-tuned Language Models.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
Confidence-driven Sampling for Backdoor Attacks.
CoRR, 2023

FT-Shield: A Watermark Against Unauthorized Fine-tuning in Text-to-Image Diffusion Models.
CoRR, 2023

On the Generalization of Training-based ChatGPT Detection Methods.
CoRR, 2023

DiffusionShield: A Watermark for Copyright Protection against Generative Diffusion Models.
CoRR, 2023

Sharpness-Aware Data Poisoning Attack.
CoRR, 2023

Probabilistic Categorical Adversarial Attack and Adversarial Training.
Proceedings of the International Conference on Machine Learning, 2023

Transferable Unlearnable Examples.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

2022
Probabilistic Categorical Adversarial Attack & Adversarial Training.
CoRR, 2022

Defense Against Gradient Leakage Attacks via Learning to Obscure Data.
CoRR, 2022

Towards Adversarial Learning: From Evasion Attacks to Poisoning Attacks.
Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '22), Washington, DC, USA, August 14, 2022
