Yixin Liu

Affiliations:
  • Lehigh University, Department of Computer Science, Bethlehem, PA, USA


According to our database, Yixin Liu authored at least 19 papers between 2023 and 2024.

Bibliography

2024
Investigating and Defending Shortcut Learning in Personalized Diffusion Models.
CoRR, 2024

Unleashing the Power of Multi-Task Learning: A Comprehensive Survey Spanning Traditional, Deep, and Pretrained Foundation Model Eras.
CoRR, 2024

Medical Unlearnable Examples: Securing Medical Data from Unauthorized Training via Sparsity-Aware Local Masking.
CoRR, 2024

Sora: A Review on Background, Technology, Limitations, and Opportunities of Large Vision Models.
CoRR, 2024

TrustLLM: Trustworthiness in Large Language Models.
CoRR, 2024

Stable Unlearnable Example: Enhancing the Robustness of Unlearnable Examples via Stable Error-Minimizing Noise.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
Improving Faithfulness for Vision Transformers.
CoRR, 2023

Toward Robust Imperceptible Perturbation against Unauthorized Text-to-image Diffusion-based Synthesis.
CoRR, 2023

Jailbreaking GPT-4V via Self-Adversarial Attacks with System Prompts.
CoRR, 2023

GraphCloak: Safeguarding Task-specific Knowledge within Graph-structured Data from Unauthorized Exploitation.
CoRR, 2023

MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use.
CoRR, 2023

Watermarking Text Data on Large Language Models for Dataset Copyright Protection.
CoRR, 2023

BadGPT: Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT.
CoRR, 2023

A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT.
CoRR, 2023

Unlearnable Graph: Protecting Graphs from Unauthorized Exploitation.
CoRR, 2023

Securing Biomedical Images from Unauthorized Training with Anti-Learning Perturbation.
CoRR, 2023

A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT.
CoRR, 2023

Backdoor Attacks to Pre-trained Unified Foundation Models.
CoRR, 2023

SEAT: Stable and Explainable Attention.
Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023
