Yixin Liu

ORCID: 0000-0003-3856-439X

Affiliations:
  • Lehigh University, Department of Computer Science, Bethlehem, PA, USA


According to our database, Yixin Liu authored at least 27 papers between 2020 and 2024.

Bibliography

2024
Empirical Perturbation Analysis of Linear System Solvers from a Data Poisoning Perspective.
CoRR, 2024

Can Large Language Models Automatically Jailbreak GPT-4V?
CoRR, 2024

Investigating and Defending Shortcut Learning in Personalized Diffusion Models.
CoRR, 2024

Unleashing the Power of Multi-Task Learning: A Comprehensive Survey Spanning Traditional, Deep, and Pretrained Foundation Model Eras.
CoRR, 2024

Medical Unlearnable Examples: Securing Medical Data from Unauthorized Training via Sparsity-Aware Local Masking.
CoRR, 2024

Sora: A Review on Background, Technology, Limitations, and Opportunities of Large Vision Models.
CoRR, 2024

TrustLLM: Trustworthiness in Large Language Models.
CoRR, 2024

Improving Interpretation Faithfulness for Vision Transformers.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

EditShield: Protecting Unauthorized Image Editing by Instruction-Guided Diffusion Models.
Proceedings of the Computer Vision - ECCV 2024, 2024

MetaCloak: Preventing Unauthorized Subject-Driven Text-to-Image Diffusion-Based Synthesis via Meta-Learning.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

Stable Unlearnable Example: Enhancing the Robustness of Unlearnable Examples via Stable Error-Minimizing Noise.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
Improving Faithfulness for Vision Transformers.
CoRR, 2023

Toward Robust Imperceptible Perturbation against Unauthorized Text-to-image Diffusion-based Synthesis.
CoRR, 2023

Jailbreaking GPT-4V via Self-Adversarial Attacks with System Prompts.
CoRR, 2023

GraphCloak: Safeguarding Task-specific Knowledge within Graph-structured Data from Unauthorized Exploitation.
CoRR, 2023

Watermarking Text Data on Large Language Models for Dataset Copyright Protection.
CoRR, 2023

BadGPT: Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT.
CoRR, 2023

A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT.
CoRR, 2023

Unlearnable Graph: Protecting Graphs from Unauthorized Exploitation.
CoRR, 2023

Securing Biomedical Images from Unauthorized Training with Anti-Learning Perturbation.
CoRR, 2023

A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT.
CoRR, 2023

Backdoor Attacks to Pre-trained Unified Foundation Models.
CoRR, 2023

SEAT: Stable and Explainable Attention.
Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023

2021
Conditional Automated Channel Pruning for Deep Neural Networks.
IEEE Signal Process. Lett., 2021

Priority prediction of Asian Hornet sighting report using machine learning methods.
CoRR, 2021

2020
Conditional Automated Channel Pruning for Deep Neural Networks.
CoRR, 2020

