Kuofeng Gao

Orcid: 0000-0002-5667-8238

According to our database, Kuofeng Gao authored at least 20 papers between 2021 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2025
PointNCBW: Toward Dataset Ownership Verification for Point Clouds via Negative Clean-Label Backdoor Watermark.
IEEE Trans. Inf. Forensics Secur., 2025

2024
Imperceptible and Robust Backdoor Attack in 3D Point Cloud.
IEEE Trans. Inf. Forensics Secur., 2024

Benchmarking Open-ended Audio Dialogue Understanding for Large Audio-Language Models.
CoRR, 2024

Denial-of-Service Poisoning Attacks against Large Language Models.
CoRR, 2024

Embedding Self-Correction as an Inherent Ability in Large Language Models for Enhanced Mathematical Reasoning.
CoRR, 2024

PointNCBW: Towards Dataset Ownership Verification for Point Clouds via Negative Clean-label Backdoor Watermark.
CoRR, 2024

Video Watermarking: Safeguarding Your Video from (Unauthorized) Annotations by Video-based LLMs.
CoRR, 2024

Deconstructing The Ethics of Large Language Models from Long-standing Issues to New-emerging Dilemmas.
CoRR, 2024

Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers.
CoRR, 2024

Adversarial Robustness for Visual Grounding of Multimodal Large Language Models.
CoRR, 2024

Energy-Latency Manipulation of Multi-modal Large Language Models via Verbose Samples.
CoRR, 2024

FMM-Attack: A Flow-based Multi-modal Adversarial Attack on Video-based LLMs.
CoRR, 2024

Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

2023
Backdoor Defense via Adaptively Splitting Poisoned Dataset.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning.
Proceedings of the 34th British Machine Vision Conference 2023, 2023

2022
Practical protection against video data leakage via universal adversarial head.
Pattern Recognit., 2022

Hardly Perceptible Trojan Attack Against Neural Networks with Bit Flips.
Proceedings of the Computer Vision - ECCV 2022, 2022

2021
Clean-label Backdoor Attack against Deep Hashing based Retrieval.
CoRR, 2021
