A Survey of LLM-Driven AI Agent Communication: Protocols, Security Risks, and Defense Countermeasures.
CoRR, June 2025
FragFake: A Dataset for Fine-Grained Detection of Edited Images with Vision Language Models.
CoRR, May 2025
SEM: Reinforcement Learning for Search-Efficient Large Language Models.
CoRR, May 2025
Thought Manipulation: External Thought Can Be Efficient for Large Reasoning Models.
CoRR, April 2025
Prompt Stealing Attacks Against Large Language Models.
CoRR, 2024
Conversation Reconstruction Attack Against GPT Models.
CoRR, 2024
Games and Beyond: Analyzing the Bullet Chats of Esports Livestreaming.
Proceedings of the Eighteenth International AAAI Conference on Web and Social Media, 2024
Reconstruct Your Previous Conversations! Comprehensively Investigating Privacy Leakage Risks in Conversations with GPT Models.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024
ZeroFake: Zero-Shot Detection of Fake Images Generated and Edited by Text-to-Image Generation Models.
Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security, 2024
Comprehensive Assessment of Toxicity in ChatGPT.
CoRR, 2023
From Visual Prompt Learning to Zero-Shot Transfer: Mapping Is All You Need.
CoRR, 2023
Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023
DE-FAKE: Detection and Attribution of Fake Images Generated by Text-to-Image Generation Models.
Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, 2023
Fine-Tuning Is All You Need to Mitigate Backdoor Attacks.
CoRR, 2022
DE-FAKE: Detection and Attribution of Fake Images Generated by Text-to-Image Diffusion Models.
CoRR, 2022