Context Injection Attacks on Large Language Models.
CoRR, 2024.
LLM Factoscope: Uncovering LLMs' Factual Discernment through Measuring Inner States.
Findings of the Association for Computational Linguistics, 2024.
LLM Factoscope: Uncovering LLMs' Factual Discernment through Inner States Analysis.
CoRR, 2023.
Optimum Methods of Thermal-Fluid Numerical Simulation for Switchgear.
IEEE Access, 2019.