2024
Context Injection Attacks on Large Language Models.
CoRR, 2024

LLM Factoscope: Uncovering LLMs' Factual Discernment through Measuring Inner States.
Findings of the Association for Computational Linguistics, 2024

2023
LLM Factoscope: Uncovering LLMs' Factual Discernment through Inner States Analysis.
CoRR, 2023

2019
Optimum Methods of Thermal-Fluid Numerical Simulation for Switchgear.
IEEE Access, 2019