CodeArt: Better Code Models by Attention Regularization When Symbols Are Lacking.
Proc. ACM Softw. Eng., 2024
ProSec: Fortifying Code LLMs with Proactive Security Alignment.
CoRR, 2024
When Dataflow Analysis Meets Large Language Models.
CoRR, 2024
LLMDFA: Analyzing Dataflow in Code with Large Language Models.
Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024
Source Code Foundation Models are Transferable Binary Analysis Knowledge Bases.
Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024
Sanitizing Large Language Models in Bug Detection with Data-Flow.
Findings of the Association for Computational Linguistics: EMNLP 2024, 2024
LmPa: Improving Decompilation by Synergy of Large Language Model and Program Analysis.
CoRR, 2023
Improving Binary Code Similarity Transformer Models by Semantics-Driven Instruction Deemphasis.
Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, 2023
Cross-Layer Dual Modular Redundancy Hardened Scheme of Flip-Flop Design Based on Sense-Amplifier.
J. Circuits Syst. Comput., 2021
A Hybrid DMR Latch to Tolerate MNU Using TDICE and WDICE.
Proceedings of the 27th IEEE Asian Test Symposium, 2018