Survey on Explainable AI: Techniques, challenges and open issues.
Expert Syst. Appl., 2024
Experience of Training a 1.7B-Parameter LLaMa Model From Scratch.
CoRR, 2024
On the Effectiveness of Incremental Training of Large Language Models.
CoRR, 2024
A Novel Deep Multi-head Attentive Vulnerable Line Detector.
Proceedings of the International Neural Network Society Workshop on Deep Learning Innovations and Applications, 2023
DyAdvDefender: An instance-based online machine learning model for perturbation-trial-based black-box adversarial defense.
Inf. Sci., 2022
On the Effectiveness of Interpretable Feedforward Neural Network.
Proceedings of the International Joint Conference on Neural Networks, 2022
Interpretable Malware Classification based on Functional Analysis.
Proceedings of the 17th International Conference on Software Technologies, 2022
VDGraph2Vec: Vulnerability Detection in Assembly Code using Message Passing Neural Networks.
Proceedings of the 21st IEEE International Conference on Machine Learning and Applications, 2022
Malware classification and composition analysis: A survey of recent developments.
J. Inf. Secur. Appl., 2021
I-MAD: Interpretable malware detector using Galaxy Transformer.
Comput. Secur., 2021
A Novel and Dedicated Machine Learning Model for Malware Classification.
Proceedings of the 16th International Conference on Software Technologies, 2021
A Novel Neural Network-Based Malware Severity Classification System.
Software Technologies - 16th International Conference, 2021
I-MAD: A Novel Interpretable Malware Detector Using Hierarchical Transformer.
CoRR, 2019