Wei Huang

ORCID: 0000-0003-1418-6267

Affiliations:
  • Purple Mountain Laboratories, Nanjing, China
  • University of Liverpool, School of Electrical Engineering, Electronics and Computer Science (EEECS), UK


According to our database, Wei Huang authored at least 24 papers between 2019 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2024
A survey of safety and trustworthiness of large language models through the lens of verification and validation.
Artif. Intell. Rev., July, 2024

Hierarchical Distribution-aware Testing of Deep Learning.
ACM Trans. Softw. Eng. Methodol., February, 2024

A simple framework to enhance the adversarial robustness of deep learning-based intrusion detection system.
Comput. Secur., February, 2024

Formal verification of robustness and resilience of learning-enabled state estimation systems.
Neurocomputing, 2024

Eidos: Efficient, Imperceptible Adversarial 3D Point Clouds.
CoRR, 2024

Diversity supporting robustness: Enhancing adversarial robustness via differentiated ensemble predictions.
Comput. Secur., 2024

Ensemble Adversarial Defense via Integration of Multiple Dispersed Low Curvature Models.
Proceedings of the International Joint Conference on Neural Networks, 2024

2023
Reliability Assessment and Safety Arguments for Machine Learning Components in System Assurance.
ACM Trans. Embed. Comput. Syst., 2023

What, Indeed, is an Achievable Provable Guarantee for Learning-Enabled Safety-Critical Systems.
Proceedings of the Bridging the Gap Between AI and Reality, 2023

SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

2022
Coverage-Guided Testing for Recurrent Neural Networks.
IEEE Trans. Reliab., 2022

Embedding and extraction of knowledge in tree ensemble classifiers.
Mach. Learn., 2022

A Hierarchical HAZOP-Like Safety Analysis for Learning-Enabled Systems.
Proceedings of the Workshop on Artificial Intelligence Safety 2022 (AISafety 2022) co-located with the Thirty-First International Joint Conference on Artificial Intelligence and the Twenty-Fifth European Conference on Artificial Intelligence (IJCAI-ECAI-2022), 2022

Enhancing Adversarial Training with Second-Order Statistics of Weights.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022

2021
Reliability Assessment and Safety Arguments for Machine Learning Components in Assuring Learning-Enabled Autonomous Systems.
CoRR, 2021

Tutorials on Testing Neural Networks.
CoRR, 2021

BayLIME: Bayesian local interpretable model-agnostic explanations.
Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, 2021

Assessing the Reliability of Deep Learning Classifiers Through Robustness Evaluation and Operational Profiles.
Proceedings of the Workshop on Artificial Intelligence Safety 2021 co-located with the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI 2021), 2021

Detecting Operational Adversarial Examples for Reliable Deep Learning.
Proceedings of the 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks, 2021

2020
Formal Verification of Robustness and Resilience of Learning-Enabled State Estimation Systems for Robotics.
CoRR, 2020

Embedding and Synthesis of Knowledge in Tree Ensemble Classifiers.
CoRR, 2020

Practical Verification of Neural Network Enabled State Estimation System for Robotics.
Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2020

2019
Test Metrics for Recurrent Neural Networks.
CoRR, 2019

testRNN: Coverage-guided Testing on Recurrent Neural Networks.
CoRR, 2019
