Jingyi Wang

ORCID: 0000-0001-7113-7635

Affiliations:
  • Zhejiang University, China
  • Singapore University of Technology and Design, Singapore (former)


According to our database, Jingyi Wang authored at least 56 papers between 2016 and 2024.

Bibliography

2024
Attack as Detection: Using Adversarial Attack Methods to Detect Abnormal Examples.
ACM Trans. Softw. Eng. Methodol., March, 2024

Better Pay Attention Whilst Fuzzing.
IEEE Trans. Software Eng., February, 2024

VeriFi: Towards Verifiable Federated Unlearning.
IEEE Trans. Dependable Secur. Comput., 2024

FAST: Boosting Uncertainty-based Test Prioritization Methods for Neural Networks via Feature Selection.
CoRR, 2024

μDrive: User-Controlled Autonomous Driving.
CoRR, 2024

Protecting Deep Learning Model Copyrights with Adversarial Example-Free Reuse Detection.
CoRR, 2024

Towards Real World Debiasing: A Fine-grained Analysis On Spurious Correlation.
CoRR, 2024

S-Eval: Automatic and Adaptive Test Generation for Benchmarking Safety Evaluation of Large Language Models.
CoRR, 2024

TeDA: A Testing Framework for Data Usage Auditing in Deep Learning Model Development.
Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, 2024

Interpretability Based Neural Network Repair.
Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, 2024

Isolation-Based Debugging for Neural Networks.
Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, 2024

VeRe: Verification Guided Synthesis for Repairing Deep Neural Networks.
Proceedings of the 46th IEEE/ACM International Conference on Software Engineering, 2024

K-RAPID: A Formal Executable Semantics of the RAPID Robot Programming Language.
Proceedings of the 10th ACM Cyber-Physical System Security Workshop, 2024

2023
Boosting Adversarial Training in Safety-Critical Systems Through Boundary Data Selection.
IEEE Robotics Autom. Lett., December, 2023

TestSGD: Interpretable Testing of Neural Networks against Subtle Group Discrimination.
ACM Trans. Softw. Eng. Methodol., November, 2023

K-ST: A Formal Executable Semantics of the Structured Text Language for PLCs.
IEEE Trans. Software Eng., October, 2023

QuoTe: Quality-oriented Testing for Deep Learning Systems.
ACM Trans. Softw. Eng. Methodol., September, 2023

Defending Cyber-Physical Systems Through Reverse-Engineering-Based Memory Sanity Check.
IEEE Internet Things J., May, 2023

Prompting Frameworks for Large Language Models: A Survey.
CoRR, 2023

FairRec: Fairness Testing for Deep Recommender Systems.
Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, 2023

DEEPJUDGE: A Testing Framework for Copyright Protection of Deep Learning Models.
Proceedings of the 45th IEEE/ACM International Conference on Software Engineering: ICSE 2023 Companion Proceedings, 2023

Black-Box Fairness Testing with Shadow Models.
Proceedings of the Information and Communications Security - 25th International Conference, 2023

HODOR: Shrinking Attack Surface on Node.js via System Call Limitation.
Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, 2023

2022
Automatic Fairness Testing of Neural Classifiers Through Adversarial Sampling.
IEEE Trans. Software Eng., 2022

Towards Comprehensively Understanding the Run-time Security of Programmable Logic Controllers: A 3-year Empirical Study.
CoRR, 2022

K-ST: A Formal Executable Semantics of PLC Structured Text Language.
CoRR, 2022

Which neural network makes more explainable decisions? An approach towards measuring explainability.
Autom. Softw. Eng., 2022

Repairing Adversarial Texts Through Perturbation.
Proceedings of the Theoretical Aspects of Software Engineering, 2022

Copy, Right? A Testing Framework for Copyright Protection of Deep Learning Models.
Proceedings of the 43rd IEEE Symposium on Security and Privacy, 2022

NeuronFair: Interpretable White-Box Fairness Testing through Biased Neuron Identification.
Proceedings of the 44th IEEE/ACM International Conference on Software Engineering, 2022

2021
Automatically 'Verifying' Discrete-Time Complex Systems through Learning, Abstraction and Refinement.
IEEE Trans. Software Eng., 2021

Adversarial attacks and mitigation for anomaly detectors of cyber-physical systems.
Int. J. Crit. Infrastructure Prot., 2021

Better Pay Attention Whilst Fuzzing.
CoRR, 2021

Fairness Testing of Deep Image Classification with Adequacy Metrics.
CoRR, 2021

Improving Neural Network Verification through Spurious Region Guided Refinement.
Proceedings of the Tools and Algorithms for the Construction and Analysis of Systems, 2021

Attack as defense: characterizing adversarial examples using robustness.
Proceedings of the 30th ACM SIGSOFT International Symposium on Software Testing and Analysis, 2021

RobOT: Robustness-Oriented Testing for Deep Learning Systems.
Proceedings of the 43rd IEEE/ACM International Conference on Software Engineering, 2021

2020
Towards Repairing Neural Networks Correctly.
CoRR, 2020

Towards Interpreting Recurrent Neural Networks through Probabilistic Abstraction.
Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering, 2020

White-box fairness testing through adversarial sampling.
Proceedings of the 42nd International Conference on Software Engineering, 2020

An Empirical Study on Correlation between Coverage and Robustness for Deep Neural Networks.
Proceedings of the 25th International Conference on Engineering of Complex Computer Systems, 2020

2019
There is Limited Correlation between Coverage and Robustness for Deep Neural Networks.
CoRR, 2019

Analyzing Recurrent Neural Network by Probabilistic Abstraction.
CoRR, 2019

Adversarial sample detection for deep neural network through model mutation testing.
Proceedings of the 41st International Conference on Software Engineering, 2019

2018
Learning probabilistic models for model checking: an evolutionary approach and an empirical study.
Int. J. Softw. Tools Technol. Transf., 2018

Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing.
CoRR, 2018

Towards optimal concolic testing.
Proceedings of the 40th International Conference on Software Engineering, 2018

Towards 'Verifying' a Water Treatment System.
Proceedings of the Formal Methods - 22nd International Symposium, 2018

Importance Sampling of Interval Markov Chains.
Proceedings of the 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, 2018

2017
Toward 'verifying' a Water Treatment System.
CoRR, 2017

Improving Probability Estimation Through Active Probabilistic Model Learning.
Proceedings of the Formal Methods and Software Engineering, 2017

Learning Likely Invariants to Explain Why a Program Fails.
Proceedings of the 22nd International Conference on Engineering of Complex Computer Systems, 2017

Should We Learn Probabilistic Models for Model Checking? A New Approach and An Empirical Study.
Proceedings of the Fundamental Approaches to Software Engineering, 2017

2016
Verifying Complex Systems Probabilistically through Learning, Abstraction and Refinement.
CoRR, 2016

Service Adaptation with Probabilistic Partial Models.
Proceedings of the Formal Methods and Software Engineering, 2016

Towards Concolic Testing for Hybrid Systems.
Proceedings of the FM 2016: Formal Methods, 2016

