Wenbo Jiang

ORCID: 0000-0002-4592-8094

Affiliations:
  • University of Electronic Science and Technology of China, School of Computer Science and Engineering, Chengdu, China


According to our database, Wenbo Jiang authored at least 28 papers between 2016 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Timeline

[Bar chart of publications per year, 2016–2025, omitted.]

Bibliography

2025
DivTrackee versus DynTracker: Promoting Diversity in Anti-Facial Recognition against Dynamic FR Strategy.
CoRR, January 2025

2024
Stealthy Targeted Backdoor Attacks Against Image Captioning.
IEEE Trans. Inf. Forensics Secur., 2024

Incremental Learning, Incremental Backdoor Threats.
IEEE Trans. Dependable Secur. Comput., 2024

A Comprehensive Defense Framework Against Model Extraction Attacks.
IEEE Trans. Dependable Secur. Comput., 2024

Stealthy and Robust Backdoor Attack against 3D Point Clouds through Additional Point Features.
CoRR, 2024

TrojanEdit: Backdooring Text-Based Image Editing Models.
CoRR, 2024

Combinational Backdoor Attack against Customized Text-to-Image Models.
CoRR, 2024

One Prompt to Verify Your Models: Black-Box Text-to-Image Models Verification via Non-Transferable Adversarial Attacks.
CoRR, 2024

Backdoor Attack Against Vision Transformers via Attention Gradient-Based Image Erosion.
CoRR, 2024

OnePath: Efficient and Privacy-Preserving Decision Tree Inference in the Cloud.
CoRR, 2024

ITPatch: An Invisible and Triggered Physical Adversarial Patch against Traffic Sign Recognition.
CoRR, 2024

Backdoor Attacks against Hybrid Classical-Quantum Neural Networks.
CoRR, 2024

DDFAD: Dataset Distillation Framework for Audio Data.
CoRR, 2024

Backdoor Attacks against Image-to-Image Networks.
CoRR, 2024

Talk Too Much: Poisoning Large Language Models under Token Limit.
CoRR, 2024

Rapid Adoption, Hidden Risks: The Dual Impact of Large Language Model Customization.
CoRR, 2024

Instruction Backdoor Attacks Against Customized LLMs.
Proceedings of the 33rd USENIX Security Symposium, 2024

An Efficient and Secure Privacy-Preserving Federated Learning Via Lattice-Based Functional Encryption.
Proceedings of the IEEE International Conference on Communications, 2024

Mtisa: Multi-Target Image-Scaling Attack.
Proceedings of the IEEE International Conference on Communications, 2024

2023
Physical Black-Box Adversarial Attacks Through Transformations.
IEEE Trans. Big Data, June 2023

Color Backdoor: A Robust Poisoning Attack in Color Space.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2020
Poisoning and Evasion Attacks Against Deep Learning Algorithms in Autonomous Vehicles.
IEEE Trans. Veh. Technol., 2020

Accelerating Poisoning Attack Through Momentum and Adam Algorithms.
Proceedings of the 92nd IEEE Vehicular Technology Conference, 2020

A Practical Black-Box Attack Against Autonomous Speech Recognition Model.
Proceedings of the IEEE Global Communications Conference, 2020

2019
PTAS: Privacy-preserving Thin-client Authentication Scheme in blockchain-based PKI.
Future Gener. Comput. Syst., 2019

A Flexible Poisoning Attack Against Machine Learning.
Proceedings of the 2019 IEEE International Conference on Communications, 2019

2018
A Privacy-Preserving Thin-Client Scheme in Blockchain-Based PKI.
Proceedings of the IEEE Global Communications Conference, 2018

2016
Research on big data in business model innovation based on GA-BP model.
Proceedings of the 2016 IEEE International Conference on Service Operations and Logistics, 2016
