Jiaming Zhang

ORCID: 0000-0003-0991-7109

Affiliations:
  • Beijing Jiaotong University, Beijing, China


According to our database, Jiaming Zhang authored at least 21 papers between 2017 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
Expose Before You Defend: Unifying and Enhancing Backdoor Defenses via Exposed Models.
CoRR, 2024

Debiasing Vision-Language Models with Text-Only Training.
CoRR, 2024

AnyAttack: Towards Large-scale Self-supervised Generation of Targeted Adversarial Examples for Vision-Language Models.
CoRR, 2024

Adversarial Prompt Tuning for Vision-Language Models.
Proceedings of the Computer Vision - ECCV 2024, 2024

2023
Low-mid adversarial perturbation against unauthorized face recognition system.
Inf. Sci., November, 2023

Attention, Please! Adversarial Defense via Activation Rectification and Preservation.
ACM Trans. Multim. Comput. Commun. Appl., 2023

Introducing Foundation Models as Surrogate Models: Advancing Towards More Practical Adversarial Attacks.
CoRR, 2023

Unlearnable Clusters: Towards Label-Agnostic Unlearnable Examples.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

ImageNet Pre-training Also Transfers Non-robustness.
Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023

2022
JPEG Compression-Resistant Low-Mid Adversarial Perturbation against Unauthorized Face Recognition System.
CoRR, 2022

Towards Adversarial Attack on Vision-Language Pre-training Models.
Proceedings of the MM '22: The 30th ACM International Conference on Multimedia, Lisboa, Portugal, October 10, 2022

Benign Adversarial Attack: Tricking Models for Goodness.
Proceedings of the MM '22: The 30th ACM International Conference on Multimedia, Lisboa, Portugal, October 10, 2022

2021
Robust CAPTCHAs Towards Malicious OCR.
IEEE Trans. Multim., 2021

Benign Adversarial Attack: Tricking Algorithm for Goodness.
CoRR, 2021

Pre-training also Transfers Non-Robustness.
CoRR, 2021

APF: An Adversarial Privacy-preserving Filter to Protect Portrait Information.
Proceedings of the MM '21: ACM Multimedia Conference, Virtual Event, China, October 20, 2021

Trustworthy Multimedia Analysis.
Proceedings of the MM '21: ACM Multimedia Conference, Virtual Event, China, October 20, 2021

2020
Adversarial Privacy-preserving Filter.
Proceedings of the MM '20: The 28th ACM International Conference on Multimedia, 2020

2019
Blessing in Disguise: Designing Robust Turing Test by Employing Algorithm Unrobustness.
CoRR, 2019

2018
Attention, Please! Adversarial Defense via Attention Rectification and Preservation.
CoRR, 2018

2017
A Demo for Image-Based Personality Test.
Proceedings of the MultiMedia Modeling - 23rd International Conference, 2017
