Yinpeng Dong

ORCID: 0000-0003-1299-683X

According to our database, Yinpeng Dong authored at least 101 papers between 2016 and 2024.

Bibliography

2024
Improving transferability of 3D adversarial attacks with scale and shear transformations.
Inf. Sci., 2024

Real-world Adversarial Defense against Patch Attacks based on Diffusion Model.
CoRR, 2024

T2VSafetyBench: Evaluating the Safety of Text-to-Video Generative Models.
CoRR, 2024

Benchmarking Trustworthiness of Multimodal Large Language Models: A Comprehensive Study.
CoRR, 2024

AutoBreach: Universal and Adaptive Jailbreaking with Efficient Wordplay-Guided Optimization.
CoRR, 2024

Membership Inference on Text-to-Image Diffusion Models via Conditional Likelihood Discrepancy.
CoRR, 2024

The RoboDrive Challenge: Drive Anytime Anywhere in Any Condition.
CoRR, 2024

FaceCat: Enhancing Face Recognition Security with a Unified Generative Model Framework.
CoRR, 2024

BSPA: Exploring Black-box Stealthy Prompt Attacks against Image Generators.
CoRR, 2024

Discovering Universal Semantic Triggers for Text-to-Image Synthesis.
CoRR, 2024

Your Diffusion Model is Secretly a Certifiably Robust Classifier.
CoRR, 2024

Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise and Reconstruction.
Proceedings of the 33rd USENIX Security Symposium, 2024

Natural Language Induced Adversarial Images.
Proceedings of the 32nd ACM International Conference on Multimedia, 2024

Toward Availability Attacks in 3D Point Clouds.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Machine Vision Therapy: Multimodal Large Language Models Can Enhance Visual Robustness via Denoising In-Context Learning.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Efficient Black-box Adversarial Attacks via Bayesian Optimization Guided by a Function Prior.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Robust Classification via a Single Diffusion Model.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Embodied Active Defense: Leveraging Recurrent Feedback to Counter Adversarial Patches.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Rethinking Model Ensemble in Transfer-based Adversarial Attacks.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Omniview-Tuning: Boosting Viewpoint Invariance of Vision-Language Pre-training Models.
Proceedings of the Computer Vision - ECCV 2024, 2024

DIFFender: Diffusion-Based Adversarial Defense Against Patch Attacks.
Proceedings of the Computer Vision - ECCV 2024, 2024

Exploring the Transferability of Visual Prompting for Multimodal Large Language Models.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

Focus on Hiders: Exploring Hidden Threats for Enhancing Adversarial Training.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

Towards Transferable Targeted 3D Adversarial Attack in the Physical World.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

2023
Batch virtual adversarial training for graph convolutional networks.
AI Open, January, 2023

The Art of Defense: Letting Networks Fool the Attacker.
IEEE Trans. Inf. Forensics Secur., 2023

Evil Geniuses: Delving into the Safety of LLM-based Agents.
CoRR, 2023

How Robust is Google's Bard to Adversarial Image Attacks?
CoRR, 2023

Robustness and Generalizability of Deepfake Detection: A Study with Diffusion Models.
CoRR, 2023

Exploring Transferability of Multimodal Adversarial Samples for Vision-Language Pre-training Models with Contrastive Learning.
CoRR, 2023

Improving Viewpoint Robustness for Visual Recognition via Adversarial Training.
CoRR, 2023

Distributional Modeling for Location-Aware Adversarial Patches.
CoRR, 2023

Evaluating the Robustness of Text-to-image Diffusion Models against Real-world Attacks.
CoRR, 2023

DIFFender: Diffusion-Based Adversarial Defense against Patch Attacks in the Physical World.
CoRR, 2023

Understanding the Robustness of 3D Object Detection with Bird's-Eye-View Representations in Autonomous Driving.
CoRR, 2023

Rethinking Model Ensemble in Transfer-based Adversarial Attacks.
CoRR, 2023

A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking.
CoRR, 2023

Learning Sample Difficulty from Pre-trained Models for Reliable Prediction.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning.
Proceedings of the 31st ACM International Conference on Multimedia, 2023

GNOT: A General Neural Operator Transformer for Operator Learning.
Proceedings of the International Conference on Machine Learning, 2023

Root Pose Decomposition Towards Generic Non-rigid 3D Reconstruction with Monocular Videos.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

Towards Viewpoint-Invariant Visual Recognition via Adversarial Training.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

Understanding the Robustness of 3D Object Detection with Bird's-Eye-View Representations in Autonomous Driving.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Towards Effective Adversarial Textured 3D Meshes on Physical Face Recognition.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Compacting Binary Neural Networks by Sparse Kernel Selection.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Benchmarking Robustness of 3D Object Detection to Common Corruptions in Autonomous Driving.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
Towards generalizable detection of face forgery via self-guided model-agnostic learning.
Pattern Recognit. Lett., 2022

Query-Efficient Black-Box Adversarial Attacks Guided by a Transfer-Based Prior.
IEEE Trans. Pattern Anal. Mach. Intell., 2022

Artificial Intelligence Security Competition (AISC).
CoRR, 2022

Improving transferability of 3D adversarial attacks with scale and shear transformations.
CoRR, 2022

Controllable Evaluation and Generation of Physical Adversarial Patch on Face Recognition.
CoRR, 2022

AutoDA: Automated Decision-based Iterative Adversarial Attacks.
Proceedings of the 31st USENIX Security Symposium, 2022

Isometric 3D Adversarial Examples in the Physical World.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

ViewFool: Evaluating the Robustness of Visual Recognition to Adversarial Viewpoints.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Pre-trained Adversarial Perturbations.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing.
Proceedings of the International Conference on Machine Learning, 2022

Exploring Memorization in Adversarial Training.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Kallima: A Clean-Label Framework for Textual Backdoor Attacks.
Proceedings of the Computer Security - ESORICS 2022, 2022

Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks.
Proceedings of the Computer Vision - ECCV 2022, 2022

BadDet: Backdoor Attacks on Object Detection.
Proceedings of the Computer Vision - ECCV 2022 Workshops, 2022

Two Coupled Rejection Metrics Can Tell Adversarial Examples Apart.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022

2021
Unrestricted Adversarial Attacks on ImageNet Competition.
CoRR, 2021

Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness.
CoRR, 2021

Adversarial Attacks on ML Defense Models Competition.
CoRR, 2021

Adversarial Training with Rectified Rejection.
CoRR, 2021

Automated Decision-based Adversarial Attacks.
CoRR, 2021

Accumulative Poisoning Attacks on Real-time Data.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Bag of Tricks for Adversarial Training.
Proceedings of the 9th International Conference on Learning Representations, 2021

Towards Face Encryption by Generating Adversarial Identity Masks.
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, 2021

Black-box Detection of Backdoor Attacks with Limited Information and Data.
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, 2021

Improving Transferability of Adversarial Patches on Face Recognition With Generative Models.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021

2020
BayesAdapter: Being Bayesian, Inexpensively and Robustly, via Bayesian Fine-tuning.
CoRR, 2020

Delving into the Adversarial Robustness on Face Recognition.
CoRR, 2020

Towards Privacy Protection by Generating Adversarial Identity Masks.
CoRR, 2020

Boosting Adversarial Training with Hypersphere Embedding.
CoRR, 2020

Boosting Adversarial Training with Hypersphere Embedding.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Adversarial Distributional Training for Robust Deep Learning.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Understanding and Exploring the Network with Stochastic Architectures.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Error-Silenced Quantization: Bridging Robustness and Compactness.
Proceedings of the Workshop on Artificial Intelligence Safety 2020, co-located with IJCAI-PRICAI 2020, 2020

Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness.
Proceedings of the 8th International Conference on Learning Representations, 2020

Benchmarking Adversarial Robustness on Image Classification.
Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020

2019
Stochastic Quantization for Learning Accurate Low-Bit Deep Neural Networks.
Int. J. Comput. Vis., 2019

Benchmarking Adversarial Robustness.
CoRR, 2019

Improving Black-box Adversarial Attacks with a Transfer-based Prior.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Efficient Decision-Based Black-Box Adversarial Attacks on Face Recognition.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019

Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019

Composite Binary Decomposition Networks.
Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, 2019

2018
Adversarial Attacks and Defences Competition.
CoRR, 2018

Towards Robust Detection of Adversarial Examples.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

Learning Visual Knowledge Memory Networks for Visual Question Answering.
Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, 2018

Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser.
Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, 2018

Boosting Adversarial Attacks With Momentum.
Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, 2018

2017
Discovering Adversarial Examples with Momentum.
CoRR, 2017

Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples.
CoRR, 2017

Learning Accurate Low-Bit Deep Neural Networks with Stochastic Quantization.
CoRR, 2017

Forecast the Plausible Paths in Crowd Scenes.
Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, 2017

Improving Interpretability of Deep Neural Networks with Semantic Information.
Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017

Learning Accurate Low-Bit Deep Neural Networks with Stochastic Quantization.
Proceedings of the British Machine Vision Conference 2017, 2017

2016
Feature Engineering and Ensemble Modeling for Paper Acceptance Rank Prediction.
CoRR, 2016

Crowd Scene Understanding with Coherent Recurrent Neural Networks.
Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, 2016

Efficient and Robust Semi-supervised Learning Over a Sparse-Regularized Graph.
Proceedings of the Computer Vision - ECCV 2016, 2016
