Bo Li

ORCID: 0000-0003-4883-7267

Affiliations:
  • University of Chicago, Department of Computer Science, IL, USA
  • University of Illinois at Urbana-Champaign, Department of Computer Science, IL, USA
  • University of California, Berkeley, CA, USA (former)
  • Vanderbilt University, Nashville, TN, USA (PhD 2016)
  • Tongji University, Shanghai, China (former)

According to our database, Bo Li authored at least 338 papers between 2009 and 2024.

Bibliography

2024
Perception simplex: Verifiable collision avoidance in autonomous vehicles amidst obstacle detection faults.
Softw. Test. Verification Reliab., September, 2024

LLM-PBE: Assessing Data Privacy in Large Language Models.
Proc. VLDB Endow., July, 2024

VeriFi: Towards Verifiable Federated Unlearning.
IEEE Trans. Dependable Secur. Comput., 2024

AdvWeb: Controllable Black-box Attacks on VLM-powered Web Agents.
CoRR, 2024

Reconstruction of Differentially Private Text Sanitization via Large Language Models.
CoRR, 2024

SecCodePLT: A Unified Platform for Evaluating the Security of Code GenAI.
CoRR, 2024

AutoDAN-Turbo: A Lifelong Agent for Strategy Self-Exploration to Jailbreak LLMs.
CoRR, 2024

EIA: Environmental Injection Attack on Generalist Web Agents for Privacy Leakage.
CoRR, 2024

Revolutionizing Database Q&A with Large Language Models: Comprehensive Benchmark and Evaluation.
CoRR, 2024

Tamper-Resistant Safeguards for Open-Weight LLMs.
CoRR, 2024

AIR-Bench 2024: A Safety Benchmark Based on Risk Categories from Regulations and Policies.
CoRR, 2024

AgentPoison: Red-teaming LLM Agents via Poisoning Memory or Knowledge Bases.
CoRR, 2024

BECAUSE: Bilinear Causal Representation for Generalizable Offline Model-based Reinforcement Learning.
CoRR, 2024

Data, Data Everywhere: A Guide for Pretraining Dataset Construction.
CoRR, 2024

R²-Guard: Robust Reasoning Enabled LLM Guardrail via Knowledge-Enhanced Logical Reasoning.
CoRR, 2024

Consistency Purification: Effective and Efficient Diffusion Purification towards Certified Robustness.
CoRR, 2024

AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies.
CoRR, 2024

SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal Behaviors.
CoRR, 2024

GuardAgent: Safeguard LLM Agents by a Guard Agent via Knowledge-Enabled Reasoning.
CoRR, 2024

Visual-RolePlay: Universal Jailbreak Attack on MultiModal Large Language Models via Role-playing Image Character.
CoRR, 2024

AI Risk Management Should Incorporate Both Safety and Security.
CoRR, 2024

Provably Unlearnable Examples.
CoRR, 2024

Introducing v0.5 of the AI Safety Benchmark from MLCommons.
CoRR, 2024

KnowHalu: Hallucination Detection via Multi-Form Knowledge Based Factual Checking.
CoRR, 2024

TablePuppet: A Generic Framework for Relational Federated Learning.
CoRR, 2024

2023 Low-Power Computer Vision Challenge (LPCVC) Summary.
CoRR, 2024

COMMIT: Certifying Robustness of Multi-Sensor Fusion Systems against Semantic Attacks.
CoRR, 2024

Mitigating Fine-tuning Jailbreak Attack with Backdoor Enhanced Alignment.
CoRR, 2024

Game of Trojans: Adaptive Adversaries Against Output-based Trojaned-Model Detectors.
CoRR, 2024

Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks.
CoRR, 2024

Towards Trustworthy Large Language Models.
Proceedings of the 17th ACM International Conference on Web Search and Data Mining, 2024

ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning.
Proceedings of the 33rd USENIX Security Symposium, 2024

SoK: Privacy-Preserving Data Synthesis.
Proceedings of the IEEE Symposium on Security and Privacy, 2024

Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM.
Proceedings of the IEEE Conference on Secure and Trustworthy Machine Learning, 2024

Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk.
Proceedings of the IEEE Conference on Secure and Trustworthy Machine Learning, 2024

TextGuard: Provable Defense against Backdoor Attacks on Text Classification.
Proceedings of the 31st Annual Network and Distributed System Security Symposium, 2024

Can Public Large Language Models Help Private Cross-device Federated Learning?
Proceedings of the Findings of the Association for Computational Linguistics: NAACL 2024, 2024

RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

SHINE: Shielding Backdoors in Deep Reinforcement Learning.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Differentially Private Synthetic Data via Foundation Model APIs 2: Text.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

InstructRetro: Instruction Tuning post Retrieval-Augmented Pretraining.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Effects of Exponential Gaussian Distribution on (Double Sampling) Randomized Smoothing.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Certifiably Byzantine-Robust Federated Conformal Prediction.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

C-RAG: Certified Generation Risks for Retrieval-Augmented Language Models.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

GRATH: Gradual Self-Truthifying for Large Language Models.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Fair Federated Learning via the Proportional Veto Core.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Ring-A-Bell! How Reliable are Concept Removal Methods For Diffusion Models?
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Effective and Efficient Federated Tree Learning on Hybrid Data.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

COLEP: Certifiably Robust Learning-Reasoning Conformal Prediction via Probabilistic Circuits.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

BEEAR: Embedding-based Adversarial Removal of Safety Backdoors in Instruction-tuned Language Models.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

ChatScene: Knowledge-Enabled Safety-Critical Scenario Generation for Autonomous Vehicles.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

Perada: Parameter-Efficient Federated Learning Personalization with Generalization Guarantees.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

MMSum: A Dataset for Multimodal Summarization and Thumbnail Generation of Videos.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

KnowGraph: Knowledge-Enabled Anomaly Detection via Logical Reasoning on Graph Data.
Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security, 2024

LAMPS '24: ACM CCS Workshop on Large AI Systems and Models with Privacy and Safety Analysis.
Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security, 2024

POSTER: Game of Trojans: Adaptive Adversaries Against Output-based Trojaned-Model Detectors.
Proceedings of the 19th ACM Asia Conference on Computer and Communications Security, 2024

POSTER: Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications.
Proceedings of the 19th ACM Asia Conference on Computer and Communications Security, 2024

ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

CaMML: Context-Aware Multimodal Learner for Large Models.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

FriendlyFoe: Adversarial Machine Learning as a Practical Architectural Defense against Side Channel Attacks.
Proceedings of the 2024 International Conference on Parallel Architectures and Compilation Techniques, 2024

2023
Adversarial Attack and Defense on Graph Data: A Survey.
IEEE Trans. Knowl. Data Eng., August, 2023

Constructing gene features for robust 3D mesh zero-watermarking.
J. Inf. Secur. Appl., March, 2023

Can Pruning Improve Certified Robustness of Neural Networks?
Trans. Mach. Learn. Res., 2023

A Survey on Safety-Critical Driving Scenario Generation - A Methodological Perspective.
IEEE Trans. Intell. Transp. Syst., 2023

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses.
IEEE Trans. Pattern Anal. Mach. Intell., 2023

Trustworthy AI: From Principles to Practices.
ACM Comput. Surv., 2023

Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications.
CoRR, 2023

Invariant-Feature Subspace Recovery: A New Class of Provable Domain Generalization Algorithms.
CoRR, 2023

Gradual Domain Adaptation: Theory and Algorithms.
CoRR, 2023

MultiSum: A Dataset for Multimodal Summarization and Thumbnail Generation of Videos.
CoRR, 2023

Can Public Large Language Models Help Private Cross-device Federated Learning?
CoRR, 2023

PerAda: Parameter-Efficient and Generalizable Federated Learning Personalization with Guarantees.
CoRR, 2023

Interpolation for Robust Learning: Data Augmentation on Geodesics.
CoRR, 2023

Defensive ML: Defending Architectural Side-channels with Adversarial Obfuscation.
CoRR, 2023

DiffSmooth: Certifiably Robust Learning via Diffusion Models and Local Smoothing.
Proceedings of the 32nd USENIX Security Symposium, 2023

How to Cover up Anomalous Accesses to Electronic Health Records.
Proceedings of the 32nd USENIX Security Symposium, 2023

RAB: Provable Robustness Against Backdoor Attacks.
Proceedings of the 44th IEEE Symposium on Security and Privacy, 2023

SoK: Certified Robustness for Deep Neural Networks.
Proceedings of the 44th IEEE Symposium on Security and Privacy, 2023

Personal Data for Personal Use: Vision or Reality?
Proceedings of the Companion of the 2023 International Conference on Management of Data, 2023

CARE: Certifiably Robust Learning with Reasoning via Variational Inference.
Proceedings of the 2023 IEEE Conference on Secure and Trustworthy Machine Learning, 2023

EDoG: Adversarial Edge Detection For Graph Neural Networks.
Proceedings of the 2023 IEEE Conference on Secure and Trustworthy Machine Learning, 2023

FaShapley: Fast and Approximated Shapley Based Model Pruning Towards Certifiably Robust DNNs.
Proceedings of the 2023 IEEE Conference on Secure and Trustworthy Machine Learning, 2023

CBD: A Certified Backdoor Detector Based on Local Dominant Probability.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

WordScape: a Pipeline to extract multilingual, visually rich Documents with Layout Annotations from Web Crawl Data.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Incentives in Federated Learning: Equilibria, Dynamics, and Mechanisms for Welfare Maximization.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

DiffAttack: Evasion Attacks Against Diffusion-Based Adversarial Purification.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Interpolation for Robust Learning: Data Augmentation on Wasserstein Geodesics.
Proceedings of the International Conference on Machine Learning, 2023

UMD: Unsupervised Model Detection for X2X Backdoor Attacks.
Proceedings of the International Conference on Machine Learning, 2023

Reconstructive Neuron Pruning for Backdoor Defense.
Proceedings of the International Conference on Machine Learning, 2023

DensePure: Understanding Diffusion Models for Adversarial Robustness.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

On the Robustness of Safe Reinforcement Learning under Observational Perturbations.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Re-ViLM: Retrieval-Augmented Visual Language Model for Zero and Few-Shot Image Captioning.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Can Brain Signals Reveal Inner Alignment with Human Languages?
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

TrojDiff: Trojan Attacks on Diffusion Models with Diverse Targets.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks.
Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, 2023

Group Distributionally Robust Reinforcement Learning with Hierarchical Latent Variables.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023

SCCS: Semantics-Consistent Cross-domain Summarization via Optimal Transport Alignment.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, 2023

2022
Towards Certifying the Asymmetric Robustness for Neural Networks: Quantification and Applications.
IEEE Trans. Dependable Secur. Comput., 2022

Toward Efficiently Evaluating the Robustness of Deep Neural Networks in IoT Systems: A GAN-Based Method.
IEEE Internet Things J., 2022

DetectSec: Evaluating the robustness of object detection models to adversarial attacks.
Int. J. Intell. Syst., 2022

EDoG: Adversarial Edge Detection For Graph Neural Networks.
CoRR, 2022

Are Multimodal Models Robust to Image and Text Perturbations?
CoRR, 2022

DensePure: Understanding Diffusion Models towards Adversarial Robustness.
CoRR, 2022

Coordinated Science Laboratory 70th Anniversary Symposium: The Future of Computing.
CoRR, 2022

Semantics-Consistent Cross-domain Summarization via Optimal Transport Alignment.
CoRR, 2022

Trustworthy Reinforcement Learning Against Intrinsic Vulnerabilities: Robustness, Safety, and Generalizability.
CoRR, 2022

Uncovering the Connection Between Differential Privacy and Certified Robustness of Federated Learning against Poisoning Attacks.
CoRR, 2022

Privacy of Autonomous Vehicles: Risks, Protection Methods, and Future Directions.
CoRR, 2022

Synergistic Redundancy: Towards Verifiable Safety for Autonomous Vehicles.
CoRR, 2022

An Empirical Exploration of Cross-domain Alignment between Language and Electroencephalogram.
CoRR, 2022

UniFed: A Benchmark for Federated Learning Frameworks.
CoRR, 2022

FOCUS: Fairness via Agent-Awareness for Federated Learning on Heterogeneous Data.
CoRR, 2022

Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM.
CoRR, 2022

Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond.
CoRR, 2022

Game of Trojans: A Submodular Byzantine Approach.
CoRR, 2022

Data Debugging with Shapley Importance over End-to-End Machine Learning Pipelines.
CoRR, 2022

MHMS: Multimodal Hierarchical Multimedia Summarization.
CoRR, 2022

Test Against High-Dimensional Uncertainties: Accelerated Evaluation of Autonomous Vehicles with Deep Importance Sampling.
CoRR, 2022

COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks.
CoRR, 2022

Adversarially Robust Models may not Transfer Better: Sufficient Conditions for Domain Transferability from the View of Regularization.
CoRR, 2022

Which Style Makes Me Attractive? Interpretable Control Discovery and Counterfactual Explanation on StyleGAN.
CoRR, 2022

Perturbation type categorization for multiple adversarial perturbation robustness.
Proceedings of the Uncertainty in Artificial Intelligence, 2022

Copy, Right? A Testing Framework for Copyright Protection of Deep Learning Models.
Proceedings of the 43rd IEEE Symposium on Security and Privacy, 2022

LINKTELLER: Recovering Private Edges from Graph Neural Networks via Influence Analysis.
Proceedings of the 43rd IEEE Symposium on Security and Privacy, 2022

General Cutting Planes for Bound-Propagation-Based Neural Network Verification.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Improving Certified Robustness via Statistical Learning with Logical Reasoning.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

LOT: Layer-wise Orthogonal Training on Improving l2 Certified Robustness.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

SafeBench: A Benchmarking Platform for Safety Evaluation of Autonomous Vehicles.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

CoPur: Certifiably Robust Collaborative Inference via Feature Purification.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Certifying Some Distributional Fairness with Subpopulation Decomposition.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

VF-PS: How to Select Important Participants in Vertical Federated Learning, Efficiently and Securely?
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Generalizing Goal-Conditioned Reinforcement Learning with Variational Causal Reasoning.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Fairness in Federated Learning via Core-Stability.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

SemAttack: Natural Textual Attacks via Different Semantic Spaces.
Proceedings of the Findings of the Association for Computational Linguistics: NAACL 2022, 2022

GeoECG: Data Augmentation via Wasserstein Geodesic Perturbation for Robust Electrocardiogram Prediction.
Proceedings of the Machine Learning for Healthcare Conference, 2022

The Fourth Workshop on Adversarial Learning Methods for Machine Learning and Data Mining (AdvML 2022).
Proceedings of the KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, August 14, 2022

Certifiable Evaluation for Autonomous Vehicle Perception Systems using Deep Importance Sampling (Deep IS).
Proceedings of the 25th IEEE International Conference on Intelligent Transportation Systems, 2022

Verifiable Obstacle Detection.
Proceedings of the IEEE 33rd International Symposium on Software Reliability Engineering, 2022

Adversarially Robust Models may not Transfer Better: Sufficient Conditions for Domain Transferability from the View of Regularization.
Proceedings of the International Conference on Machine Learning, 2022

Certifying Out-of-Domain Generalization for Blackbox Functions.
Proceedings of the International Conference on Machine Learning, 2022

Provable Domain Generalization via Invariant-Feature Subspace Recovery.
Proceedings of the International Conference on Machine Learning, 2022

Understanding Gradual Domain Adaptation: Improved Analysis, Optimal Path and Beyond.
Proceedings of the International Conference on Machine Learning, 2022

How to Steer Your Adversary: Targeted and Efficient Model Stealing Defenses with Gradient Redirection.
Proceedings of the International Conference on Machine Learning, 2022

Constrained Variational Policy Optimization for Safe Reinforcement Learning.
Proceedings of the International Conference on Machine Learning, 2022

Double Sampling Randomized Smoothing.
Proceedings of the International Conference on Machine Learning, 2022

TPC: Transformation-Specific Smoothing for Point Cloud Models.
Proceedings of the International Conference on Machine Learning, 2022

On the Certified Robustness for Ensemble Models and Beyond.
Proceedings of the Tenth International Conference on Learning Representations, 2022

CROP: Certifying Robust Policies for Reinforcement Learning through Functional Smoothing.
Proceedings of the Tenth International Conference on Learning Representations, 2022

COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks.
Proceedings of the Tenth International Conference on Learning Representations, 2022

SecretGen: Privacy Recovery on Pre-trained Models via Distribution Discrimination.
Proceedings of the Computer Vision - ECCV 2022, 2022

Global Convergence of MAML and Theory-Inspired Neural Architecture Search for Few-Shot Learning.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022

PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022

CausalAF: Causal Autoregressive Flow for Safety-Critical Driving Scenario Generation.
Proceedings of the Conference on Robot Learning, 2022

TrustLOG: The First Workshop on Trustworthy Learning on Graphs.
Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2022

PhysioMTL: Personalizing Physiological Patterns using Optimal Transport Multi-Task Regression.
Proceedings of the Conference on Health, Inference, and Learning, 2022

Characterizing Attacks on Deep Reinforcement Learning.
Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems, 2022

2021
A Robust Hybrid Deep Learning Model for Spatiotemporal Image Fusion.
Remote. Sens., 2021

Multimodal Safety-Critical Scenarios Generation for Decision-Making Algorithms Evaluation.
IEEE Robotics Autom. Lett., 2021

Stability-Based Analysis and Defense against Backdoor Attacks on Edge Computing Services.
IEEE Netw., 2021

Robust 3D mesh zero-watermarking based on spherical coordinate and Skewness measurement.
Multim. Tools Appl., 2021

MEC-Enabled Hierarchical Emotion Recognition and Perturbation-Aware Defense in Smart Cities.
IEEE Internet Things J., 2021

Self-supervised attention flow for dialogue state tracking.
Neurocomputing, 2021

Editorial: Safe and Trustworthy Machine Learning.
Frontiers Big Data, 2021

Compromised ACC vehicles can degrade current mixed-autonomy traffic performance while remaining stealthy against detection.
CoRR, 2021

Towards Efficiently Evaluating the Robustness of Deep Neural Networks in IoT Systems: A GAN-based Method.
CoRR, 2021

CausalAF: Causal Autoregressive Flow for Goal-Directed Safety-Critical Scenes Generation.
CoRR, 2021

Semantically Controllable Scene Generation with Guidance of Explicit Knowledge.
CoRR, 2021

TRS: Transferability Reduced Ensemble via Encouraging Gradient Diversity and Model Smoothness.
CoRR, 2021

What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space.
CoRR, 2021

Robusta: Robust AutoML for Feature Selection via Reinforcement Learning.
CoRR, 2021

Detecting AI Trojans Using Meta Neural Analysis.
Proceedings of the 42nd IEEE Symposium on Security and Privacy, 2021

Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks.
Proceedings of the 42nd IEEE Symposium on Security and Privacy, 2021

TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Adversarial Attack Generation Empowered by Min-Max Optimization.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models.
Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, 2021

G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Anti-Backdoor Learning: Training Clean Models on Poisoned Data.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

What Would Jiminy Cricket Do? Towards Agents That Behave Morally.
Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, 2021

DeGNN: Improving Graph Neural Networks with Graph Decomposition.
Proceedings of the KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2021

Third Workshop on Adversarial Learning Methods for Machine Learning and Data Mining (AdvML 2021).
Proceedings of the KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2021

Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation.
Proceedings of the 38th International Conference on Machine Learning, 2021

CRFL: Certifiably Robust Federated Learning against Backdoor Attacks.
Proceedings of the 38th International Conference on Machine Learning, 2021

Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation.
Proceedings of the 38th International Conference on Machine Learning, 2021

Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability.
Proceedings of the 38th International Conference on Machine Learning, 2021

Knowledge Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks.
Proceedings of the 38th International Conference on Machine Learning, 2021

InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective.
Proceedings of the 9th International Conference on Learning Representations, 2021

Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks.
Proceedings of the 9th International Conference on Learning Representations, 2021

AI-GAN: Attack-Inspired Generation of Adversarial Examples.
Proceedings of the 2021 IEEE International Conference on Image Processing, 2021

Can Shape Structure Features Improve Model Robustness under Diverse Adversarial Settings?
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, 2021

Application-driven Privacy-preserving Data Publishing with Correlated Attributes.
Proceedings of the EWSN '21: Proceedings of the 2021 International Conference on Embedded Wireless Systems and Networks, 2021

Scalability vs. Utility: Do We Have To Sacrifice One for the Other in Data Importance Quantification?
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021

DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation.
Proceedings of the CCS '21: 2021 ACM SIGSAC Conference on Computer and Communications Security, Virtual Event, Republic of Korea, November 15, 2021

TSS: Transformation-Specific Smoothing for Robustness Certification.
Proceedings of the CCS '21: 2021 ACM SIGSAC Conference on Computer and Communications Security, Virtual Event, Republic of Korea, November 15, 2021

REFIT: A Unified Watermark Removal Framework For Deep Learning Systems With Limited Data.
Proceedings of the ASIA CCS '21: ACM Asia Conference on Computer and Communications Security, 2021

Understanding Robustness in Teacher-Student Setting: A New Perspective.
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021

Nonlinear Projection Based Gradient Estimation for Query Efficient Blackbox Attacks.
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021

2020
Learning salient seeds refer to the manifold ranking and background-prior strategy.
Multim. Tools Appl., 2020

Adversarial examples detection through the sensitivity in space mappings.
IET Comput. Vis., 2020

On the Limitations of Denoising Strategies as Adversarial Defenses.
CoRR, 2020

Optimal Provable Robustness of Quantum Classification via Quantum Hypothesis Testing.
CoRR, 2020

Generating Adversarial yet Inconspicuous Patches with a Single Image.
CoRR, 2020

SoK: Certified Robustness for Deep Neural Networks.
CoRR, 2020

Global Convergence and Induced Kernels of Gradient-Based Meta-Learning with Neural Nets.
CoRR, 2020

Does Adversarial Transferability Indicate Knowledge Transferability?
CoRR, 2020

Secure Network Release with Link Privacy.
CoRR, 2020

Towards Evaluating the Robustness of Chinese BERT Classifiers.
CoRR, 2020

Robust Deep Reinforcement Learning against Adversarial Perturbations on Observations.
CoRR, 2020

Anomalous Instance Detection in Deep Learning: A Survey.
CoRR, 2020

End-to-end Robustness for Sensing-Reasoning Machine Learning Pipelines.
CoRR, 2020

Provable Robust Learning Based on Transformation-Specific Smoothing.
CoRR, 2020

Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States.
CoRR, 2020

AI-GAN: Attack-Inspired Generation of Adversarial Examples.
CoRR, 2020

Anomalous Example Detection in Deep Learning: A Survey.
IEEE Access, 2020

Leveraging EM Side-Channel Information to Detect Rowhammer Attacks.
Proceedings of the 2020 IEEE Symposium on Security and Privacy, 2020

On Convergence of Nearest Neighbor Classifiers over Feature Transformations.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

On the Impact of Perceptual Compression on Deep Learning.
Proceedings of the 3rd IEEE Conference on Multimedia Information Processing and Retrieval, 2020

Improving Robustness of Deep-Learning-Based Image Reconstruction.
Proceedings of the 37th International Conference on Machine Learning, 2020

Adversarial Mutual Information for Text Generation.
Proceedings of the 37th International Conference on Machine Learning, 2020

Towards Stable and Efficient Training of Verifiably Robust Neural Networks.
Proceedings of the 8th International Conference on Learning Representations, 2020

DBA: Distributed Backdoor Attacks against Federated Learning.
Proceedings of the 8th International Conference on Learning Representations, 2020

Unrestricted Adversarial Examples via Semantic Manipulation.
Proceedings of the 8th International Conference on Learning Representations, 2020

To Warn or Not to Warn: Online Signaling in Audit Games.
Proceedings of the 36th IEEE International Conference on Data Engineering, 2020

Controllable Time-Delay Transformer for Real-Time Punctuation Prediction and Disfluency Detection.
Proceedings of the 2020 IEEE International Conference on Acoustics, 2020

T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack.
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020

SemanticAdv: Generating Adversarial Examples via Attribute-Conditioned Image Editing.
Proceedings of the Computer Vision - ECCV 2020, 2020

The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks.
Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020

QEBA: Query-Efficient Boundary-Based Blackbox Attack.
Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020

Controllable Orthogonalization in Training DNNs.
Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020

A View-Adversarial Framework for Multi-View Network Embedding.
Proceedings of the CIKM '20: The 29th ACM International Conference on Information and Knowledge Management, 2020

Gotta Catch'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks.
Proceedings of the CCS '20: 2020 ACM SIGSAC Conference on Computer and Communications Security, 2020

Reinforcement-Learning Based Portfolio Management with Augmented Asset Movement Prediction States.
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

Reinforcement Learning with Perturbed Rewards.
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

2019
Database Audit Workload Prioritization via Game Theory.
ACM Trans. Priv. Secur., 2019

Efficient Task-Specific Data Valuation for Nearest Neighbor Algorithms.
Proc. VLDB Endow., 2019

Attack-Resistant Federated Learning with Residual-based Reweighting.
CoRR, 2019

AdvCodec: Towards A Unified Framework for Adversarial Text Generation.
CoRR, 2019

An Empirical and Comparative Analysis of Data Valuation with Scalable Algorithms.
CoRR, 2019

Characterizing Attacks on Deep Reinforcement Learning.
CoRR, 2019

Adversarial Objects Against LiDAR-Based Autonomous Driving Systems.
CoRR, 2019

Scalable Differentially Private Generative Student Model via PATE.
CoRR, 2019

SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing.
CoRR, 2019

Towards Stable and Efficient Training of Verifiably Robust Neural Networks.
CoRR, 2019

Beyond Adversarial Training: Min-Max Optimization in Adversarial Attack and Defense.
CoRR, 2019

How You Act Tells a Lot: Privacy-Leakage Attack on Deep Reinforcement Learning.
CoRR, 2019

Gotta Catch 'Em All: Using Concealed Trapdoors to Detect Adversarial Attacks on Neural Networks.
CoRR, 2019

Big but Imperceptible Adversarial Perturbations via Semantic Manipulation.
CoRR, 2019

DeepCT: Tomographic Combinatorial Testing for Deep Learning Systems.
Proceedings of the 26th IEEE International Conference on Software Analysis, 2019

Improving Robustness of ML Classifiers against Realizable Evasion Attacks Using Conserved Features.
Proceedings of the 28th USENIX Security Symposium, 2019

DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model.
Proceedings of the 2019 IEEE Symposium on Security and Privacy, 2019

TextBugger: Generating Adversarial Text Against Real-world Applications.
Proceedings of the 26th Annual Network and Distributed System Security Symposium, 2019

DeepHunter: a coverage-guided fuzz testing framework for deep neural networks.
Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis, 2019

Robustra: Training Provable Robust Neural Networks over Reference Adversarial Space.
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 2019

Robust Inference via Generative Classifiers for Handling Noisy Labels.
Proceedings of the 36th International Conference on Machine Learning, 2019

Characterizing Audio Adversarial Examples Using Temporal Dependency.
Proceedings of the 7th International Conference on Learning Representations, 2019

Performing Co-membership Attacks Against Deep Generative Models.
Proceedings of the 2019 IEEE International Conference on Data Mining, 2019

AdvIT: Adversarial Frames Identifier Based on Temporal Consistency in Videos.
Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, 2019

MeshAdv: Adversarial Meshes for Visual Recognition.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019

Generating 3D Adversarial Point Clouds.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019

How You Act Tells a Lot: Privacy-Leaking Attack on Deep Reinforcement Learning.
Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, 2019

Towards Efficient Data Valuation Based on the Shapley Value.
Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, 2019

2018
Evasion-Robust Classification on Binary Domains.
ACM Trans. Knowl. Discov. Data, 2018

Adversarial Attack and Defense on Graph Data: A Survey.
CoRR, 2018

Protecting Sensitive Attributes via Generative Adversarial Networks.
CoRR, 2018

Differentially Private Data Generative Models.
CoRR, 2018

Data Poisoning Attack against Unsupervised Node Embedding Methods.
CoRR, 2018

One Bit Matters: Understanding Adversarial Examples as the Abuse of Redundancy.
CoRR, 2018

Realistic Adversarial Examples in 3D Meshes.
CoRR, 2018

Secure Deep Learning Engineering: A Software Quality Assurance Perspective.
CoRR, 2018

Coverage-Guided Fuzzing for Deep Neural Networks.
CoRR, 2018

The Helmholtz Method: Using Perceptual Compression to Reduce Machine Learning Complexity.
CoRR, 2018

Combinatorial Testing for Deep Learning Systems.
CoRR, 2018

Generative Model: Membership Attack, Generalization and Diversity.
CoRR, 2018

AUSERA: Large-Scale Automated Security Risk Assessment of Global Mobile Banking Apps.
CoRR, 2018

DeepGauge: Comprehensive and Multi-Granularity Testing Criteria for Gauging the Robustness of Deep Learning Systems.
CoRR, 2018

Automated poisoning attacks and defenses in malware detection systems: An adversarial machine learning approach.
Comput. Secur., 2018

Physical Adversarial Examples for Object Detectors.
Proceedings of the 12th USENIX Workshop on Offensive Technologies, 2018

From Patching Delays to Infection Symptoms: Using Risk Profiles for an Early Discovery of Vulnerabilities Exploited in the Wild.
Proceedings of the 27th USENIX Security Symposium, 2018

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning.
Proceedings of the 2018 IEEE Symposium on Security and Privacy, 2018

A Joint Optimization Approach for Personalized Recommendation Diversification.
Proceedings of the Advances in Knowledge Discovery and Data Mining, 2018

DeepGauge: multi-granularity testing criteria for deep learning systems.
Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, 2018

DeepMutation: Mutation Testing of Deep Learning Systems.
Proceedings of the 29th IEEE International Symposium on Software Reliability Engineering, 2018

Generating Adversarial Examples with Adversarial Networks.
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 2018

Spatially Transformed Adversarial Examples.
Proceedings of the 6th International Conference on Learning Representations, 2018

Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality.
Proceedings of the 6th International Conference on Learning Representations, 2018

Decision Boundary Analysis of Adversarial Examples.
Proceedings of the 6th International Conference on Learning Representations, 2018

Black-box Attacks on Deep Neural Networks via Gradient Estimation.
Proceedings of the 6th International Conference on Learning Representations, 2018

Get Your Workload in Order: Game Theoretic Prioritization of Database Auditing.
Proceedings of the 34th IEEE International Conference on Data Engineering, 2018

Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation.
Proceedings of the Computer Vision - ECCV 2018, 2018

Practical Black-Box Attacks on Deep Neural Networks Using Efficient Query Mechanisms.
Proceedings of the Computer Vision - ECCV 2018, 2018

Robust Physical-World Attacks on Deep Learning Visual Classification.
Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, 2018

Poisoning Attacks on Data-Driven Utility Learning in Games.
Proceedings of the 2018 Annual American Control Conference, 2018

Orthogonal Weight Normalization: Solution to Optimization Over Multiple Dependent Stiefel Manifolds in Deep Neural Networks.
Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, 2018

2017
Scalable Iterative Classification for Sanitizing Large-Scale Datasets.
IEEE Trans. Knowl. Data Eng., 2017

SMP: Scalable Multicast Protocol for Granting Authority in Heterogeneous Networks.
Int. J. Netw. Secur., 2017

Exploring the Space of Black-box Attacks on Deep Neural Networks.
CoRR, 2017

Note on Attacking Object Detectors with Adversarial Stickers.
CoRR, 2017

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning.
CoRR, 2017

Projection Based Weight Normalization for Deep Neural Networks.
CoRR, 2017

Orthogonal Weight Normalization: Solution to Optimization over Multiple Dependent Stiefel Manifolds in Deep Neural Networks.
CoRR, 2017

Feature Conservation in Adversarial Classifier Evasion: A Case Study.
CoRR, 2017

Robust Physical-World Attacks on Machine Learning Models.
CoRR, 2017

Automated QoS-oriented cloud resource optimization using containers.
Autom. Softw. Eng., 2017

Large-Scale Identification of Malicious Singleton Files.
Proceedings of the Seventh ACM Conference on Data and Application Security and Privacy, 2017

Robust Linear Regression Against Training Data Poisoning.
Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 2017

Engineering Agreement: The Naming Game with Asymmetric and Heterogeneous Agents.
Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, 2017

2016
Secure learning in adversarial environments.
PhD thesis, 2016

Renovating Contaminative Image Archives Based on Patch Propagation and Adaptive Confidence Collation.
IEEE Trans. Circuits Syst. Video Technol., 2016

Optimizing annotation resources for natural language de-identification via a game theoretic framework.
J. Biomed. Informatics, 2016

Robust High-Dimensional Linear Regression.
CoRR, 2016

A General Retraining Framework for Scalable Adversarial Classification.
CoRR, 2016

Data Poisoning Attacks on Factorization-Based Collaborative Filtering.
Proceedings of the Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, 2016

Learning Clinical Workflows to Identify Subgroups of Heart Failure Patients.
Proceedings of the AMIA 2016, 2016

Behavioral Experiments in Email Filter Evasion.
Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, 2016

2015
Iterative Classification for Sanitizing Large-Scale Datasets.
Proceedings of the 2015 IEEE International Conference on Data Mining, 2015

Secure Learning and Mining in Adversarial Environments [Extended Abstract].
Proceedings of the IEEE International Conference on Data Mining Workshop, 2015

Scalable Optimization of Randomized Operational Decisions in Adversarial Classification Settings.
Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, 2015

2014
Self-Recognized Image Protection Technique that Resists Large-Scale Cropping.
IEEE Multim., 2014

Feature Cross-Substitution in Adversarial Classification.
Proceedings of the Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, 2014

Shape-constrained multi-atlas segmentation of spleen in CT.
Proceedings of the Medical Imaging 2014: Image Processing, 2014

On study design in neuroimaging heritability analyses.
Proceedings of the Medical Imaging 2014: Image Processing, 2014

Optimal randomized classification in adversarial settings.
Proceedings of the International conference on Autonomous Agents and Multi-Agent Systems, 2014

2013
HORME: hierarchical-object-relational medical management for electronic record.
Secur. Commun. Networks, 2013

Notes on "Authentication protocol using an identifier in an ad hoc network environment".
Math. Comput. Model., 2013

Aryabhata remainder theorem-based non-iterative electronic lottery mechanism with robustness.
IET Inf. Secur., 2013

2012
Rapid prototyping of image processing workflows on massively parallel architectures.
Proceedings of the 10th International Workshop on Intelligent Solutions in Embedded Systems, 2012

2009
A Brand-New Mobile Value-Added Service: M-Check.
Proceedings of the International Conference on Networked Computing and Advanced Information Management, 2009
