Neil Zhenqiang Gong

ORCID: 0000-0002-9900-9309

Affiliations:
  • Duke University, Durham, NC, USA


According to our database, Neil Zhenqiang Gong authored at least 160 papers between 2009 and 2024.


Bibliography

2024
Link Stealing Attacks Against Inductive Graph Neural Networks.
Proc. Priv. Enhancing Technol., 2024

SoK: Secure Human-centered Wireless Sensing.
Proc. Priv. Enhancing Technol., 2024

Securing the Future of GenAI: Policy and Technology.
IACR Cryptol. ePrint Arch., 2024

Making LLMs Vulnerable to Prompt Injection via Poisoning Alignment.
CoRR, 2024

Automatically Generating Visual Hallucination Test Cases for Multimodal Large Language Models.
CoRR, 2024

StringLLM: Understanding the String Processing Capability of Large Language Models.
CoRR, 2024

Evaluating Large Language Model based Personal Information Extraction and Countermeasures.
CoRR, 2024

A General Framework for Data-Use Auditing of ML Models.
CoRR, 2024

Refusing Safe Prompts for Multi-modal Large Language Models.
CoRR, 2024

Tracing Back the Malicious Clients in Poisoning Attacks to Federated Learning.
CoRR, 2024

Self-Cognition in Large Language Models: An Exploratory Study.
CoRR, 2024

Byzantine-Robust Decentralized Federated Learning.
CoRR, 2024

AudioMarkBench: Benchmarking Robustness of Audio Watermarking.
CoRR, 2024

Stable Signature is Unstable: Removing Image Watermark from Diffusion Models.
CoRR, 2024

PLeak: Prompt Leaking Attacks against Large Language Model Applications.
CoRR, 2024

Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning.
CoRR, 2024

PoisonedFL: Model Poisoning Attacks to Federated Learning via Multi-Round Consistency.
CoRR, 2024

SoK: Gradient Leakage in Federated Learning.
CoRR, 2024

Watermark-based Detection and Attribution of AI-Generated Content.
CoRR, 2024

Optimization-based Prompt Injection Attack to LLM-as-a-Judge.
CoRR, 2024

A Transfer Attack to Image Watermarks.
CoRR, 2024

GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis.
CoRR, 2024

TrustLLM: Trustworthiness in Large Language Models.
CoRR, 2024

Poisoning Federated Recommender Systems with Fake Users.
Proceedings of the ACM on Web Conference 2024, 2024

Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks.
Companion Proceedings of the ACM on Web Conference 2024, 2024

ModelGuard: Information-Theoretic Defense Against Model Extraction Attacks.
Proceedings of the 33rd USENIX Security Symposium, 2024

Formalizing and Benchmarking Prompt Injection Attacks and Defenses.
Proceedings of the 33rd USENIX Security Symposium, 2024

Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models.
Proceedings of the 33rd USENIX Security Symposium, 2024

SneakyPrompt: Jailbreaking Text-to-image Generative Models.
Proceedings of the IEEE Symposium on Security and Privacy, 2024

Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning.
Proceedings of the IEEE Security and Privacy, 2024

PromptRobust: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts.
Proceedings of the 1st ACM Workshop on Large AI Systems and Models with Privacy and Safety Analysis, 2024

FedREDefense: Defending against Model Poisoning Attacks for Federated Learning using Model Update Reconstruction Error.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

DyVal: Dynamic Evaluation of Large Language Models for Reasoning Tasks.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

ReCaLL: Membership Inference via Relative Conditional Log-Likelihoods.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

Certifiably Robust Image Watermark.
Proceedings of the Computer Vision - ECCV 2024, 2024

Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents.
Proceedings of the Computer Vision - ECCV 2024, 2024

Data Poisoning Based Backdoor Attacks to Contrastive Learning.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

GradSafe: Detecting Jailbreak Prompts for LLMs via Safety-Critical Gradient Analysis.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

Visual Hallucinations of Multi-modal Large Language Models.
Findings of the Association for Computational Linguistics, 2024

2023
Generation-based fuzzing? Don't build a new generator, reuse!
Comput. Secur., June, 2023

Mendata: A Framework to Purify Manipulated Training Data.
CoRR, 2023

Competitive Advantage Attacks to Decentralized Federated Learning.
CoRR, 2023

Prompt Injection Attacks and Defenses in LLM-Integrated Applications.
CoRR, 2023

DyVal: Graph-informed Dynamic Evaluation of Large Language Models.
CoRR, 2023

Securing Visually-Aware Recommender Systems: An Adversarial Image Reconstruction and Detection Framework.
CoRR, 2023

PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts.
CoRR, 2023

SneakyPrompt: Evaluating Robustness of Text-to-image Generative Models' Safety Filters.
CoRR, 2023

PrivateFL: Accurate, Differentially Private Federated Learning via Personalized Data Transformation.
Proceedings of the 32nd USENIX Security Symposium, 2023

Fine-grained Poisoning Attack to Local Differential Privacy Protocols for Mean and Variance Estimation.
Proceedings of the 32nd USENIX Security Symposium, 2023

PORE: Provably Robust Recommender Systems against Data Poisoning Attacks.
Proceedings of the 32nd USENIX Security Symposium, 2023

FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information.
Proceedings of the 44th IEEE Symposium on Security and Privacy, 2023

REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service.
Proceedings of the 30th Annual Network and Distributed System Security Symposium, 2023

IPCert: Provably Robust Intellectual Property Protection for Machine Learning.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

Fortifying Federated Learning against Membership Inference Attacks via Client-level Input Perturbation.
Proceedings of the 53rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks, 2023

PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Evading Watermark based Detection of AI-Generated Content.
Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, 2023

2022
FLCert: Provably Secure Federated Learning Against Poisoning Attacks.
IEEE Trans. Inf. Forensics Secur., 2022

Distributed information encoding and decoding using self-organized spatial patterns.
Patterns, 2022

SoK: Inference Attacks and Defenses in Human-Centered Wireless Sensing.
CoRR, 2022

CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning.
CoRR, 2022

Fine-grained Poisoning Attacks to Local Differential Privacy Protocols for Mean and Variance Estimation.
CoRR, 2022

StolenEncoder: Stealing Pre-trained Encoders.
CoRR, 2022

Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data.
Proceedings of the 31st USENIX Security Symposium, 2022

PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning.
Proceedings of the 31st USENIX Security Symposium, 2022

BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning.
Proceedings of the 43rd IEEE Symposium on Security and Privacy, 2022

MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients.
Proceedings of the KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2022

Deep Neural Network Piracy without Accuracy Loss.
Proceedings of the 21st IEEE International Conference on Machine Learning and Applications, 2022

Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Addressing Heterogeneity in Federated Learning via Distributional Transformation.
Proceedings of the Computer Vision - ECCV 2022, 2022

Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning.
Proceedings of the Computer Vision - ECCV 2022, 2022

HERO: hessian-enhanced robust optimization for unifying and improving generalization and quantization performance.
Proceedings of the DAC '22: 59th ACM/IEEE Design Automation Conference, 2022

MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022

Membership Inference Attack in Face of Data Transformations.
Proceedings of the 10th IEEE Conference on Communications and Network Security, 2022

StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning.
Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022

Understanding Disparate Effects of Membership Inference Attacks and their Countermeasures.
Proceedings of the ASIA CCS '22: ACM Asia Conference on Computer and Communications Security, 2022

GraphTrack: A Graph-based Cross-Device Tracking Framework.
Proceedings of the ASIA CCS '22: ACM Asia Conference on Computer and Communications Security, 2022

AFLGuard: Byzantine-robust Asynchronous Federated Learning.
Proceedings of the Annual Computer Security Applications Conference, 2022

Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

2021
LogExtractor: Extracting digital evidence from android log messages via string and taint analysis.
Digit. Investig., 2021

10 Security and Privacy Problems in Self-Supervised Learning.
CoRR, 2021

FaceGuard: Proactive Deepfake Detection.
CoRR, 2021

Rethinking Lifelong Sequential Recommendation with Incremental Multi-Interest Attention.
CoRR, 2021

Linear-Time Self Attention with Codeword Histogram for Efficient Recommendation.
Proceedings of the WWW '21: The Web Conference 2021, 2021

Data Poisoning Attacks and Defenses to Crowdsourcing Systems.
Proceedings of the WWW '21: The Web Conference 2021, 2021

Stealing Links from Graph Neural Networks.
Proceedings of the 30th USENIX Security Symposium, 2021

Data Poisoning Attacks to Local Differential Privacy Protocols.
Proceedings of the 30th USENIX Security Symposium, 2021

Backdoor Attacks to Graph Neural Networks.
Proceedings of the SACMAT '21: The 26th ACM Symposium on Access Control Models and Technologies, 2021

Practical Blind Membership Inference Attack via Differential Comparisons.
Proceedings of the 28th Annual Network and Distributed System Security Symposium, 2021

Data Poisoning Attacks to Deep Learning Based Recommender Systems.
Proceedings of the 28th Annual Network and Distributed System Security Symposium, 2021

FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping.
Proceedings of the 28th Annual Network and Distributed System Security Symposium, 2021

Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation.
Proceedings of the KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2021

Unveiling Fake Accounts at the Time of Registration: An Unsupervised Approach.
Proceedings of the KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2021

On the Intrinsic Differential Privacy of Bagging.
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, 2021

Understanding the Security of Deepfake Detection.
Proceedings of the Digital Forensics and Cyber Crime - 12th EAI International Conference, 2021

PointGuard: Provably Robust 3D Point Cloud Classification.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021

EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning.
Proceedings of the CCS '21: 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021

Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes.
Proceedings of the ASIA CCS '21: ACM Asia Conference on Computer and Communications Security, 2021

IPGuard: Protecting Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary.
Proceedings of the ASIA CCS '21: ACM Asia Conference on Computer and Communications Security, 2021

On Detecting Growing-Up Behaviors of Malicious Accounts in Privacy-Centric Mobile Social Networks.
Proceedings of the ACSAC '21: Annual Computer Security Applications Conference, 2021

Semi-Supervised Node Classification on Graphs: Markov Random Fields vs. Graph Neural Networks.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

Provably Secure Federated Learning against Malicious Clients.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

2020
Certified Robustness of Nearest Neighbors against Data Poisoning Attacks.
CoRR, 2020

On Certifying Robustness against Backdoor Attacks via Randomized Smoothing.
CoRR, 2020

Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing.
Proceedings of the WWW '20: The Web Conference 2020, 2020

Influence Function based Data Poisoning Attacks to Top-N Recommender Systems.
Proceedings of the WWW '20: The Web Conference 2020, 2020

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning.
Proceedings of the 29th USENIX Security Symposium, 2020

State Estimation via Inference on a Probabilistic Graphical Model - A Different Perspective.
Proceedings of the IEEE Power & Energy Society Innovative Smart Grid Technologies Conference, 2020

Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing.
Proceedings of the 8th International Conference on Learning Representations, 2020

Defending Against Machine Learning Based Inference Attacks via Adversarial Examples: Opportunities and Challenges.
Proceedings of the Adaptive Autonomous Secure Cyber Systems, 2020

2019
Structure-Based Sybil Detection in Social Networks via Local Rule-Based Propagation.
IEEE Trans. Netw. Sci. Eng., 2019

Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges.
CoRR, 2019

Graph-based Security and Privacy Analytics via Collective Classification with Joint Weight Learning and Propagation.
Proceedings of the 26th Annual Network and Distributed System Security Symposium, 2019

Characterizing and Detecting Malicious Accounts in Privacy-Centric Mobile Social Networks: A Case Study.
Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019

Calibrate: Frequency Estimation and Heavy Hitter Identification with Local Differential Privacy via Incorporating Prior Knowledge.
Proceedings of the 2019 IEEE Conference on Computer Communications, 2019

Detecting Fake Accounts in Online Social Networks at the Time of Registrations.
Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019

Attacking Graph-based Classification via Manipulating the Graph Structure.
Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019

MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples.
Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019

2018
Attribute Inference Attacks in Online Social Networks.
ACM Trans. Priv. Secur., 2018

AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning.
Proceedings of the 27th USENIX Security Symposium, 2018

A Dynamic Taint Analysis Tool for Android App Forensics.
Proceedings of the 2018 IEEE Security and Privacy Workshops, 2018

Stealing Hyperparameters in Machine Learning.
Proceedings of the 2018 IEEE Symposium on Security and Privacy, 2018

SybilBlind: Detecting Fake Users in Online Social Networks Without Manual Labels.
Proceedings of the Research in Attacks, Intrusions, and Defenses, 2018

SYBILFUSE: Combining Local Attributes with Global Structure to Perform Robust Sybil Detection.
Proceedings of the 2018 IEEE Conference on Communications and Network Security, 2018

EviHunter: Identifying Digital Evidence in the Permanent Storage of Android Devices via Static Analysis.
Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, 2018

Poisoning Attacks to Graph-Based Recommender Systems.
Proceedings of the 34th Annual Computer Security Applications Conference, 2018

2017
Robust Spammer Detection in Microblogs: Leveraging User Carefulness.
ACM Trans. Intell. Syst. Technol., 2017

AttriInfer: Inferring User Attributes in Online Social Networks Using Markov Random Fields.
Proceedings of the 26th International Conference on World Wide Web, 2017

Fake Co-visitation Injection Attacks to Recommender Systems.
Proceedings of the 24th Annual Network and Distributed System Security Symposium, 2017

SybilSCAR: Sybil detection in online social networks via local rule based propagation.
Proceedings of the 2017 IEEE Conference on Computer Communications, 2017

GANG: Detecting Fraudulent Users in Online Social Networks via Guilt-by-Association on Directed Graphs.
Proceedings of the 2017 IEEE International Conference on Data Mining, 2017

PIANO: Proximity-Based User Authentication on Voice-Powered Internet-of-Things Devices.
Proceedings of the 37th IEEE International Conference on Distributed Computing Systems, 2017

Random Walk Based Fake Account Detection in Online Social Networks.
Proceedings of the 47th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, 2017

Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification.
Proceedings of the 33rd Annual Computer Security Applications Conference, 2017

2016
Structural Analysis of User Choices for Mobile App Recommendation.
ACM Trans. Knowl. Discov. Data, 2016

Seed-Based De-Anonymizability Quantification of Social Networks.
IEEE Trans. Inf. Forensics Secur., 2016

You Are Who You Know and How You Behave: Attribute Inference Attacks via Users' Social Friends and Behaviors.
Proceedings of the 25th USENIX Security Symposium, 2016

Forgery-Resistant Touch-based Authentication on Mobile Devices.
Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security, 2016

2015
What You Submit Is Who You Are: A Multimodal Approach for Deanonymizing Scientific Publications.
IEEE Trans. Inf. Forensics Secur., 2015

Towards Forgery-Resistant Touch-based Biometric Authentication on Mobile Devices.
CoRR, 2015

SybilFrame: A Defense-in-Depth Framework for Structure-Based Sybil Detection.
CoRR, 2015

Personalized Mobile App Recommendation: Reconciling App Functionality and User Privacy Preference.
Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, 2015

On Your Social Network De-anonymizablity: Quantification and Large Scale Evaluation with Seed Knowledge.
Proceedings of the 22nd Annual Network and Distributed System Security Symposium, 2015

Protecting Your Children from Inappropriate Content in Mobile Apps: An Automatic Maturity Rating Framework.
Proceedings of the 24th ACM International Conference on Information and Knowledge Management, 2015

2014
Joint Link Prediction and Attribute Inference Using a Social-Attribute Network.
ACM Trans. Intell. Syst. Technol., 2014

On the Security of Trustee-Based Social Authentications.
IEEE Trans. Inf. Forensics Secur., 2014

SybilBelief: A Semi-Supervised Learning Approach for Structure-Based Sybil Detection.
IEEE Trans. Inf. Forensics Secur., 2014

Reciprocal versus parasocial relationships in online social networks.
Soc. Netw. Anal. Min., 2014

Connect the dots by understanding user status and transitions.
Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2014

2013
Reciprocity in Social Networks: Measurements, Predictions, and Implications.
CoRR, 2013

2012
On the Feasibility of Internet-Scale Author Identification.
Proceedings of the IEEE Symposium on Security and Privacy, 2012

Evolution of social-attribute networks: measurements, modeling, and implications using google+.
Proceedings of the 12th ACM SIGCOMM Internet Measurement Conference, 2012

2011
Predicting Links and Inferring Attributes using a Social-Attribute Network (SAN).
CoRR, 2011

Efficient Top-K Query Algorithms Using Density Index.
Proceedings of the Applied Informatics and Communication - International Conference, 2011

Efficient Approximate Top-k Query Algorithm Using Cube Index.
Proceedings of the Web Technologies and Applications - 13th Asia-Pacific Web Conference, 2011

2010
Protecting Privacy in Location-Based Services Using K-Anonymity without Cloaked Region.
Proceedings of the Eleventh International Conference on Mobile Data Management, 2010

2009
Efficient Top-k Query Algorithms Using K-Skyband Partition.
Proceedings of the Scalable Information Systems, 4th International ICST Conference, 2009

