Jinyuan Jia

ORCID: 0000-0002-9785-7769

Affiliations:
  • Penn State University, State College, PA, USA
  • University of Illinois Urbana-Champaign, IL, USA (former)
  • Duke University, USA (former)


According to our database, Jinyuan Jia authored at least 75 papers between 2017 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
PrivateGaze: Preserving User Privacy in Black-box Mobile Gaze Tracking Services.
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., August 2024

Evaluating Large Language Model based Personal Information Extraction and Countermeasures.
CoRR, 2024

TrojFM: Resource-efficient Backdoor Attacks against Very Large Foundation Models.
CoRR, 2024

PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented Generation of Large Language Models.
CoRR, 2024

Provably Robust Multi-bit Watermarking for AI-generated Text via Error Correction Code.
CoRR, 2024

Brave: Byzantine-Resilient and Privacy-Preserving Peer-to-Peer Federated Learning.
CoRR, 2024

ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning.
Proceedings of the 33rd USENIX Security Symposium, 2024

Formalizing and Benchmarking Prompt Injection Attacks and Defenses.
Proceedings of the 33rd USENIX Security Symposium, 2024

FCert: Certifiably Robust Few-Shot Classification in the Era of Foundation Models.
Proceedings of the IEEE Symposium on Security and Privacy, 2024

Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning.
Proceedings of the IEEE Symposium on Security and Privacy, 2024

TextGuard: Provable Defense against Backdoor Attacks on Text Classification.
Proceedings of the 31st Annual Network and Distributed System Security Symposium, 2024

SHINE: Shielding Backdoors in Deep Reinforcement Learning.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Graph Neural Network Explanations are Fragile.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

GNNCert: Deterministic Certification of Graph Neural Networks against Adversarial Perturbations.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Certifiably Robust Image Watermark.
Proceedings of the Computer Vision - ECCV 2024, 2024

Data Poisoning Based Backdoor Attacks to Contrastive Learning.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

Towards General Robustness Verification of MaxPool-Based Convolutional Neural Networks via Tightening Linear Approximation.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

MMCert: Provable Defense Against Adversarial Attacks to Multi-Modal Models.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

Poster: Brave: Byzantine-Resilient and Privacy-Preserving Peer-to-Peer Federated Learning.
Proceedings of the 19th ACM Asia Conference on Computer and Communications Security, 2024

POSTER: Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications.
Proceedings of the 19th ACM Asia Conference on Computer and Communications Security, 2024

Jailbreak Open-Sourced Large Language Models via Enforced Decoding.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications.
CoRR, 2023

Prompt Injection Attacks and Defenses in LLM-Integrated Applications.
CoRR, 2023

On the Safety of Open-Sourced Large Language Models: Does Alignment Really Prevent Them From Being Misused?
CoRR, 2023

PORE: Provably Robust Recommender Systems against Data Poisoning Attacks.
Proceedings of the 32nd USENIX Security Symposium, 2023

FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information.
Proceedings of the 44th IEEE Symposium on Security and Privacy, 2023

A3FL: Adversarially Adaptive Backdoor Attacks to Federated Learning.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service.
Proceedings of the 30th Annual Network and Distributed System Security Symposium, 2023

Screen Perturbation: Adversarial Attack and Defense on Under-Screen Camera.
Proceedings of the 29th Annual International Conference on Mobile Computing and Networking, 2023

Graph Contrastive Backdoor Attacks.
Proceedings of the International Conference on Machine Learning, 2023

PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
Privacy Protection via Adversarial Examples.
PhD thesis, 2022

FLCert: Provably Secure Federated Learning Against Poisoning Attacks.
IEEE Trans. Inf. Forensics Secur., 2022

CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning.
CoRR, 2022

StolenEncoder: Stealing Pre-trained Encoders.
CoRR, 2022

Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data.
Proceedings of the 31st USENIX Security Symposium, 2022

PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning.
Proceedings of the 31st USENIX Security Symposium, 2022

BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning.
Proceedings of the 43rd IEEE Symposium on Security and Privacy, 2022

MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients.
Proceedings of the KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2022

Deep Neural Network Piracy without Accuracy Loss.
Proceedings of the 21st IEEE International Conference on Machine Learning and Applications, 2022

Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations.
Proceedings of the Tenth International Conference on Learning Representations, 2022

StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning.
Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022

Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

2021
10 Security and Privacy Problems in Self-Supervised Learning.
CoRR, 2021

Stealing Links from Graph Neural Networks.
Proceedings of the 30th USENIX Security Symposium, 2021

Data Poisoning Attacks to Local Differential Privacy Protocols.
Proceedings of the 30th USENIX Security Symposium, 2021

Backdoor Attacks to Graph Neural Networks.
Proceedings of the SACMAT '21: The 26th ACM Symposium on Access Control Models and Technologies, 2021

Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation.
Proceedings of the KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2021

On the Intrinsic Differential Privacy of Bagging.
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, 2021

Detection Of Malicious DNS and Web Servers using Graph-Based Approaches.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2021

PointGuard: Provably Robust 3D Point Cloud Classification.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021

EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning.
Proceedings of the CCS '21: 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021

Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes.
Proceedings of the ASIA CCS '21: ACM Asia Conference on Computer and Communications Security, 2021

IPGuard: Protecting Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary.
Proceedings of the ASIA CCS '21: ACM Asia Conference on Computer and Communications Security, 2021

Semi-Supervised Node Classification on Graphs: Markov Random Fields vs. Graph Neural Networks.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

Provably Secure Federated Learning against Malicious Clients.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

2020
Certified Robustness of Nearest Neighbors against Data Poisoning Attacks.
CoRR, 2020

On Certifying Robustness against Backdoor Attacks via Randomized Smoothing.
CoRR, 2020

Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing.
Proceedings of the WWW '20: The Web Conference 2020, Taipei, Taiwan, April 20-24, 2020

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning.
Proceedings of the 29th USENIX Security Symposium, 2020

Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing.
Proceedings of the 8th International Conference on Learning Representations, 2020

Defending Against Machine Learning Based Inference Attacks via Adversarial Examples: Opportunities and Challenges.
Proceedings of the Adaptive Autonomous Secure Cyber Systems, 2020

2019
Structure-Based Sybil Detection in Social Networks via Local Rule-Based Propagation.
IEEE Trans. Netw. Sci. Eng., 2019

Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges.
CoRR, 2019

Graph-based Security and Privacy Analytics via Collective Classification with Joint Weight Learning and Propagation.
Proceedings of the 26th Annual Network and Distributed System Security Symposium, 2019

Calibrate: Frequency Estimation and Heavy Hitter Identification with Local Differential Privacy via Incorporating Prior Knowledge.
Proceedings of the 2019 IEEE Conference on Computer Communications, 2019

MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples.
Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019

2018
AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning.
Proceedings of the 27th USENIX Security Symposium, 2018

2017
AttriInfer: Inferring User Attributes in Online Social Networks Using Markov Random Fields.
Proceedings of the 26th International Conference on World Wide Web, 2017

Random Walk Based Fake Account Detection in Online Social Networks.
Proceedings of the 47th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, 2017
