Xinlei He

ORCID: 0009-0007-3879-9080

Affiliations:
  • Hong Kong University of Science and Technology, Hong Kong
  • CISPA Helmholtz Center for Information Security, Saarland University, Germany (PhD 2023)
  • Fudan University, Shanghai, China (former)


According to our database, Xinlei He authored at least 43 papers between 2018 and 2024.

Bibliography

2024
Link Stealing Attacks Against Inductive Graph Neural Networks.
Proc. Priv. Enhancing Technol., 2024

Automatic Dataset Construction (ADC): Sample Collection, Data Curation, and Beyond.
CoRR, 2024

Membership Inference Attack Against Masked Image Modeling.
CoRR, 2024

On Evaluating The Performance of Watermarked Machine-Generated Texts Under Adversarial Attacks.
CoRR, 2024

Jailbreak Attacks and Defenses Against Large Language Models: A Survey.
CoRR, 2024

JailbreakEval: An Integrated Toolkit for Evaluating Jailbreak Attempts Against Large Language Models.
CoRR, 2024

Hidden Question Representations Tell Non-Factuality Within and Across Large Language Models.
CoRR, 2024

Have You Merged My Model? On The Robustness of Large Language Model IP Protection Methods Against Model Merging.
CoRR, 2024

SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models.
Proceedings of the 33rd USENIX Security Symposium, 2024

You Only Prompt Once: On the Capabilities of Prompt Learning on Large Language Models to Tackle Toxic Content.
Proceedings of the IEEE Symposium on Security and Privacy, 2024

Test-Time Poisoning Attacks Against Test-Time Adaptation Models.
Proceedings of the IEEE Symposium on Security and Privacy, 2024

2023
Privacy risk assessment of emerging machine learning paradigms.
PhD thesis, 2023

Trimming Mobile Applications for Bandwidth-Challenged Networks in Developing Regions.
IEEE Trans. Mob. Comput., 2023

A Comprehensive Study of Privacy Risks in Curriculum Learning.
CoRR, 2023

Generative Watermarking Against Unauthorized Subject-Driven Image Synthesis.
CoRR, 2023

MGTBench: Benchmarking Machine-Generated Text Detection.
CoRR, 2023

A Plot is Worth a Thousand Words: Model Information Stealing Attacks via Scientific Plots.
Proceedings of the 32nd USENIX Security Symposium, 2023

On the Evolution of (Hateful) Memes by Means of Multimodal Contrastive Learning.
Proceedings of the 44th IEEE Symposium on Security and Privacy, 2023

Generated Graph Detection.
Proceedings of the International Conference on Machine Learning, 2023

Data Poisoning Attacks Against Multimodal Encoders.
Proceedings of the International Conference on Machine Learning, 2023

Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes From Text-To-Image Models.
Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, 2023

2022
Fine-Tuning Is All You Need to Mitigate Backdoor Attacks.
CoRR, 2022

Backdoor Attacks in the Supply Chain of Masked Image Modeling.
CoRR, 2022

Membership-Doctor: Comprehensive Assessment of Membership Inference Against Machine Learning Models.
CoRR, 2022

ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models.
Proceedings of the 31st USENIX Security Symposium, 2022

Model Stealing Attacks Against Inductive Graph Neural Networks.
Proceedings of the 43rd IEEE Symposium on Security and Privacy, 2022

On Xing Tian and the Perseverance of Anti-China Sentiment Online.
Proceedings of the Sixteenth International AAAI Conference on Web and Social Media, 2022

Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning.
Proceedings of the Computer Vision - ECCV 2022, 2022

Auditing Membership Leakages of Multi-Exit Networks.
Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022

SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders.
Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022

2021
Cross-site Prediction on Social Influence for Cold-start Users in Online Social Networks.
ACM Trans. Web, 2021

DatingSec: Detecting Malicious Accounts in Dating Apps Using a Content-Based Attention Network.
IEEE Trans. Dependable Secur. Comput., 2021

Node-Level Membership Inference Attacks Against Graph Neural Networks.
CoRR, 2021

Stealing Links from Graph Neural Networks.
Proceedings of the 30th USENIX Security Symposium, 2021

Quantifying and Mitigating Privacy Risks of Contrastive Learning.
Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021

2020
Did State-sponsored Trolls Shape the US Presidential Election Discourse? Quantifying Influence on Twitter.
CoRR, 2020

2019
On the Influence of Twitter Trolls during the 2016 US Presidential Election.
CoRR, 2019

2018
DeepScan: Exploiting Deep Learning for Malicious Account Detection in Location-Based Social Networks.
IEEE Commun. Mag., 2018

Understanding the behavioral differences between American and German users: A data-driven study.
Big Data Min. Anal., 2018

Deep Learning-Based Malicious Account Detection in the Momo Social Network.
Proceedings of the 27th International Conference on Computer Communication and Networks, 2018

LBSLab: A User Data Collection System in Mobile Environments.
Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers, 2018

Identification of Influential Users in Emerging Online Social Networks Using Cross-site Linking.
Proceedings of the Computer Supported Cooperative Work and Social Computing, 2018
