Arjun Nitin Bhagoji

ORCID: 0000-0002-2803-5649

According to our database, Arjun Nitin Bhagoji authored at least 39 papers between 2017 and 2024.

Bibliography

2024
NetDiffusion: Network Data Augmentation Through Protocol-Constrained Traffic Generation.
Proc. ACM Meas. Anal. Comput. Syst., 2024

Towards Scalable and Robust Model Versioning.
Proceedings of the IEEE Conference on Secure and Trustworthy Machine Learning, 2024

Feasibility of State Space Models for Network Traffic Generation.
Proceedings of the 2024 SIGCOMM Workshop on Networks for AI Computing, 2024

"Community Guidelines Make this the Best Party on the Internet": An In-Depth Study of Online Platforms' Content Moderation Policies.
Proceedings of the CHI Conference on Human Factors in Computing Systems, 2024

2023
LEAF: Navigating Concept Drift in Cellular Networks.
PACMNET, 2023

Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker.
CoRR, 2023

Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Augmenting Rule-based DNS Censorship Detection at Scale with Machine Learning.
Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2023

2022
Natural Backdoor Datasets.
CoRR, 2022

Understanding Robust Learning through the Lens of Representation Similarities.
CoRR, 2022

Can Backdoor Attacks Survive Time-Varying Models?
CoRR, 2022

Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks.
Proceedings of the 31st USENIX Security Symposium, 2022

Finding Naturally Occurring Physical Backdoors in Image Datasets.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Understanding Robust Learning through the Lens of Representation Similarities.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022

2021
Advances and Open Problems in Federated Learning.
Found. Trends Mach. Learn., 2021

Traceback of Data Poisoning Attacks in Neural Networks.
CoRR, 2021

A Real-time Defense against Website Fingerprinting Attacks.
CoRR, 2021

PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking.
Proceedings of the 30th USENIX Security Symposium, 2021

Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries.
Proceedings of the 38th International Conference on Machine Learning, 2021

Backdoor Attacks Against Deep Learning Systems in the Physical World.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021

Patch-based Defenses against Web Fingerprinting Attacks.
Proceedings of the AISec@CCS 2021: Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security, 2021

2020
A Critical Evaluation of Open-World Machine Learning.
CoRR, 2020

PatchGuard: Provable Defense against Adversarial Patches Using Masks on Small Receptive Fields.
CoRR, 2020

2019
Advances and Open Problems in Federated Learning.
CoRR, 2019

Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples.
CoRR, 2019

Lower Bounds on Adversarial Robustness from Optimal Transport.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Analyzing Federated Learning through an Adversarial Lens.
Proceedings of the 36th International Conference on Machine Learning, 2019

Analyzing the Robustness of Open-World Machine Learning.
Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, 2019

2018
PAC-learning in the presence of evasion adversaries.
CoRR, 2018

DARTS: Deceiving Autonomous Cars with Toxic Signs.
CoRR, 2018

Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos.
CoRR, 2018

PAC-learning in the presence of adversaries.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

Black-box Attacks on Deep Neural Networks via Gradient Estimation.
Proceedings of the 6th International Conference on Learning Representations, 2018

Practical Black-Box Attacks on Deep Neural Networks Using Efficient Query Mechanisms.
Proceedings of the Computer Vision - ECCV 2018, 2018

Enhancing robustness of machine learning systems via data transformations.
Proceedings of the 52nd Annual Conference on Information Sciences and Systems, 2018

Not All Pixels are Born Equal: An Analysis of Evasion Attacks under Locality Constraints.
Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, 2018

2017
Exploring the Space of Black-box Attacks on Deep Neural Networks.
CoRR, 2017

Dimensionality Reduction as a Defense against Evasion Attacks on Machine Learning Classifiers.
CoRR, 2017
