Ashwinee Panda

According to our database, Ashwinee Panda authored at least 14 papers between 2020 and 2024.

Bibliography

2024
Lottery Ticket Adaptation: Mitigating Destructive Interference in LLMs.
CoRR, 2024

Safety Alignment Should Be Made More Than Just a Few Tokens Deep.
CoRR, 2024

Private Fine-tuning of Large Language Models with Zeroth-order Optimization.
CoRR, 2024

A New Linear Scaling Rule for Private Adaptive Hyperparameter Optimization.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Privacy-Preserving In-Context Learning for Large Language Models.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Teach LLMs to Phish: Stealing Private Information from Language Models.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Visual Adversarial Examples Jailbreak Aligned Large Language Models.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
Visual Adversarial Examples Jailbreak Large Language Models.
CoRR, 2023

Differentially Private In-Context Learning.
CoRR, 2023

Differentially Private Image Classification by Learning Priors from Random Processes.
Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023

2022
DP-RAFT: A Differentially Private Recipe for Accelerated Fine-Tuning.
CoRR, 2022

Neurotoxin: Durable Backdoors in Federated Learning.
Proceedings of the 39th International Conference on Machine Learning, 2022

SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022

2020
FetchSGD: Communication-Efficient Federated Learning with Sketching.
Proceedings of the 37th International Conference on Machine Learning, 2020
