Tianyu Pang

ORCID: 0000-0003-0639-6176

According to our database, Tianyu Pang authored at least 80 papers between 2017 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Scaling up Masked Diffusion Models on Text.
CoRR, 2024

SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction.
CoRR, 2024

Meta-Unlearning on Diffusion Models: Preventing Relearning Unlearned Concepts.
CoRR, 2024

Improving Long-Text Alignment for Text-to-Image Diffusion Models.
CoRR, 2024

When Attention Sink Emerges in Language Models: An Empirical View.
CoRR, 2024

Denial-of-Service Poisoning Attacks against Large Language Models.
CoRR, 2024

A Closer Look at Machine Unlearning for Large Language Models.
CoRR, 2024

Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates.
CoRR, 2024

RegMix: Data Mixture as Regression for Language Model Pre-training.
CoRR, 2024

Revisiting Backdoor Attacks against Large Vision-Language Models.
CoRR, 2024

Bootstrapping Language Models with DPO Implicit Rewards.
CoRR, 2024

Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs.
CoRR, 2024

Crafting Heavy-Tails in Weight Matrix Spectrum without Gradient Noise.
CoRR, 2024

Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses.
CoRR, 2024

Improved Techniques for Optimization-Based Jailbreaking on Large Language Models.
CoRR, 2024

Graph Diffusion Policy Optimization.
CoRR, 2024

Purifying Large Language Models by Ensembling a Small Language Model.
CoRR, 2024

Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One.
CoRR, 2024

Test-Time Backdoor Attacks on Multimodal Large Language Models.
CoRR, 2024

Weak-to-Strong Jailbreaking on Large Language Models.
CoRR, 2024

Benchmarking Large Multimodal Models against Common Corruptions.
CoRR, 2024

Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Intriguing Properties of Data Attribution on Diffusion Models.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Finetuning Text-to-Image Diffusion Models for Fairness.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Model Balancing Helps Low-data Training and Fine-tuning.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

BAFFLE: A Baseline of Backpropagation-Free Federated Learning.
Proceedings of the Computer Vision - ECCV 2024, 2024

Self-Distillation Bridges Distribution Gap in Language Model Fine-Tuning.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
On Memorization in Diffusion Models.
CoRR, 2023

LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition.
CoRR, 2023

Improving Adversarial Robustness of DEQs with Explicit Regulations Along the Neural Dynamics.
CoRR, 2023

CoSDA: Continual Source-Free Domain Adaptation.
CoRR, 2023

A Recipe for Watermarking Diffusion Models.
CoRR, 2023

Does Federated Learning Really Need Backpropagation?
CoRR, 2023

Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

On Evaluating Adversarial Robustness of Large Vision-Language Models.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

On Calibrating Diffusion Probabilistic Models.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Efficient Diffusion Policies For Offline Reinforcement Learning.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Gaussian Mixture Solvers for Diffusion Models.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Bag of Tricks for Training Data Extraction from Language Models.
Proceedings of the International Conference on Machine Learning, 2023

Improving Adversarial Robustness of Deep Equilibrium Models with Explicit Regulations Along the Neural Dynamics.
Proceedings of the International Conference on Machine Learning, 2023

Better Diffusion Models Further Improve Adversarial Training.
Proceedings of the International Conference on Machine Learning, 2023

Nonparametric Generative Modeling with Conditional Sliced-Wasserstein Flows.
Proceedings of the International Conference on Machine Learning, 2023

Exploring Incompatible Knowledge Transfer in Few-shot Image Generation.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
Query-Efficient Black-Box Adversarial Attacks Guided by a Transfer-Based Prior.
IEEE Trans. Pattern Anal. Mach. Intell., 2022

O(N²) Universal Antisymmetry in Fermionic Neural Networks.
CoRR, 2022

Controllable Evaluation and Generation of Physical Adversarial Patch on Face Recognition.
CoRR, 2022

A Closer Look at the Adversarial Robustness of Deep Equilibrium Models.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Robustness and Accuracy Could Be Reconcilable by (Proper) Definition.
Proceedings of the International Conference on Machine Learning, 2022

Exploring Memorization in Adversarial Training.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks.
Proceedings of the Computer Vision - ECCV 2022, 2022

Two Coupled Rejection Metrics Can Tell Adversarial Examples Apart.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022

2021
Unrestricted Adversarial Attacks on ImageNet Competition.
CoRR, 2021

Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness.
CoRR, 2021

Adversarial Attacks on ML Defense Models Competition.
CoRR, 2021

Adversarial Training with Rectified Rejection.
CoRR, 2021

Accumulative Poisoning Attacks on Real-time Data.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Bag of Tricks for Adversarial Training.
Proceedings of the 9th International Conference on Learning Representations, 2021

Towards Face Encryption by Generating Adversarial Identity Masks.
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, 2021

Black-box Detection of Backdoor Attacks with Limited Information and Data.
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, 2021

2020
Efficient Learning of Generative Models via Finite-Difference Score Matching.
CoRR, 2020

Towards Privacy Protection by Generating Adversarial Identity Masks.
CoRR, 2020

Boosting Adversarial Training with Hypersphere Embedding.
CoRR, 2020

Boosting Adversarial Training with Hypersphere Embedding.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Efficient Learning of Generative Models via Finite-Difference Score Matching.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Adversarial Distributional Training for Robust Deep Learning.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks.
Proceedings of the 8th International Conference on Learning Representations, 2020

Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness.
Proceedings of the 8th International Conference on Learning Representations, 2020

Benchmarking Adversarial Robustness on Image Classification.
Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020

2019
Benchmarking Adversarial Robustness.
CoRR, 2019

Improving Black-box Adversarial Attacks with a Transfer-based Prior.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Improving Adversarial Robustness via Promoting Ensemble Diversity.
Proceedings of the 36th International Conference on Machine Learning, 2019

Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019

2018
Adversarial Attacks and Defences Competition.
CoRR, 2018

Detection of DGA Domains Based on Support Vector Machine.
Proceedings of the Third International Conference on Security of Smart Cities, 2018

Towards Robust Detection of Adversarial Examples.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

Max-Mahalanobis Linear Discriminant Analysis Networks.
Proceedings of the 35th International Conference on Machine Learning, 2018

Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser.
Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, 2018

Boosting Adversarial Attacks With Momentum.
Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, 2018

2017
Discovering Adversarial Examples with Momentum.
CoRR, 2017

Robust Deep Learning via Reverse Cross-Entropy Training and Thresholding Test.
CoRR, 2017
