Hanze Dong

Orcid: 0000-0002-8846-1260

According to our database, Hanze Dong authored at least 35 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
FIRST: Teach A Reliable Large Language Model Through Efficient Trustworthy Distillation.
CoRR, 2024

ThinK: Thinner Key Cache by Query-Driven Pruning.
CoRR, 2024

Reverse Transition Kernel: A Flexible Framework to Accelerate Diffusion Inference.
CoRR, 2024

RLHF Workflow: From Reward Modeling to Online RLHF.
CoRR, 2024

An Improved Analysis of Langevin Algorithms with Prior Diffusion for Non-Log-Concave Sampling.
CoRR, 2024

MLLM-Protector: Ensuring MLLM's Safety without Hurting Performance.
CoRR, 2024

LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models.
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations, 2024

Faster Sampling via Stochastic Gradient Proximal Sampler.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-constraint.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Spurious Feature Diversification Improves Out-of-distribution Generalization.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Reverse Diffusion Monte Carlo.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Faster Sampling without Isoperimetry via Diffusion-based Monte Carlo.
Proceedings of the Thirty-Seventh Annual Conference on Learning Theory, June 30, 2024

2023
RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment.
Trans. Mach. Learn. Res., 2023

Gibbs Sampling from Human Feedback: A Provable KL-constrained Framework for RLHF.
CoRR, 2023

Mitigating the Alignment Tax of RLHF.
CoRR, 2023

Monte Carlo Sampling without Isoperimetry: A Reverse Diffusion Approach.
CoRR, 2023

RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment.
CoRR, 2023

Provable Particle-based Primal-Dual Algorithm for Mixed Nash Equilibrium.
CoRR, 2023

Particle-based Variational Inference with Preconditioned Functional Gradient Flow.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

DetGPT: Detect What You Need via Reasoning.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Catalyst Acceleration of Error Compensated Methods Leads to Better Communication Complexity.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023

2022
Weakly Supervised Disentangled Generative Causal Representation Learning.
J. Mach. Learn. Res., 2022

Learning the Compositional Domains for Generalized Zero-shot Learning.
Comput. Vis. Image Underst., 2022

Normalizing Flow with Variational Latent Representation.
CoRR, 2022

How Powerful is Implicit Denoising in Graph Neural Networks.
CoRR, 2022

Local Augmentation for Graph Neural Networks.
Proceedings of the International Conference on Machine Learning, 2022

Bayesian Invariant Risk Minimization.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022

2021
Mathematical Models of Overparameterized Neural Networks.
Proc. IEEE, 2021

Local Augmentation for Graph Neural Networks.
CoRR, 2021

2020
Vocabulary-Informed Zero-Shot and Open-Set Learning.
IEEE Trans. Pattern Anal. Mach. Intell., 2020

Extreme vocabulary learning.
Frontiers Comput. Sci., 2020

Disentangled Generative Causal Representation Learning.
CoRR, 2020

2019
Higher-order Weighted Graph Convolutional Networks.
CoRR, 2019

Over Parameterized Two-level Neural Networks Can Learn Near Optimal Feature Representations.
CoRR, 2019

2018
Learning to Separate Domains in Generalized Zero-Shot and Open Set Learning: a probabilistic perspective.
CoRR, 2018
