2025
InterMT: Multi-Turn Interleaved Preference Alignment with Human Feedback.
CoRR, May 2025

The Mirage of Multimodality: Where Truth is Tested and Honesty Unravels.
CoRR, May 2025

Mitigating Deceptive Alignment via Self-Monitoring.
CoRR, May 2025

Measuring Hong Kong Massive Multi-Task Language Understanding.
CoRR, May 2025

Safe RLHF-V: Safe Reinforcement Learning from Human Feedback in Multimodal Large Language Models.
CoRR, March 2025

ThinkPatterns-21k: A Systematic Study on the Impact of Thinking Patterns in LLMs.
CoRR, March 2025

A control-oriented operation mode recognizing method using fuzzy evaluation and attention LSTM networks.
Appl. Soft Comput., 2025

Mitigating Reward Over-Optimization in RLHF via Behavior-Supported Regularization.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

2024
Sequence to Sequence Reward Modeling: Improving RLHF by Language Feedback.
CoRR, 2024

Aligner: Achieving Efficient Alignment through Weak-to-Strong Correction.
CoRR, 2024

Aligner: Efficient Alignment by Learning to Correct.
Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

SafeSora: Towards Safety Alignment of Text2Video Generation via a Human Preference Dataset.
Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

Safe Reinforcement Learning using Finite-Horizon Gradient-based Estimation.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

2023
AI Alignment: A Comprehensive Survey.
CoRR, 2023

Safety-Gymnasium: A Unified Safe Reinforcement Learning Benchmark.
CoRR, 2023

Baichuan 2: Open Large-scale Language Models.
CoRR, 2023

BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset.
CoRR, 2023

OmniSafe: An Infrastructure for Accelerating Safe Reinforcement Learning Research.
CoRR, 2023

Augmented Proximal Policy Optimization for Safe Reinforcement Learning.
Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023

2022
CUP: A Conservative Update Policy Algorithm for Safe Reinforcement Learning.
CoRR, 2022

Constrained Update Projection Approach to Safe Policy Optimization.
Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022