Jiaming Ji

ORCID: 0009-0004-9565-3086

According to our database, Jiaming Ji authored at least 27 papers between 2021 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Bi-DexHands: Towards Human-Level Bimanual Dexterous Manipulation.
IEEE Trans. Pattern Anal. Mach. Intell., May, 2024

Heterogeneous-Agent Reinforcement Learning.
J. Mach. Learn. Res., 2024

Sequence to Sequence Reward Modeling: Improving RLHF by Language Feedback.
CoRR, 2024

ProgressGym: Alignment with a Millennium of Moral Progress.
CoRR, 2024

PKU-SafeRLHF: A Safety Alignment Preference Dataset for Llama Family Models.
CoRR, 2024

SafeSora: Towards Safety Alignment of Text2Video Generation via a Human Preference Dataset.
CoRR, 2024

Language Models Resist Alignment.
CoRR, 2024

Rethinking Information Structures in RLHF: Reward Generalization from a Graph Theory Perspective.
CoRR, 2024

Aligner: Achieving Efficient Alignment through Weak-to-Strong Correction.
CoRR, 2024

Multi-task Learning LSTM-based Traffic Prediction in Data Center Networks.
Proceedings of the 8th International Conference on Machine Learning and Soft Computing, 2024

Safe RLHF: Safe Reinforcement Learning from Human Feedback.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

SafeDreamer: Safe Reinforcement Learning with World Models.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

2023
Object Detection of Flexible Objects with Arbitrary Orientation Based on Rotation-Adaptive YOLOv5.
Sensors, 2023

AI Alignment: A Comprehensive Survey.
CoRR, 2023

Safety-Gymnasium: A Unified Safe Reinforcement Learning Benchmark.
CoRR, 2023

Baichuan 2: Open Large-scale Language Models.
CoRR, 2023

Safe DreamerV3: Safe Reinforcement Learning with World Models.
CoRR, 2023

BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset.
CoRR, 2023

OmniSafe: An Infrastructure for Accelerating Safe Reinforcement Learning Research.
CoRR, 2023

Heterogeneous-Agent Reinforcement Learning.
CoRR, 2023

Safety Gymnasium: A Unified Safe Reinforcement Learning Benchmark.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

VOCE: Variational Optimization with Conservative Estimation for Offline Safe Reinforcement Learning.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Augmented Proximal Policy Optimization for Safe Reinforcement Learning.
Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023

2022
CUP: A Conservative Update Policy Algorithm for Safe Reinforcement Learning.
CoRR, 2022

Constrained Update Projection Approach to Safe Policy Optimization.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

2021