Zhiheng Xi

According to our database, Zhiheng Xi authored at least 38 papers between 2022 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
Predicting Large Language Model Capabilities on Closed-Book QA Tasks Using Only Information Available Prior to Training.
CoRR, February 2025

Agent-R: Training Language Model Agents to Reflect via Iterative Self-Training.
CoRR, January 2025

ToolHop: A Query-Driven Benchmark for Evaluating Large Language Models in Multi-Hop Tool Use.
CoRR, January 2025

MathCritique-76k.
Dataset, January 2025

2024
Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision.
CoRR, 2024

Mitigating Tail Narrowing in LLM Self-Improvement via Socratic-Guided Sampling.
CoRR, 2024

Distill Visual Chart Reasoning Ability from LLMs to MLLMs.
CoRR, 2024

Have the VLMs Lost Confidence? A Study of Sycophancy in VLMs.
CoRR, 2024

RMB: Comprehensively Benchmarking Reward Models in LLM Alignment.
CoRR, 2024

Toward Optimal LLM Alignments Using Two-Player Games.
CoRR, 2024

AgentGym: Evolving Large Language Model-based Agents across Diverse Environments.
CoRR, 2024

EasyJailbreak: A Unified Framework for Jailbreaking Large Language Models.
CoRR, 2024

StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback.
CoRR, 2024

MouSi: Poly-Visual-Expert Vision-Language Models.
CoRR, 2024

Secrets of RLHF in Large Language Models Part II: Reward Modeling.
CoRR, 2024

Self-Demos: Eliciting Out-of-Demonstration Generalizability in Large Language Models.
Findings of the Association for Computational Linguistics: NAACL 2024, 2024

Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Improving Generalization of Alignment with Human Preferences through Group Invariant Learning.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Inverse-Q*: Token Level Reinforcement Learning for Aligning Large Language Models Without Preference Data.
Findings of the Association for Computational Linguistics: EMNLP 2024, 2024

Reward Modeling Requires Automatic Adjustment Based on Data Quality.
Findings of the Association for Computational Linguistics: EMNLP 2024, 2024

Improving Discriminative Capability of Reward Models in RLHF Using Contrastive Learning.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

ORTicket: Let One Robust BERT Ticket Transfer across Different Tasks.
Proceedings of the 2024 Joint International Conference on Computational Linguistics, 2024

Subspace Defense: Discarding Adversarial Perturbations by Learning a Subspace for Clean Signals.
Proceedings of the 2024 Joint International Conference on Computational Linguistics, 2024

RoCoIns: Enhancing Robustness of Large Language Models through Code-Style Instructions.
Proceedings of the 2024 Joint International Conference on Computational Linguistics, 2024

LoRAMoE: Alleviating World Knowledge Forgetting in Large Language Models via MoE-Style Plugin.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

StepCoder: Improving Code Generation with Reinforcement Learning from Compiler Feedback.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment.
CoRR, 2023

Improving Generalization of Alignment with Human Preferences through Group Invariant Learning.
CoRR, 2023

TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models.
CoRR, 2023

The Rise and Potential of Large Language Model Based Agents: A Survey.
CoRR, 2023

Towards Understanding the Capability of Large Language Models on Code Clone Detection: A Survey.
CoRR, 2023

Secrets of RLHF in Large Language Models Part I: PPO.
CoRR, 2023

Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement.
CoRR, 2023

RealBehavior: A Framework for Faithfully Characterizing Foundation Models' Human-like Behavior Mechanisms.
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement.
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

Characterizing the Impacts of Instances on Robustness.
Findings of the Association for Computational Linguistics: ACL 2023, 2023

Connectivity Patterns are Task Embeddings.
Findings of the Association for Computational Linguistics: ACL 2023, 2023

2022
Efficient Adversarial Training with Robust Early-Bird Tickets.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022