Zhipeng Chen

ORCID: 0009-0009-4875-5465

Affiliations:
  • Renmin University of China, Gaoling School of Artificial Intelligence, Beijing, China
  • Beijing Key Laboratory of Big Data Management and Analysis Methods, Beijing, China
  • iFLYTEK AI Research, State Key Laboratory of Cognitive Intelligence, Wuhan, China


According to our database, Zhipeng Chen authored at least 27 papers between 2016 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Imitate, Explore, and Self-Improve: A Reproduction Report on Slow-thinking Reasoning Systems.
CoRR, 2024

Technical Report: Enhancing LLM Reasoning with Reward-guided Tree Search.
CoRR, 2024

Extracting and Transferring Abilities For Building Multi-lingual Ability-enhanced Large Language Models.
CoRR, 2024

Towards Effective and Efficient Continual Pre-training of Large Language Models.
CoRR, 2024

YuLan: An Open-source Large Language Model.
CoRR, 2024

Low-Redundant Optimization for Large Language Model Alignment.
CoRR, 2024

JiuZhang3.0: Efficiently Improving Mathematical Reasoning by Training Small Data Synthesis Models.
CoRR, 2024

Not Everything is All You Need: Toward Low-Redundant Optimization for Large Language Model Alignment.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint.
Findings of the Association for Computational Linguistics, 2024

2023
Don't Make Your LLM an Evaluation Benchmark Cheater.
CoRR, 2023

A Survey of Large Language Models.
CoRR, 2023

JiuZhang 2.0: A Unified Chinese Pre-trained Language Model for Multi-task Mathematical Problem Solving.
Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2023

ChatCoT: Tool-Augmented Chain-of-Thought Reasoning on Chat-based Large Language Models.
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

2022
Augmented and challenging datasets with multi-step reasoning and multi-span questions for Chinese judicial reading comprehension.
AI Open, January, 2022

TextBox 2.0: A Text Generation Library with Pre-trained Language Models.
CoRR, 2022

ElitePLM: An Empirical Study on General Language Ability Evaluation of Pretrained Language Models.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022

TextBox 2.0: A Text Generation Library with Pre-trained Language Models.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

2021
TextBox: A Unified, Modularized, and Extensible Framework for Text Generation.
Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021

2020
A Sentence Cloze Dataset for Chinese Machine Reading Comprehension.
Proceedings of the 28th International Conference on Computational Linguistics, 2020

TextBrewer: An Open-Source Knowledge Distillation Toolkit for Natural Language Processing.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, 2020

2019
Contextual Recurrent Units for Cloze-style Reading Comprehension.
CoRR, 2019

A Span-Extraction Dataset for Chinese Machine Reading Comprehension.
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019

Convolutional Spatial Attention Model for Reading Comprehension with Multiple-Choice Questions.
Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, 2019

2018
HFL-RC System at SemEval-2018 Task 11: Hybrid Multi-Aspects Model for Commonsense Reading Comprehension.
CoRR, 2018

Dataset for the First Evaluation on Chinese Machine Reading Comprehension.
Proceedings of the Eleventh International Conference on Language Resources and Evaluation, 2018

2017
Attention-over-Attention Neural Networks for Reading Comprehension.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, 2017

2016
Consensus Attention-based Neural Networks for Chinese Reading Comprehension.
Proceedings of COLING 2016, 2016
