Zhengxiao Du

ORCID: 0000-0002-8223-4147

According to our database, Zhengxiao Du authored at least 27 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
VisScience: An Extensive Benchmark for Evaluating K12 Educational Multi-modal Scientific Reasoning.
CoRR, 2024

MathGLM-Vision: Solving Mathematical Problems with Multi-Modal Large Language Model.
CoRR, 2024

VisualAgentBench: Towards Large Multimodal Models as Visual Foundation Agents.
CoRR, 2024

ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools.
CoRR, 2024

ChatGLM-RLHF: Practices of Aligning Large Language Models with Human Feedback.
CoRR, 2024

Understanding Emergent Abilities of Language Models from the Loss Perspective.
CoRR, 2024

SciGLM: Training Scientific Language Models with Self-Reflective Instruction Annotation and Tuning.
CoRR, 2024

GPT understands, too.
AI Open, 2024

AgentBench: Evaluating LLMs as Agents.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

ChatGLM-Math: Improving Math Problem-Solving in Large Language Models with a Self-Critique Pipeline.
Findings of the Association for Computational Linguistics: EMNLP 2024

LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
CogKR: Cognitive Graph for Multi-Hop Knowledge Reasoning.
IEEE Trans. Knowl. Data Eng., 2023

WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences.
Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2023

GLM-130B: An Open Bilingual Pre-trained Model.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

2022
GLM-130B: An Open Bilingual Pre-trained Model.
CoRR, 2022

P-Tuning: Prompt Tuning Can Be Comparable to Fine-tuning Across Scales and Tasks.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2022

GLM: General Language Model Pretraining with Autoregressive Blank Infilling.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022

2021
POLAR++: Active One-Shot Personalized Article Recommendation.
IEEE Trans. Knowl. Data Eng., 2021

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks.
CoRR, 2021

All NLP Tasks Are Generation Tasks: A General Pretraining Framework.
CoRR, 2021

WuDaoCorpora: A super large-scale Chinese corpora for pre-training language models.
AI Open, 2021

Policy-Gradient Training of Fair and Unbiased Ranking Functions.
Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '21), 2021

2019
Fair Learning-to-Rank from Implicit Feedback.
CoRR, 2019

Cognitive Knowledge Graph Reasoning for One-shot Relational Learning.
CoRR, 2019

EFCNN: A Restricted Convolutional Neural Network for Expert Finding.
Advances in Knowledge Discovery and Data Mining, 2019

Sequential Scenario-Specific Meta Learner for Online Recommendation.
Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019

2018
POLAR: Attention-Based CNN for One-Shot Personalized Article Recommendation.
Machine Learning and Knowledge Discovery in Databases, 2018

