Wangchunshu Zhou

ORCID: 0000-0003-4668-3348

According to our database, Wangchunshu Zhou authored at least 78 papers between 2019 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
X²-VLM: All-in-One Pre-Trained Model for Vision-Language Tasks.
IEEE Trans. Pattern Anal. Mach. Intell., May, 2024

AutoKaggle: A Multi-Agent Framework for Autonomous Data Science Competitions.
CoRR, 2024

PopAlign: Diversifying Contrasting Patterns for a More Comprehensive Alignment.
CoRR, 2024

A Comparative Study on Reasoning Patterns of OpenAI's o1 Model.
CoRR, 2024

PositionID: LLMs can Control Lengths, Copy and Paste with Explicit Positional Awareness.
CoRR, 2024

HelloBench: Evaluating Long Text Generation Capabilities of Large Language Models.
CoRR, 2024

Towards LifeSpan Cognitive Systems.
CoRR, 2024

Symbolic Learning Enables Self-Evolving Agents.
CoRR, 2024

MAP-Neo: Highly Capable and Transparent Bilingual Large Language Model Series.
CoRR, 2024

MIMIR: A Streamlined Platform for Personalized Agent Tuning in Domain Expertise.
CoRR, 2024

CIF-Bench: A Chinese Instruction-Following Benchmark for Evaluating the Generalizability of Large Language Models.
CoRR, 2024

Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science.
CoRR, 2024

Weaver: Foundation Models for Creative Writing.
CoRR, 2024

AUTOACT: Automatic Agent Learning from Scratch via Self-Planning.
CoRR, 2024

Struc-Bench: Are Large Language Models Good at Generating Complex Structured Tabular Data?
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers), 2024

OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

PositionID: LLMs can Control Lengths, Copy and Paste with Explicit Positional Awareness.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2024, 2024

How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs.
Proceedings of the Computer Vision - ECCV 2024, 2024

SmartTrim: Adaptive Tokens and Attention Pruning for Efficient Vision-Language Models.
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), 2024

LoraRetriever: Input-Aware LoRA Retrieval and Composition for Mixed Tasks in the Wild.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2024, 2024

RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2024, 2024

AutoAct: Automatic Agent Learning from Scratch for QA via Self-Planning.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

CIF-Bench: A Chinese Instruction-Following Benchmark for Evaluating the Generalizability of Large Language Models.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2024, 2024

2023
Scaling-up medical vision-and-language representation learning with federated learning.
Eng. Appl. Artif. Intell., November, 2023

How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs.
CoRR, 2023

ML-Bench: Large Language Models Leverage Open-source Libraries for Machine Learning Tasks.
CoRR, 2023

RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models.
CoRR, 2023

Mixup Your Own Pairs.
CoRR, 2023

Struc-Bench: Are Large Language Models Really Good at Generating Complex Structured Data?
CoRR, 2023

Agents: An Open-source Framework for Autonomous Language Agents.
CoRR, 2023

SmartTrim: Adaptive Tokens and Parameters Pruning for Efficient Vision-Language Models.
CoRR, 2023

RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text.
CoRR, 2023

Interactive Natural Language Processing.
CoRR, 2023

Efficient Prompting via Dynamic In-Context Learning.
CoRR, 2023

Findings of the WMT 2023 Shared Task on Machine Translation with Terminologies.
Proceedings of the Eighth Conference on Machine Translation, 2023

To Repeat or Not To Repeat: Insights from Scaling LLM under Token-Crisis.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Controlled Text Generation with Natural Language Instructions.
Proceedings of the International Conference on Machine Learning, 2023

Write and Paint: Generative Vision-Language Models are Unified Modal Learners.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Let's Synthesize Step by Step: Iterative Dataset Synthesis with Large Language Models by Extrapolating Errors from Small Models.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

Evaluating Large Language Models on Controlled Generation Tasks.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Towards a Mechanistic Interpretation of Multi-Step Reasoning Capabilities of Language Models.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Doolittle: Benchmarks and Corpora for Academic Writing Formalization.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Poor Man's Quality Estimation: Predicting Reference-Based MT Metrics Without the Reference.
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, 2023

Automatic Educational Question Generation with Difficulty Level Controls.
Proceedings of the Artificial Intelligence in Education - 24th International Conference, 2023

Learning to Predict Persona Information for Dialogue Personalization without Explicit Persona Description.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, 2023

Modular Transformers: Compressing Transformers into Modularized Layers for Flexible Efficient Inference.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, 2023

Commonsense Knowledge Transfer for Pre-trained Language Models.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, 2023

EfficientVLM: Fast and Accurate Vision-Language Models via Knowledge Distillation and Modal-adaptive Pruning.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, 2023

Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
X²-VLM: All-In-One Pre-trained Model For Vision-Language Tasks.
CoRR, 2022

Prefix Language Models are Unified Modal Learners.
CoRR, 2022

Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training.
CoRR, 2022

VLUE: A Multi-Task Benchmark for Evaluating Vision-Language Models.
CoRR, 2022

VLUE: A Multi-Task Multi-Dimension Benchmark for Evaluating Vision-Language Pre-training.
Proceedings of the International Conference on Machine Learning, 2022

Efficiently Tuned Parameters Are Task Embeddings.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

BERT Learns to Teach: Knowledge Distillation with Meta Learning.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022

Contextual Representation Learning beyond Masked Language Modeling.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022

2021
Learning to Predict Persona Information for Dialogue Personalization without Explicit Persona Description.
CoRR, 2021

A Survey on Green Deep Learning.
CoRR, 2021

Meta Learning for Knowledge Distillation.
CoRR, 2021

Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting.
CoRR, 2021

Blow the Dog Whistle: A Chinese Dataset for Cant Understanding with Common Sense and World Knowledge.
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021

Pre-training Text-to-Text Transformers for Concept-centric Common Sense.
Proceedings of the 9th International Conference on Learning Representations, 2021

Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021

Beyond Preserved Accuracy: Evaluating Loyalty and Robustness of BERT Compression.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021

Learning from Perturbations: Diverse and Informative Dialogue Generation with Inverse Adversarial Training.
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021

2020
Pre-training Text-to-Text Transformers for Concept-centric Common Sense.
CoRR, 2020

BERT Loses Patience: Fast and Robust Inference with Early Exit.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Towards Interpretable Natural Language Understanding with Explanations as Latent Variables.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Self-Adversarial Learning with Comparative Discrimination for Text Generation.
Proceedings of the 8th International Conference on Learning Representations, 2020

Scheduled DropHead: A Regularization Method for Transformer Models.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2020, 2020

Improving Grammatical Error Correction with Machine Translation Pairs.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2020, 2020

Pseudo-Bidirectional Decoding for Local Sequence Transduction.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2020, 2020

BERT-of-Theseus: Compressing BERT by Progressive Module Replacing.
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020

Connecting the Dots Between Fact Verification and Fake News Detection.
Proceedings of the 28th International Conference on Computational Linguistics, 2020

CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning.
Proceedings of the Conference on Automated Knowledge Base Construction, 2020

Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models.
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

2019
BERT-based Lexical Substitution.
Proceedings of the 57th Conference of the Association for Computational Linguistics, 2019

