Yingfa Chen

According to our database, Yingfa Chen authored at least 14 papers between 2021 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2025
Multi-Modal Multi-Granularity Tokenizer for Chu Bamboo Slips.
Proceedings of the 31st International Conference on Computational Linguistics, 2025

2024
Sparsing Law: Towards Large Language Models with Greater Activation Sparsity.
CoRR, 2024

Stuffed Mamba: State Collapse and State Capacity of RNN-Based Long-Context Modeling.
CoRR, 2024

Configurable Foundation Models: Building LLMs from a Modular Perspective.
CoRR, 2024

Multi-Modal Multi-Granularity Tokenizer for Chu Bamboo Slip Scripts.
CoRR, 2024

∞Bench: Extending Long Context Evaluation Beyond 100K Tokens.
CoRR, 2024

Beyond the Turn-Based Game: Enabling Real-Time Conversations with Duplex Models.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

Robust and Scalable Model Editing for Large Language Models.
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), 2024

∞Bench: Extending Long Context Evaluation Beyond 100K Tokens.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
Sub-Character Tokenization for Chinese Pretrained Language Models.
Trans. Assoc. Comput. Linguistics, 2023

CFDBench: A Comprehensive Benchmark for Machine Learning Methods in Fluid Dynamics.
CoRR, 2023

READIN: A Chinese Multi-Task Benchmark with Realistic and Diverse Input Noises.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
BMCook: A Task-agnostic Compression Toolkit for Big Models.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

2021
SHUOWEN-JIEZI: Linguistically Informed Tokenizers For Chinese Language Model Pretraining.
CoRR, 2021
