Ze-Feng Gao

ORCID: 0000-0002-6695-8209

According to our database, Ze-Feng Gao authored at least 17 papers between 2019 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
AI-driven inverse design of materials: Past, present and future.
CoRR, 2024

Over-parameterized Student Model via Tensor Decomposition Boosted Knowledge Distillation.
CoRR, 2024

AI-accelerated discovery of high critical temperature superconductors.
CoRR, 2024

Discovering symbolic expressions with parallelized tree search.
CoRR, 2024

YuLan: An Open-source Large Language Model.
CoRR, 2024

Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study.
Proceedings of the 2024 Joint International Conference on Computational Linguistics, 2024

Enhancing Parameter-efficient Fine-tuning with Simple Calibration Based on Stable Rank.
Proceedings of the 2024 Joint International Conference on Computational Linguistics, 2024

Unlocking Data-free Low-bit Quantization with Matrix Decomposition for KV Cache Compression.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
AI-accelerated Discovery of Altermagnetic Materials.
CoRR, 2023

Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture.
CoRR, 2023

Enhancing Scalability of Pre-trained Language Models via Efficient Parameter Sharing.
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

Small Pre-trained Language Models Can be Fine-tuned as Large Models via Over-Parameterization.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
Parameter-Efficient Mixture-of-Experts Architecture for Pre-trained Language Models.
Proceedings of the 29th International Conference on Computational Linguistics, 2022

2021
Enabling Lightweight Fine-tuning for Pre-trained Language Model Compression based on Matrix Product Operators.
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021

2020
A Model Compression Method With Matrix Product Operators for Speech Enhancement.
IEEE ACM Trans. Audio Speech Lang. Process., 2020

Compressing LSTM Networks by Matrix Product Operators.
CoRR, 2020

2019
Compressing deep neural networks by matrix product operators.
CoRR, 2019
