Mengzhao Chen

According to our database, Mengzhao Chen authored at least 17 papers between 2021 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
PrefixQuant: Static Quantization Beats Dynamic through Prefixed Outliers in LLMs.
CoRR, 2024

EfficientQAT: Efficient Quantization-Aware Training for Large Language Models.
CoRR, 2024

Adapting LLaMA Decoder to Vision Transformer.
CoRR, 2024

BESA: Pruning Large Language Models with Blockwise Parameter-Efficient Sparsity Allocation.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

2023
Super Vision Transformer.
Int. J. Comput. Vis., December 2023

I&S-ViT: An Inclusive & Stable Method for Pushing the Limit of Post-Training ViTs Quantization.
CoRR, 2023

Spatial Re-parameterization for N:M Sparsity.
CoRR, 2023

MultiQuant: A Novel Multi-Branch Topology Method for Arbitrary Bit-width Network Quantization.
CoRR, 2023

DiffRate: Differentiable Compression Rate for Efficient Vision Transformers.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

SMMix: Self-Motivated Image Mixing for Vision Transformers.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

CF-ViT: A General Coarse-to-Fine Method for Vision Transformer.
Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023

2022
Super Vision Transformer.
CoRR, 2022

Coarse-to-Fine Vision Transformer.
CoRR, 2022

Optimizing Gradient-driven Criteria in Network Sparsity: Gradient is All You Need.
CoRR, 2022

Fine-grained Data Distribution Alignment for Post-Training Quantization.
Proceedings of the Computer Vision - ECCV 2022, 2022

2021
Fine-grained Data Distribution Alignment for Post-Training Quantization.
CoRR, 2021

