Qihuang Zhong

ORCID: 0009-0001-0118-5217

According to our database, Qihuang Zhong authored at least 24 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
PanDa: Prompt Transfer Meets Knowledge Distillation for Efficient Model Adaptation.
IEEE Trans. Knowl. Data Eng., September, 2024

AdaSAM: Boosting sharpness-aware minimization with adaptive learning rate and momentum for training deep neural networks.
Neural Networks, January, 2024

Iterative Data Augmentation with Large Language Models for Aspect-based Sentiment Analysis.
CoRR, 2024

Achieving >97% on GSM8K: Deeply Understanding the Problems Makes LLMs Better Reasoners.
CoRR, 2024

Learning from Imperfect Data: Towards Efficient Knowledge Distillation of Autoregressive Language Models for Text-to-SQL.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2024, 2024

ROSE Doesn't Do That: Boosting the Safety of Instruction-Tuned Large Language Models with Reverse Prompt Contrastive Decoding.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2024, 2024

Revisiting Knowledge Distillation for Autoregressive Language Models.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
Knowledge Graph Augmented Network Towards Multiview Representation Learning for Aspect-Based Sentiment Analysis.
IEEE Trans. Knowl. Data Eng., October, 2023

Joint image and feature adaptative attention-aware networks for cross-modality semantic segmentation.
Neural Comput. Appl., February, 2023

Unified Instance and Knowledge Alignment Pretraining for Aspect-Based Sentiment Analysis.
IEEE ACM Trans. Audio Speech Lang. Process., 2023

Self-Evolution Learning for Mixup: Enhance Data Augmentation on Few-Shot Text Classification Tasks.
CoRR, 2023

Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT.
CoRR, 2023

Bag of Tricks for Effective Language Model Pretraining and Downstream Adaptation: A Case Study on GLUE.
CoRR, 2023

Zero-shot Sharpness-Aware Quantization for Pre-trained Language Models.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Self-Evolution Learning for Mixup: Enhance Data Augmentation on Few-Shot Text Classification Tasks.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Towards Making the Most of ChatGPT for Machine Translation.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

Revisiting Token Dropping Strategy in Efficient BERT Pretraining.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

Self-Evolution Learning for Discriminative Language Model Pretraining.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, 2023

Token-Level Self-Evolution Training for Sequence-to-Sequence Learning.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2023

2022
Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE.
CoRR, 2022

E2S2: Encoding-Enhanced Sequence-to-Sequence Pretraining for Language Understanding and Generation.
CoRR, 2022

Improving Sharpness-Aware Minimization with Fisher Mask for Better Generalization on Language Models.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2022, 2022

A Contrastive Cross-Channel Data Augmentation Framework for Aspect-Based Sentiment Analysis.
Proceedings of the 29th International Conference on Computational Linguistics, 2022

2020
SemiText: Scene text detection with semi-supervised learning.
Neurocomputing, 2020

