Nuo Chen

ORCID: 0000-0001-6563-1215

Affiliations:
  • East China Normal University, School of Data Science and Engineering, Shanghai, China


According to our database, Nuo Chen authored at least 10 papers between 2022 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
Structure-aware Fine-tuning for Code Pre-trained Models.
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), 2024

TransCoder: Towards Unified Transferable Code Representation Learning Inspired by Human Skills.
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), 2024

Make Prompt-based Black-Box Tuning Colorful: Boosting Model Generalization from Three Orthogonal Perspectives.
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), 2024

2023
Boosting Language Models Reasoning with Chain-of-Knowledge Prompting.
CoRR, 2023

Uncertainty-aware Parameter-Efficient Self-training for Semi-supervised Language Understanding.
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

Pass-Tuning: Towards Structure-Aware Parameter-Efficient Tuning for Code Representation Learning.
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

Evaluating and Enhancing the Robustness of Code Pre-trained Models through Structure-Aware Adversarial Samples Generation.
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

HugNLP: A Unified and Comprehensive Library for Natural Language Processing.
Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, 2023

When Gradient Descent Meets Derivative-Free Optimization: A Match Made in Black-Box Scenario.
Findings of the Association for Computational Linguistics: ACL 2023, 2023

2022
CAT-probing: A Metric-based Approach to Interpret How Pre-trained Models for Programming Language Attend Code Structure.
Findings of the Association for Computational Linguistics: EMNLP 2022, 2022
