Qingyu Tan
According to our database, Qingyu Tan authored at least 14 papers between 2020 and 2024.
Collaborative distances:
Bibliography
2024
WiFi 7 With Different Multi-Link Channel Access Schemes: Modeling, Fairness and Optimization.
IEEE Trans. Commun., October, 2024
Towards Robust Temporal Reasoning of Large Language Models via a Multi-Hop QA Dataset and Pseudo-Instruction Tuning.
Findings of the Association for Computational Linguistics, 2024
2023
Unlocking Temporal Question Answering for Large Language Models Using Code Execution.
CoRR, 2023
Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023
Class-Adaptive Self-Training for Relation Extraction with Incompletely Annotated Training Data.
Findings of the Association for Computational Linguistics: ACL 2023, 2023
2022
Revisiting DocRED - Addressing the Overlooked False Negative Problem in Relation Extraction.
CoRR, 2022
Revisiting DocRED - Addressing the Overlooked False Negative Problem in Relation Extraction.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022
Domain Generalization for Text Classification with Memory-Based Supervised Contrastive Learning.
Proceedings of the 29th International Conference on Computational Linguistics, 2022
Document-Level Relation Extraction with Adaptive Focal Loss and Knowledge Distillation.
Findings of the Association for Computational Linguistics: ACL 2022, 2022
2021
IEEE Trans. Veh. Technol., 2021
On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation.
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021
2020
Feature Adaptation of Pre-Trained Language Models across Languages and Domains for Text Classification.
CoRR, 2020
Feature Adaptation of Pre-Trained Language Models across Languages and Domains with Robust Self-Training.
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020