Yunzhi Yao

Orcid: 0000-0001-9458-696X

According to our database, Yunzhi Yao authored at least 25 papers between 2021 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
LLMs for knowledge graph construction and reasoning: recent capabilities and future opportunities.
World Wide Web (WWW), September 2024

Exploring Model Kinship for Merging Large Language Models.
CoRR, 2024

Benchmarking Chinese Knowledge Rectification in Large Language Models.
CoRR, 2024

Knowledge Circuits in Pretrained Transformers.
CoRR, 2024

WISE: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models.
CoRR, 2024

A Comprehensive Study of Knowledge Editing for Large Language Models.
CoRR, 2024

OneEdit: A Neural-Symbolic Collaboratively Knowledge Editing System.
Proceedings of Workshops at the 50th International Conference on Very Large Data Bases, 2024

Unveiling the Pitfalls of Knowledge Editing for Large Language Models.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Knowledge Mechanisms in Large Language Models: A Survey and Perspective.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2024, 2024

Editing Conceptual Knowledge for Large Language Models.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2024, 2024

Detoxifying Large Language Models via Knowledge Editing.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity.
CoRR, 2023

Editing Personality for LLMs.
CoRR, 2023

EasyEdit: An Easy-to-use Knowledge Editing Framework for Large Language Models.
CoRR, 2023

Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph Construction.
Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2023

Editing Large Language Models: Problems, Methods, and Opportunities.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Knowledge Rumination for Pre-trained Language Models.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Reasoning with Language Model Prompting: A Survey.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
Schema-aware Reference as Prompt Improves Data-Efficient Relational Triple and Event Extraction.
CoRR, 2022

Good Visual Guidance Makes A Better Extractor: Hierarchical Visual Prefix for Multimodal Entity and Relation Extraction.
CoRR, 2022

DeepKE: A Deep Learning Based Knowledge Extraction Toolkit for Knowledge Base Population.
CoRR, 2022

KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction.
Proceedings of the WWW '22: The ACM Web Conference 2022, Virtual Event, Lyon, France, April 25 - 29, 2022

Kformer: Knowledge Injection in Transformer Feed-Forward Layers.
Proceedings of the Natural Language Processing and Chinese Computing, 2022

Good Visual Guidance Make A Better Extractor: Hierarchical Visual Prefix for Multimodal Entity and Relation Extraction.
Proceedings of the Findings of the Association for Computational Linguistics: NAACL 2022, 2022

2021
Adapt-and-Distill: Developing Small, Fast and Effective Pretrained Language Models for Domains.
Proceedings of the Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, 2021
