Doyoung Kim

Affiliations:
  • Korea Advanced Institute of Science and Technology, Daejeon, South Korea


According to our database, Doyoung Kim authored at least 11 papers between 2022 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Timeline

[Bar chart: publications per year, 2022–2024. Legend: Book, In proceedings, Article, PhD thesis, Dataset, Other.]

Bibliography

2024
Cognitive Map for Language Models: Optimal Planning via Verbally Representing the World Model.
CoRR, 2024

Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards.
CoRR, 2024

How Well Do Large Language Models Truly Ground?
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), 2024

FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Self-Explore: Enhancing Mathematical Reasoning in Language Models with Fine-grained Rewards.
Findings of the Association for Computational Linguistics: EMNLP 2024, 2024

Semiparametric Token-Sequence Co-Supervision.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
Exploring the Benefits of Training Expert Language Models over Instruction Tuning.
Proceedings of the International Conference on Machine Learning, 2023

Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt.
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

2022
Retrieval of Soft Prompt Enhances Zero-Shot Task Generalization.
CoRR, 2022
