Seongho Joe

Orcid: 0000-0003-1419-9930

According to our database, Seongho Joe authored at least 14 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Correcting Negative Bias in Large Language Models through Negative Attention Score Alignment.
CoRR, 2024

End to End Table Transformer.
Proceedings of the 18th International Conference on Document Analysis and Recognition (ICDAR 2024), Athens, Greece, 2024

Entity-level Factual Adaptiveness of Fine-tuning based Abstractive Summarization Models.
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics, 2024

2023
Document Change Detection With Hierarchical Patch Comparison.
Proceedings of the IEEE International Conference on Image Processing, 2023

Is Cross-Modal Information Retrieval Possible Without Training?
Advances in Information Retrieval, 2023

Model Intrinsic Features of Fine-tuning based Text Summarization Models for Factual Consistency.
Findings of the Association for Computational Linguistics: ACL 2023, 2023

2022
BiHPF: Bilateral High-Pass Filters for Robust Deepfake Detection.
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2022

Shuffle & Divide: Contrastive Learning for Long Text.
Proceedings of the 26th International Conference on Pattern Recognition, 2022

ContraCluster: Learning to Classify without Labels by Contrastive Self-Supervision and Prototype-Based Semi-Supervision.
Proceedings of the 26th International Conference on Pattern Recognition, 2022

2021
SelfMatch: Combining Contrastive Self-Supervision and Consistency for Semi-Supervised Learning.
CoRR, 2021

Enhancing Semantic Understanding with Self-Supervised Methods for Abstractive Dialogue Summarization.
Proceedings of the 22nd Annual Conference of the International Speech Communication Association (Interspeech 2021), Brno, Czechia, 2021

2020
KoreALBERT: Pretraining a Lite BERT Model for Korean Language Understanding.
Proceedings of the 25th International Conference on Pattern Recognition, 2020

Analyzing Zero-shot Cross-lingual Transfer in Supervised NLP Tasks.
Proceedings of the 25th International Conference on Pattern Recognition, 2020

Evaluation of BERT and ALBERT Sentence Embedding Performance on Downstream NLP Tasks.
Proceedings of the 25th International Conference on Pattern Recognition, 2020
