Dong Won Lee
ORCID: 0000-0002-6336-5512
Affiliations:
- Massachusetts Institute of Technology, Cambridge, MA, USA
- Carnegie Mellon University, Language Technologies Institute, Pittsburgh, PA, USA (former)
According to our database, Dong Won Lee authored at least 11 papers between 2020 and 2024.
Links
Online presence:
- on linkedin.com
- on orcid.org
- on csauthors.net
Bibliography
2024
Improving Dialogue Agents by Decomposing One Global Explicit Annotation with Local Implicit Multimodal Feedback.
CoRR, 2024
Proceedings of the Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024
Global Reward to Local Rewards: Multimodal-Guided Decomposition for Improving Dialogue Agents.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024
2023
MultiPar-T: Multiparty-Transformer for Capturing Contingent Behaviors in Group Conversations.
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 2023
HIINT: Historical, Intra- and Inter-personal Dynamics Modeling with Cross-person Memory Transformer.
Proceedings of the 25th International Conference on Multimodal Interaction, 2023
Lecture Presentations Multimodal Dataset: Towards Understanding Multimodality in Educational Videos.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023
2022
Multimodal Lecture Presentations Dataset: Understanding Multimodality in Educational Slides.
CoRR, 2022
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022
2021
Proceedings of the ICMI '21 Companion: Companion Publication of the 2021 International Conference on Multimodal Interaction, Montreal, QC, Canada, October 18, 2021
2020
No Gestures Left Behind: Learning Relationships between Spoken Language and Freeform Gestures.
Findings of the Association for Computational Linguistics: EMNLP 2020, 2020
Style Transfer for Co-speech Gesture Animation: A Multi-speaker Conditional-Mixture Approach.
Computer Vision - ECCV 2020, 2020