Ryo Ishii

ORCID: 0009-0001-3849-1656

According to our database, Ryo Ishii authored at least 76 papers between 2006 and 2024.

Bibliography

2024
Selecting Iconic Gesture Forms Based on Typical Entity Images.
J. Inf. Process., 2024

Investigating Role of Big Five Personality Traits in Audio-Visual Rapport Estimation.
CoRR, 2024

User-Specific Dialogue Generation with User Profile-Aware Pre-Training Model and Parameter-Efficient Fine-Tuning.
CoRR, 2024

Exploring Multimodal Nonverbal Functional Features for Predicting the Subjective Impressions of Interlocutors.
IEEE Access, 2024

Let's Dance Together! AI Dancers Can Dance to Your Favorite Music and Style.
Companion Proceedings of the 26th International Conference on Multimodal Interaction, 2024

Rapport Prediction Using Pairwise Learning in Dyadic Conversations Among Strangers and Among Friends.
Proceedings of the Social Computing and Social Media, 2024

Emotion Recognition in Conversation with Multi-step Prompting Using Large Language Model.
Proceedings of the Social Computing and Social Media, 2024

2023
Estimating and Visualizing Persuasiveness of Participants in Group Discussions.
J. Inf. Process., 2023

A Ranking Model for Evaluation of Conversation Partners Based on Rapport Levels.
IEEE Access, 2023

Investigating the effect of video extraction summarization techniques on the accuracy of impression conveyance in group dialogue.
Proceedings of the 35th Australian Computer-Human Interaction Conference, 2023

Prediction of Various Backchannel Utterances Based on Multimodal Information.
Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents, 2023

A Study of Prediction of Listener's Comprehension Based on Multimodal Information.
Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents, 2023

How Far ahead Can Model Predict Gesture Pose from Speech and Spoken Text?
Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents, 2023

Prediction of Love-Like Scores After Speed Dating Based on Pre-obtainable Personal Characteristic Information.
Proceedings of the Human-Computer Interaction - INTERACT 2023 - 19th IFIP TC13 International Conference, York, UK, August 28, 2023

Identifying Interlocutors' Behaviors and its Timings Involved with Impression Formation from Head-Movement Features and Linguistic Features.
Proceedings of the 25th International Conference on Multimodal Interaction, 2023

Continual Learning for Personalized Co-Speech Gesture Generation.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

Whether Contribution of Features Differ Between Video-Mediated and In-Person Meetings in Important Utterance Estimation.
Proceedings of the IEEE International Conference on Acoustics, 2023

Learning User Embeddings with Generating Context of Posted Social Network Service Texts.
Proceedings of the Social Computing and Social Media, 2023

2022
Modeling Japanese Praising Behavior by Analyzing Audio and Visual Behaviors.
Frontiers Comput. Sci., 2022

A Comparison of Praising Skills in Face-to-Face and Remote Dialogues.
Proceedings of the Thirteenth Language Resources and Evaluation Conference, 2022

Determining most suitable listener backchannel type for speaker's utterance.
Proceedings of the IVA '22: ACM International Conference on Intelligent Virtual Agents, Faro, Portugal, September 6, 2022

Predicting Persuasiveness of Participants in Multiparty Conversations.
Proceedings of the IUI 2022: 27th International Conference on Intelligent User Interfaces, Helsinki, Finland, March 22-25, 2022

Analysis of praising skills focusing on utterance contents.
Proceedings of the 23rd Annual Conference of the International Speech Communication Association, 2022

Dialogue Acts Aided Important Utterance Detection Based on Multiparty and Multimodal Information.
Proceedings of the 23rd Annual Conference of the International Speech Communication Association, 2022

2021
Methods for Efficiently Constructing Text-dialogue-agent System using Existing Anime Characters.
J. Inf. Process., 2021

Evaluation of Driver Assistance System Presenting Information of Other Vehicles through Peripheral Vision at Unsignalized Intersections.
Int. J. Intell. Transp. Syst. Res., 2021

Multimodal and Multitask Approach to Listener's Backchannel Prediction: Can Prediction of Turn-changing and Turn-management Willingness Improve Backchannel Modeling?
Proceedings of the IVA '21: ACM International Conference on Intelligent Virtual Agents, 2021

Estimation of Empathy Skill Level and Personal Traits Using Gaze Behavior and Dialogue Act During Turn-Changing.
Proceedings of the HCI International 2021 - Late Breaking Papers: Multimodality, eXtended Reality, and Artificial Intelligence, 2021

How People Distinguish Individuals from their Movements: Toward the Realization of Personalized Agents.
Proceedings of the HAI '21: International Conference on Human-Agent Interaction, Virtual Event, Japan, November 9, 2021

Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data.
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021

2020
Multimodal Privacy-preserving Mood Prediction from Mobile Data: A Preliminary Study.
CoRR, 2020

Can Prediction of Turn-management Willingness Improve Turn-changing Modeling?
Proceedings of the IVA '20: ACM International Conference on Intelligent Virtual Agents, 2020

Impact of Personality on Nonverbal Behavior Generation.
Proceedings of the IVA '20: ACM International Conference on Intelligent Virtual Agents, 2020

Analyzing Nonverbal Behaviors along with Praising.
Proceedings of the ICMI '20: International Conference on Multimodal Interaction, 2020

Methods of Efficiently Constructing Text-Dialogue-Agent System Using Existing Anime Character.
Proceedings of the HCI International 2020 - Late Breaking Papers: Interaction, Knowledge and Social Media, 2020

No Gestures Left Behind: Learning Relationships between Spoken Language and Freeform Gestures.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2020, 2020

2019
Prediction of Who Will Be Next Speaker and When Using Mouth-Opening Pattern in Multi-Party Conversation.
Multimodal Technol. Interact., 2019

Automatic Head-Nod Generation Using Utterance Text Considering Personality Traits.
Proceedings of the Increasing Naturalness and Flexibility in Spoken Dialogue Interaction, 2019

Determining Iconic Gesture Forms based on Entity Image Representation.
Proceedings of the International Conference on Multimodal Interaction, 2019

Estimating Interpersonal Reactivity Scores Using Gaze Behavior and Dialogue Act During Turn-Changing.
Proceedings of the Social Computing and Social Media. Communication and Social Communities, 2019

Improving Speech-Based End-of-Turn Detection Via Cross-Modal Representation Learning with Punctuated Text Data.
Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop, 2019

2018
Neural Dialogue Context Online End-of-Turn Detection.
Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, 2018

Automatic Generation of Head Nods using Utterance Texts.
Proceedings of the 27th IEEE International Symposium on Robot and Human Interactive Communication, 2018

Predicting Nods by using Dialogue Acts in Dialogue.
Proceedings of the Eleventh International Conference on Language Resources and Evaluation, 2018

Automatic Generation System of Virtual Agent's Motion using Natural Language.
Proceedings of the 18th International Conference on Intelligent Virtual Agents, 2018

Generating Body Motions using Spoken Language in Dialogue.
Proceedings of the 18th International Conference on Intelligent Virtual Agents, 2018

Analyzing Gaze Behavior and Dialogue Act during Turn-taking for Estimating Empathy Skill Level.
Proceedings of the 2018 on International Conference on Multimodal Interaction, 2018

Where Should Robots Talk?: Spatial Arrangement Study from a Participant Workload Perspective.
Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 2018

Automatically Generating Head Nods with Linguistic Information.
Proceedings of the Social Computing and Social Media. Technologies and Analytics, 2018

2017
Collective First-Person Vision for Automatic Gaze Analysis in Multiparty Conversations.
IEEE Trans. Multim., 2017

Online End-of-Turn Detection from Speech Based on Stacked Time-Asynchronous Sequential Networks.
Proceedings of the 18th Annual Conference of the International Speech Communication Association, 2017

Analyzing gaze behavior during turn-taking for estimating empathy skill level.
Proceedings of the 19th ACM International Conference on Multimodal Interaction, 2017

Prediction of Next-Utterance Timing using Head Movement in Multi-Party Meetings.
Proceedings of the 5th International Conference on Human Agent Interaction, 2017

Comparing empathy perceived by interlocutors in multiparty conversation and external observers.
Proceedings of the Seventh International Conference on Affective Computing and Intelligent Interaction, 2017

Computational model of idiosyncratic perception of others' emotions.
Proceedings of the Seventh International Conference on Affective Computing and Intelligent Interaction, 2017

2016
Using Respiration to Predict Who Will Speak Next and When in Multiparty Meetings.
ACM Trans. Interact. Intell. Syst., 2016

Prediction of Who Will Be the Next Speaker and When Using Gaze Behavior in Multiparty Meetings.
ACM Trans. Interact. Intell. Syst., 2016

Analyzing mouth-opening transition pattern for predicting next speaker in multi-party meetings.
Proceedings of the 18th ACM International Conference on Multimodal Interaction, 2016

2015
Design and Evaluation of Mirror Interface MIOSS to Overlay Remote 3D Spaces.
Proceedings of the Human-Computer Interaction - INTERACT 2015, 2015

Multimodal Fusion using Respiration and Gaze for Predicting Next Speaker in Multi-Party Meetings.
Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, Seattle, WA, USA, November 09, 2015

Predicting next speaker based on head movement in multi-party meetings.
Proceedings of the 2015 IEEE International Conference on Acoustics, 2015

Automatic gaze analysis in multiparty conversations based on Collective First-Person Vision.
Proceedings of the 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, 2015

2014
Analysis of Timing Structure of Eye Contact in Turn-changing.
Proceedings of the 7th Workshop on Eye Gaze in Intelligent Human Machine Interaction: Eye-Gaze & Multimodality, 2014

Analysis of Respiration for Prediction of "Who Will Be Next Speaker and When?" in Multi-Party Meetings.
Proceedings of the 16th International Conference on Multimodal Interaction, 2014

Analysis and modeling of next speaking start timing based on gaze behavior in multi-party meetings.
Proceedings of the IEEE International Conference on Acoustics, 2014

2013
Gaze awareness in conversational agents: Estimating a user's conversational engagement from eye gaze.
ACM Trans. Interact. Intell. Syst., 2013

MM+Space: n x 4 degree-of-freedom kinetic display for recreating multiparty conversation spaces.
Proceedings of the 2013 International Conference on Multimodal Interaction, 2013

Predicting next speaker and timing from gaze transition patterns in multi-party meetings.
Proceedings of the 2013 International Conference on Multimodal Interaction, 2013

Using a Probabilistic Topic Model to Link Observers' Perception Tendency to Personality.
Proceedings of the 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, 2013

Effectiveness of Gaze-Based Engagement Estimation in Conversational Agents.
Proceedings of the Eye Gaze in Intelligent User Interfaces, 2013

2011
Estimating a User's Conversational Engagement Based on Head Pose Information.
Proceedings of the Intelligent Virtual Agents - 11th International Conference, 2011

MoPaCo: High telepresence video communication system using motion parallax with monocular camera.
Proceedings of the IEEE International Conference on Computer Vision Workshops, 2011

MoPaCo: Pseudo 3D Video Communication System.
Proceedings of the Human Interface and the Management of Information. Interacting with Information, 2011

2010
Estimating user's engagement from eye-gaze behaviors in human-agent conversations.
Proceedings of the 15th International Conference on Intelligent User Interfaces, 2010

2008
Estimating User's Conversational Engagement Based on Gaze Behaviors.
Proceedings of the Intelligent Virtual Agents, 8th International Conference, 2008

2006
Avatar's Gaze Control to Facilitate Conversational Turn-Taking in Virtual-Space Multi-user Voice Chat System.
Proceedings of the Intelligent Virtual Agents, 6th International Conference, 2006
