Shiro Kumano

ORCID: 0000-0002-1231-5566

According to our database, Shiro Kumano authored at least 50 papers between 2008 and 2023.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2023
State-Aware Deep Item Response Theory using student facial features.
Frontiers Artif. Intell., February 2023

Analyzing and Recognizing Interlocutors' Gaze Functions from Multimodal Nonverbal Cues.
Proceedings of the 25th International Conference on Multimodal Interaction, 2023

Analyzing Synergetic Functional Spectrum from Head Movements and Facial Expressions in Conversations.
Proceedings of the 25th International Conference on Multimodal Interaction, 2023

Collision Probability Matching Loss for Disentangling Epistemic Uncertainty from Aleatoric Uncertainty.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023

Emotion-Controllable Impression Utterance Generation for Visual Art.
Proceedings of the 11th International Conference on Affective Computing and Intelligent Interaction, 2023

2022
Real-time Auditory Feedback System for Bow-tilt Correction while Aiming in Archery.
Proceedings of the 22nd IEEE International Conference on Bioinformatics and Bioengineering, 2022

Cross-Linguistic Study on Affective Impression and Language for Visual Art Using Neural Speaker.
Proceedings of the 10th International Conference on Affective Computing and Intelligent Interaction, 2022

Consistent Smile Intensity Estimation from Wearable Optical Sensors.
Proceedings of the 10th International Conference on Affective Computing and Intelligent Interaction, 2022

2021
Archery Skill Assessment Using an Acceleration Sensor.
IEEE Trans. Hum. Mach. Syst., 2021

Estimation of Empathy Skill Level and Personal Traits Using Gaze Behavior and Dialogue Act During Turn-Changing.
Proceedings of the HCI International 2021 - Late Breaking Papers: Multimodality, eXtended Reality, and Artificial Intelligence, 2021

Deep Explanatory Polytomous Item-Response Model for Predicting Idiosyncratic Affective Ratings.
Proceedings of the 9th International Conference on Affective Computing and Intelligent Interaction, 2021

2020
Gravity-Direction-Aware Joint Inter-Device Matching and Temporal Alignment between Camera and Wearable Sensors.
Proceedings of the Companion Publication of the 2020 International Conference on Multimodal Interaction, 2020

Interpersonal physiological linkage is related to excitement during a joint task.
Proceedings of the 42nd Annual Meeting of the Cognitive Science Society, 2020

2019
Prediction of Who Will Be Next Speaker and When Using Mouth-Opening Pattern in Multi-Party Conversation.
Multimodal Technol. Interact., 2019

Estimating Interpersonal Reactivity Scores Using Gaze Behavior and Dialogue Act During Turn-Changing.
Proceedings of the Social Computing and Social Media. Communication and Social Communities, 2019

Bayesian Item Response Model with Condition-specific Parameters for Evaluating the Differential Effects of Perspective-taking on Emotional Sharing.
Proceedings of the 41st Annual Meeting of the Cognitive Science Society, 2019

The Invisible Potential of Facial Electromyography: A Comparison of EMG and Computer Vision when Distinguishing Posed from Spontaneous Smiles.
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019

Multitask Item Response Models for Response Bias Removal from Affective Ratings.
Proceedings of the 8th International Conference on Affective Computing and Intelligent Interaction, 2019

2018
Analyzing Gaze Behavior and Dialogue Act during Turn-taking for Estimating Empathy Skill Level.
Proceedings of the 20th ACM International Conference on Multimodal Interaction, 2018

2017
Collective First-Person Vision for Automatic Gaze Analysis in Multiparty Conversations.
IEEE Trans. Multim., 2017

Analyzing gaze behavior during turn-taking for estimating empathy skill level.
Proceedings of the 19th ACM International Conference on Multimodal Interaction, 2017

Prediction of Next-Utterance Timing using Head Movement in Multi-Party Meetings.
Proceedings of the 5th International Conference on Human Agent Interaction, 2017

Comparing empathy perceived by interlocutors in multiparty conversation and external observers.
Proceedings of the Seventh International Conference on Affective Computing and Intelligent Interaction, 2017

Computational model of idiosyncratic perception of others' emotions.
Proceedings of the Seventh International Conference on Affective Computing and Intelligent Interaction, 2017

2016
Using Respiration to Predict Who Will Speak Next and When in Multiparty Meetings.
ACM Trans. Interact. Intell. Syst., 2016

Prediction of Who Will Be the Next Speaker and When Using Gaze Behavior in Multiparty Meetings.
ACM Trans. Interact. Intell. Syst., 2016

Analyzing mouth-opening transition pattern for predicting next speaker in multi-party meetings.
Proceedings of the 18th ACM International Conference on Multimodal Interaction, 2016

2015
In the Mood for Vlog: Multimodal Inference in Conversational Social Video.
ACM Trans. Interact. Intell. Syst., 2015

Analyzing Interpersonal Empathy via Collective Impressions.
IEEE Trans. Affect. Comput., 2015

Multimodal Fusion using Respiration and Gaze for Predicting Next Speaker in Multi-Party Meetings.
Proceedings of the 17th ACM International Conference on Multimodal Interaction, 2015

Predicting next speaker based on head movement in multi-party meetings.
Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015

Automatic gaze analysis in multiparty conversations based on Collective First-Person Vision.
Proceedings of the 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, 2015

2014
Analyzing Perceived Empathy Based on Reaction Time in Behavioral Mimicry.
IEICE Trans. Inf. Syst., 2014

Analysis of Timing Structure of Eye Contact in Turn-changing.
Proceedings of the 7th Workshop on Eye Gaze in Intelligent Human Machine Interaction: Eye-Gaze & Multimodality, 2014

Analysis of Respiration for Prediction of "Who Will Be Next Speaker and When?" in Multi-Party Meetings.
Proceedings of the 16th International Conference on Multimodal Interaction, 2014

Analysis and modeling of next speaking start timing based on gaze behavior in multi-party meetings.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014

2013
Inferring mood in ubiquitous conversational video.
Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, 2013

MM+Space: n x 4 degree-of-freedom kinetic display for recreating multiparty conversation spaces.
Proceedings of the 2013 International Conference on Multimodal Interaction, 2013

Predicting next speaker and timing from gaze transition patterns in multi-party meetings.
Proceedings of the 2013 International Conference on Multimodal Interaction, 2013

Analyzing perceived empathy/antipathy based on reaction time in behavioral coordination.
Proceedings of the 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, 2013

Using a Probabilistic Topic Model to Link Observers' Perception Tendency to Personality.
Proceedings of the 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, 2013

2012
Enhancing Memory-Based Particle Filter with Detection-Based Memory Acquisition for Robustness under Severe Occlusion.
IEICE Trans. Inf. Syst., 2012

Reconstructing multiparty conversation field by augmenting human head motions via dynamic displays.
Proceedings of the CHI Conference on Human Factors in Computing Systems, 2012

Understanding communicative emotions from collective external observations.
Proceedings of the CHI Conference on Human Factors in Computing Systems, 2012

2011
Early facial expression recognition with high-frame rate 3D sensing.
Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2011

A system for reconstructing multiparty conversation field based on augmented head motion by dynamic projection.
Proceedings of the 19th ACM International Conference on Multimedia, 2011

Analyzing empathetic interactions based on the probabilistic modeling of the co-occurrence patterns of facial expressions in group meetings.
Proceedings of the Ninth IEEE International Conference on Automatic Face and Gesture Recognition (FG 2011), 2011

2009
Pose-Invariant Facial Expression Recognition Using Variable-Intensity Templates.
Int. J. Comput. Vis., 2009

Recognizing communicative facial expressions for discovering interpersonal emotions in group meetings.
Proceedings of the 11th International Conference on Multimodal Interfaces, 2009

2008
Combining Stochastic and Deterministic Search for Pose-Invariant Facial Expression Recognition.
Proceedings of the British Machine Vision Conference, Leeds, UK, September 2008

