Hailun Lian

ORCID: 0000-0002-1355-9503

According to our database, Hailun Lian authored at least 20 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
Layer-Adapted Implicit Distribution Alignment Networks for Cross-Corpus Speech Emotion Recognition.
IEEE Trans. Comput. Soc. Syst., August, 2024

Exploring corpus-invariant emotional acoustic feature for cross-corpus speech emotion recognition.
Expert Syst. Appl., 2024

Temporal Label Hierarchical Network for Compound Emotion Recognition.
CoRR, 2024

Speech Swin-Transformer: Exploring a Hierarchical Transformer with Shifted Windows for Speech Emotion Recognition.
Proceedings of the IEEE International Conference on Acoustics, 2024

PAVITS: Exploring Prosody-Aware VITS for End-to-End Emotional Voice Conversion.
Proceedings of the IEEE International Conference on Acoustics, 2024

Improving Speaker-Independent Speech Emotion Recognition using Dynamic Joint Distribution Adaptation.
Proceedings of the IEEE International Conference on Acoustics, 2024

2023
Speech Emotion Recognition via an Attentive Time-Frequency Neural Network.
IEEE Trans. Comput. Soc. Syst., December, 2023

A Survey of Deep Learning-Based Multimodal Emotion Recognition: Speech, Text, and Face.
Entropy, October, 2023

Towards Domain-Specific Cross-Corpus Speech Emotion Recognition Approach.
CoRR, 2023

Label Distribution Adaptation for Multimodal Emotion Recognition with Multi-label Learning.
Proceedings of the 1st International Workshop on Multimodal and Responsible Affective Computing, 2023

Multimodal Emotion Recognition in Noisy Environment Based on Progressive Label Revision.
Proceedings of the 31st ACM International Conference on Multimedia, 2023

Learning Local to Global Feature Aggregation for Speech Emotion Recognition.
Proceedings of the 24th Annual Conference of the International Speech Communication Association, 2023

Time-Frequency Transformer: A Novel Time Frequency Joint Learning Method for Speech Emotion Recognition.
Proceedings of the Neural Information Processing - 30th International Conference, 2023

Audio-Visual Group-based Emotion Recognition using Local and Global Feature Aggregation based Multi-Task Learning.
Proceedings of the 25th International Conference on Multimodal Interaction, 2023

Deep Implicit Distribution Alignment Networks for Cross-Corpus Speech Emotion Recognition.
Proceedings of the IEEE International Conference on Acoustics, 2023

2022
Progressive distribution adapted neural networks for cross-corpus speech emotion recognition.
Frontiers Neurorobotics, September, 2022

Adapting Multiple Distributions for Bridging Emotions from Different Speech Corpora.
Entropy, 2022

2019
Multimodal Voice Conversion Under Adverse Environment Using a Deep Convolutional Neural Network.
IEEE Access, 2019

Whisper to Normal Speech Conversion Using Sequence-to-Sequence Mapping Model With Auditory Attention.
IEEE Access, 2019

2018
Whisper to Normal Speech Based on Deep Neural Networks with MCC and F0 Features.
Proceedings of the 23rd IEEE International Conference on Digital Signal Processing, 2018
