Ke-Han Lu

ORCID: 0000-0002-5331-0534

According to our database, Ke-Han Lu authored at least 16 papers between 2021 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Building a Taiwanese Mandarin Spoken Language Model: A First Attempt.
CoRR, 2024

Dynamic-SUPERB Phase-2: A Collaboratively Expanding Benchmark for Measuring the Capabilities of Spoken Language Models with 180 Tasks.
CoRR, 2024

Developing Instruction-Following Speech Language Model Without Speech Instruction-Tuning Data.
CoRR, 2024

Codec-SUPERB @ SLT 2024: A lightweight benchmark for neural audio codec models.
CoRR, 2024

SpeechCaps: Advancing Instruction-Based Universal Speech Models with Multi-Talker Speaking Style Captioning.
CoRR, 2024

Speech-Copilot: Leveraging Large Language Models for Speech Processing via Task Decomposition, Modularization, and Program Generation.
CoRR, 2024

Listen and Speak Fairly: A Study on Semantic Gender Bias in Speech Integrated Large Language Models.
CoRR, 2024

DeSTA: Enhancing Speech Language Models through Descriptive Speech-Text Alignment.
CoRR, 2024

Investigating Zero-Shot Generalizability on Mandarin-English Code-Switched ASR And Speech-to-Text Translation of Recent Foundation Models with Self-Supervision and Weak Supervision.
Proceedings of the IEEE International Conference on Acoustics, 2024

Dynamic-Superb: Towards a Dynamic, Collaborative, and Comprehensive Instruction-Tuning Benchmark For Speech.
Proceedings of the IEEE International Conference on Acoustics, 2024

2023
HypR: A comprehensive study for ASR hypothesis revising with a reference corpus.
CoRR, 2023

Dynamic-SUPERB: Towards A Dynamic, Collaborative, and Comprehensive Instruction-Tuning Benchmark for Speech.
CoRR, 2023

2022
Non-Autoregressive ASR Modeling Using Pre-Trained Language Models for Chinese Speech Recognition.
IEEE ACM Trans. Audio Speech Lang. Process., 2022

A Context-Aware Knowledge Transferring Strategy for CTC-Based ASR.
Proceedings of the IEEE Spoken Language Technology Workshop, 2022

2021
A Transformer-based Cross-modal Fusion Model with Adversarial Training for VQA Challenge 2021.
CoRR, 2021

ntust-nlp-2 at ROCLING-2021 Shared Task: BERT-based semantic analyzer with word-level information.
Proceedings of the 33rd Conference on Computational Linguistics and Speech Processing, 2021
