【AI Seminar】April 21, 2026 – Transforming Assistive Oral Communication Technologies through Artificial Intelligence – Prof. Yu Tsao
2026.04.08
Topic: Transforming Assistive Oral Communication Technologies through Artificial Intelligence
Speaker: Prof. Yu Tsao, Research Fellow (Professor) and Deputy Director, Research Center for Information Technology Innovation, Academia Sinica
Time: April 21, 2026 (Tuesday), 2:00–4:00 p.m.
Venue: The Management Building, 11F, AI Lecture Hall
Join Online: https://reurl.cc/grQlk4 or scan the QR code on the poster
About the Speaker:
Yu Tsao (Senior Member, IEEE) received the B.S. and M.S. degrees in Electrical Engineering from National Taiwan University, Taipei, Taiwan, in 1999 and 2001, respectively, and the Ph.D. degree in Electrical and Computer Engineering from the Georgia Institute of Technology, Atlanta, GA, USA, in 2008. From 2009 to 2011, he was a Researcher at the National Institute of Information and Communications Technology (NICT), Tokyo, Japan, where he conducted research and product development in multilingual speech-to-speech translation systems, focusing on automatic speech recognition. He is currently a Research Fellow (Professor) and the Deputy Director at the Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan. He also holds a joint appointment as a Professor in the Department of Electrical Engineering at Chung Yuan Christian University, Taoyuan, Taiwan. His research interests include assistive oral communication technologies, audio coding, and bio-signal processing. He serves as an Associate Editor for IEEE Transactions on Consumer Electronics and IEEE Signal Processing Letters. He received the Outstanding Research Award from Taiwan's National Science and Technology Council (NSTC) and the 2025 IEEE Chester W. Sall Memorial Award, and was the corresponding author of a paper that won the 2021 IEEE Signal Processing Society Young Author Best Paper Award.
Abstract:
This presentation provides an overview of AI-driven assistive oral communication technologies, encompassing both assistive speaking and assistive hearing domains. The first part focuses on assistive speaking technologies, highlighting intelligent diagnostic and enhancement frameworks for speech disorders. It introduces machine learning approaches for pathological speech classification, severity assessment, and targeted enhancement for conditions such as dysarthria, post-surgical speech impairment, and electrolaryngeal speech. The second part addresses assistive hearing, presenting recent advances in AI-based diagnostic and signal processing techniques for hearing disorders. Representative applications include automated detection of otitis media with effusion, as well as AI-driven speech generation and objective quality assessment methods for hearing aids and cochlear implants. By integrating speech enhancement, assessment, and generation within a unified AI framework, this presentation demonstrates the potential of neural-based technologies to enhance communication effectiveness and accessibility, while underscoring the importance of interdisciplinary research in advancing next-generation, human-centered assistive systems.
Organizers: College of Intelligent Computing & Artificial Intelligence Research Center
※ No registration needed.