
Biomedical Electronics & Signals LAB

Advisor: Prof. Hsiao-Lung Chan 詹曉龍

TEL: 886-3-2118800-5145

Email: chanhl@mail.cgu.edu.tw

🔬 Research Focus Areas

📌 Artificial Intelligence (AI), Computer Vision, and Physiological Signal Analysis

We apply artificial intelligence and deep learning to analyze real-time sensor data, physiological signals, and human behavioral patterns. This research focuses on intelligent perception and decision-making technologies for smart rehabilitation systems, interactive applications, and autonomous devices.


⚙️ Microcontroller-Based Biomedical Sensing and Electronic Circuit Systems

This research area centers on microcontrollers and embedded systems, integrating sensing, real-time processing, and control feedback to develop practical wearable devices and smart healthcare systems that operate reliably in real-world environments.


📡 Embedded AI, Edge AI, and TinyML System Integration

We focus on deploying AI models on embedded and low-power platforms, enabling real-time inference, intelligent decision-making, and interactive control under constrained computational and energy resources. This direction aligns with industry demands in smart devices, automotive systems, and edge computing.


🎯 AR/VR Immersive Interaction and Rehabilitation Training Systems

By combining AR/VR technologies with interactive feedback mechanisms, we design immersive, task-oriented rehabilitation training environments that enhance user engagement, motor learning, and training effectiveness.

The laboratory has long focused on research and practical development in wearable medical sensing, neural information processing, deep learning applications in medicine, and virtual reality (VR/AR)–based rehabilitation training systems. In recent years, the research scope has further expanded toward Embedded AI and intelligent system integration, emphasizing system-level implementation rather than isolated algorithm development.

The current research themes encompass sensor data acquisition, real-time signal processing, intelligent analytical algorithms, and interactive feedback system design, which are actively deployed on Embedded AI platforms, wearable devices, real-time control systems, and interactive intelligent systems. Through this approach, the laboratory develops fully functional system prototypes and reusable research platforms that operate under real-world constraints.

These platforms are designed to be extensible and reusable, serving as core foundations for subsequent research projects, field trials, and new grant proposals, thereby forming a sustainable and scalable ecosystem for both research and education.

To date, the laboratory has led 27 research projects funded by the National Science and Technology Council (NSTC), a two-phase (six-year) program plus an additional three-year sub-project under the Ministry of Economic Affairs (MOEA) Academia–Industry Technology Program, 15 research projects commissioned by Chang Gung Memorial Hospital, and three academia-based technology development projects entrusted by the Industrial Technology Research Institute (ITRI).

The laboratory’s research outcomes include 82 peer-reviewed journal publications and six granted patents. According to SCOPUS statistics as of January 2026, the total number of citations has reached 2,014, with an h-index of 24, demonstrating the laboratory’s sustained research capability and strong international visibility in the fields of biomedical engineering and intelligent healthcare.

Embedded AI Autonomous Intelligent Vehicle × B5G Real-Time Interactive Monitoring System — Official Achievement Announcement

Under the leadership of Department Chair Prof. Wen-Biau Lin and Prof. Hsiao-Lung Chan, this project represents a key outcome of the department’s “Next-Generation Mobile Communication Vertical Application Demonstration Program.”

The student team — Chia-Hao Fan, Chien-Hung Lin, and Truong Minh Triet — was awarded Third Place in the 2025 Mobile Communication Practical Competition, a policy-driven, high-intensity national competition (S+ level) organized by the Ministry of Education.


https://proj.moe.edu.tw/b5gmoe/News_Photo_Content.aspx?n=6192&s=19321

Laboratory Honors and Awards – 2025 (Selected Recent Achievements)

  • Embedded AI–Based Autonomous Intelligent Vehicle with 5G Monitoring System
    – 2025 Mobile Communication Practical Competition (Ministry of Education, Taiwan): Third Place. Team: Chia-Hao Fan, Chien-Hung Lin, Truong Minh Triet
    – National Intelligent Technology Application Competition: Champion (1st Place). Team: Chia-Hao Fan, Chien-Hung Lin, Truong Minh Triet

  • Dual-NPU Edge AI Autonomous Conversational Intelligent System Using TI C7x DSP + MMA On-Chip NPU and External Hailo-8 NPU
    – 2025 National AI Project and Innovation Competition: Outstanding Award. Team: Geng-Li Wang, Kai-Ting Wu, Zi-Qing Lin, Mohamed Osama Ahme

  • Smart Plantar Pressure Sensing Shoes × AR Interactive Walkway System
    – Landseed International Innovation Competition (Healthcare Track): Finalist. Team: Hsing-Fu Deng, Shang-Lin Tsai, Mohamed Osama Ahme
    – 2025 Precision Healthcare Demo Day: Merit Award. Team: Shang-Lin Tsai, Mohamed Osama Ahme, Chia-Hao Fan
    – 2025 Infinite Dimension Smart Healthcare Design Competition: Merit Award. Team: Hsing-Fu Deng, Shang-Lin Tsai, Mohamed Osama Ahme

Research topics

1.      Deep Learning in Medicine

Using physiological signals obtained from devices and virtual reality systems developed in our laboratory, or acquired directly from hospitals (e.g., EEG recordings from patients with epilepsy, Parkinson's disease, and sleep apnea), we apply advanced signal processing and deep learning algorithms to derive disease-related identification and assessment indicators.
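As a minimal illustration of the windowed preprocessing such pipelines start from, the sketch below segments a single-channel EEG trace into fixed-length epochs and computes the line-length feature (sum of absolute sample-to-sample differences), a classic screening statistic for spiky epileptiform activity. The sampling rate, window length, and synthetic signal are illustrative assumptions, not the published pipeline.

```python
import math

def epochs(signal, fs, win_sec):
    """Split a 1-D signal into non-overlapping epochs of win_sec seconds."""
    n = int(fs * win_sec)
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]

def line_length(epoch):
    """Sum of absolute sample-to-sample differences within one epoch."""
    return sum(abs(b - a) for a, b in zip(epoch, epoch[1:]))

fs = 256  # assumed sampling rate (Hz)
# Synthetic 4-second trace: low-amplitude background with one spiky burst.
sig = [0.1 * math.sin(2 * math.pi * 10 * t / fs) for t in range(4 * fs)]
for t in range(2 * fs, 2 * fs + 64):  # inject sharp alternating deflections
    sig[t] += 1.0 if t % 2 else -1.0
feats = [line_length(e) for e in epochs(sig, fs, 1.0)]
print(feats.index(max(feats)))  # 2 -- the epoch containing the burst stands out
```

In a deep-learning pipeline the same epochs would feed a classifier instead of a hand-crafted feature, but the segmentation step is identical.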


Deep neural networks for the detection of temporal-lobe epileptiform discharges from scalp electroencephalograms, Biomedical Signal Processing and Control, 84:104698, 2023.

https://www.sciencedirect.com/science/article/abs/pii/S1746809423001313


Convolutional neural network for individual identification using phase space reconstruction of electrocardiogram, Sensors, 23(6):3164, 2023.

https://www.mdpi.com/1424-8220/23/6/3164
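The phase space reconstruction named in the title is time-delay (Takens) embedding: each sample x[t] becomes the vector (x[t], x[t+τ], ..., x[t+(m−1)τ]). A minimal sketch, with the embedding dimension m and lag τ chosen for illustration only (the paper's actual settings may differ):

```python
def delay_embed(x, m, tau):
    """Return the list of m-dimensional delay vectors for series x."""
    span = (m - 1) * tau
    return [tuple(x[t + k * tau] for k in range(m))
            for t in range(len(x) - span)]

series = [0, 1, 2, 3, 4, 5, 6]
points = delay_embed(series, m=3, tau=2)
print(points)  # [(0, 2, 4), (1, 3, 5), (2, 4, 6)]
```

Applied to an ECG beat, the resulting point cloud can be rasterized into an image-like input for a convolutional network.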


Deep neural network for the detections of fall and physical activities using foot pressures and inertial sensing, Sensors 23(1):495, 2023.

https://www.mdpi.com/1424-8220/23/1/495

Enhancing plantar pressure distribution reconstruction with conditional generative adversarial networks from multi-region foot pressure sensing, Biomedical Signal Processing and Control, 100:107187, 2025.

https://www.sciencedirect.com/science/article/pii/S174680942401245X


2.      AR/VR in Rehabilitation: Multi-task, immersive visual feedback to evoke cerebral reorganization and motor recovery

Using wearable devices as a bridge, we design virtual reality (VR) rehabilitation scenarios tailored to patients with Parkinson's disease and stroke. These scenarios provide real-time feedback, enhance immersion and the variety of rehabilitation activities, promote brain plasticity, and improve motor function.


Event-related brain potentials reveal enhancing and compensatory mechanisms during dual neurocognitive and cycling tasks, BMC Sports Science, Medicine and Rehabilitation, 15:133, 2023.

https://bmcsportsscimedrehabil.biomedcentral.com/articles/10.1186/s13102-023-00749-6


Resistance-induced brain activity changes during cycle ergometer exercises, BMC Sports Science, Medicine and Rehabilitation, 13:27, 2021.

https://bmcsportsscimedrehabil.biomedcentral.com/articles/10.1186/s13102-021-00252-w


Evaluation of anticipatory postural adjustment before quantified weight shifting—System development and reliability test, Applied Sciences, 11(2):758, 2021.

https://www.mdpi.com/2076-3417/11/2/758


3.      Embedded AI, Edge AI, and TinyML–Based Heterogeneous Accelerated Autonomous and Interactive Intelligent Systems

Today, artificial intelligence no longer runs only in the cloud or on desktop computers—it increasingly runs directly on devices. Smartphones, vehicles, robots, wearable devices, and a wide range of sensors are now equipped with AI chips, enabling systems to see, listen, make decisions, and act in real time.

The core focus of our laboratory is Embedded AI, Edge AI, and TinyML, which means deploying AI directly into embedded systems so that it can operate in real time under limited computing power and energy constraints.

  • Embedded AI
    AI becomes an integral part of the system, directly influencing device behavior and control, rather than being used only for offline analysis.
  • Edge AI
    Data is processed locally on the device, eliminating the need to wait for cloud transmission. This enables fast response, low latency, and high reliability.
  • TinyML
    AI models run on very small, low-power devices, making them especially suitable for wearable devices and sensor nodes.
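The three tiers above can be made concrete with one of the core tricks TinyML runtimes rely on: post-training quantization of float weights to int8 so the multiply-accumulate loop runs in integer arithmetic. This is a hedged sketch with a symmetric per-tensor scale; real runtimes such as TensorFlow Lite Micro also use zero-points and per-channel scales.

```python
def quantize(values):
    """Map float values to int8 with a symmetric per-tensor scale (assumed scheme)."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def int8_dot(q_w, scale_w, q_x, scale_x):
    """Accumulate in integers, rescale to float once at the end."""
    acc = sum(a * b for a, b in zip(q_w, q_x))
    return acc * scale_w * scale_x

w = [0.5, -1.2, 0.8]    # toy dense-layer weights
x = [1.0, 0.25, -0.75]  # toy input activations
qw, sw = quantize(w)
qx, sx = quantize(x)
approx = int8_dot(qw, sw, qx, sx)
exact = sum(a * b for a, b in zip(w, x))
print(round(exact, 3), round(approx, 3))  # -0.4 -0.402
```

The integer accumulation is why an 8-bit model can run fast on a microcontroller without a floating-point unit, at the cost of a small, bounded quantization error.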

Within this framework, we develop intelligent systems that can move autonomously, make decisions independently, and interact naturally with humans. These systems integrate computer vision, speech interaction, autonomous navigation, and real-time monitoring, and leverage 5G connectivity to achieve low-latency data transmission and remote system management.

This represents the direction in which modern industry is moving:

👉 AI is no longer just about accuracy—it must operate reliably in real-world environments and on real devices.

If you are interested in AI, hardware, system design, vehicles, robotics, or smart devices, this research area represents one of the most essential and future-proof engineering skill sets for the next decade.

4.      Virtual mirror feedback with robot-assisted training for post-stroke hemiplegia and 5G applications

Integrating mirror feedback training with motion detection and assessment into a robot-assisted training platform, we design virtual mirror feedback scenarios for different types of movements to enhance patients' motivation during task execution. Therapists can adjust task difficulty to each patient's condition, balancing training demands against task completion, and physiological and behavioral assessment indicators can be computed directly from the training process. The diversity of tasks is expected to stimulate greater mirror-neuron activity and enhance functional connectivity between brain regions.
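One example of an assessment indicator computable directly from the training process is the per-repetition root-mean-square (RMS) amplitude of a myoelectric (EMG) channel. The repetition boundaries and signal below are invented for illustration; they are not the platform's actual segmentation.

```python
import math

def rms(window):
    """Root-mean-square amplitude of one window of samples."""
    return math.sqrt(sum(v * v for v in window) / len(window))

def per_rep_rms(emg, rep_bounds):
    """rep_bounds: list of (start, end) sample indices, one pair per repetition."""
    return [rms(emg[s:e]) for s, e in rep_bounds]

# Synthetic channel: quiet baseline, an active repetition, quiet again.
emg = [0.0] * 100 + [0.5, -0.5] * 50 + [0.0] * 100
scores = per_rep_rms(emg, [(0, 100), (100, 200), (200, 300)])
print([round(s, 2) for s in scores])  # [0.0, 0.5, 0.0]
```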

Myoelectric, myo-oxygenation, and myotonometry changes during robot-assisted bilateral arm exercises with varying resistances, Sensors, 24(4):1061, 2024.

https://www.mdpi.com/1424-8220/24/4/1061


Myoelectric analysis of upper-extremity muscles during robot-assisted bilateral wrist flexion-extension in subjects with poststroke hemiplegia, Clinical Biomechanics, 87:105412, 2021.

https://www.sciencedirect.com/science/article/abs/pii/S026800332100142X



5.      Laser-light visual cueing for patients with Parkinson’s disease

In 2017, we carried out a two-year National Science Council project, "Development and Application of Intelligent Assistive Walking Shoes for Parkinson's Disease," followed in 2019 by another two-year project, "Development of Intelligent Assistive Devices for Independent Walking and Autonomous Rehabilitation Training at Home for Parkinson's Disease Patients." The results have been covered by nine media outlets, including the Economic Daily News, the Central News Agency, ETtoday Health Cloud, and the China Broadcasting Corporation, and in 2022 the research was featured in special reports by Taiwan Television and Next TV.

"Magic Shoes Assist Walking: Smart Assistive Device for Parkinson’s Patients [Discovering Science]"

https://www.youtube.com/watch?v=H2NjVm5kd_M


"News Insights Episode 66: The Sports Industry – A New Blue Ocean (Video Duration: 30:12 ~ 33:48)"

https://www.youtube.com/watch?v=NzRFqB8m0GA&t=2034s


Laser-light cueing shoes with integrated foot pressure and inertial sensing for investigating the impact of visual cueing on gait characteristics in Parkinson’s disease individuals, Frontiers in Bioengineering and Biotechnology, 12:1334403, 2024.

https://www.frontiersin.org/journals/bioengineering-and-biotechnology/articles/10.3389/fbioe.2024.1334403/full


Swing limb detection using a convolutional neural network and a sequential hypothesis test based on foot pressure data during gait initialization in individuals with Parkinson's disease, Physiological Measurement, 45:125004, 2024.

https://iopscience.iop.org/article/10.1088/1361-6579/ad9af5
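The sequential hypothesis test mentioned in the title can be sketched as a classic sequential probability ratio test (SPRT): log-likelihood ratios of successive foot pressure samples are accumulated until an upper or lower threshold is crossed. The Gaussian likelihoods, means, and thresholds below are assumptions for illustration, not the paper's fitted model.

```python
import math

def gauss_loglik(x, mu, sigma):
    """Log-likelihood of x under a Gaussian with mean mu and std sigma."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def sprt(samples, mu_swing, mu_stance, sigma, upper=2.0, lower=-2.0):
    """Return ('swing'|'stance'|'undecided', number of samples consumed)."""
    llr = 0.0
    for i, x in enumerate(samples, 1):
        llr += gauss_loglik(x, mu_swing, sigma) - gauss_loglik(x, mu_stance, sigma)
        if llr >= upper:
            return "swing", i
        if llr <= lower:
            return "stance", i
    return "undecided", len(samples)

# Low pressure readings suggest the foot is unloaded, i.e. in swing.
decision, n = sprt([0.1, 0.2, 0.1, 0.15], mu_swing=0.1, mu_stance=0.5, sigma=0.3)
print(decision, n)  # swing 3
```

The appeal of the sequential form is that the decision arrives as early as the evidence allows, rather than after a fixed-length window.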



6.      Sleep apnea detection

In cooperation with Huijia Technology Co., Ltd., we developed a multi-parameter physiological signal monitoring system with recognition capabilities for sleep quality analysis. The collaboration has produced one article in the SCI-indexed journal Physiological Measurement and one patent filing.


Sleep apnea assessment using declination duration-based global metrics from unobtrusive fiber optic sensors, Physiological Measurement, 40:075005, 2019.

https://iopscience.iop.org/article/10.1088/1361-6579/ab21b5/meta
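A declination-duration style metric can be sketched as follows: find the maximal runs where a respiratory-effort envelope stays below a fraction of its baseline amplitude and report each run's length. The 50% threshold and the synthetic envelope are illustrative assumptions only, not the published algorithm's parameters.

```python
def declination_runs(signal, baseline, frac=0.5):
    """Return lengths (in samples) of maximal runs below frac * baseline."""
    thresh = frac * baseline
    runs, count = [], 0
    for v in signal:
        if abs(v) < thresh:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    return runs

# Effort envelope: normal breathing, a shallow (apneic) stretch, recovery.
env = [1.0] * 5 + [0.2] * 8 + [1.0] * 5 + [0.3] * 3 + [1.0] * 2
print(declination_runs(env, baseline=1.0))  # [8, 3]
```

A global severity index could then be derived from the run-length distribution, e.g. counting runs longer than a clinically motivated minimum duration.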


The Respiratory Fluctuation Index: a global metric of nasal airflow or thoracoabdominal wall movement time series to diagnose obstructive sleep apnea, Biomedical Signal Processing and Control, 49:250-262, 2019.

https://www.sciencedirect.com/science/article/abs/pii/S1746809418303148


Instantaneous respiratory estimation from thoracic impedance by empirical mode decomposition, Sensors, 15:16372-16387, 2015.

https://www.mdpi.com/1424-8220/15/7/16372