PICU Face and Thoracoabdominal Detection Using Self-Supervised Divided Space–Time Mamba

Research output: Contribution to journal › Journal Article › peer-review

Abstract

Non-contact vital sign monitoring in Pediatric Intensive Care Units (PICUs) is challenged by frequent occlusions, data scarcity, and the need for temporally stable anatomical tracking to extract reliable physiological signals. Traditional detectors produce unstable tracking, while video transformers are too computationally intensive for deployment on resource-limited clinical hardware. We introduce Divided Space–Time Mamba, an architecture that decouples spatial and temporal feature learning using State Space Models to achieve linear-time complexity, reducing computational cost by over 92% relative to standard transformers. To handle data scarcity, we employ self-supervised pre-training with masked autoencoders on over 50,000 domain-specific video clips and further enhance robustness with multimodal RGB-D input. Our model achieves 0.96 mAP@0.5, 0.62 mAP@0.5:0.95, and 0.95 rotated IoU. Operating at 23 FPS (43 ms latency), our method is approximately 1.9× faster than VideoMAE and 5.7× faster than frame-wise YOLOv8, making it suitable for real-time clinical monitoring.
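
To give a concrete picture of the divided space–time factorization described in the abstract, the sketch below shows one way such a block could be structured in PyTorch: a simplified diagonal state-space scan is applied first across the spatial tokens of each frame and then across time for each token, so cost stays linear in sequence length. The class and parameter names (SimpleSSM, DividedSpaceTimeBlock) are illustrative assumptions rather than the authors' implementation, and the scan omits the input-dependent (selective) gating of a full Mamba layer.

# Illustrative sketch only; names and design details are assumptions, not the paper's code.
import torch
import torch.nn as nn


class SimpleSSM(nn.Module):
    """Diagonal linear state-space scan over the sequence axis (simplified stand-in for a Mamba layer)."""

    def __init__(self, dim: int, state: int = 16):
        super().__init__()
        self.A = nn.Parameter(torch.rand(dim, state) * -0.5)  # log-decay, initialised negative so exp(A) < 1
        self.B = nn.Linear(dim, dim * state, bias=False)       # input projection into per-channel states
        self.C = nn.Linear(dim * state, dim, bias=False)       # readout back to the token dimension

    def forward(self, x):                         # x: (batch, seq, dim)
        b, s, d = x.shape
        A = torch.exp(self.A)                     # (dim, state), decay factors in (0, 1]
        u = self.B(x).view(b, s, d, -1)           # (batch, seq, dim, state)
        h = torch.zeros(b, d, A.shape[-1], device=x.device)
        ys = []
        for t in range(s):                        # sequential scan: linear in sequence length
            h = A * h + u[:, t]
            ys.append(self.C(h.reshape(b, -1)))
        return torch.stack(ys, dim=1)             # (batch, seq, dim)


class DividedSpaceTimeBlock(nn.Module):
    """Apply an SSM over spatial tokens within each frame, then over time for each spatial token."""

    def __init__(self, dim: int):
        super().__init__()
        self.spatial = SimpleSSM(dim)
        self.temporal = SimpleSSM(dim)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):                         # x: (batch, time, tokens, dim)
        b, t, n, d = x.shape
        xs = x.reshape(b * t, n, d)               # spatial pass: each frame processed independently
        x = x + self.spatial(self.norm1(xs)).view(b, t, n, d)
        xt = x.permute(0, 2, 1, 3).reshape(b * n, t, d)  # temporal pass: each token tracked across frames
        x = x + self.temporal(self.norm2(xt)).view(b, n, t, d).permute(0, 2, 1, 3)
        return x


if __name__ == "__main__":
    clip = torch.randn(2, 8, 196, 64)             # (batch, frames, patch tokens, channels)
    print(DividedSpaceTimeBlock(64)(clip).shape)  # torch.Size([2, 8, 196, 64])

Because the spatial and temporal scans are applied separately, the per-block cost grows with T·N rather than (T·N)² as in joint space–time attention, which is the motivation for the factorized design summarized above.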

Original language: English
Article number: 1706
Journal: Life
Volume: 15
Issue number: 11
DOIs
Publication status: Published - Nov 2025
Externally published: Yes

Keywords

  • PICU
  • multimodal RGB-D
  • non-contact vital sign monitoring
  • self-supervised learning
  • state space models
