Paper FrBT1.5
Esparza, Daniela (Center for Research in Optics), Trejo, Sergio (Center for Research in Optics), Flores, Gerardo (Texas A&M International University)
Neuromorphic Motion Segmentation with Spiking Neural Networks for Robotic Perception
Scheduled for presentation during the Regular Session "Automotive, flying and maritime systems" (FrBT1), Friday, July 18, 2025, 15:20–15:40, Room 105
Joint 10th IFAC Symposium on Mechatronic Systems and 14th Symposium on Robotics, July 15-18, 2025, Paris, France
This information is tentative and subject to change. Compiled on August 2, 2025
Keywords: Robot Navigation, Programming, and Vision; Data-Based Methods and Machine Learning; Sensors and Measurement Systems
Abstract
Event-based cameras have emerged as a promising technology for robotic applications, offering high temporal resolution and low latency. Motion segmentation is a critical task in dynamic environments, enabling essential capabilities such as obstacle avoidance, SLAM, and autonomous navigation. In this work, we propose a spiking neural network (SNN) architecture based on a U-Net encoder-decoder to perform motion segmentation using event streams captured by event-based cameras. The network processes spatio-temporal data efficiently and is trained with a combination of Dice loss and binary cross-entropy to handle imbalanced segmentation masks. We leverage the EVIMO dataset, which provides pixel-wise motion labels, egomotion data, and event streams, to train and evaluate the model. Our method achieves an Intersection over Union (IoU) score of 51.6%, comparable to state-of-the-art approaches such as SpikeMS, while maintaining computational efficiency. Qualitative results show accurate pixel-wise segmentation of dynamic objects, with clear visual alignment between predictions and ground truth. These results highlight the potential of our approach for deployment in neuromorphic robotic systems requiring real-time processing.
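The abstract mentions training with a combination of Dice loss and binary cross-entropy, and evaluation via Intersection over Union. A minimal NumPy sketch of those three quantities is shown below; the equal 0.5/0.5 weighting of the two loss terms is an assumption for illustration, not the paper's reported configuration:

```python
import numpy as np

def dice_bce_loss(pred, target, w_dice=0.5, w_bce=0.5, eps=1e-7):
    """Combined soft-Dice + binary cross-entropy loss for binary
    segmentation masks. `pred` holds per-pixel probabilities in [0, 1],
    `target` holds {0, 1} labels. The 0.5/0.5 weighting is illustrative."""
    pred = np.clip(pred, eps, 1.0 - eps)
    # Soft Dice loss: 1 - 2*|P . T| / (|P| + |T|); robust to class imbalance
    inter = np.sum(pred * target)
    dice = 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
    # Binary cross-entropy averaged over all pixels
    bce = -np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    return w_dice * dice + w_bce * bce

def iou(pred_mask, target_mask, eps=1e-7):
    """Intersection over Union between two boolean masks."""
    inter = np.logical_and(pred_mask, target_mask).sum()
    union = np.logical_or(pred_mask, target_mask).sum()
    return (inter + eps) / (union + eps)
```

The Dice term counters the foreground/background imbalance typical of motion masks (moving objects occupy few pixels), while the BCE term keeps per-pixel gradients well behaved.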