[Image: screenshot of a CARLA synthetic dataset with RSU perception ground truth for algorithm validation]

Intelligent, connected road infrastructure plays a central role in the deployment of Cooperative Intelligent Transportation Systems (C-ITS). It participates in the analysis of traffic situations and supports connected vehicles and other traffic participants in making the right decisions to improve road safety.

An essential task in this traffic situation analysis pipeline is the detection of dynamic objects.

Multiple sensor technologies have been investigated for this task. While radars can exploit the Doppler effect to detect moving objects, their sparse detection capability limits accurate localisation. Cameras can provide high classification accuracy, but their limited 3D localisation capability affects their use in this task. LIDAR (LIght Detection And Ranging) sensors have the advantage of providing 3D Point Clouds (PCs) with much higher density and angular precision, which is beneficial in dense urban traffic situations.

However, most LIDAR-based approaches that reach competitive accuracy levels are based on deep neural networks, which require high-performance GPUs consuming up to hundreds of watts of power to run in real time. Alternatively, grid-based approaches are less energy-intensive, but to meet real-time constraints, existing approaches still need GPUs whose power consumption can reach up to 10 times that of the LIDAR device itself.

Intelligent road infrastructure, such as multi-sensor Road Side Units (RSUs), typically carries one or more smart cameras that perform object detection and classification. Associating these cameras with a range sensor both compensates for each sensor type's limitations (e.g., dependence on lighting conditions for cameras, lower classification capability for range sensors) and improves the precision of speed estimates for moving objects. Finally, the multi-sensor association also enables new functionalities, such as an increase of the RSU's field of view and the detection of RSU malfunctions due to technical failure or malicious intervention.
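To make the camera–range association concrete, here is a minimal Python sketch of one common fusion pattern: projecting LIDAR points into the camera image and attaching a range estimate to each 2D detection. This illustrates the general principle only, not SELFY's actual implementation; the function names, the calibration inputs (T_cam_from_lidar, K), and the median-depth heuristic are all assumptions made for the example.

```python
import numpy as np

def project_points(points_lidar, T_cam_from_lidar, K):
    """Project 3D LIDAR points into the camera image plane.

    points_lidar:      (N, 3) points in the LIDAR frame.
    T_cam_from_lidar:  (4, 4) rigid transform from the LIDAR to the camera frame.
    K:                 (3, 3) camera intrinsic matrix.
    Returns (N, 2) pixel coordinates and (N,) depths in the camera frame.
    """
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]  # points in the camera frame
    depths = pts_cam[:, 2]
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                      # perspective division
    return uv, depths

def range_for_detection(bbox, uv, depths):
    """Median depth of the LIDAR points that fall inside a 2D detection box.

    bbox: (u_min, v_min, u_max, v_max) from the smart camera's object detector.
    """
    u_min, v_min, u_max, v_max = bbox
    inside = ((uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) &
              (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max) &
              (depths > 0.0))                        # keep only points in front of the camera
    return float(np.median(depths[inside])) if inside.any() else None
```

In this pattern, the camera contributes classification while the LIDAR contributes 3D localisation, which is exactly the complementarity described above.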

But how can sensor fusion be performed efficiently on resource-constrained (for instance, solar-powered) infrastructure?

What are we doing within the SELFY project?

Within the SELFY EU project, we are developing a new lightweight approach for detecting the points that belong to dynamic obstacles within LIDAR point clouds.

Thanks to its low complexity, the algorithm can be used either to enable near-sensor embedded functionalities or to enhance the capabilities of intelligent infrastructure in the C-ITS context. Experimental results on the real-world TUMTraf Intersection Dataset show that the proposed approach can run in real time on an ARM Cortex-A9 CPU, while still reaching a detection precision of 69.1%, which is consistent with the state-of-the-art performance of deep neural network-based approaches.
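The exact algorithm is detailed in the paper referenced at the end of this post. As a rough, hypothetical illustration of the general grid-based idea, the Python sketch below labels points as dynamic when they fall into grid cells that a background model (built from previously observed scans) rarely saw occupied. The cell size, the occupancy threshold, and the function names are assumptions for this example, not values from the SELFY approach.

```python
import numpy as np

def build_background(frames, cell_size=0.2):
    """Count how often each 2D grid cell is occupied across background frames.

    frames: iterable of (N, 3) LIDAR scans expressed in the fixed RSU frame.
    Returns the per-cell occupancy counts and the number of frames used.
    """
    counts, n_frames = {}, 0
    for scan in frames:
        cells = {tuple(c) for c in np.floor(scan[:, :2] / cell_size).astype(np.int64)}
        for c in cells:
            counts[c] = counts.get(c, 0) + 1
        n_frames += 1
    return counts, n_frames

def dynamic_point_mask(scan, counts, n_frames, cell_size=0.2, occ_threshold=0.2):
    """Boolean mask over a new scan: True where a point falls into a cell that
    the background model saw occupied in fewer than occ_threshold of frames."""
    cells = np.floor(scan[:, :2] / cell_size).astype(np.int64)
    rates = np.array([counts.get(tuple(c), 0) / n_frames for c in cells])
    return rates < occ_threshold
```

Because such grid lookups involve only integer indexing and counting, this family of methods avoids the heavy matrix arithmetic of deep networks, which is what makes CPU-only real-time operation plausible.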

This algorithm is used in the Sensor Fusion & Anomaly Detection tool (SFAD), part of SELFY's Situational Awareness and Collaborative Perception macro-tool, which integrates the data from range sensors and smart cameras installed on the RSU infrastructure into a unified picture of the surrounding environment. By assessing the coherence among the various sensor sources within the RSU, the SFAD can detect internal anomalies.

Moreover, it checks the consistency between the RSU's consolidated perception and data received from the C-ITS, further supporting the cooperative system in the detection of anomalies.
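As an illustration of what such a coherence assessment could look like, the hypothetical sketch below scores how well the detections of one source are confirmed by another (two sensors within the RSU, or the RSU's perception versus C-ITS data) and flags a persistent drop in that score. The distance threshold, window length, and function names are assumptions for the example, not SFAD's actual interface.

```python
from math import hypot

def coherence_score(dets_a, dets_b, max_dist=2.0):
    """Fraction of detections from source A that are confirmed by source B.

    dets_a, dets_b: lists of (x, y) object positions in a common frame.
    A detection counts as confirmed if source B reports an object
    within max_dist metres of it.
    """
    if not dets_a:
        return 1.0  # nothing to contradict
    confirmed = sum(
        any(hypot(ax - bx, ay - by) <= max_dist for (bx, by) in dets_b)
        for (ax, ay) in dets_a
    )
    return confirmed / len(dets_a)

def is_anomalous(score_history, window=10, threshold=0.5):
    """Flag a persistent coherence drop, e.g. a sensor fault or tampering."""
    recent = score_history[-window:]
    return len(recent) == window and sum(recent) / window < threshold
```

Requiring the drop to persist over a window, rather than reacting to a single frame, helps distinguish genuine faults or tampering from momentary occlusions or detector misses.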

More details about the approach can be found in the following paper:

T. Rakotovao, P. Ménard, C. Bernier, “Low Complexity Dynamic Obstacle Detection for Intelligent Road Infrastructure,” IEEE Sensors Conference, Kobe, Japan, 2024.

Authors: CEA