This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Copyright (c) 2025 The Authors

ABSTRACT
Rapid and accurate assessment of damage following natural or man-made disasters is critical for effective emergency response and recovery planning. Traditional ground-based inspection methods are often time-consuming, hazardous, and resource-intensive. In contrast, multi-sensor vision fusion — combining data from optical satellite imagery, Synthetic Aperture Radar (SAR), LiDAR, and UAV-based sensors — offers a powerful alternative. By applying advanced computer vision and deep learning techniques to fused sensor data, it is possible to perform near real-time structural damage detection, inundation mapping, and debris estimation. This paper reviews recent literature on multi-sensor fusion for disaster assessment, proposes a unified real-time vision fusion pipeline, discusses practical and technical challenges, and outlines future research directions. The proposed methodology aims to improve the accuracy, robustness, and speed of damage assessment, supporting first responders and decision makers with timely, reliable information.
Keywords: Multi-Sensor Fusion; Disaster Damage Assessment; Remote Sensing; Computer Vision; Deep Learning; Optical and SAR Imagery; LiDAR and UAV Data.
Received: Jun 02, 2025
Revised: Jun 04, 2025
Accepted: Jul 10, 2025
Walaa Rahim Gouda
| Acknowledgment | None |
|---|---|
| Author Contribution | All authors contributed equally to this paper. All authors read and approved the final paper. |
| Conflicts of Interest | The authors declare no conflict of interest. |
| Funding | This research received no external funding. |