Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.11861/10701
DC Field | Value | Language
dc.contributor.author | Nawaz, Mehmood | en_US
dc.contributor.author | Khan, Sheheryar | en_US
dc.contributor.author | Daud, Muhammad | en_US
dc.contributor.author | Asim, Muhammad | en_US
dc.contributor.author | Anwar, Ghazanfar Ali | en_US
dc.contributor.author | Shahid, Ali Raza | en_US
dc.date.accessioned | 2025-02-06T08:33:29Z | -
dc.date.available | 2025-02-06T08:33:29Z | -
dc.date.issued | 2025 | -
dc.identifier.citation | IEEE Open Journal of Vehicular Technology, 2025, vol. 6, pp. 426-441. | en_US
dc.identifier.issn | 2644-1330 | -
dc.identifier.uri | http://hdl.handle.net/20.500.11861/10701 | -
dc.description.abstract | In autonomous vehicles (AVs), sensor fusion methods have proven effective in merging data from multiple sensors and enhancing perception capabilities. In the context of sensor fusion, the distinct strengths of multiple sensors, such as LiDAR, RGB, and thermal sensors, can be leveraged to mitigate the challenges imposed by extreme weather conditions. In this paper, we address multi-sensor fusion in AVs and present a comprehensive integration of a thermal sensor aimed at enhancing the cognitive robustness of AVs. Thermal sensors can detect objects and hazards that may be imperceptible to traditional visible-light sensors. When integrated with RGB and LiDAR sensors, the thermal sensor becomes highly beneficial for detecting and locating objects in adverse weather conditions. The proposed deep-learning-assisted multi-sensor fusion technique consists of two parts: (1) visual information fusion and (2) object detection using LiDAR, RGB, and thermal sensors. The visual fusion framework employs a convolutional neural network (CNN) inspired by a domain image fusion algorithm. The object detection framework uses a modified version of the YOLOv8 model, which exhibits high accuracy in real-time detection. In the YOLOv8 model, we adjusted the network architecture to incorporate additional convolutional layers and altered the loss function to enhance detection accuracy in foggy and rainy conditions. The proposed technique is effective and adaptable in challenging conditions such as night or dark mode, smoke, and heavy rain. The experimental results demonstrate enhanced efficiency and cognitive robustness compared to state-of-the-art fusion and detection techniques, as evidenced by tests on two public datasets (FLIR and TarDAL) and one private dataset (CUHK). | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | IEEE Open Journal of Vehicular Technology | en_US
dc.title | Improving autonomous vehicle cognitive robustness in extreme weather with deep learning and thermal camera fusion | en_US
dc.type | Peer Reviewed Journal Article | en_US
dc.identifier.doi | 10.1109/OJVT.2025.3529495 | -
item.fulltext | No Fulltext | -
crisitem.author.dept | Department of Applied Data Science | -
Appears in Collections: Applied Data Science - Publication
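As a rough illustration of the two-stage pipeline described in the abstract, the following is a minimal PyTorch sketch of the first stage: CNN-based fusion of a registered RGB and thermal image pair, whose output would then feed a detector. The FusionCNN name, the encoder widths, and the channel-concatenation fusion rule are illustrative assumptions; the abstract does not specify the paper's actual "domain image fusion" architecture or the modified YOLOv8 detector, so this is a sketch, not the authors' method.

    # Hypothetical sketch of the two-stage pipeline in the abstract:
    # (1) CNN-based RGB + thermal image fusion, (2) detection on the fused frame.
    import torch
    import torch.nn as nn

    class FusionCNN(nn.Module):
        """Minimal encoder-fuse-decoder network: RGB (3ch) + thermal (1ch) -> fused RGB."""
        def __init__(self, feat: int = 32):
            super().__init__()
            # Separate shallow encoders per modality (assumed design choice).
            self.rgb_enc = nn.Sequential(nn.Conv2d(3, feat, 3, padding=1), nn.ReLU())
            self.thermal_enc = nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU())
            # Fuse by channel concatenation, then decode back to a 3-channel image.
            self.decoder = nn.Sequential(
                nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(),
                nn.Conv2d(feat, 3, 3, padding=1), nn.Sigmoid(),
            )

        def forward(self, rgb: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
            # Concatenate per-modality feature maps along the channel axis.
            fused_feats = torch.cat([self.rgb_enc(rgb), self.thermal_enc(thermal)], dim=1)
            return self.decoder(fused_feats)

    if __name__ == "__main__":
        fuser = FusionCNN()
        rgb = torch.rand(1, 3, 480, 640)      # stand-in RGB frame
        thermal = torch.rand(1, 1, 480, 640)  # stand-in, spatially registered thermal frame
        fused = fuser(rgb, thermal)
        print(fused.shape)  # torch.Size([1, 3, 480, 640])
        # The fused frame would then be passed to the detection stage, e.g. a
        # YOLOv8 variant (the paper adds convolutional layers and alters the
        # loss function; those modifications are not reproduced here).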