Road Network Mapping from Multispectral Satellite Imagery: Leveraging Deep Learning and Spectral Bands


Submitted to AGILE24

Abstract

Updating road networks in rapidly changing urban landscapes is an important but difficult task, often hampered by the complexity and error-proneness of manual mapping processes. Traditional methods that rely primarily on RGB satellite imagery struggle with environmental obstructions and varying road structures, which limits their use for global-scale data processing. This paper presents an approach that leverages deep learning and multispectral satellite imagery to improve road network extraction and mapping. By exploring U-Net models with DenseNet backbones and integrating different spectral bands, we apply semantic segmentation and extensive post-processing techniques to create georeferenced road networks. We trained two identical models to evaluate the impact of using images composed of specifically selected multispectral bands rather than conventional RGB images. Our experiments demonstrate the positive impact of multispectral bands, improving Intersection over Union (IoU) by 6.5%, F1 by 5.4%, and the newly proposed relative graph edit distance (relGED) and topology metrics by 2.2% and 2.6%, respectively.

Data

To use the code in this repository, download the required data from SpaceNet Challenge 3 (https://spacenet.ai/spacenet-roads-dataset/) via AWS. The SpaceNet Dataset by SpaceNet Partners is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. SpaceNet was accessed on 05.01.2023 from https://registry.opendata.aws/spacenet

Software

The analysis and results of this research were produced with Python and several software packages, including:

- tensorflow
- networkx
- Pillow, cv2
- GDAL, rasterio, shapely
- APLS

For a fully reproducible environment and the exact software versions, refer to 'environment.yml'. All data is licensed under CC BY 4.0; all software files are licensed under the MIT License.

Reproducibility

To execute the scripts and train your own model, first refer to the 'Data' section of this file and download the data from the providers. Apply the preprocessing steps from 'preprocessing.py'. Note that, to avoid redundancy, some preprocessing steps are not included in this repository: the conversion of GeoJSON road data into training images, the reduction of satellite images to an 8-bit format, and their conversion into '.png' files. These steps can be carried out by applying and, if necessary, modifying the APLS library, which is publicly available at https://github.com/CosmiQ/apls.

Apply the preprocessing to both RGB and MS images. To generate the latter, execute the 'ms_channel_seperation.py' script while specifying the desired multispectral channels. Execute the 'train_model.py' script to train your semantic segmentation model, and apply the post-processing procedures with 'postprocessing.py'. Generate the metrics results by executing 'evaluation.py'.

To save storage space, not all of the data used is made available in this repository. Please refer to the 'Data' section of this file to access and download the data from the providers. Exemplary preprocessed training data (100 split images of Las Vegas) is included in the folders './data/tiled512/small_test_sample/ms/' and './data/tiled512/small_test_sample/rgb/'. Post-processed results are provided in the corresponding folders './results/UNetDense_MS_512/' and './results/UNetDense_RGB_512/'.
These include the stitched and recombined images, without any post-processing applied, as well as the extracted and post-processed graphs as '.pickle' files. This data was used to calculate the metrics presented in the paper: Intersection over Union (IoU), F1 score, relGED, and the topology metric. The figures included in the paper can be reproduced by saving the images created during the preprocessing, training, and post-processing steps. To generate the plots of the resulting graphs, refer to the corresponding functions and enable the boolean parameter 'plot'. The bounding boxes seen in the figures were drawn manually and serve only an explanatory purpose.

Please be advised that file paths and the folder structure have to be adapted manually in the scripts to suit the user's setup. Make sure to select uniform file paths and to store the results in folders named after their model. Furthermore, the code is not meant to be executed from the terminal; running the individual scripts in an IDE is recommended. Hedged example sketches of the individual workflow steps (data download, 8-bit conversion, tiling, channel selection, training, graph extraction, and evaluation) follow below.
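As a starting point, the following is a minimal sketch of fetching SpaceNet 3 data from the public AWS bucket with boto3. The bucket name 'spacenet-dataset' is listed on the Open Data registry, but the key prefix below and the assumption that anonymous (unsigned) access suffices are not confirmed by this repository; verify both against the SpaceNet documentation.

```python
# Hedged sketch: download SpaceNet 3 road data from the public AWS bucket.
# The key prefix is an assumption; check it against the SpaceNet docs.
import os

import boto3
from botocore import UNSIGNED
from botocore.config import Config

BUCKET = "spacenet-dataset"
PREFIX = "spacenet/SN3_roads/"  # assumed prefix for the roads dataset

# Anonymous (unsigned) access; authenticated access may be required instead.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        target = os.path.join("data", "raw", key.replace("/", os.sep))
        os.makedirs(os.path.dirname(target), exist_ok=True)
        s3.download_file(BUCKET, key, target)
        print("downloaded", key)
```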
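The 8-bit reduction and '.png' conversion are delegated to the APLS library, as described above. Purely for illustration, a stand-alone percentile-based rescale with rasterio and Pillow might look as follows; the 2nd/98th percentile clipping is an illustrative assumption, not necessarily the APLS behaviour.

```python
# Hedged sketch: rescale a 16-bit GeoTIFF to 8 bit and save it as a PNG.
# The percentile clipping values are an illustrative assumption.
import numpy as np
import rasterio
from PIL import Image

def to_8bit_png(src_path: str, dst_path: str,
                lo_pct: float = 2.0, hi_pct: float = 98.0) -> None:
    with rasterio.open(src_path) as src:
        bands = src.read()  # shape: (bands, rows, cols), e.g. uint16

    scaled = np.empty_like(bands, dtype=np.uint8)
    for i, band in enumerate(bands):
        lo, hi = np.percentile(band, [lo_pct, hi_pct])
        clipped = np.clip(band, lo, hi)
        scaled[i] = ((clipped - lo) / max(hi - lo, 1) * 255).astype(np.uint8)

    # PNG expects (rows, cols, channels); keep at most three bands here.
    img = np.transpose(scaled[:3], (1, 2, 0))
    Image.fromarray(img).save(dst_path)

to_8bit_png("input_rgb.tif", "output_rgb.png")  # illustrative file names
```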
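The folder name 'tiled512' suggests that the preprocessed scenes are split into 512x512 tiles. A minimal sketch of such a tiling step is shown below; the tile naming scheme and the silent dropping of edge remainders are illustrative choices, not the repository's exact behaviour.

```python
# Hedged sketch: split a large image into non-overlapping 512x512 tiles.
# Tile naming and edge handling are illustrative choices.
import os

from PIL import Image

def tile_image(src_path: str, out_dir: str, tile: int = 512) -> None:
    os.makedirs(out_dir, exist_ok=True)
    img = Image.open(src_path)
    w, h = img.size
    base = os.path.splitext(os.path.basename(src_path))[0]
    for top in range(0, h - tile + 1, tile):
        for left in range(0, w - tile + 1, tile):
            box = (left, top, left + tile, top + tile)
            img.crop(box).save(
                os.path.join(out_dir, f"{base}_{top}_{left}.png"))

tile_image("output_rgb.png", "./data/tiled512/small_test_sample/rgb/")
```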
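'ms_channel_seperation.py' composes an image from selected multispectral channels. A sketch of the core operation, reading chosen bands from an 8-band image with rasterio, is given below; the band indices are placeholders, not the band combination used in the paper.

```python
# Hedged sketch: extract selected bands from an 8-band multispectral image
# and stack them into a 3-channel array. The indices are placeholders.
import numpy as np
import rasterio

def select_ms_channels(src_path: str,
                       band_indices=(5, 3, 2)) -> np.ndarray:
    """Read the given 1-indexed bands and return a (rows, cols, 3) array."""
    with rasterio.open(src_path) as src:
        bands = src.read(list(band_indices))  # shape: (3, rows, cols)
    return np.transpose(bands, (1, 2, 0))

composite = select_ms_channels("input_ms.tif")  # illustrative file name
print(composite.shape)
```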
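'train_model.py' trains a U-Net with a DenseNet backbone. One way to express such a model in TensorFlow is via the segmentation_models Keras package; whether the repository uses this package, DenseNet-121 specifically, or the loss and hyperparameters below is an assumption.

```python
# Hedged sketch: a U-Net with a DenseNet-121 encoder for binary road
# segmentation, built with the segmentation_models package. The backbone,
# loss, and hyperparameters are assumptions, not the repository's settings.
import numpy as np
import segmentation_models as sm

sm.set_framework("tf.keras")

model = sm.Unet(
    backbone_name="densenet121",
    input_shape=(512, 512, 3),
    classes=1,
    activation="sigmoid",
    encoder_weights="imagenet",
)
model.compile(
    optimizer="adam",
    loss=sm.losses.bce_jaccard_loss,  # BCE plus an IoU surrogate
    metrics=[sm.metrics.iou_score, sm.metrics.f1_score],
)

# x: (N, 512, 512, 3) float32 tiles, y: (N, 512, 512, 1) binary road masks.
x = np.random.rand(4, 512, 512, 3).astype("float32")
y = (np.random.rand(4, 512, 512, 1) > 0.5).astype("float32")
model.fit(x, y, batch_size=2, epochs=1)
```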
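'postprocessing.py' turns the stitched prediction masks into georeferenced graphs. A heavily simplified sketch of the core idea, skeletonizing the binary mask and linking neighbouring skeleton pixels in a networkx graph, is shown below; the repository's actual post-processing, georeferencing, and graph simplification are more involved.

```python
# Hedged sketch: extract a pixel-level road graph from a binary mask by
# skeletonizing it and connecting 8-adjacent skeleton pixels. This is a
# simplification of the repository's post-processing, without georeferencing.
import networkx as nx
import numpy as np
from skimage.morphology import skeletonize

def mask_to_graph(mask: np.ndarray) -> nx.Graph:
    skel = skeletonize(mask > 0)
    graph = nx.Graph()
    rows, cols = np.nonzero(skel)
    pixels = set(zip(rows.tolist(), cols.tolist()))
    for r, c in pixels:
        graph.add_node((r, c))
        # Checking only four of the eight directions covers every
        # neighbouring pair exactly once.
        for dr, dc in ((-1, -1), (-1, 0), (-1, 1), (0, -1)):
            if (r + dr, c + dc) in pixels:
                graph.add_edge((r, c), (r + dr, c + dc))
    return graph

mask = np.zeros((64, 64), dtype=np.uint8)
mask[30:34, 5:60] = 1  # a thick horizontal "road"
g = mask_to_graph(mask)
print(g.number_of_nodes(), g.number_of_edges())
```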
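Finally, 'evaluation.py' computes the reported metrics. Pixel-wise IoU and F1 can be sketched directly with numpy, and the provided '.pickle' graphs can be loaded for the graph-based measures; the file name in the loading step is illustrative, and relGED and the topology metric are defined in the paper itself.

```python
# Hedged sketch: pixel-wise IoU and F1 between binary masks, plus loading
# one of the provided '.pickle' graphs. relGED and the topology metric are
# graph-based; see the paper for their exact definitions.
import os
import pickle

import numpy as np

def iou_f1(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    pred, truth = pred > 0, truth > 0
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    iou = tp / max(tp + fp + fn, 1)
    f1 = 2 * tp / max(2 * tp + fp + fn, 1)
    return float(iou), float(f1)

pred = np.random.rand(512, 512) > 0.5
truth = np.random.rand(512, 512) > 0.5
print(iou_f1(pred, truth))

# Loading an extracted graph from the results folders; the file name below
# is a hypothetical example, not an actual file in this repository.
path = "./results/UNetDense_MS_512/example_graph.pickle"
if os.path.exists(path):
    with open(path, "rb") as f:
        graph = pickle.load(f)
```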

Identifier
DOI https://doi.org/10.48436/t4j1h-rfd81
Related Identifier Requires https://doi.org/10.48550/arXiv.1807.01232
Related Identifier IsVersionOf https://doi.org/10.48436/n3es7-6qg91
Metadata Access https://researchdata.tuwien.ac.at/oai2d?verb=GetRecord&metadataPrefix=oai_datacite&identifier=oai:researchdata.tuwien.ac.at:t4j1h-rfd81
Provenance
Creator Hollendonner, Samuel; Alinaghi, Negar; Giannopoulos, Ioannis
Publisher TU Wien
Publication Year 2024
Rights Creative Commons Attribution 4.0 International; MIT License; https://creativecommons.org/licenses/by/4.0/legalcode; https://opensource.org/licenses/MIT
OpenAccess true
Contact tudata(at)tuwien.ac.at
Representation
Resource Type Software
Version 1.0.0
Discipline Other