Road Network Mapping from Multispectral Satellite Imagery: Leveraging Deep Learning and Spectral Bands


Submitted to AGILE24

Abstract
Updating road networks in rapidly changing urban landscapes is an important but difficult task, often challenged by the complexity and error-proneness of manual mapping processes. Traditional methods that rely primarily on RGB satellite imagery struggle with environmental obstacles and varying road structures, which limits their use for global data processing. This paper presents an approach that uses deep learning and multispectral satellite imagery to improve road network extraction and mapping. By exploring U-Net models with DenseNet backbones and integrating different spectral bands, we apply semantic segmentation and extensive post-processing techniques to create georeferenced road networks. We trained two identical models to evaluate the impact of using images created from specially selected multispectral bands rather than conventional RGB images. Our experiments demonstrate the positive impact of using multispectral bands, improving Intersection over Union (IoU) by 6.5%, F1 by 5.4%, and the newly proposed relative graph edit distance (relGED) and topology metrics by 2.2% and 2.6%, respectively.

Data
To use the code in this repository, download the required data from SpaceNet Challenge 3 via AWS. The SpaceNet Dataset by SpaceNet Partners is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. SpaceNet was accessed on 05.01.2023 from https://registry.opendata.aws/spacenet

Software
The analysis and results of this research were produced with Python and several software packages, including:
- tensorflow
- networkx
- Pillow, cv2
- GDAL, rasterio, shapely
- APLS
For a fully reproducible environment and exact software versions, refer to 'environment.yml'. All data is licensed under CC BY 4.0; all software files are licensed under the MIT License.

Reproducibility
To execute the scripts and train your model, first refer to the 'Data' section of this file to download the data from the providers. A subsample of the whole dataset containing ten images from Las Vegas is included in the './code/data/' subdirectory. Refer to the file 'base_structure.txt' to learn more about the recommended file structure if you plan to apply these functions to more images. Execute the scripts in the following order:
- preprocessing.py
- train_model.py
- postprocessing.py
- evaluation.py
If you wish to conduct this experiment with multispectral instead of RGB images, execute the script ms_channel_seperation.py first and determine your selection of image bands in the 'channels' parameter of the function 'write_ms_image' (a simplified band-selection sketch follows below). In that case, also change the variables 'image_folder_name' and 'channel_selection' in preprocessing.py to switch between RGB and MS images. If multispectral images have been used, the prefix 'MS' must be included in the model name (using the prefix 'RGB' otherwise is highly recommended).
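The band selection can be illustrated with rasterio, which is among the listed dependencies. The following is only a minimal sketch under assumed file names and band indices; the function is a hypothetical stand-in and does not reproduce the repository's 'write_ms_image' implementation or the 8-bit conversion of the APLS tooling.

```python
# Minimal sketch (assumption): pick a subset of multispectral bands and write
# them as an 8-bit, georeferenced 3-band image. Band indices and file names
# are illustrative only.
import numpy as np
import rasterio

def select_and_write_bands(src_path, dst_path, channels=(5, 3, 2)):
    """Read the given 1-based band indices from a multispectral GeoTIFF,
    rescale each band to 0-255, and save a georeferenced output."""
    with rasterio.open(src_path) as src:
        bands = src.read(list(channels)).astype(np.float32)  # (bands, H, W)
        profile = src.profile

    # Per-band min-max scaling to 8 bit (a simple stand-in for the
    # conversion performed during preprocessing).
    lo = bands.min(axis=(1, 2), keepdims=True)
    hi = bands.max(axis=(1, 2), keepdims=True)
    scaled = ((bands - lo) / np.maximum(hi - lo, 1e-6) * 255).astype(np.uint8)

    profile.update(count=len(channels), dtype="uint8")
    with rasterio.open(dst_path, "w", **profile) as dst:
        dst.write(scaled)

# Example call with hypothetical file names:
# select_and_write_bands("MUL_img1.tif", "MS_img1.tif", channels=(7, 5, 3))
```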
Additionally, the post-processed results referenced in the publication are provided in the folders './results/UNetDense_MS_512/' and './results/UNetDense_RGB_512/'. These include the stitched and recombined images, without any post-processing applied to them, as well as the extracted and post-processed graphs as '.pickle' files. This provided data was used to calculate the metrics Intersection over Union (IoU), F1 score, relGED, and the topology metric as presented in the publication (a minimal sketch of the pixel-based metrics is given at the end of this section).
If the whole dataset has been downloaded from SpaceNet, the additional pre-processing steps of generating ground truth images have to be executed. These include the conversion of GeoJSON road data into training images, the reduction of satellite images to an 8-bit format, and their conversion into '.png' files. These steps can be achieved by applying and, if necessary, modifying the APLS library, which is publicly available at https://github.com/CosmiQ/apls. An article thoroughly describing this process can be found at https://medium.com/the-downlinq/creating-training-datasets-for-the-spacenet-road-detection-and-routing-challenge-6f970d413e2f.
The figures included in the paper can be reproduced by saving the images created during the preprocessing, training, and post-processing steps. To generate the plots of the resulting graphs, refer to the corresponding functions and enable the boolean parameter 'plot'. Bounding boxes seen in the figures were drawn manually and serve only an explanatory purpose.
Please be advised that file paths and the folder structure have to be adapted manually in the scripts to suit the user's setup. Make sure to use uniform file paths and to store the results in folders named after their model. While the code can be executed from the terminal, parameter adjustments have to be made in the code itself, so running the individual scripts in an IDE is recommended. The setup used was a Windows PC with an NVIDIA graphics card and the corresponding CUDA version installed. Note that different GPU memory resources might impact the training process, possibly requiring a batch size reduction to avoid memory overflow.
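As a point of reference for the pixel-based metrics mentioned above, the sketch below computes IoU and F1 score from two binary masks with NumPy. It is an illustration only; array names are hypothetical, and 'evaluation.py' may differ in details such as thresholding or mask buffering.

```python
# Minimal sketch (assumption): pixel-based IoU and F1 for binary road masks.
import numpy as np

def iou_and_f1(pred, truth):
    """pred, truth: boolean or {0, 1} arrays of identical shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()

    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return iou, f1

# Example with two random 512 x 512 masks:
# rng = np.random.default_rng(0)
# print(iou_and_f1(rng.integers(0, 2, (512, 512)), rng.integers(0, 2, (512, 512))))
```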

Identifier
DOI https://doi.org/10.48436/d5z5b-3vk12
Related Identifier Requires https://doi.org/10.48550/arXiv.1807.01232
Metadata Access https://researchdata.tuwien.ac.at/oai2d?verb=GetRecord&metadataPrefix=oai_datacite&identifier=oai:researchdata.tuwien.ac.at:d5z5b-3vk12
Provenance
Creator Hollendonner, Samuel; Alinaghi, Negar; Giannopoulos, Ioannis
Publisher TU Wien
Publication Year 2024
Rights Creative Commons Attribution 4.0 International; MIT License; https://creativecommons.org/licenses/by/4.0/legalcode; https://opensource.org/licenses/MIT
OpenAccess true
Contact tudata(at)tuwien.ac.at
Representation
Resource Type Software
Discipline Other