Object Detection on Depth Map with YOLO

A neural network based on the 'You Only Look Once' (YOLO) architecture has been trained to detect objects in conventional RGB images. Taking advantage of the pixel-wise correspondence between the RGB image and the depth map, the positions of the detected objects are projected onto the depth map. After statistical analysis, the pixels belonging to each object are extracted. Finally, the 3D position of the object in the surroundings is calculated.
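The sketch below illustrates the projection step under common assumptions: the depth map is pixel-aligned with the RGB image, the camera intrinsics (fx, fy, cx, cy) are known from calibration, and a median-based filter stands in for the statistical analysis. Function and variable names are illustrative and not taken from the dataset code.

```python
import numpy as np

def box_to_3d(depth_map, box, fx, fy, cx, cy):
    """Estimate the 3D position of one detected object (illustrative sketch).

    depth_map : 2D array of per-pixel depth values, same resolution as the RGB image
    box       : (x1, y1, x2, y2) bounding box from the YOLO detection, in pixels
    fx, fy    : focal lengths in pixels (assumed known from camera calibration)
    cx, cy    : principal point in pixels
    """
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    patch = depth_map[y1:y2, x1:x2].astype(float)

    # Keep only valid depth readings (0 often marks missing data in depth sensors).
    valid = patch[patch > 0]
    if valid.size == 0:
        return None

    # Simple statistical filtering: discard pixels far from the median depth,
    # so background visible in the box corners does not bias the estimate.
    med = np.median(valid)
    mad = np.median(np.abs(valid - med)) + 1e-6
    inliers = valid[np.abs(valid - med) < 3.0 * mad]
    z = float(np.median(inliers)) if inliers.size else float(med)

    # Back-project the box centre through a pinhole camera model.
    u = 0.5 * (x1 + x2)
    v = 0.5 * (y1 + y2)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```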

Please follow the instructions in the main code file 'run_tflite.ipynb'.
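For orientation, a minimal sketch of running a TFLite detection model is shown below, in the spirit of 'run_tflite.ipynb'. The model filename, input size, and output layout are placeholders; consult the notebook for the actual pre- and post-processing.

```python
import numpy as np
import tensorflow as tf

# Load the converted model (filename is hypothetical).
interpreter = tf.lite.Interpreter(model_path="yolo_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Prepare a dummy RGB input with the shape the model expects, normalised to [0, 1].
_, height, width, _ = input_details[0]["shape"]
rgb = np.random.rand(1, height, width, 3).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], rgb)
interpreter.invoke()

# Raw network output; decoding into boxes and classes depends on the exported model.
raw_output = interpreter.get_tensor(output_details[0]["index"])
print(raw_output.shape)
```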

Identifier
DOI https://doi.org/10.18419/darus-3766
Metadata Access https://darus.uni-stuttgart.de/oai?verb=GetRecord&metadataPrefix=oai_datacite&identifier=doi:10.18419/darus-3766
Provenance
Creator Wang, Xiwei
Publisher DaRUS
Contributor Wang, Xiwei; Frenner, Karsten
Publication Year 2023
Rights AGPL 3.0 or later; info:eu-repo/semantics/openAccess; https://www.gnu.org/licenses/agpl-3.0-standalone.html
OpenAccess true
Contact Wang, Xiwei (University of Stuttgart); Frenner, Karsten (University of Stuttgart)
Representation
Resource Type Dataset
Format application/octet-stream; text/x-python; application/x-ipynb+json; image/jpeg
Size 6111569; 12128009; 3105385; 3128185; 3105637; 446; 8845; 4143; 5895; 5323; 2002; 63273
Version 1.0
Discipline Construction Engineering and Architecture; Engineering; Engineering Sciences