Automatic labeling of vascular structures with topological constraints via HMM [Research data]

The project contains the implementation of the method described in: Wang et al., "Automatic labeling of vascular structures with topological constraints via HMM", MICCAI 2017. We propose a novel graph labeling approach to anatomically label vascular structures of interest. Our algorithm can handle different topologies, such as circles, chains, and trees. Because it uses coordinate-independent geometrical features, it does not require prior global alignment.

Compile environment

Windows 7 (64-bit); Intel(R) Core(TM) i7-4790 CPU @ 3.60 GHz; 8.00 GB RAM

Microsoft Visual Studio 2012
Python 2.7, Anaconda 4.1.0 (64-bit)
XGBoost library 0.4 (https://github.com/dmlc/xgboost/tree/master/windows)
Scikit-Learn library 0.18.1
hmmlearn 0.2.0
NURBS open-source library

Running the code

This file contains a summary of what you will find in each of the files that make up our experiments.

Step0: PreprocessingData: Our proposed approach has been evaluated on the public dataset distributed by the MIDAS Data Server at Kitware Inc. It contains 50 MRA images of the cerebral vasculature from healthy volunteers, together with their segmentations and centerlines (Bogunović et al., "Anatomical Labeling of the Circle of Willis Using Maximum A Posteriori Probability Estimation", IEEE Transactions on Medical Imaging 32(9), 2013: 1587). We first prune the centerline model to a region around the Circle of Willis (CoW); the pruned skeletons are stored in “FeatureGenerating/data/skeleton”.

Step1: FeatureGenerating (C/C++): Generates a feature matrix “FeatureGenerating/feature” from the skeleton data “FeatureGenerating/data/skeleton” that has been annotated with the ground truth “FeatureGenerating/data/cood.txt”. We employ NURBS curves, with the feature calculation available in the NURBS open-source library. To compile the code, you also need to include this library.
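The exact feature set is computed by the C/C++ code through the NURBS library. As a purely illustrative sketch (the function name, the finite-difference scheme and the chosen descriptors are assumptions, not the repository's code), coordinate-independent quantities such as curvature, torsion and arc length can be estimated from a sampled centerline:

    import numpy as np

    def centerline_features(points):
        """Estimate coordinate-independent descriptors (curvature, torsion,
        arc length) from an ordered (N, 3) array of centerline points."""
        p = np.asarray(points, dtype=float)
        d1 = np.gradient(p, axis=0)      # tangent (first derivative)
        d2 = np.gradient(d1, axis=0)     # second derivative
        d3 = np.gradient(d2, axis=0)     # third derivative
        cross = np.cross(d1, d2)
        speed = np.linalg.norm(d1, axis=1)
        curvature = np.linalg.norm(cross, axis=1) / np.maximum(speed ** 3, 1e-12)
        torsion = (np.einsum('ij,ij->i', cross, d3)
                   / np.maximum(np.linalg.norm(cross, axis=1) ** 2, 1e-12))
        length = np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))
        return {'mean_curvature': curvature.mean(),
                'mean_torsion': torsion.mean(),
                'length': length}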

Step2: Pre_ML (C/C++): Separates the feature matrix “FeatureGenerating/feature” into the training sets “data/ML/XXX/train.txt” and the corresponding test sets “data/ML/XXX/test.txt”.
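The actual split is produced by the C/C++ code; a minimal Python sketch of the same idea, assuming a plain whitespace-separated feature file with the label in the first column (the “001” case index, file names and split ratio are hypothetical):

    import numpy as np
    from sklearn.model_selection import train_test_split

    # Hypothetical layout: one sample per row, label in the first column,
    # features in the remaining columns; file names are illustrative only.
    data = np.loadtxt('FeatureGenerating/feature/001.txt')
    train, test = train_test_split(data, test_size=0.2, random_state=0)
    np.savetxt('data/ML/001/train.txt', train, fmt='%.6f')
    np.savetxt('data/ML/001/test.txt', test, fmt='%.6f')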

Step3: XGBoost (Python): Trains a model on the training sets in “data/ML” and predicts the results “data/res_XGBoost” for the corresponding test sets. To run the code, you also need the XGBoost library.
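A minimal sketch of this training/prediction step using the low-level xgboost API available in version 0.4; the parameter values, the “multi:softmax” objective and the file layout (label in column 0, integer labels 0..K-1) are assumptions, not necessarily those used in the repository:

    import numpy as np
    import xgboost as xgb

    # Hypothetical file layout: label in column 0, features in the rest;
    # labels are assumed to be encoded as integers 0..K-1.
    train = np.loadtxt('data/ML/001/train.txt')
    test = np.loadtxt('data/ML/001/test.txt')
    dtrain = xgb.DMatrix(train[:, 1:], label=train[:, 0])
    dtest = xgb.DMatrix(test[:, 1:], label=test[:, 0])

    params = {'objective': 'multi:softmax',           # predict a class label per sample
              'num_class': int(train[:, 0].max()) + 1,
              'max_depth': 6,
              'eta': 0.3}
    bst = xgb.train(params, dtrain, num_boost_round=100)
    pred = bst.predict(dtest)                         # predicted branch labels
    np.savetxt('data/res_XGBoost/001.txt', pred, fmt='%d')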

Step4: Chain (C/C++): “Sorts” the bifurcations and constructs the observation sequences “data/obs_list” and state sequences “data/GT_list” based on the XGBoost results.
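Conceptually, once the bifurcations are ordered along the chain, the observation sequence is the XGBoost label at each bifurcation and the state sequence is its ground-truth label. A toy Python sketch under that assumption (the dictionary keys are hypothetical):

    def build_sequences(bifurcations):
        """Build one observation/state sequence pair from bifurcations that
        carry a position along the chain, the XGBoost prediction and the
        ground-truth label (the dictionary keys are hypothetical)."""
        ordered = sorted(bifurcations, key=lambda b: b['position'])
        obs_seq = [b['predicted'] for b in ordered]       # observation sequence
        state_seq = [b['ground_truth'] for b in ordered]  # state sequence
        return obs_seq, state_seq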

Step5: Pre_HMM (C/C++): Generates 50 sets of observation matrices “data/obs” and transition matrices “data/trans” from the observation and state sequences. Row 1 in “data/seg/XXX” is the state sequence, and Row 2 is its corresponding observation sequence.
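These matrices can be obtained by simple counting over the paired state/observation sequences (a standard maximum-likelihood estimate). The sketch below assumes labels are encoded as integers 0..K-1 and adds a small constant to avoid all-zero rows, which may differ from the repository's exact procedure:

    import numpy as np

    def estimate_hmm_matrices(state_seqs, obs_seqs, n_states, n_obs):
        """Count-based (maximum-likelihood) estimates of the transition and
        observation (emission) matrices from paired state/observation sequences."""
        trans = np.zeros((n_states, n_states))
        emis = np.zeros((n_states, n_obs))
        for states, obs in zip(state_seqs, obs_seqs):
            for s, o in zip(states, obs):
                emis[s, o] += 1.0
            for s0, s1 in zip(states[:-1], states[1:]):
                trans[s0, s1] += 1.0
        trans += 1e-6                                  # avoid all-zero rows
        emis += 1e-6
        trans /= trans.sum(axis=1, keepdims=True)      # row-normalise
        emis /= emis.sum(axis=1, keepdims=True)
        return trans, emis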

Step6: HMM (Python): Hidden Markov model decoding. Input: “data/seg/XXX”, “data/obs”, “data/trans”. In the file “data/res_topo”, Row 1 contains the results and Row 2 the corresponding ground truth. To run the code, you also need the hmmlearn library.
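With the matrices from Step 5, the decoding can be done with hmmlearn's discrete-emission model (MultinomialHMM in the 0.2.x releases) and the Viterbi algorithm. The toy matrices, toy observation sequence and uniform start distribution below are assumptions for illustration only:

    import numpy as np
    from hmmlearn import hmm

    # Toy matrices standing in for the Step 5 outputs ("data/trans", "data/obs").
    trans = np.array([[0.8, 0.2],
                      [0.3, 0.7]])
    emis = np.array([[0.9, 0.1],
                     [0.2, 0.8]])

    n_states = trans.shape[0]
    model = hmm.MultinomialHMM(n_components=n_states)
    model.startprob_ = np.full(n_states, 1.0 / n_states)  # uniform start (assumption)
    model.transmat_ = trans
    model.emissionprob_ = emis

    # One observation sequence of integer symbols (Step 4 output).
    obs_seq = np.array([0, 0, 1, 1, 0]).reshape(-1, 1)
    logprob, decoded = model.decode(obs_seq, algorithm='viterbi')
    print(decoded)  # most likely state (label) sequence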

Step7: Result analysis: Metrics. In the files “data/matrix_XGBoost” and “data/matrix_topo”, the first part contains the TP, FN, FP, TN values; the second part the A, P, R, S values; and the last part the confusion matrix.
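Assuming A, P, R and S denote accuracy, precision, recall and specificity (an interpretation, not stated in the repository), these metrics can be derived per class (one-vs-rest) from the confusion matrix, for example with scikit-learn:

    import numpy as np
    from sklearn.metrics import confusion_matrix

    def evaluate(y_true, y_pred, n_classes):
        """Per-class TP/FN/FP/TN plus accuracy, precision, recall and
        specificity, computed one-vs-rest from the confusion matrix."""
        cm = confusion_matrix(y_true, y_pred, labels=range(n_classes))
        total = float(cm.sum())
        per_class = []
        for c in range(n_classes):
            tp = cm[c, c]
            fn = cm[c, :].sum() - tp
            fp = cm[:, c].sum() - tp
            tn = total - tp - fn - fp
            a = (tp + tn) / total
            p = tp / float(tp + fp) if tp + fp else 0.0
            r = tp / float(tp + fn) if tp + fn else 0.0
            s = tn / float(tn + fp) if tn + fp else 0.0
            per_class.append((tp, fn, fp, tn, a, p, r, s))
        return cm, per_class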

Identifier
DOI https://doi.org/10.34810/data438
Metadata Access https://dataverse.csuc.cat/oai?verb=GetRecord&metadataPrefix=oai_datacite&identifier=doi:10.34810/data438
Provenance
Creator Wang, Xingce; Liu, Yue; Wu, Zhongke; Mou, Xiao (ORCID: 0000-0002-4973-860X); Zhou, Mingquan; González Ballester, Miguel Ángel, 1973-
Publisher CORA.Repositori de Dades de Recerca
Publication Year 2023
Funding Reference Ministerio de Economía y Competitividad TIN2013-47913-C3-1-R
Rights Custom Dataset Terms; info:eu-repo/semantics/openAccess; https://dataverse.csuc.cat/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.34810/data438
OpenAccess true
Representation
Resource Type Program source code; Dataset
Format text/plain; application/zip
Size 4651; 7851796
Version 1.0
Discipline Other