Multimodal ground truth datasets for abdominal medical image registration [data]


The scarcity of annotated data is a major limitation in medical image processing tasks such as registration. Registered multimodal image data are essential for the diagnosis of medical conditions and the success of interventional medical procedures. To overcome this shortage, we present a method for generating annotated multimodal 4D datasets. We use a CycleGAN network architecture to generate multimodal synthetic data from the 4D extended cardiac–torso (XCAT) phantom and real patient data. Organ masks are provided by the XCAT phantom; therefore, the generated dataset can serve as ground truth for image segmentation and registration. Compared to real patient data, the synthetic data showed good agreement in voxel intensity distribution and noise characteristics. The generated T1-weighted magnetic resonance imaging (MRI), computed tomography (CT), and cone beam CT (CBCT) images are inherently co-registered.
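As a rough illustration of the intensity-distribution comparison mentioned above, here is a minimal sketch using NumPy. The volumes below are random stand-ins, not the actual dataset, and the helper functions (`intensity_histogram`, `histogram_intersection`) are illustrative choices, not the evaluation code from the paper.

```python
# Sketch: comparing voxel-intensity distributions of "synthetic" vs. "real"
# volumes via normalized histograms. Random volumes stand in for real data.
import numpy as np

def intensity_histogram(volume, bins=64, value_range=(0.0, 1.0)):
    """Normalized intensity histogram of a 3D volume."""
    hist, _ = np.histogram(volume, bins=bins, range=value_range)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Overlap of two normalized histograms; 1.0 means identical."""
    return float(np.minimum(h1, h2).sum())

rng = np.random.default_rng(0)
real = rng.normal(0.5, 0.1, size=(32, 32, 32)).clip(0, 1)       # stand-in volume
synthetic = rng.normal(0.5, 0.1, size=(32, 32, 32)).clip(0, 1)  # stand-in volume

overlap = histogram_intersection(
    intensity_histogram(real), intensity_histogram(synthetic)
)
print(f"histogram intersection: {overlap:.3f}")
```

An intersection close to 1.0 indicates similar intensity distributions; other divergence measures (e.g., Bhattacharyya distance) would serve the same purpose.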

Here, only the generated synthetic image data for the three modalities (MRI, CT, and CBCT) are provided. Segmentation masks cannot be provided by us due to license restrictions. To obtain segmentation masks as described in the paper, you need to license the XCAT phantom.
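After downloading, the image data arrive as ZIP archives (see the Format field below). A minimal extraction sketch using Python's standard library; the archive file name is a placeholder, not the real file name from this repository:

```python
# Sketch: unpack a downloaded dataset archive with the stdlib zipfile module.
import zipfile
from pathlib import Path

def extract_archive(archive, target):
    """Extract a dataset zip and return the names of the extracted members."""
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(target)
        return zf.namelist()

# Placeholder name -- substitute the actual archive file from the repository.
archive = Path("synthetic_abdominal_ct.zip")
if archive.exists():
    names = extract_archive(archive, Path("data"))
    print(f"extracted {len(names)} files")
```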

Creator Zöllner, Frank (Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University)
Publisher heiDATA
Contributor Zöllner, Frank
Publication Year 2022
Funding Reference BMBF, 13GW0388A
Rights CC BY-NC 4.0; info:eu-repo/semantics/openAccess;
OpenAccess true
Contact Zöllner, Frank (Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University)
Resource Type synthetic medical images of the abdomen; Dataset
Format application/zip
Size 27228993659; 2968034134; 3796777237 (bytes)
Version 1.1
Discipline Life Sciences; Medicine