Sparsity of annotated data is a major limitation in medical image processing tasks such as registration. Registered multimodal image data are essential for the diagnosis of medical conditions and the success of interventional medical procedures. To overcome the shortage of data, we present a method that allows the generation of annotated multimodal 4D datasets.
We use a CycleGAN network architecture to generate multimodal synthetic data from the 4D extended cardiac–torso (XCAT) phantom and real patient data. Organ masks are provided by the XCAT phantom; therefore, the generated dataset can serve as ground truth for image segmentation and registration.
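The core of a CycleGAN is a pair of generators trained with a cycle-consistency loss, so that mapping an image to the other domain and back reproduces the original. The following is a minimal numpy sketch of that loss only; the function names `G` and `F` and their toy affine definitions are illustrative stand-ins for the actual convolutional generators used in the paper, and the adversarial losses are omitted.

```python
import numpy as np

# Toy stand-ins for the two CycleGAN generators: G maps domain A (e.g. an
# XCAT-derived image) to domain B (e.g. a patient-like image), F maps back.
# Illustrative affine functions only; the real generators are CNNs.
def G(x):
    return 2.0 * x + 100.0  # A -> B

def F(y):
    return (y - 100.0) / 2.0  # B -> A (approximate inverse of G)

def cycle_consistency_loss(x_a, y_b):
    """L1 cycle loss: penalizes F(G(x)) != x and G(F(y)) != y."""
    loss_a = np.mean(np.abs(F(G(x_a)) - x_a))
    loss_b = np.mean(np.abs(G(F(y_b)) - y_b))
    return loss_a + loss_b

x_a = np.random.rand(64, 64)                 # toy "phantom" slice
y_b = 2.0 * np.random.rand(64, 64) + 100.0   # toy "patient" slice
print(cycle_consistency_loss(x_a, y_b))      # near zero: F inverts G here
```

During training, this term is minimized jointly with the adversarial losses of the two discriminators; the cycle term is what lets the network learn the cross-modality mapping without paired training images.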
Compared to real patient data, the synthetic data showed good agreement in terms of voxel intensity distribution and noise characteristics. The generated T1-weighted magnetic resonance imaging (MRI), computed tomography (CT), and cone-beam CT (CBCT) images are inherently co-registered.
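One simple way such agreement between synthetic and real images could be quantified is by comparing normalized voxel-intensity histograms. The sketch below uses an L1 histogram distance as an illustrative metric; the paper's actual evaluation procedure may differ, and the value range and bin count are assumptions.

```python
import numpy as np

def intensity_histogram(img, bins=64, value_range=(0.0, 1.0)):
    """Normalized voxel-intensity histogram (entries sum to 1)."""
    hist, _ = np.histogram(img, bins=bins, range=value_range)
    return hist / hist.sum()

def histogram_l1_distance(img_a, img_b, bins=64, value_range=(0.0, 1.0)):
    """L1 distance between two intensity distributions.

    0 means identical histograms; 2 is the maximum (disjoint support).
    """
    h_a = intensity_histogram(img_a, bins, value_range)
    h_b = intensity_histogram(img_b, bins, value_range)
    return np.abs(h_a - h_b).sum()

rng = np.random.default_rng(0)
synthetic = rng.normal(0.5, 0.1, size=(64, 64)).clip(0.0, 1.0)
real = rng.normal(0.5, 0.1, size=(64, 64)).clip(0.0, 1.0)
print(histogram_l1_distance(synthetic, real))  # small for similar distributions
```

Noise characteristics can be compared analogously, e.g. via the standard deviation of intensities within a homogeneous region of each image.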
Here, only the generated synthetic image data for the three modalities (MRI, CT, and CBCT) are provided. Segmentation masks cannot be provided by us due to license restrictions. To obtain segmentation masks as described in the paper, you need to license the XCAT phantom (https://otc.duke.edu/technologies/xcat-library-of-anatomical-models-for-ct-imaging-research/).