The repository contains the files required to reproduce the results. The three compressed archives are (i) torch_code, (ii) datasets, and (iii) experiments.
Detailed file descriptions
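The three archives can be unpacked next to this README; a minimal sketch (each archive is skipped if it is not present in the current directory):

```shell
# Unpack the three archives listed above; skip any that are absent
extracted=0
for f in torch_code.tar.gz datasets.tar.gz experiments.tar.gz; do
    if [ -f "$f" ]; then
        tar -xzf "$f"
        extracted=$((extracted + 1))
    fi
done
echo "extracted $extracted archive(s)"
```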
torch_code
The main PyTorch source code used for training/testing is provided in the torch_code.tar.gz file.
datasets
The training/validation/testing datasets are provided in LMDB format, ready to use with the code. The archive datasets.tar.gz contains:
Main train/validation/test datasets:
Training dataset:
data_train_OF-decaying2_f0_1_11_12_2_21_22_3_31_32_FHIT_particle_128_Re52-2D_320000_lmdb.lmdb
Validation dataset:
data_valid_outOfSample_OF-decaying2_f0_1_11_12_2_21_22_3_31_32_FHIT_particle_128_Re52-2D_8000_lmdb.lmdb
Test dataset:
data_test_outOfSample_OF-decaying2_f0_1_11_12_2_21_22_3_31_32_FHIT_particle_128_Re52-2D_16000_lmdb.lmdb
Note that the samples from the 20 DNS cases are stored in order (16000 training samples and 800 test samples per case), and each sample's case can be identified using the metadata file provided in each folder.
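Because the cases are stored back to back, a flat training-sample index maps to its DNS case by integer division; a small sketch under the layout described above (16000 training samples per case, values below are illustrative):

```shell
# Map a flat training-sample index to its DNS case and the local index within that case
# (assumed layout: 20 cases stored consecutively, 16000 training samples each)
idx=250000
case_id=$(( idx / 16000 ))   # which DNS case the sample belongs to (0-based)
offset=$(( idx % 16000 ))    # position of the sample within that case
echo "sample $idx -> case $case_id, local sample $offset"
```

For the test split the same arithmetic applies with 800 samples per case. The authoritative mapping is the metadata file shipped with each LMDB folder.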
Particle-free training and test datasets (used in Fig 6 of the paper):
Particle-free training dataset:
data_train_OF-f0_FHIT_particle_128_Re52_prolonged-2D_102400_lmdb.lmdb
Particle-free test dataset:
data_test_outOfSample_OF-f0_FHIT_particle_128_Re52_prolonged-2D_800_lmdb.lmdb
Out-of-sample test datasets:
Test Case4 in the paper:
data_test_outOfSample_OF-f41_FHIT_particle_128_Re52_test-2D_800_lmdb.lmdb
Test Case5 in the paper:
data_test_outOfSample_OF-f51_FHIT_particle_128_Re52_test-2D_800_lmdb.lmdb
experiments
The trained models are provided in the experiments.tar.gz file. Each experiment contains the training log file, the last training state (for restarts), and the model weights used in the publication.
Conditional model:
conditionalSRGAN model trained on the particle-free dataset (used in Figs 6 and 7 of the paper):
00110-01G_PFT-NoPrt_ArchT_condSRGANModel_L64SP4x_Gcond_WaveDisc_f256g128b16_I64_BS16x2_Pix1-Grada-Adva_LrG45D5_fixedLR_DS-f0-102k_cPad_20241218
conditionalSRGAN model trained on the main dataset (used in Figs 9-13 and 15-16 of the paper):
01004-00H_PFT-Prt_ArchTest_condSRGANModel_L64SP4x_Gcond_WaveDisc_f256g128b16_I64_BS32x4_Pix1-Grada-Adva_LrG45D5_fixedLR_DS-fxD-320k_cPad_20241219
Traditional model:
Unconditional SRGAN model trained on the main dataset (used in Fig 14 of the paper):
01005-00H_PFT-Prt_DiscTest_condSRGANModel_L64SP4x_Gcond_TradDisc_f256g128b16_I64_BS32x4_Pix1-Grada-Adva_LrG45D5_fixedLR_DS-fxD-320k_cPad_20241224
How to
Build the environment
To build the environment required for training and inference, you need Anaconda. Go to the torch_code folder and run:
conda env create -f environment.yml
Then create an IPython kernel for post-processing:
conda activate torch_22_2025_Shamooni_POF
python -m ipykernel install --user --name ipyk_torch_22_2025_Shamooni_POF --display-name "ipython kernel for post processing of POF2025"
Perform training
It is suggested to create a symbolic link to the dataset folder directly inside the torch_code folder:
cd torch_code
ln -s <path to the dataset folder> datasets
Then activate the conda environment:
conda activate torch_22_2025_Shamooni_POF
An example command to launch training on a single node with 2 GPUs:
torchrun --standalone --nnodes=1 --nproc_per_node=2 train.py -opt options/train/condSRGAN/00110-01G_PFT-NoPrt_ArchT.yml --launcher pytorch
Make sure that the dataset paths "dataroot_gt" and "meta_info_file" for both the training and validation data are set correctly in the option files.
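As a rough orientation, the relevant entries in an option file might look like the hypothetical fragment below; the exact key nesting depends on the option files shipped in torch_code, so treat this only as a pointer to where the paths live (the metadata-file paths are placeholders, as in the symlink example above):

```
datasets:
  train:
    dataroot_gt: datasets/data_train_OF-decaying2_f0_1_11_12_2_21_22_3_31_32_FHIT_particle_128_Re52-2D_320000_lmdb.lmdb
    meta_info_file: <path to the training metadata file>
  val:
    dataroot_gt: datasets/data_valid_outOfSample_OF-decaying2_f0_1_11_12_2_21_22_3_31_32_FHIT_particle_128_Re52-2D_8000_lmdb.lmdb
    meta_info_file: <path to the validation metadata file>
```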