README
Repository for publication: A. Shamooni et al., Super-resolution reconstruction of scalar fields from the pyrolysis of pulverised biomass using deep learning, Proc. Combust. Inst. (2025)
Contents
torch_code
The main PyTorch source code used for training and testing is provided in the torch_code.tar.gz file.
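The archives can be unpacked from wherever they were downloaded, for example:

```shell
# Unpack the main source archive into a torch_code/ folder
tar -xzf torch_code.tar.gz
```

The other archives (torch_code_tradGAN.tar.gz, datasets.tar.gz, experiments.tar.gz) unpack the same way.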
torch_code_tradGAN
For comparison with a traditional GAN, we use the code in torch_code_tradGAN with the same particle-laden datasets. The source code is provided in the torch_code_tradGAN.tar.gz file.
datasets
The training/validation/testing datasets are provided in LMDB format, ready to use with the code. The datasets.tar.gz archive contains:
Training dataset:
data_train_OF-mass_kinematics_mk0x_1x_2x_FHIT_particle_128_Re52-2D_20736_lmdb.lmdb
Test dataset:
data_valid_inSample_OF-mass_kinematics_mk0x_1x_2x_FHIT_particle_128_Re52-2D_3456_lmdb.lmdb
Note that the samples from the 9 DNS cases are stored in order (2304 training samples and 384 test samples per case); the cases can be identified using the metadata file provided in each folder.
Out-of-distribution test dataset (used in Fig 10 of the paper):
data_valid_inSample_OF-mass_kinematics_mk3x_FHIT_particle_128_Re52-2D_nonUniform_1024_lmdb.lmdb
We have two separate out-of-distribution DNS cases; 512 samples are selected from each.
experiments
The main trained models are provided in the experiments.tar.gz file. Each experiment contains the training log file, the last training state (for restarts), and the model weights used in the publication.
Trained model using the main dataset (used in Figs 2-10 of the paper):
h_oldOrder_mk_700-11-c_PFT_Inp4TrZk_outTrZ_RRDBNetCBAM-4Prt_DcondPrtWav_f128g64b16_BS16x4_LrG45D5_DS-mk012-20k_LStandLog
To compare with the traditional GAN, we use the code in torch_code_tradGAN with the same particle-laden datasets as above. The training consists of one pre-training step and two separate fine-tuning runs: one with the loss weights from the literature and one with tuned loss weights. The final results are in experiments/trad_GAN/experiments/
Pre-trained traditional GAN model (used in Figs 8-9 of the paper):
train_RRDB_SRx4_particle_PSNR
Fine-tuned traditional GAN model with loss weights from the literature (used in Figs 8-9 of the paper):
train_ESRGAN_SRx4_particle_Nista_oneBlock
Fine-tuned traditional GAN model with optimized loss weights (used in Figs 8-9 of the paper):
train_ESRGAN_SRx4_particle_oneBlock_betaA
inference_notebooks
The inference_notebooks folder contains example notebooks for running inference: "torch_code_inference" for the main trained model, and "torch_code_tradGAN_inference" for the traditional GAN approach.
Move the inference folder from each of these folders into the corresponding torch_code root, and create softlinks to datasets and experiments in each torch_code root. In each notebook, double-check the required paths to make sure they are set correctly.
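As a concrete sketch of the step above, assuming the archives were extracted side by side so that datasets/, experiments/, and inference_notebooks/ sit next to torch_code/ (adjust the relative paths to your own layout):

```shell
cd torch_code
# bring the example inference notebooks into the source root
mv ../inference_notebooks/torch_code_inference/inference .
# link the shared datasets and trained models into this root
ln -s ../datasets datasets
ln -s ../experiments experiments
```

The same pattern applies to torch_code_tradGAN with the torch_code_tradGAN_inference folder.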
How to
Build the environment
Building the environment required for training and inference needs Anaconda. Go to the torch_code folder and run:
conda env create -f environment.yml
Then create an IPython kernel for post-processing:
conda activate torch_22_2025_Shamooni_PCI
python -m ipykernel install --user --name ipyk_torch_22_2025_Shamooni_PCI --display-name "ipython kernel for post processing of PCI2025"
Perform training
We suggest creating a softlink to the dataset folder directly in the torch_code folder:
cd torch_code
ln -s <path to the dataset folder> datasets
Alternatively, simply move the datasets and inference folders into the torch_code folder, beside the cfd_sr folder and the other files.
In general, we prefer the following root structure:
root files and directories:
cfd_sr
datasets
experiments
inference
options
__init__.py
test.py
train.py
version.py
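An optional sanity check, run from the torch_code root, that the expected top-level entries are in place (the entry names are taken from the listing above; silence means the layout matches):

```shell
# report any missing top-level entries of the preferred root structure
for d in cfd_sr datasets experiments inference options train.py test.py; do
  [ -e "$d" ] || echo "missing: $d"
done
```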
Then activate the conda environment:
conda activate torch_22_2025_Shamooni_PCI
An example command to run on a single node with 2 GPUs:
torchrun --standalone --nnodes=1 --nproc_per_node=2 train.py -opt options/train/condSRGAN/use_h_mk_700-011_PFT.yml --launcher pytorch
Make sure that the dataset paths "dataroot_gt" and "meta_info_file" for both the training and validation data are set correctly in the option files.
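For example, to inspect which paths the option file from the torchrun command above currently points at (the two keys are the ones named in the note; update them in the YAML if they do not resolve):

```shell
# print every dataset path entry in the training option file, with line numbers
grep -nE "dataroot_gt|meta_info_file" options/train/condSRGAN/use_h_mk_700-011_PFT.yml
```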