Description
During some experiments, I noticed that the `batch_size` parameter in the IGNNITION configuration was not behaving as expected.
Upon reviewing the code in `ignnition_model.py`, I found that `gnn_model.fit()` is called with `train_dataset`, which is a generator. According to the official Keras documentation:
"Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.PyDataset instances (since they generate batches)."
Based on this, if a generator is passed as the input to `fit()`, any explicitly provided `batch_size` is silently ignored, which appears to be what is happening in IGNNITION.
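To illustrate the Keras behavior in isolation (a minimal sketch, not IGNNITION code): when `fit()` consumes a dataset built from a generator, the only batch size that matters is the one applied inside the data pipeline itself.

```python
import numpy as np
import tensorflow as tf

# A per-sample generator: each yield is ONE example, so without further
# batching the effective batch size is 1.
def sample_generator():
    for _ in range(100):
        yield np.random.rand(4).astype("float32"), np.ones(1, dtype="float32")

dataset = tf.data.Dataset.from_generator(
    sample_generator,
    output_signature=(
        tf.TensorSpec(shape=(4,), dtype=tf.float32),
        tf.TensorSpec(shape=(1,), dtype=tf.float32),
    ),
).batch(32)  # the only place where the batch size actually takes effect

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

# 100 samples batched by 32 -> 4 steps per epoch. A batch_size argument
# passed to fit() would not influence this; batching is fixed upstream.
model.fit(dataset, epochs=1)
```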
I confirmed through a simple test using the official IGNNITION examples that changing the `batch_size` in the config:
- Has no effect on model training behavior or duration
- Does not change the number of steps per epoch when `epoch_size` is omitted (i.e., it does not follow the expected logic of `steps_per_epoch = dataset_size / batch_size`)
This strongly suggests that the generator is yielding data with an implicit batch size of 1, and that the `batch_size` config parameter is currently non-functional.
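For concreteness, this is the step-count scaling one would expect from the config (the dataset size below is hypothetical, not taken from my runs):

```python
import math

dataset_size = 1000  # hypothetical value for illustration
for batch_size in (1, 32, 128):
    print(batch_size, "->", math.ceil(dataset_size / batch_size), "steps/epoch")
# Expected: 1000, 32 and 8 steps respectively. In my tests the observed
# step count instead matched batch_size == 1 regardless of the config.
```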
I attempted to find a straightforward way to modify the generator to respect `batch_size`, but could not identify an easy fix within the current framework setup.
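For reference, one direction that looks natural on paper is wrapping the per-sample generator in `tf.data` and batching there. A rough, untested sketch (the function name and arguments are hypothetical, not IGNNITION API):

```python
import tensorflow as tf

def make_batched_dataset(sample_generator, output_signature, batch_size):
    """Hypothetical wrapper: batch inside the pipeline instead of in fit()."""
    dataset = tf.data.Dataset.from_generator(
        sample_generator, output_signature=output_signature
    )
    # padded_batch pads each tensor component to the largest shape in the
    # batch, which is needed because graph samples differ in size.
    return dataset.padded_batch(batch_size)
```

The complication is that graph samples vary in node and edge counts, so batching them requires either padding (as above) or the usual concatenate-with-index-offsets trick, and either approach would also require the message-passing internals to handle batched graphs. That is why I could not find an easy, localized fix.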