Dear Dr. Zhang,
I am trying to run your model for a benchmark on paired multi-omics data. With both my own data and your example data, I run into out-of-memory issues when initializing the model. I have two GPUs with 24GB memory each available.
The error occurs during the initialization of TranslateAE (in `train_model.py`, line 338: `self.sess.run(tf.global_variables_initializer())`).

This is the error message:

```
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[93283,15172] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node translator_yx_px_r_genebatch/kernel/Adam_1/Assign (defined at bin/train_model_edited.py:348) ]]
```
The model seems to create a dense peak-by-gene weight matrix for the translator. Is this the intended behavior, or might I have missed a step in your data preprocessing before running the model?
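For context, here is a back-of-the-envelope estimate of the failing tensor's footprint. It assumes the "type float" in the error message means float32, and that the optimizer is Adam as the node name suggests (Adam keeps two extra slot variables per weight in standard TensorFlow); neither assumption is confirmed from the model code.

```python
# Estimate the GPU memory needed for the peak-by-gene weight from the error:
# shape [93283, 15172], assumed float32 (4 bytes per element).
shape = (93283, 15172)
bytes_per_float32 = 4

n_elements = shape[0] * shape[1]
weight_gib = n_elements * bytes_per_float32 / 1024**3

# Adam adds two slot variables (first and second moment) per weight, and
# backprop needs a gradient buffer, so this single tensor is held roughly
# four times over during training.
total_gib = weight_gib * 4

print(f"weight alone:                   {weight_gib:.1f} GiB")
print(f"weight + Adam slots + gradient: {total_gib:.1f} GiB")
```

Under these assumptions the single translator weight already needs about 5.3 GiB, and with Adam's slot variables and the gradient it approaches 21 GiB, which would explain the OOM on a 24GB GPU even before the rest of the graph is allocated.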
Kind regards,
Viktoria