Hi,
Is it possible to train two different models on two different GPUs, in different threads of the same process? (Mainly because the training data is the same for both and takes a lot of memory; does Keras support such a use case?)
If so, and if it is thread-safe (with the TensorFlow backend, by the way), could you give a minimal, clean example that trains two different MNIST models in two different threads with Keras?
In particular, does this work with the new free-threaded (no-GIL) Python 3.14?
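To make the question concrete, here is a rough sketch of the structure I have in mind, using plain-Python stand-ins for the actual models (the model-building and `fit` calls are hypothetical placeholders; in the real case I imagine each thread would build its Keras model under something like `with tf.device(f'/GPU:{i}'):` and then call `model.fit` on the shared data):

```python
import threading

# Shared training data, loaded once in the process so it is
# not duplicated per model (this is the memory concern above).
shared_data = list(range(10_000))

def train_model(name, results):
    # Placeholder for the real per-thread work, e.g. (hypothetical):
    #   with tf.device(f'/GPU:{gpu_id}'):
    #       model = build_model()
    #       model.fit(shared_x, shared_y)
    # Here, dummy "training": just a reduction over the shared data.
    total = sum(x * x for x in shared_data)
    results[name] = total

results = {}
threads = [
    threading.Thread(target=train_model, args=(f"model_{i}", results))
    for i in range(2)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # → ['model_0', 'model_1']
```

Is this threading pattern, with real Keras models in place of the dummy work, safe with the TensorFlow backend?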
Thanks!