
Q: Is it possible (any examples?) to train two different models on two different GPUs, in different threads of the same process? #21862

Description

@mw66

Hi,

Is it possible to train two different models on two different GPUs, in different threads of the same process? (Mainly because the training data itself is the same but takes a lot of memory; does Keras support such a use case?)

If so, and if it is thread-safe (with the TensorFlow backend), could you give a minimal, clean example that trains two different MNIST models in two different threads in Keras?

In particular, does this work with the new free-threaded (no-GIL) Python 3.14?
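
To make the ask concrete, here is a rough, untested sketch of the kind of setup I have in mind (assuming the TensorFlow backend and two visible GPUs; the MNIST architecture is just a placeholder, and I don't know whether Keras/TF are actually safe to use this way from multiple threads):

```python
# Untested sketch: two MNIST models, each pinned to its own GPU,
# trained concurrently in two threads of the same process.
import threading

import keras
import tensorflow as tf

# Load MNIST once so both threads share the same in-memory training data.
(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0


def build_model():
    # A small, arbitrary MNIST classifier; the architecture is just a placeholder.
    return keras.Sequential([
        keras.layers.Input(shape=(28, 28)),
        keras.layers.Flatten(),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])


def train_on_gpu(gpu_id):
    # Pin the ops created in this thread to one GPU.
    with tf.device(f"/GPU:{gpu_id}"):
        model = build_model()
        model.compile(
            optimizer="adam",
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"],
        )
        model.fit(x_train, y_train, epochs=2, batch_size=128, verbose=2)


threads = [threading.Thread(target=train_on_gpu, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```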

Thanks!
