Migration note: v0.3.0 to v0.4.0


PyTorch-Ignite v0.4.0 has several backward compatibility breaking changes. Here we provide some information on how to adapt your code to v0.4.0.

Simplified Engine

In v0.3.0, Engine automatically synchronized the dataflow during the run to make training reproducible. However, this approach was reported to have several important drawbacks, so we decided to remove this behaviour from Engine in v0.4.0 (https://github.com/pytorch/ignite/pull/940):

  • no more internal patching of torch DataLoader
  • the seed argument of Engine.run is deprecated

Code v0.3.0

trainer = ...

trainer.run(data, max_epochs=N, seed=12)

Code v0.4.0

from ignite.utils import manual_seed

trainer = ...

manual_seed(12)
trainer.run(data, max_epochs=N)

Behaviour similar to that of v0.3.0 can be achieved with DeterministicEngine, a drop-in replacement for Engine.
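
For example, a minimal sketch (the train_step function and the toy DataLoader below are illustrative placeholders, not taken from this migration note):

import torch
from torch.utils.data import DataLoader, TensorDataset

from ignite.engine import DeterministicEngine
from ignite.utils import manual_seed

def train_step(engine, batch):
    # the usual update logic would go here; we simply return the batch
    return batch

manual_seed(12)

data = DataLoader(TensorDataset(torch.randn(10, 4)), batch_size=2)

# DeterministicEngine is constructed like Engine and keeps the dataflow reproducible
trainer = DeterministicEngine(train_step)
trainer.run(data, max_epochs=2)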

create_supervised_trainer/create_supervised_evaluator and model device

In v0.4.0, create_supervised_trainer and create_supervised_evaluator no longer move the model to the specified device (https://github.com/pytorch/ignite/pull/910).

This is because the user should not move the model to another device after constructing an optimizer for it:

Code v0.3.0

model = ...
optimizer = SGD(model.parameters(), ...)
criterion = ...

trainer = create_supervised_trainer(model, optimizer, criterion, device="cuda")

Code v0.4.0

model = ...

model.to("cuda")

optimizer = SGD(model.parameters(), ...)
criterion = ...

trainer = create_supervised_trainer(model, optimizer, criterion, device="cuda")
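
Putting both changes together, a minimal end-to-end sketch of the v0.4.0 pattern (the toy model, data and hyper-parameters are illustrative, not prescribed by Ignite):

import torch
from torch import nn
from torch.optim import SGD
from torch.utils.data import DataLoader, TensorDataset

from ignite.engine import create_supervised_trainer
from ignite.utils import manual_seed

manual_seed(12)  # replaces the deprecated seed argument of Engine.run

model = nn.Linear(10, 2)
model.to("cuda")  # move the model before constructing the optimizer

optimizer = SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

data = DataLoader(
    TensorDataset(torch.randn(100, 10), torch.randint(0, 2, (100,))),
    batch_size=10,
)

# device="cuda" moves each input batch to the device, but no longer the model itself
trainer = create_supervised_trainer(model, optimizer, criterion, device="cuda")
trainer.run(data, max_epochs=5)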