Replies: 1 comment
-
@Asifzm I'm moving this to discussions because it's not a bug. There have been quite a few changes to effdet and PyTorch in that timespan. I've noticed a number of hardware- and version-specific performance regressions in PyTorch, especially 1.7. I'd try different releases: the CUDA 10.2 variants of 1.7 will be closer to 1.4, while the 11.x builds may have performance issues specific to your GPU; I ran into some with a few of mine. You can also try 1.8 and the NGC containers. I usually train on NGC containers; 20.12 and 21.02 both seem pretty good and have fewer issues than the official 1.7.x and 1.8 releases.

In terms of this codebase, I made a number of changes over the summer that impact performance. Some gained speed, but others traded some speed for better loss stability and results. You can try experimenting with …, you can also revert back to the older loss fn with …, and finally you can try …. The SiLU activation change should be an overall performance gain for PyTorch 1.7/1.8.
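If you want to compare environments (PyTorch builds, NGC containers) before changing anything in the codebase, a minimal timing sketch like the one below can help isolate the backbone. This is not from the thread; it assumes timm is installed, a CUDA GPU is available, and that efficientnet_b1 at a d1-like 640x640 input is representative of the backbone cost.

```python
# Minimal sketch (assumption: timm installed, CUDA available) to time the
# backbone forward pass in a given environment and compare across installs.
import time
import torch
import timm

model = timm.create_model('efficientnet_b1', pretrained=False).cuda().train()
x = torch.randn(8, 3, 640, 640, device='cuda')  # d1-style input, arbitrary batch

# Warm-up so cuDNN autotuning and allocator caching don't skew the measurement.
for _ in range(5):
    model(x)
torch.cuda.synchronize()

start = time.perf_counter()
iters = 20
for _ in range(iters):
    model(x)
torch.cuda.synchronize()
print(f'backbone forward: {(time.perf_counter() - start) / iters * 1000:.1f} ms/iter')
```

Running the same script under the 1.4, 1.7 (CUDA 10.2 and 11.x), and NGC container setups should show whether the slowdown is environment-specific before you start toggling options in the codebase.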
-
Hi,
Thank you for your great repo, I highly appreciate it.
I have updated the efficientdet-pytorch repo from a previous (August 2020) version, and also updated timm, and torch from 1.4 to 1.7.
When running efficientdet_d1, I noticed the training loop takes about twice as long. More specifically, the backbone forward pass takes twice the time.
I noticed efficientnet_b1 now uses the SiLU activation instead of SwishMe as in the previous version. Could that be the main reason for the time difference, or is there something else?
I am training on a single GPU, without parallelism, model EMA, or mixed precision.
Thank you!
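To check whether the activation swap alone explains the gap, a hedged micro-benchmark like the one below contrasts torch.nn.SiLU with a memory-efficient Swish written as a custom autograd Function (a simplified stand-in for timm's SwishMe, not the actual timm code). The tensor shape and iteration counts are arbitrary assumptions; results will vary by GPU and PyTorch build.

```python
# Micro-benchmark sketch: native SiLU vs. a memory-efficient Swish autograd
# Function (simplified stand-in for timm's SwishMe). Assumes a CUDA GPU.
import time
import torch
import torch.nn as nn

class SwishME(torch.autograd.Function):
    """Swish that saves the input and recomputes sigmoid in backward."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * torch.sigmoid(x)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        s = torch.sigmoid(x)
        # d/dx [x * sigmoid(x)] = s * (1 + x * (1 - s))
        return grad_output * (s * (1 + x * (1 - s)))

def bench(fn, x, iters=50):
    # Time forward + backward, which is what matters for the training loop.
    for _ in range(5):
        fn(x).sum().backward()  # warm-up
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        x.grad = None
        fn(x).sum().backward()
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters * 1000

x = torch.randn(8, 80, 160, 160, device='cuda', requires_grad=True)
print(f'SiLU   : {bench(nn.SiLU(), x):.2f} ms/iter')
print(f'SwishME: {bench(SwishME.apply, x):.2f} ms/iter')
```

If the two are close on your GPU/PyTorch combination, the activation change is unlikely to account for a 2x slowdown and the environment (CUDA/cuDNN build) is the more likely culprit.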