[Feature] Compressed storage gpu #3062
Conversation
See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/rl/3062. As of commit 783e3ee with merge base 0627e85: 4 new failures, 1 cancelled job (please retry), and 14 unrelated failures likely due to flakiness on trunk. Note: links to docs will display an error until the docs builds have completed. This comment was automatically generated by Dr. CI and updates every 15 minutes.
When the tensor is on the CPU
Force-pushed from 95f532e to 5581cf6
Added some examples of compressing on the CPU and doing batched decompression on the GPU. In my Atari rollout example, compressing on the CPU first, then transferring, and re-using the compressed observation for the next transition roughly doubles the Atari transitions per second.
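The re-use idea above can be illustrated with a minimal CPU-only sketch (using stdlib zlib as a stand-in for whatever codec the examples use; the frame and dict layout are illustrative, not the torchrl API). Each observation is compressed once, and the same compressed blob serves as the "next observation" of step t and the "observation" of step t+1, so no frame is compressed or stored twice:

```python
import zlib

# Hypothetical Atari-like frame: 84x84 grayscale, mostly zeros (highly compressible).
frame = bytes(84 * 84)

blob = zlib.compress(frame)  # compress once, on the CPU

# Re-use the same compressed blob across consecutive transitions:
transition_t = {"obs": None, "next_obs": blob}
transition_t1 = {"obs": blob, "next_obs": None}

assert transition_t["next_obs"] is transition_t1["obs"]  # stored once, referenced twice
assert zlib.decompress(blob) == frame                    # lossless round-trip
print(len(frame), len(blob))
```

In the actual examples, the decompression side would then run batched on the GPU at sampling time; only the small compressed blobs cross the CPU-to-GPU boundary.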
@vmoens I think we're essentially done with this PR, except for a cleanup pass. Do we want CompressedListStorage to be mentioned in the documentation? Maybe add a page on compression to showcase the VRAM storage savings on the GPU?
Happy with this! Mainly nits and minor aesthetic comments on the examples and docs, but otherwise good to go!
Review threads (outdated, all resolved):
- examples/replay-buffers/compressed_cpu_decompressed_gpu_replay_buffer.py (2 threads)
- examples/replay-buffers/compressed_gpu_decompressed_gpu_replay_buffer.py (3 threads)
Thanks for the code review!
Looks good on my end.
… cursor logic to a view class. Passing all tests now.
…o_bytestream speed test.
Force-pushed from cf16102 to 783e3ee
@AdrianOrenstein I merged it, but because I rebased on main I had to force-push, and changes you just made might have been lost. It looks like some of your changes were not incorporated (e.g. I cannot see the GPU examples anymore).
Description
Replay buffers store large amounts of data and feed neural networks with batched samples to learn from, so ideally this data should live as close as possible to where the network is being updated. These buffers often hold raw sensory observations such as images, audio, or text, which consume many gigabytes of precious memory. CPU memory and accelerator VRAM may be limited, and memory transfer between these devices can be costly. This PR therefore aims to streamline data compression to enable efficient storage and memory transfer.
In particular, a compressed storage object will help train state-of-the-art RL methods on benchmarks such as the Arcade Learning Environment. The ~torchrl.data.replay_buffers.storages.CompressedStorage class provides the memory savings through compression.

Closes #3058
Closes #2983
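The core idea of such a storage can be sketched in a few lines. This is an illustrative toy, not the torchrl implementation: the class name, method names, and the choice of stdlib zlib as the codec are all assumptions here; the real CompressedStorage accepts pluggable compression/decompression functions and works with tensors.

```python
import zlib

class ToyCompressedListStorage:
    """Illustrative sketch only: items are compressed on write and
    decompressed on read, trading CPU time for memory footprint."""

    def __init__(self, compress=zlib.compress, decompress=zlib.decompress):
        self._compress = compress
        self._decompress = decompress
        self._data = []  # holds compressed blobs, not raw items

    def add(self, item: bytes) -> int:
        self._data.append(self._compress(item))
        return len(self._data) - 1

    def get(self, index: int) -> bytes:
        return self._decompress(self._data[index])

storage = ToyCompressedListStorage()
obs = bytes(84 * 84)          # dummy, highly compressible observation
idx = storage.add(obs)
assert storage.get(idx) == obs              # lossless round-trip
assert len(storage._data[idx]) < len(obs)   # stored form is smaller
```

The memory savings depend entirely on how compressible the stored data is; near-random data may not shrink at all.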
Types of changes
What types of changes does your code introduce? Remove all that do not apply:
Checklist
Go over all the following points, and put an x in all the boxes that apply. If you are unsure about any of these, don't hesitate to ask. We are here to help!