Releasing HuggingFaceBgeEmbeddings memory can't work #10668
🤖 Hi KP,

In the LangChain framework, you can ask PyTorch to release its cached GPU memory like this:

```python
import torch

# Your code to create embeddings
# ...

# Release GPU memory
torch.cuda.empty_cache()
```

Please note that this only clears the PyTorch GPU cache; the actual GPU memory management is handled by CUDA. If you want to completely free up the GPU memory, you may need to end the Python process using the GPU.

I hope this helps! Let me know if you have any other questions.

Best,
Sources

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. If you want to continue the conversation, start your reply with @dosu-bot.
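To expand on the bot's answer: `torch.cuda.empty_cache()` only returns *cached but unreferenced* blocks to the driver, so the model itself must be dereferenced and garbage-collected first. Below is a minimal sketch of that sequence. It assumes (as in current LangChain versions, but worth verifying against your install) that `HuggingFaceBgeEmbeddings` stores its underlying sentence-transformers model on a `.client` attribute; the helper name `release_embeddings` is my own, not a LangChain API.

```python
import gc

import torch


def release_embeddings(embed) -> None:
    """Best-effort release of GPU memory held by an embeddings wrapper.

    Assumption: the wrapper keeps its sentence-transformers model on
    the `.client` attribute (true of HuggingFaceBgeEmbeddings today).
    """
    client = getattr(embed, "client", None)
    if client is not None and hasattr(client, "to"):
        # Move the weights off the GPU so their CUDA tensors are freed,
        # not merely orphaned inside the allocator.
        client.to("cpu")
    # Drop the wrapper's reference to the model.
    embed.client = None
    del client
    # Collect cyclic garbage so PyTorch's allocator sees the tensors as free.
    gc.collect()
    # Hand cached-but-unused blocks back to the CUDA driver.
    # This is a safe no-op on CPU-only machines.
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```

After calling `release_embeddings(embed)`, also `del embed` (and any other references, e.g. a vector store that holds the embedding object) so the wrapper itself can be collected. Even then, `nvidia-smi` will still show the CUDA context's baseline footprint, which only disappears when the Python process exits.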
Hi,

I have instantiated

```python
embed = HuggingFaceBgeEmbeddings(
    model_name=model_path,
    model_kwargs=model_kwargs,
    encode_kwargs=encode_kwargs,
)
```

After creating the embeddings, I can't release the GPU VRAM simply by

```python
del embed
```

There is a small drop, but usage stays well above what it was before instantiation.

Does anyone have any idea how to do this?
-KP