When using llamafactory-cli for training inside a container, the GPU memory usage shown by nvidia-smi inside the container is inconsistent with that on the host machine, and the GPU memory usage cannot be controlled. #1322
Please provide an in-depth description of the question you have:
When using llamafactory-cli for training inside a container, the GPU memory usage shown by nvidia-smi inside the container is inconsistent with what the host machine reports, and I cannot control the GPU memory usage.
In addition, the training process's PID does not show up inside the container. Could you tell me why this is happening?
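Roughly how I observe this (the container image and training config are placeholders, not my exact setup):

```bash
# On the host: list GPU compute processes and their memory usage
nvidia-smi --query-compute-apps=pid,used_memory --format=csv

# Start a container with GPU access (image name is a placeholder)
docker run --gpus all -it --rm my-llamafactory-image bash

# Inside the container: launch training (config path is a placeholder)
llamafactory-cli train my_train_config.yaml

# In a second shell inside the same container, while training runs:
nvidia-smi --query-compute-apps=pid,used_memory --format=csv
# -> the memory usage differs from what the host reports, and no training PID is listed
```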