CUDA out of memory. Tried to allocate 1.71 GiB (GPU 0; 11.00 GiB total capacity; 8.78 GiB already allocated; 0 bytes free; 9.92 GiB reserved in total by PyTorch) #548
Unanswered
a-cold-bird asked this question in Q&A
Replies: 2 comments 2 replies
-
Oddly, I just found that other songs convert fine, but the song that blows up VRAM is only 2 minutes long. Songs of 3-4 minutes used to run inference without problems, and I wasn't using slicing either. What factors can make inference take longer or use more VRAM?
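One factor that can make a shorter song use more VRAM: self-attention memory grows with the square of the number of feature frames, not with wall-clock duration alone, so anything that produces a denser frame sequence inflates memory fast. A rough back-of-the-envelope sketch (the sample rate, hop size, and head count below are illustrative assumptions, not values read from this repository):

```python
# Rough estimate of the memory held by ONE attention-score tensor of shape
# [heads, T, T] for a given segment length. All parameters are assumptions
# for illustration; the real model has many layers and extra buffers on top.
def attn_score_bytes(seconds, sr=44100, hop=512, heads=2, bytes_per_el=4):
    frames = int(seconds * sr / hop)          # feature frames after framing
    return heads * frames * frames * bytes_per_el

for seconds in (60, 120, 240):
    print(f"{seconds:>4}s -> {attn_score_bytes(seconds) / 2**30:.2f} GiB")
```

Because the cost is quadratic in frames, doubling the segment length roughly quadruples each score tensor, which is why a single long unsliced segment can exhaust 11 GB even when shorter runs looked harmless.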
2 replies
-
Since it's a VRAM overflow, just cut the song into smaller chunks. Running out of VRAM also depends on the song's content, not just its length. Also, are you using a bundled all-in-one package? If so, it can have some odd bugs.
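The chunking suggested above can be sketched as follows. This is a minimal illustration, assuming the waveform is a 1-D NumPy array; `chunk_audio` is a hypothetical helper, not the project's own slicer (which cuts at silences to avoid audible seams), and the commented inference call only stands in for the real one:

```python
import numpy as np

def chunk_audio(audio, sr, chunk_seconds=30):
    """Split a 1-D waveform into fixed-length chunks so each inference call
    sees a shorter sequence (attention memory grows quadratically with
    length). The last chunk keeps whatever samples remain."""
    step = chunk_seconds * sr
    return [audio[i:i + step] for i in range(0, len(audio), step)]

# Usage sketch: run the model on each chunk, then concatenate the outputs.
# `run_inference` is a placeholder for the project's actual inference call.
# out = np.concatenate([run_inference(chunk) for chunk in chunk_audio(wav, 44100)])
```

Cutting at fixed offsets can land mid-phoneme; a production slicer would snap the boundaries to nearby silence, but the memory benefit is the same.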
0 replies
-
This problem occurred in so-vits (the project by svc-develop-svc). Since I can't file an issue in that repository, I'm asking here; sorry to bother everyone.
load
INFO:root:Loaded checkpoint 'D:\my_models\vits4_models\renge\renge_28800.pth' (iteration 375)
#=====segment start, 123.54s======
Traceback (most recent call last):
File "inference_main.py", line 104, in <module>
main()
File "inference_main.py", line 88, in main
out_audio, out_sr = svc_model.infer(spk, tran, raw_path,
File "D:\BaiduNetdiskDownload\So-VITS-SVC\sovits\so-vits-svc-4.0\inference\infer_tool.py", line 177, in infer
audio = self.net_g_ms.infer(c, f0=f0, g=sid, uv=uv, predict_f0=auto_predict_f0, noice_scale=noice_scale)[0,0].data.float()
File "D:\BaiduNetdiskDownload\So-VITS-SVC\sovits\so-vits-svc-4.0\models.py", line 417, in infer
z_p, m_p, logs_p, c_mask = self.enc_p(x, x_mask, f0=f0_to_coarse(f0), noice_scale=noice_scale)
File "D:\so-vits-svc\Dependencies\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\BaiduNetdiskDownload\So-VITS-SVC\sovits\so-vits-svc-4.0\models.py", line 114, in forward
x = self.enc(x * x_mask, x_mask)
File "D:\so-vits-svc\Dependencies\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\BaiduNetdiskDownload\So-VITS-SVC\sovits\so-vits-svc-4.0\modules\attentions.py", line 85, in forward
y = self.attn_layers[i](x, x, attn_mask)
File "D:\so-vits-svc\Dependencies\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\BaiduNetdiskDownload\So-VITS-SVC\sovits\so-vits-svc-4.0\modules\attentions.py", line 189, in forward
x, self.attn = self.attention(q, k, v, mask=attn_mask)
File "D:\BaiduNetdiskDownload\So-VITS-SVC\sovits\so-vits-svc-4.0\modules\attentions.py", line 221, in attention
relative_weights = self._absolute_position_to_relative_position(p_attn)
File "D:\BaiduNetdiskDownload\So-VITS-SVC\sovits\so-vits-svc-4.0\modules\attentions.py", line 287, in _absolute_position_to_relative_position
x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.71 GiB (GPU 0; 11.00 GiB total capacity; 8.78 GiB already allocated; 0 bytes free; 9.92 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
The program used to run fine. The 11 GB of VRAM is used for inference only, and the F0 filter is not enabled, yet VRAM still overflows and I don't know why. I have already tried the latest version of so-vits as well as an older version without the F0 filter, and swapped models too; every combination errors out shortly after "segment start, 123.54s" is printed. According to Task Manager, VRAM usage starts at a normal ~3 GB, then fills up completely after a while, and then CUDA out of memory is thrown. How should I solve this? Following other error-fix suggestions, I tried importing os in inference.py to limit the memory used, but I still get this error.
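For reference, the allocator hint the error message itself suggests can be set from inside the script; it only takes effect if set before the first CUDA allocation. A minimal sketch, assuming a max split size of 128 MB (the value is an assumption worth tuning, and the commented lines only stand in for the project's real inference call):

```python
import os

# Must be set before torch makes its first CUDA allocation, so do it before
# importing torch (or at least before any tensor reaches the GPU).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch
# with torch.no_grad():                         # inference needs no autograd buffers
#     out_audio, out_sr = svc_model.infer(...)  # the project's call, for context
# torch.cuda.empty_cache()                      # release cached blocks between segments
```

Wrapping inference in `torch.no_grad()` and calling `torch.cuda.empty_cache()` between segments will not raise the hard 11 GB ceiling, but they avoid holding autograd buffers and let fragmented cache blocks be returned between segments.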
OS: Windows 10
Python 3.8
GPU: RTX 2080 Ti (11 GB)
Many thanks.