
which pre-trained model is better? #2357

Open
OutisLi opened this issue May 6, 2025 · 1 comment

Comments


OutisLi commented May 6, 2025

There are many downloaded models; which one is better?

```
789M    ./gsv-v4-pretrained
734M    ./s2Gv3.pth
734M    ./gsv-v4-pretrained/s2Gv4.pth
622M    ./chinese-roberta-wwm-ext-large/pytorch_model.bin
622M    ./chinese-roberta-wwm-ext-large
339M    ./gsv-v2final-pretrained
215M    ./models--nvidia--bigvgan_v2_24khz_100band_256x/bigvgan_generator.pt
215M    ./models--nvidia--bigvgan_v2_24khz_100band_256x
181M    ./chinese-hubert-base/pytorch_model.bin
181M    ./chinese-hubert-base
149M    ./s1v3.ckpt
149M    ./gsv-v2final-pretrained/s1bert25hz-5kh-longer-epoch=12-step=369668.ckpt
148M    ./s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt
126M    ./fast_langdetect/lid.176.bin
126M    ./fast_langdetect
102M    ./s2G488k.pth
102M    ./gsv-v2final-pretrained/s2G2333k.pth
90M     ./s2D488k.pth
90M     ./gsv-v2final-pretrained/s2D2333k.pth
56M     ./gsv-v4-pretrained/vocoder.pth
264K    ./chinese-roberta-wwm-ext-large/tokenizer.json
4.0K    ./models--nvidia--bigvgan_v2_24khz_100band_256x/config.json
4.0K    ./chinese-roberta-wwm-ext-large/config.json
4.0K    ./chinese-hubert-base/preprocessor_config.json
4.0K    ./chinese-hubert-base/config.json
4.0K    ./README.md
4.0K    ./.gitignore
```
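
For reference, here is a minimal Python sketch that produces a per-file size listing like the `du`-style output above, so the models can be compared at a glance. The root path `GPT_SoVITS/pretrained_models` is an assumption and may differ in your checkout:

```python
from pathlib import Path

def human_readable(num_bytes: float) -> str:
    # Rough equivalent of `du -h` formatting (K/M/G, base 1024).
    for unit in ("B", "K", "M", "G", "T"):
        if num_bytes < 1024 or unit == "T":
            return f"{num_bytes:.0f}{unit}"
        num_bytes /= 1024

def list_model_sizes(root: str = "GPT_SoVITS/pretrained_models") -> None:
    # Collect (size, path) for every file under `root`, then print largest first.
    entries = [(p.stat().st_size, p) for p in Path(root).rglob("*") if p.is_file()]
    for size, path in sorted(entries, key=lambda e: e[0], reverse=True):
        print(f"{human_readable(size):>6}  {path}")

if __name__ == "__main__":
    list_model_sizes()
```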

OutisLi commented May 6, 2025

The names are so confusing.
