
Does api_v2.py support mixed Chinese-English inference? #2289


Open

asd351012 opened this issue Apr 14, 2025 · 2 comments

Comments

@asd351012

Does api_v2.py support mixed Chinese-English inference, using the bundled pretrained models?

@asd351012
Author

Running api_v2 with:
http://200.100.100.11:9880/tts?text=你好 hello&text_lang=zh&ref_audio_path=refaudio_zh.wav&prompt_lang=zh&prompt_text=根据您的体检报告,建议您采取一种均衡的膳食营养方案,该方案包含以下几个方面&text_split_method=cut5&batch_size=1&media_type=wav&streaming_mode=true
the output is: TypeError: unsupported operand type(s) for +: 'ZipFilePathPointer' and 'str'
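
For context, here is the same request expressed as a short Python sketch (a minimal illustration only, assuming the api_v2.py server is listening at 200.100.100.11:9880 as in the URL above; the output filename is arbitrary):

```python
# Sketch of the failing request using the `requests` library.
# All query parameters are taken verbatim from the URL above.
import requests

params = {
    "text": "你好 hello",  # mixed Chinese-English input
    "text_lang": "zh",
    "ref_audio_path": "refaudio_zh.wav",
    "prompt_lang": "zh",
    "prompt_text": "根据您的体检报告,建议您采取一种均衡的膳食营养方案,该方案包含以下几个方面",
    "text_split_method": "cut5",
    "batch_size": 1,
    "media_type": "wav",
    "streaming_mode": "true",
}

resp = requests.get("http://200.100.100.11:9880/tts", params=params, stream=True)
resp.raise_for_status()
with open("out.wav", "wb") as f:
    for chunk in resp.iter_content(chunk_size=8192):
        f.write(chunk)
```

Note that even with text_lang=zh, the traceback below shows the text preprocessor routing the English segment of the mixed input through GPT_SoVITS/text/english.py's g2p, so the English path (and its NLTK dependency) is exercised.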

@asd351012
Author

Traceback (most recent call last):
  File "/opt/anaconda3/envs/isdh2/lib/python3.10/site-packages/starlette/responses.py", line 253, in wrap
    await func()
  File "/opt/anaconda3/envs/isdh2/lib/python3.10/site-packages/starlette/responses.py", line 242, in stream_response
    async for chunk in self.body_iterator:
  File "/opt/anaconda3/envs/isdh2/lib/python3.10/site-packages/starlette/concurrency.py", line 62, in iterate_in_threadpool
    yield await anyio.to_thread.run_sync(_next, as_iterator)
  File "/opt/anaconda3/envs/isdh2/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/opt/anaconda3/envs/isdh2/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread
    return await future
  File "/opt/anaconda3/envs/isdh2/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 962, in run
    result = context.run(func, *args)
  File "/opt/anaconda3/envs/isdh2/lib/python3.10/site-packages/starlette/concurrency.py", line 51, in _next
    return next(iterator)
  File "/opt/anaconda3/envs/isdh2/GPT-SoVITS/api_v2.py", line 352, in streaming_generator
    for sr, chunk in tts_generator:
  File "/opt/anaconda3/envs/isdh2/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 57, in generator_context
    response = gen.send(request)
  File "/opt/anaconda3/envs/isdh2/GPT-SoVITS/GPT_SoVITS/TTS_infer_pack/TTS.py", line 1207, in run
    raise e
  File "/opt/anaconda3/envs/isdh2/GPT-SoVITS/GPT_SoVITS/TTS_infer_pack/TTS.py", line 1051, in run
    item = make_batch(item)
  File "/opt/anaconda3/envs/isdh2/GPT-SoVITS/GPT_SoVITS/TTS_infer_pack/TTS.py", line 1016, in make_batch
    phones, bert_features, norm_text = self.text_preprocessor.segment_and_extract_feature_for_text(
  File "/opt/anaconda3/envs/isdh2/GPT-SoVITS/GPT_SoVITS/TTS_infer_pack/TextPreprocessor.py", line 120, in segment_and_extract_feature_for_text
    return self.get_phones_and_bert(text, language, version)
  File "/opt/anaconda3/envs/isdh2/GPT-SoVITS/GPT_SoVITS/TTS_infer_pack/TextPreprocessor.py", line 175, in get_phones_and_bert
    phones, word2ph, norm_text = self.clean_text_inf(textlist[i], lang, version)
  File "/opt/anaconda3/envs/isdh2/GPT-SoVITS/GPT_SoVITS/TTS_infer_pack/TextPreprocessor.py", line 206, in clean_text_inf
    phones, word2ph, norm_text = clean_text(text, language, version)
  File "/opt/anaconda3/envs/isdh2/GPT-SoVITS/GPT_SoVITS/text/cleaner.py", line 47, in clean_text
    phones = language_module.g2p(norm_text)
  File "/opt/anaconda3/envs/isdh2/GPT-SoVITS/GPT_SoVITS/text/english.py", line 365, in g2p
    phone_list = _g2p(text)
  File "/opt/anaconda3/envs/isdh2/GPT-SoVITS/GPT_SoVITS/text/english.py", line 273, in __call__
    tokens = pos_tag(words)  # tuples of (word, tag)
  File "/opt/anaconda3/envs/isdh2/lib/python3.10/site-packages/nltk/tag/__init__.py", line 168, in pos_tag
    tagger = _get_tagger(lang)
  File "/opt/anaconda3/envs/isdh2/lib/python3.10/site-packages/nltk/tag/__init__.py", line 110, in _get_tagger
    tagger = PerceptronTagger()
  File "/opt/anaconda3/envs/isdh2/lib/python3.10/site-packages/nltk/tag/perceptron.py", line 183, in __init__
    self.load_from_json(lang)
  File "/opt/anaconda3/envs/isdh2/lib/python3.10/site-packages/nltk/tag/perceptron.py", line 274, in load_from_json
    with open(loc + TAGGER_JSONS[lang]["weights"]) as fin:
TypeError: unsupported operand type(s) for +: 'ZipFilePathPointer' and 'str'
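
The `+` failure on the last line indicates that NLTK resolved the tagger data to a ZipFilePathPointer (the resource is only available as a zip archive), which PerceptronTagger.load_from_json then tries to concatenate with a string. A minimal workaround sketch, assuming a recent NLTK whose pos_tag loads the JSON-based averaged_perceptron_tagger_eng resource, as the load_from_json frame in the traceback suggests:

```python
# Hedged workaround sketch: ensure the JSON tagger data expected by
# PerceptronTagger.load_from_json is downloaded and extracted, so that
# NLTK resolves it to a plain filesystem path rather than a
# ZipFilePathPointer (which does not support `loc + str`).
import nltk
from nltk.data import find

# Downloads taggers/averaged_perceptron_tagger_eng into the nltk_data
# directory; nltk.download() normally extracts the archive as well.
nltk.download("averaged_perceptron_tagger_eng")

# Sanity check: this should print a filesystem path, not a zip pointer.
# If it still points into a zip, unzip averaged_perceptron_tagger_eng.zip
# manually under nltk_data/taggers/ and retry the /tts request.
print(find("taggers/averaged_perceptron_tagger_eng/"))
```

Once the resource resolves to a real directory, re-running the /tts request above should get past pos_tag; if the environment only ships the older averaged_perceptron_tagger.zip, matching the installed NLTK version to the available data appears to be the other commonly reported fix.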
