Description
Hello, with the latest installation, training XTTS fails because of the default faster-whisper flag:

--fp16 FP16    whether to perform inference in fp16 (default: True)

or, in whisperx terms, the equivalent of:

whisperx --compute_type int8

Where can I adjust this parameter, and in which file? I tried editing line 580 of "C:\Users\Jacek\Downloads\Pandrator\conda\envs\whisperx_installer\lib\site-packages\faster_whisper\transcribe.py" to:

compute_type: str = "int8",

but it still doesn't help.
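For reference, this is roughly the fallback logic I would expect somewhere at the call site instead of in site-packages. A minimal sketch, assuming a hypothetical helper and capability flag (neither is part of whisperx; whisperx itself receives the value through its --compute_type option):

```python
# Hedged sketch: pick a CTranslate2 compute type the target device can handle.
# `pick_compute_type` and `supports_fp16` are illustrative assumptions, not
# whisperx API; the real check happens inside ctranslate2 when the model loads.

def pick_compute_type(supports_fp16: bool, requested: str = "float16") -> str:
    """Fall back to int8 when efficient float16 is unavailable (e.g. on CPU)."""
    if requested == "float16" and not supports_fp16:
        return "int8"
    return requested

print(pick_compute_type(False))  # -> int8 (unsupported hardware falls back)
print(pick_compute_type(True))   # -> float16 (kept when supported)
```

The point of the sketch is that the choice should happen where the model is loaded, so editing the default in faster_whisper/transcribe.py is bypassed by whatever value the caller passes in explicitly.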
2025-01-04 15:51:56,253 [INFO] XTTS Training: cli()
2025-01-04 15:51:56,254 [INFO] XTTS Training: File "C:\Users\Jacek\Downloads\Pandrator\conda\envs\whisperx_installer\lib\site-packages\whisperx\transcribe.py", line 171, in cli
2025-01-04 15:51:56,255 [INFO] XTTS Training: model = load_model(model_name, device=device, device_index=device_index, download_root=model_dir, compute_type=compute_type, language=args['language'], asr_options=asr_options, vad_options={"vad_onset": vad_onset, "vad_offset": vad_offset}, task=task, threads=faster_whisper_threads)
2025-01-04 15:51:56,256 [INFO] XTTS Training: File "C:\Users\Jacek\Downloads\Pandrator\conda\envs\whisperx_installer\lib\site-packages\whisperx\asr.py", line 292, in load_model
2025-01-04 15:51:56,257 [INFO] XTTS Training: model = model or WhisperModel(whisper_arch,
2025-01-04 15:51:56,258 [INFO] XTTS Training: File "C:\Users\Jacek\Downloads\Pandrator\conda\envs\whisperx_installer\lib\site-packages\faster_whisper\transcribe.py", line 634, in init
2025-01-04 15:51:56,260 [INFO] XTTS Training: self.model = ctranslate2.models.Whisper(
2025-01-04 15:51:56,266 [INFO] XTTS Training: ValueError: Requested float16 compute type, but the target device or backend do not support efficient float16 computation.
I know this was mentioned in another closed thread, but the problem comes back after updating; all the other options work! :) Thank you for the fantastic project.