Releases: rhasspy/wyoming-faster-whisper

v3.0.1

31 Oct 15:02

  • Fix model auto selection logic

v3.0.0

30 Oct 21:44

  • Add support for sherpa-onnx and NVIDIA's Parakeet model
  • Add support for GigaAM for Russian via onnx-asr
  • Add --stt-library to select the speech-to-text library (deprecates --use-transformers)
  • Default --model to "auto", which prefers Parakeet (see the sketch after this list)
  • Add a Docker build to this repository
  • Default --language to "auto"
  • Add --cpu-threads for faster-whisper (@Zerwin)
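
A minimal launch sketch showing the 3.0.0 options together, for orientation only. Only --stt-library, --model, --language, and --cpu-threads appear in the notes above; the python -m wyoming_faster_whisper entry point, the --uri option, and the "faster-whisper" value for --stt-library are assumptions.

    # Sketch: launching the server with the 3.0.0 options via subprocess.
    # The entry point, --uri, and the --stt-library value are assumptions.
    import subprocess

    subprocess.run(
        [
            "python3", "-m", "wyoming_faster_whisper",  # assumed entry point
            "--uri", "tcp://0.0.0.0:10300",             # assumed Wyoming server URI
            "--stt-library", "faster-whisper",          # assumed value; flag replaces --use-transformers
            "--model", "auto",                          # new default; prefers Parakeet
            "--language", "auto",                       # new default
            "--cpu-threads", "4",                       # faster-whisper only
        ],
        check=True,
    )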

v2.5.0

16 Jun 14:01

  • Add support for HuggingFace transformers Whisper models (--use-transformers)

v2.4.0

10 Dec 21:59

  • Add "auto" for model and beam size (0) to select values based on CPU

v2.3.0

03 Dec 17:43

  • Bump faster-whisper package to 1.1.0
  • Adds support for the turbo model for faster processing

v2.2.0

11 Oct 16:37

  • Bump faster-whisper package to 1.0.3

v2.0.0

10 Mar 20:05

  • Use faster-whisper PyPI package
  • --model can now be a HuggingFace model ID like Systran/faster-distil-whisper-small.en as well as one of:
    • tiny-int8
    • tiny.en
    • tiny
    • base-int8
    • base.en
    • base
    • small-int8
    • small.en
    • small
    • medium-int8
    • medium.en
    • medium
    • large-v1
    • large-v2
    • large-v3
    • large
    • distil-large-v2
    • distil-medium.en
    • distil-small.en
  • --model may also be a path to a custom model directory (see the sketch below)
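
A sketch of passing one of the 2.0.0 model identifiers; the HuggingFace ID comes from the list above, while the entry point, --uri, and --data-dir options are assumptions.

    # Sketch: selecting a model by HuggingFace ID (or a local directory).
    import subprocess

    subprocess.run(
        [
            "python3", "-m", "wyoming_faster_whisper",           # assumed entry point
            "--uri", "tcp://0.0.0.0:10300",                      # assumed Wyoming server URI
            "--data-dir", "/data",                               # assumed download/cache directory
            "--model", "Systran/faster-distil-whisper-small.en", # HuggingFace ID from the notes
            # "--model", "/path/to/model-dir",                   # or a custom model directory
        ],
        check=True,
    )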