Over 800 ⭐'s because this app just works! Works great on Windows and Mac. This whisper front-end is the only one that generates a `speaker.json`
file, which partitions the conversation by who is speaking.
Turbo Mac acceleration using the new lightning-whisper-mlx backend. This is a community contribution by https://github.com/aj47. On behalf of all the Mac users, thank you!

- 4x faster than the `mps` whisper backend.
- Supports multiple languages (`mps` only supports English).
- Supports custom vocabulary via `--initial_prompt`.
# Mac accelerated back-end
transcribe-anything https://www.youtube.com/watch?v=dQw4w9WgXcQ --device mlx
Special thanks to https://github.com/aj47 for this Mac acceleration option using the new lightning-whisper-mlx backend. Enable it with `--device mlx`. It supports multiple languages, custom vocabulary via `--initial_prompt`, and both transcribe and translate tasks. 10x faster than Whisper CPP, 4x faster than previous MLX implementations!
Model Storage: MLX models are now stored in ~/.cache/whisper/mlx_models/
for consistency with other backends, instead of cluttering your current working directory.
GPU Accelerated Dockerfile
Recently added in 3.0.10 is a GPU accelerated Dockerfile.
If you are doing translations at scale, check out the sister project: https://github.com/zackees/transcribe-everything.
You can pull the docker image like so:
docker pull niteris/transcribe-anything
Easiest whisper implementation to install and use. Just install with `pip install transcribe-anything`. All whisper backends are executed in an isolated environment. GPU acceleration is automatic, using the blazingly fast insanely-fast-whisper as the backend for `--device insane`. This is the only tool that optionally produces a `speaker.json` file, representing speaker-assigned text that has been de-chunkified.
- Hardware acceleration on Windows/Linux with `--device insane`
- Mac Arm acceleration with `--device mlx` (now with multi-language support and custom vocabulary)
Input a local file or youtube/rumble url and this tool will transcribe it using Whisper AI into subtitle files and raw text.
Uses whisper AI, so this is a state-of-the-art transcription and translation service, completely free. 🤯🤯🤯
Your data stays private and is not uploaded to any service.
The new version now has state-of-the-art transcription speed thanks to the new `--device insane` backend, and it also produces a `speaker.json` file.
pip install transcribe-anything
# Basic usage - CPU mode (works everywhere, slower)
transcribe-anything https://www.youtube.com/watch?v=dQw4w9WgXcQ
# GPU accelerated (Windows/Linux)
transcribe-anything https://www.youtube.com/watch?v=dQw4w9WgXcQ --device insane
# Mac Apple Silicon accelerated
transcribe-anything https://www.youtube.com/watch?v=dQw4w9WgXcQ --device mlx
# Advanced options (see Advanced Options section below for full details)
transcribe-anything video.mp4 --device mlx --batch_size 16 --verbose
transcribe-anything video.mp4 --device insane --batch-size 8 --flash True
Python API
from transcribe_anything import transcribe_anything
transcribe_anything(
url_or_file="https://www.youtube.com/watch?v=dQw4w9WgXcQ",
output_dir="output_dir",
task="transcribe",
model="large",
device="cuda"
)
# Full function signature:
def transcribe(
url_or_file: str,
output_dir: Optional[str] = None,
model: Optional[str] = None, # tiny,small,medium,large
task: Optional[str] = None, # transcribe or translate
language: Optional[str] = None, # auto detected if none, "en" for english...
device: Optional[str] = None, # cuda,cpu,insane,mlx
embed: bool = False, # Produces a video.mp4 with the subtitles burned in.
hugging_face_token: Optional[str] = None, # If you want a speaker.json
other_args: Optional[list[str]] = None, # Other args to be passed to the whisper backend
initial_prompt: Optional[str] = None, # Custom prompt for better recognition of specific terms
) -> str:
This is by far the fastest combination, but it is experimental and tends to produce lower-quality text:
- Higher chance for repeated text patterns.
- Timestamps in the vtt/srt files become unaligned.
It's unclear whether this is due to batching or to `large-v3` itself; more testing is needed. If you try it, please let us know the results by filing a bug on the issues page.
Large batch sizes require significantly more Nvidia GPU RAM. For a 12 GB card, `--batch-size 8` has been experimentally shown to work on all videos from an internally tested data lake.
If you pass in `--device insane` on a CUDA platform, this tool will use the state-of-the-art insanely-fast-whisper: https://github.com/Vaibhavs10/insanely-fast-whisper, which is MUCH faster and has a pipeline for speaker identification (diarization) via the `--hf_token` option.
Compatible with Python 3.10 and above. Backends use an isolated environment with pinned requirements and python version.
When diarization is enabled via `--hf_token` (Hugging Face token), the output json will contain speaker info labeled as `SPEAKER_00`, `SPEAKER_01`, etc. For licensing reasons, you must get your own Hugging Face token to enable this feature. There is also an additional step: agree to the user policies for `pyannote.audio` here: https://huggingface.co/pyannote/segmentation-3.0. If you don't, you'll see runtime exceptions from pyannote when `--hf_token` is used.
What's special about this app is that it also generates a `speaker.json`, which is a de-chunkified version of the speaker section of the output json:
[
{
"speaker": "SPEAKER_00",
"timestamp": [0.0, 7.44],
"text": "for that. But welcome, Zach Vorhees. Great to have you back on. Thank you, Matt. Craving me back onto your show. Man, we got a lot to talk about.",
"reason": "beginning"
},
{
"speaker": "SPEAKER_01",
"timestamp": [7.44, 33.52],
"text": "Oh, we do. 2023 was the year that OpenAI released, you know, chat GPT-4, which I think most people would say has surpassed average human intelligence, at least in test taking, perhaps not in, you know, reasoning and things like that. But it was a major year for AI. I think that most people are behind the curve on this. What's your take of what just happened in the last 12 months and what it means for the future of human cognition versus machine cognition?",
"reason": "speaker-switch"
},
{
"speaker": "SPEAKER_00",
"timestamp": [33.52, 44.08],
"text": "Yeah. Well, you know, at the beginning of 2023, we had a pretty weak AI system, which was a chat GPT 3.5 turbo was the best that we had. And then between the beginning of last",
"reason": "speaker-switch"
}
]
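Since `speaker.json` is plain JSON, it is easy to post-process. Here is a minimal sketch, assuming the file sits in your output directory and has the shape shown above (the path is a placeholder):

```python
import json
from collections import defaultdict

# Group the de-chunkified segments by speaker label (SPEAKER_00, SPEAKER_01, ...).
with open("output_dir/speaker.json", "r", encoding="utf-8") as f:
    segments = json.load(f)

by_speaker = defaultdict(list)
for seg in segments:
    by_speaker[seg["speaker"]].append(seg["text"])

for speaker, texts in by_speaker.items():
    print(speaker, "spoke", len(texts), "segments")
```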
Note that `speaker.json` is only generated when using `--device insane`, not for `--device cuda` or `--device cpu`.
Insane mode eats up a lot of memory, and out-of-memory errors while transcribing are common; for example, a 12 GB Nvidia 3060 frequently runs out of memory on long content. If you experience this, pass in `--batch-size 8` or smaller. Note that any arguments not recognized by `transcribe-anything` are passed on to the backend transcriber.
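The same pass-through works from the Python API via the `other_args` parameter in the signature above. A small sketch, assuming the flags shown (documented later in this README for the insane backend) suit your setup:

```python
from transcribe_anything import transcribe

# Backend-specific flags go through other_args; transcribe-anything forwards
# anything it does not recognize to the selected backend.
transcribe(
    url_or_file="video.mp4",  # placeholder input
    output_dir="output_dir",
    device="insane",
    other_args=["--batch-size", "8", "--flash", "True"],
)
```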
Also, please don't use `distil-whisper/distil-large-v2`; it produces extremely bad stuttering, and it's not entirely clear why. I've had to switch it out of production environments because it's so bad. It's also non-deterministic, so I suspect a fallback non-zero temperature is somehow being used, which produces the stuttering.
`--device cuda` uses the original whisper implementation supplied by OpenAI. It's more stable but MUCH slower, and it won't produce the `speaker.json` file shown above.
`--embed`: this app can optionally embed ("burn") the subtitles directly into an output video.
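A short sketch of the same option from the Python API, using the `embed` parameter from the signature above; per the changelog, embedding currently only works on local mp4 files:

```python
from transcribe_anything import transcribe

# Burn the generated subtitles into a copy of the input video.
transcribe(
    url_or_file="video.mp4",  # embedding is documented as working on local mp4 files
    output_dir="output_dir",
    device="cpu",
    embed=True,
)
```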
This front-end app for whisper boasts the easiest install in the whisper ecosystem thanks to environment isolation. You can simply install it with pip, like this:
pip install transcribe-anything
We have a Dockerfile that is decently fast at startup. It is tuned specifically for `--device insane`. If you have extremely large batches of data you'd like to convert all at once, consider the sister project transcribe-everything, which operates on entire remote path hierarchies.
GPU acceleration is automatically enabled on Windows and Linux. Mac users can use `--device mlx` for hardware acceleration on Apple Silicon. `--device insane` may also work on Mac M1+ but has been less tested.
Windows/Linux: use `--device insane`.
Mac: use `--device mlx` (see the sketch below for picking a device programmatically).
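If you script across platforms, here is a hedged helper that mirrors these recommendations; it is only a sketch and does not verify that an NVIDIA GPU is actually present, which `--device insane` requires.

```python
import platform

def pick_device() -> str:
    """Pick a transcribe-anything device flag based on the host platform."""
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        return "mlx"  # Apple Silicon acceleration
    return "insane"   # Windows/Linux with an NVIDIA GPU; fall back to "cpu" otherwise
```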
| Backend | Device Flag | Key Arguments | Best For |
|---|---|---|---|
| MLX | `--device mlx` | `--batch_size`, `--verbose`, `--initial_prompt` | Mac Apple Silicon |
| Insanely Fast | `--device insane` | `--batch-size`, `--hf_token`, `--flash`, `--timestamp` | Windows/Linux GPU |
| CPU | `--device cpu` | Standard whisper args | Universal compatibility |
Note: Each backend has different capabilities. MLX is optimized for Apple Silicon with a focused feature set. Insanely Fast uses a transformer-based architecture with specific options. CPU backend supports the full range of standard OpenAI Whisper arguments.
Whisper supports custom prompts to improve transcription accuracy for domain-specific vocabulary, names, or technical terms. This is especially useful when transcribing content with:
- Technical terminology (AI, machine learning, programming terms)
- Proper names (people, companies, products)
- Medical or scientific terms
- Industry-specific jargon
# Direct prompt
transcribe-anything video.mp4 --initial_prompt "The speaker discusses artificial intelligence, machine learning, and neural networks."
# Load prompt from file
transcribe-anything video.mp4 --prompt_file my_custom_prompt.txt
from transcribe_anything import transcribe
# Direct prompt
transcribe(
url_or_file="video.mp4",
initial_prompt="The speaker discusses AI, PyTorch, TensorFlow, and deep learning algorithms."
)
# Load prompt from file
with open("my_prompt.txt", "r") as f:
prompt = f.read()
transcribe(
url_or_file="video.mp4",
initial_prompt=prompt
)
- Keep prompts concise but comprehensive for your domain
- Include variations of terms (e.g., "AI", "artificial intelligence")
- Focus on terms that Whisper commonly misrecognizes
- Test with and without prompts to measure improvement
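Following the tips above, here is one way to assemble a prompt from a vocabulary list and pass it through the documented `initial_prompt` parameter; the file name and term list are placeholders:

```python
from transcribe_anything import transcribe

# Build an initial_prompt from a domain vocabulary list: concise, includes
# variations of terms, and focuses on words Whisper commonly misrecognizes.
vocabulary = [
    "AI", "artificial intelligence", "machine learning",
    "neural networks", "PyTorch", "TensorFlow",
]
prompt = "The speaker discusses " + ", ".join(vocabulary) + "."

transcribe(
    url_or_file="lecture.mp4",  # hypothetical input file
    initial_prompt=prompt,
)
```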
The MLX backend supports additional arguments for fine-tuning performance:
# Adjust batch size for better performance/memory trade-off
transcribe-anything video.mp4 --device mlx --batch_size 24
# Enable verbose output for debugging
transcribe-anything video.mp4 --device mlx --verbose
# Use custom prompt for better recognition of specific terms
transcribe-anything video.mp4 --device mlx --initial_prompt "The speaker discusses AI, machine learning, and neural networks."
| Argument | Type | Default | Description |
|---|---|---|---|
| `--batch_size` | int | 12 | Batch size for processing. Higher values use more memory but may be faster |
| `--verbose` | flag | false | Enable verbose output for debugging |
| `--initial_prompt` | string | None | Custom vocabulary/context prompt for better recognition |
The MLX backend supports these whisper models optimized for Apple Silicon:
- Standard models: `tiny`, `small`, `base`, `medium`, `large`, `large-v2`, `large-v3`
- Distilled models: `distil-small.en`, `distil-medium.en`, `distil-large-v2`, `distil-large-v3`
Note: The MLX backend uses the lightning-whisper-mlx library, which has a focused feature set optimized for Apple Silicon. Advanced whisper options like `--temperature` and `--word_timestamps` are not currently supported by this backend.
The insanely-fast-whisper backend supports these specific options:
# Adjust batch size (critical for GPU memory management)
transcribe-anything video.mp4 --device insane --batch-size 8
# Use different model variants
transcribe-anything video.mp4 --device insane --model large-v3
# Enable Flash Attention 2 for faster processing
transcribe-anything video.mp4 --device insane --flash True
# Enable speaker diarization with HuggingFace token
transcribe-anything video.mp4 --device insane --hf_token your_token_here
# Specify exact number of speakers
transcribe-anything video.mp4 --device insane --hf_token your_token --num-speakers 3
# Set speaker range
transcribe-anything video.mp4 --device insane --hf_token your_token --min-speakers 2 --max-speakers 5
# Choose timestamp granularity
transcribe-anything video.mp4 --device insane --timestamp chunk # default
transcribe-anything video.mp4 --device insane --timestamp word # word-level
| Argument | Type | Default | Description |
|---|---|---|---|
| `--batch-size` | int | 24 | Batch size for processing. Critical for GPU memory management |
| `--flash` | bool | false | Use Flash Attention 2 for faster processing |
| `--timestamp` | choice | chunk | Timestamp granularity: "chunk" or "word" |
| `--hf_token` | string | None | HuggingFace token for speaker diarization |
| `--num-speakers` | int | None | Exact number of speakers (cannot use with min/max) |
| `--min-speakers` | int | None | Minimum number of speakers |
| `--max-speakers` | int | None | Maximum number of speakers |
| `--diarization_model` | string | pyannote/speaker-diarization | Diarization model to use |
Note: The insanely-fast-whisper backend uses a different architecture than standard OpenAI Whisper. It does NOT support standard whisper arguments like `--temperature`, `--beam_size`, `--best_of`, etc.; those are specific to the OpenAI implementation.
The CPU backend uses the standard OpenAI Whisper implementation and supports many additional arguments:
# Language and task options (also available as main arguments)
transcribe-anything video.mp4 --device cpu --language es --task translate
# Generation parameters
transcribe-anything video.mp4 --device cpu --temperature 0.1 --best_of 5 --beam_size 5
# Quality thresholds
transcribe-anything video.mp4 --device cpu --compression_ratio_threshold 2.4 --logprob_threshold -1.0
# Output formatting
transcribe-anything video.mp4 --device cpu --word_timestamps --highlight_words True
# Audio processing
transcribe-anything video.mp4 --device cpu --threads 4 --clip_timestamps "0,30"
Note: The CPU backend supports most standard OpenAI Whisper arguments. These are passed through automatically and documented in the OpenAI Whisper repository.
MLX Backend (`--device mlx`):
- Default: 12
- Recommended range: 8-24
- Higher values for more VRAM, lower for less

Insanely Fast Whisper (`--device insane`):
- Default: 24
- Recommended for 8 GB GPU: 4-8
- Recommended for 12 GB GPU: 8-12
- Recommended for 24 GB GPU: 16-24
- Use `--flash True` for better memory efficiency
- Start low and increase if you see no OOM errors (a retry sketch follows this list)
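If you prefer to automate the back-off instead of tuning by hand, here is a sketch; it assumes `transcribe()` raises an exception when the backend process fails (for example on an OOM), which you should verify against your version.

```python
from transcribe_anything import transcribe

# Try progressively smaller batch sizes until one fits in GPU memory.
for batch_size in (16, 8, 4):
    try:
        transcribe(
            url_or_file="video.mp4",
            device="insane",
            other_args=["--batch-size", str(batch_size), "--flash", "True"],
        )
        break
    except Exception as exc:  # broad on purpose; OOMs surface as backend errors
        print(f"batch size {batch_size} failed ({exc}); retrying with a smaller one")
```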
# Basic transcription
transcribe-anything https://www.youtube.com/watch?v=dQw4w9WgXcQ
# Local file
transcribe-anything video.mp4
# Basic MLX usage
transcribe-anything video.mp4 --device mlx
# MLX with custom batch size and verbose output
transcribe-anything video.mp4 --device mlx --batch_size 16 --verbose
# MLX with custom prompt for technical content
transcribe-anything lecture.mp4 --device mlx --initial_prompt "The speaker discusses machine learning, neural networks, PyTorch, and TensorFlow."
# MLX with multiple options (using main arguments for language/task)
transcribe-anything video.mp4 --device mlx --batch_size 20 --verbose --task translate --language es
# Basic insane mode
transcribe-anything video.mp4 --device insane
# Insane mode with custom batch size (important for GPU memory)
transcribe-anything video.mp4 --device insane --batch-size 8
# Insane mode with Flash Attention 2 for speed
transcribe-anything video.mp4 --device insane --batch-size 12 --flash True
# Insane mode with speaker diarization
transcribe-anything video.mp4 --device insane --hf_token your_huggingface_token
# Insane mode with word-level timestamps and speaker diarization
transcribe-anything video.mp4 --device insane --timestamp word --hf_token your_token --num-speakers 3
# High-performance setup with all optimizations
transcribe-anything video.mp4 --device insane --batch-size 16 --flash True --timestamp word
# CPU mode (works everywhere, slower)
transcribe-anything video.mp4 --device cpu
# CPU with custom model and language
transcribe-anything video.mp4 --device cpu --model medium --language fr --task transcribe
If you encounter GPU out-of-memory errors:
# Reduce batch size for MLX
transcribe-anything video.mp4 --device mlx --batch_size 8
# Reduce batch size for insane mode
transcribe-anything video.mp4 --device insane --batch-size 4
# Use smaller model
transcribe-anything video.mp4 --device insane --model small --batch-size 8
For better quality:
# Use larger model
transcribe-anything video.mp4 --device insane --model large-v3
# Enable Flash Attention 2 for better performance
transcribe-anything video.mp4 --device insane --flash True
# Use custom prompt for domain-specific content (works with all backends)
transcribe-anything video.mp4 --initial_prompt "Medical terminology: diagnosis, treatment, symptoms, patient care"
# For CPU backend, you can use standard whisper quality options
transcribe-anything video.mp4 --device cpu --compression_ratio_threshold 2.0 --logprob_threshold -0.5
For faster processing:
# Increase batch size (if you have enough GPU memory)
transcribe-anything video.mp4 --device mlx --batch_size 24
transcribe-anything video.mp4 --device insane --batch-size 16
# Enable Flash Attention 2 for insane mode (significant speedup)
transcribe-anything video.mp4 --device insane --flash True --batch-size 16
# Use smaller model for speed
transcribe-anything video.mp4 --device insane --model small
# Use distilled models for even faster processing
transcribe-anything video.mp4 --device insane --model distil-whisper/large-v2 --flash True
Will output:
Detecting language using up to the first 30 seconds. Use `--language` to specify the language
Detected language: English
[00:00.000 --> 00:27.000] We're no strangers to love, you know the rules, and so do I
[00:27.000 --> 00:31.000] I've built commitments while I'm thinking of
[00:31.000 --> 00:35.000] You wouldn't get this from any other guy
[00:35.000 --> 00:40.000] I just wanna tell you how I'm feeling
[00:40.000 --> 00:43.000] Gotta make you understand
[00:43.000 --> 00:45.000] Never gonna give you up
[00:45.000 --> 00:47.000] Never gonna let you down
[00:47.000 --> 00:51.000] Never gonna run around and desert you
[00:51.000 --> 00:53.000] Never gonna make you cry
[00:53.000 --> 00:55.000] Never gonna say goodbye
[00:55.000 --> 00:58.000] Never gonna tell a lie
[00:58.000 --> 01:00.000] And hurt you
[01:00.000 --> 01:04.000] We've known each other for so long
[01:04.000 --> 01:09.000] Your heart's been aching but you're too shy to say it
[01:09.000 --> 01:13.000] Inside we both know what's been going on
[01:13.000 --> 01:17.000] We know the game and we're gonna play it
[01:17.000 --> 01:22.000] And if you ask me how I'm feeling
[01:22.000 --> 01:25.000] Don't tell me you're too much to see
[01:25.000 --> 01:27.000] Never gonna give you up
[01:27.000 --> 01:29.000] Never gonna let you down
[01:29.000 --> 01:33.000] Never gonna run around and desert you
[01:33.000 --> 01:35.000] Never gonna make you cry
[01:35.000 --> 01:38.000] Never gonna say goodbye
[01:38.000 --> 01:40.000] Never gonna tell a lie
[01:40.000 --> 01:42.000] And hurt you
[01:42.000 --> 01:44.000] Never gonna give you up
[01:44.000 --> 01:46.000] Never gonna let you down
[01:46.000 --> 01:50.000] Never gonna run around and desert you
[01:50.000 --> 01:52.000] Never gonna make you cry
[01:52.000 --> 01:54.000] Never gonna say goodbye
[01:54.000 --> 01:57.000] Never gonna tell a lie
[01:57.000 --> 01:59.000] And hurt you
[02:08.000 --> 02:10.000] Never gonna give
[02:12.000 --> 02:14.000] Never gonna give
[02:16.000 --> 02:19.000] We've known each other for so long
[02:19.000 --> 02:24.000] Your heart's been aching but you're too shy to say it
[02:24.000 --> 02:28.000] Inside we both know what's been going on
[02:28.000 --> 02:32.000] We know the game and we're gonna play it
[02:32.000 --> 02:37.000] I just wanna tell you how I'm feeling
[02:37.000 --> 02:40.000] Gotta make you understand
[02:40.000 --> 02:42.000] Never gonna give you up
[02:42.000 --> 02:44.000] Never gonna let you down
[02:44.000 --> 02:48.000] Never gonna run around and desert you
[02:48.000 --> 02:50.000] Never gonna make you cry
[02:50.000 --> 02:53.000] Never gonna say goodbye
[02:53.000 --> 02:55.000] Never gonna tell a lie
[02:55.000 --> 02:57.000] And hurt you
[02:57.000 --> 02:59.000] Never gonna give you up
[02:59.000 --> 03:01.000] Never gonna let you down
[03:01.000 --> 03:05.000] Never gonna run around and desert you
[03:05.000 --> 03:08.000] Never gonna make you cry
[03:08.000 --> 03:10.000] Never gonna say goodbye
[03:10.000 --> 03:12.000] Never gonna tell a lie
[03:12.000 --> 03:14.000] And hurt you
[03:14.000 --> 03:16.000] Never gonna give you up
[03:16.000 --> 03:23.000] If you want, never gonna let you down Never gonna run around and desert you
[03:23.000 --> 03:28.000] Never gonna make you hide Never gonna say goodbye
[03:28.000 --> 03:42.000] Never gonna tell you I ain't ready
from transcribe_anything.api import transcribe
transcribe(
url_or_file="https://www.youtube.com/watch?v=dQw4w9WgXcQ",
output_dir="output_dir",
)
Works on Ubuntu/MacOS/Win32 (in git-bash). This will create a virtual environment:
> cd transcribe_anything
> ./install.sh
# Enter the environment:
> source activate.sh
The environment is now active and the next step will install only into the local Python. If the terminal is closed, get back into the environment with `cd transcribe_anything` and `source activate.sh`.
pip install transcribe-anything
- The command `transcribe_anything` will magically become available.
- Run it with `transcribe_anything <YOUTUBE_URL>`.
- OpenAI whisper
- insanely-fast-whisper
- yt-dlp: https://github.com/yt-dlp/yt-dlp
- static-ffmpeg
- Every commit is tested for standard linters and a batch of unit tests.
`transcribe-anything` now works much better across different configurations and is much faster. Why? I switched the environment isolation from my own homespun version built on top of `venv` to the AMAZING `uv` system. The biggest improvement is the runtime speed of environment checks and re-installs; uv is just insane at how fast it is. It also turns out that uv has strict package dependency checking, which found a minor bug where a certain version of one of the `pytorch` dependencies was being constantly re-installed because of a dependency conflict that pip was apparently perfectly happy to never warn about. This manifested as certain packages being constantly re-installed with the previous version. uv identified this as an error immediately, and it was fixed.
The real reason behind `transcribe-anything`'s surprising popularity is that it just works, and that's because each configuration gets its own isolated environment, installed lazily. If you have the same problem, consider my other tool: https://github.com/zackees/iso-env
- 3.0.7: Insane whisper mode no longer prints out the srt file during transcription completion.
- 3.0.6: MacOS MLX mode fixed/improved
- PR: #39
- Thank you https://github.com/aj47!
- 3.0.5: A temp wav file was not being cleaned up, now it is.
- 3.1.0: Upgraded Mac-arm backend to lightning-whisper-mlx, enable with `--device mlx`. Now supports multiple languages, custom vocabulary via `--initial_prompt`, and both transcribe/translate tasks. 10x faster than Whisper CPP!
- 3.0.0: Implemented new Mac-arm accelerated whisper-mps backend, enable with `--device mps` (now `--device mlx`). Only does English, but is quite fast.
- 2.3.0: Swapped out the environment isolator. Now based on `uv`, should fix the missing dll's on some windows systems.
- 2.7.39: Fix `--hf-token` usage for insanely fast whisper backend.
- 2.7.37: Fixed breakage due to numpy 2.0 being released.
- 2.7.36: Fixed some ffmpeg dependencies.
- 2.7.35: All `ffmpeg` commands are now `static_ffmpeg` commands. Fixes issue.
- 2.7.34: Various fixes.
- 2.7.33: Fixes linux.
- 2.7.32: Fixes mac m1 and m2.
- 2.7.31: Adds a warning if using python 3.12, which isn't supported yet in the backend.
- 2.7.30: Adds `--query-gpu-json-path`.
- 2.7.29: Made the json -> srt conversion more robust for `--device insane`; bad entries will be skipped with a warning.
- 2.7.28: Fixes bad title fetching with weird characters.
- 2.7.27: `pytorch-audio` upgrades broke this package. Upgrade to the latest version to resolve.
- 2.7.26: Add model option `distil-whisper/distil-large-v2`.
- 2.7.25: Windows (Linux/MacOS) bug with `--device insane` and python 3.11 installing the wrong `insanely-fast-whisper` version.
- 2.7.22: Fixes `transcribe-anything` on Linux.
- 2.7.21: Tested that Mac Arm can run `--device insane`. Added tests to ensure this.
- 2.7.20: Fixes wrong type being returned when speaker.json happens to be empty.
- 2.7.19: speaker.json is now in plain json format instead of json5 format.
- 2.7.18: Fixes tests.
- 2.7.17: Fixes speaker.json nesting.
- 2.7.16: Adds `--save_hf_token`.
- 2.7.15: Fixes 2.7.14 breakage.
- 2.7.14: (Broken) Now generates `speaker.json` when diarization is enabled.
- 2.7.13: Default diarization model is now pyannote/speaker-diarization-3.1.
- 2.7.12: Adds srt_swap for line breaks and improved isolated_environment usage.
- 2.7.11: `--device insane` now generates a *.vtt translation file.
- 2.7.10: Better support for namespaced models. Trims text output in output json. Output json is now formatted with indents. SRT file is now printed out for `--device insane`.
- 2.7.9: All SRT translation errors fixed for `--device insane`. All tests pass.
- 2.7.8: During an error of `--device insane`, write out the error.json file into the destination.
- 2.7.7: Better error messages during failure.
- 2.7.6: Improved generation of out.txt, removes linebreaks.
- 2.7.5: `--device insane` now generates better conforming srt files.
- 2.7.3: Various fixes for the `insane` mode backend.
- 2.7.0: Introduces `insanely-fast-whisper`, enable by using `--device insane`.
- 2.6.0: GPU acceleration now happens automatically on Windows thanks to `isolated-environment`. This will also prevent interference with different versions of torch for other AI tools.
- 2.5.0: `--model large` now aliases to `--model large-v3`. Use `--model large-legacy` for the original large model.
- 2.4.0: pytorch updated to 2.1.2, gpu install script updated to same + cuda version is now 121.
- 2.3.9: Fallback to `cpu` device if `gpu` device is not compatible.
- 2.3.8: Fix `--models` arg.
- 2.3.7: Critical fix: fixes dependency breakage with open-ai. Fixes windows use of embedded tool.
- 2.3.6: Fixes typo in readme for installation instructions.
- 2.3.5: Now has `--embed` to burn the subtitles into the video itself. Only works on local mp4 files at the moment.
- 2.3.4: Removed `out.mp3` and instead use a temporary wav file, as that is faster to process. --no-keep-audio has now been removed.
- 2.3.3: Fix case where there are spaces in the name (happens on windows).
- 2.3.2: Fix windows transcoding error.
- 2.3.1: static-ffmpeg >= 2.5 now specified.
- 2.3.0: Now uses the official version of whisper ai.
- 2.2.1: "test_" is now prepended to all the different output folder names.
- 2.2.0: Explicitly setting a language now puts the file in a folder with that language name, allowing multi-language passes without overwriting.
- 2.1.2: yt-dlp pinned to new minimum version. Fixes downloading issues from old lib. Adds audio normalization by default.
- 2.1.1: Updates keywords for easier pypi finding.
- 2.1.0: Unknown args are now assumed to be for whisper and passed to it as-is. Fixes #3.
- 2.0.13: Now works with python 3.9.
- 2.0.12: Adds --device to argument parameters. This will default to CUDA if available, else CPU.
- 2.0.11: Automatically deletes files in the out directory if they already exist.
- 2.0.10: Fixes local file issue #2.
- 2.0.9: Fixes sanitization of path names for some youtube videos.
- 2.0.8: Fix `--output_dir` not being respected.
- 2.0.7: `install_cuda.sh` -> `install_cuda.py`.
- 2.0.6: Fixes twitter video fetching. --keep-audio -> --no-keep-audio.
- 2.0.5: Fix bad filename on trailing urls ending with /, adds --keep-audio.
- 2.0.3: GPU support is now added. Run the `install_cuda.sh` script to enable.
- 2.0.2: Minor cleanup of file names (no more out.mp3.txt, it's now out.txt).
- 2.0.1: Fixes missing dependencies and adds whisper option.
- 2.0.0: New! Now a front end for Whisper ai!
- Insanely Fast whisper for GPU
- Fast Whisper for CPU
- A better whisper CLI that supports more options but has a manual install.
- Subtitles translator:
- Forum post on how to avoid stuttering
- More stable transcriptions: