Pretrained model refuses to converge. #9047
Unanswered
MostafaAhmed98 asked this question in Q&A · Replies: 1 comment
- Hi Ahmed, were you able to fix it?
-
Hello,
I'm trying to build an Arabic ASR model on the Common Voice dataset by fine-tuning the pretrained stt_en_citrinet_256 checkpoint. After 120 epochs the model hasn't improved and still gets a bad WER (0.95) on the test set.
I used a SentencePiece sub-word tokenizer with a vocabulary size of 4096, and changed the pretrained model's vocabulary, tokenizer, and train/validation configs as below:
```python
import nemo.collections.asr as nemo_asr
from omegaconf import OmegaConf

# Load the pretrained English Citrinet checkpoint
first_asr_model = nemo_asr.models.ASRModel.from_pretrained(model_name="stt_en_citrinet_256")

# Point the BPE config at the Arabic Common Voice manifests
params = OmegaConf.load("./configs/config_bpe.yaml")
params.model.train_ds.manifest_filepath = 'train.json'
params.model.validation_ds.manifest_filepath = 'dev.json'

# Swap the English tokenizer for the Arabic SentencePiece unigram tokenizer
first_asr_model.change_vocabulary(new_tokenizer_dir='tokenizer_spe_unigram_v4096', new_tokenizer_type="bpe")
first_asr_model.setup_training_data(train_data_config=params['model']['train_ds'])
first_asr_model.setup_validation_data(val_data_config=params['model']['validation_ds'])
```
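Training was then run for 120 epochs. The post doesn't show the launch code, so here is only a minimal sketch of how a NeMo model is typically trained with a PyTorch Lightning trainer; the accelerator/devices settings are assumptions:

```python
# Minimal sketch of launching training; max_epochs matches the 120 epochs
# mentioned above, but accelerator/devices are assumed, not from the post.
import pytorch_lightning as pl

trainer = pl.Trainer(max_epochs=120, accelerator="gpu", devices=1)
first_asr_model.set_trainer(trainer)
trainer.fit(first_asr_model)
```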
Below are my train and validation loss and WER curves.
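For reference, the reported test WER can be reproduced with the model's transcribe() helper and NeMo's word_error_rate metric. A hedged sketch, assuming a test.json manifest in the same NeMo format as the train/dev manifests:

```python
# Hedged sketch: computing test-set WER. The test.json path is an assumption;
# each line is expected to hold {"audio_filepath": ..., "text": ...}.
import json
from nemo.collections.asr.metrics.wer import word_error_rate

with open('test.json') as f:
    samples = [json.loads(line) for line in f]

hypotheses = first_asr_model.transcribe([s['audio_filepath'] for s in samples])
wer = word_error_rate(hypotheses=hypotheses, references=[s['text'] for s in samples])
print(f"Test WER: {wer:.2f}")  # the post reports ~0.95 here
```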