Just try custom-vit-models-load-and-convert-weights-from-timm-torch-model:

```py
from keras_cv_attention_models import beit

mm = beit.BeitBasePatch16(pretrained=None, classifier_activation=None, num_classes=21841)
beit.keras_model_load_weights_from_pytorch_model(mm, 'beit_base_patch16_224_pt22k_ft22k.pth')
# >>>> Save model to: beit_base_patch16_224.h5
# >>>> Trying to load index file: /home/leondgarse/.keras/datasets/imagenet21k_class_index.json
# >>>> Keras model prediction: [('n02121808', 'domestic_cat, house_cat, Felis_domesticus, Felis_catus', 10.603789), ('n01317541', 'domestic_animal, domesticated_animal', 10.452324), ('n02123159', 'tiger_cat', 9.973137), ('n00015388', 'animal, animate_being, beast, brute, creature, fauna', 9.916692), ('n01318894', 'pet', 9.427024)]
```
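For completeness, here is a minimal sketch of running a prediction with the converted model afterwards. The `mode='torch'` preprocessing, resize to the model input size, and the `chelsea` test image are the common usage pattern for these models, but treat them as assumptions rather than the exact code behind the log above; since `classifier_activation=None`, softmax is applied manually.

```py
# Illustrative sketch: run the converted model on a sample image (assumed preprocessing).
import tensorflow as tf
from tensorflow import keras
from skimage.data import chelsea  # sample cat image

img = chelsea()
imm = keras.applications.imagenet_utils.preprocess_input(img, mode='torch')
pred = mm(tf.expand_dims(tf.image.resize(imm, mm.input_shape[1:3]), 0)).numpy()
probs = tf.nn.softmax(pred).numpy()  # classifier_activation=None, so apply softmax here
print(probs.argsort()[0][-5:][::-1])  # top-5 class indices out of the 21841 classes
```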
That one has a different architecture. I've updated the code and added a use_shared_pos_emb_for_attn parameter for beit, supporting the raw pre-trained model without any fine-tuning. Try:

```py
from keras_cv_attention_models import beit

mm = beit.BeitBasePatch16(pretrained=None, classifier_activation=None, num_classes=8192, use_shared_pos_emb_for_attn=True)
beit.keras_model_load_weights_from_pytorch_model(mm, 'beit_base_patch16_224_pt22k.pth')
```
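As a quick sanity check after the conversion, something like the following confirms the model builds and runs end to end; the expected `(1, 8192)` output shape is an assumption based on `num_classes=8192`, not a verified log:

```py
# Sanity-check sketch: run the converted model on a random input.
import tensorflow as tf

dummy = tf.random.uniform([1, *mm.input_shape[1:]])  # (1, 224, 224, 3)
out = mm(dummy)
print(out.shape)  # assumed (1, 8192), matching the 8192-entry visual-token vocabulary
```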
Hi Leondgarse,
Could you please explain the math behind MultiHeadRelativePositionalEmbedding()? Where can I find a source/paper/article that explains it?
Thank you.
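For context, a generic sketch of how relative position bias typically works in BEiT/Swin-style attention (see Shaw et al., "Self-Attention with Relative Position Representations", the Swin Transformer paper, and the BEiT paper): each head learns a bias table indexed by the relative (row, col) offset between two patches, and the gathered bias is added to the attention logits before the softmax. This is an illustrative sketch under those assumptions, not necessarily the exact MultiHeadRelativePositionalEmbedding implementation:

```py
# Generic relative position bias sketch (BEiT / Swin style); assumptions:
# square patch grid, one learnable bias table per attention head.
import numpy as np

height = width = 14          # 224 / 16 patches per side
num_heads = 12

# Bias table: one row per possible relative offset, (2*h-1) * (2*w-1) rows, one column per head.
table = np.random.randn((2 * height - 1) * (2 * width - 1), num_heads)

# Relative position index: for every query/key patch pair, which table row to use.
coords = np.stack(np.meshgrid(np.arange(height), np.arange(width), indexing='ij'))  # (2, h, w)
coords = coords.reshape(2, -1)                          # (2, h*w)
relative = coords[:, :, None] - coords[:, None, :]      # (2, h*w, h*w), offsets in [-(h-1), h-1]
relative[0] += height - 1                               # shift offsets to start from 0
relative[1] += width - 1
index = relative[0] * (2 * width - 1) + relative[1]     # (h*w, h*w)

# Gather the per-head bias that gets added to the attention logits before softmax.
bias = table[index].transpose(2, 0, 1)                  # (num_heads, h*w, h*w)
print(bias.shape)  # (12, 196, 196)
```

BEiT additionally reserves a few extra table entries for interactions involving the class token, which this sketch omits.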