
Fine Tuning vs. Feature Extraction #68

Answered by mrdbourke
gauravreddy08 asked this question in Q&A

Hey Gaurav,

  • Setting trainable = False freezes all of the layers in a downloaded/pretrained model, so they won't update any of their internal patterns (weights) when you start training on your own data.
  • For fine-tuning while keeping a few layers frozen: you can reuse the pretrained weights on your own dataset and, if your dataset is large enough, unfreeze some of the base layers (the already pretrained ones) and fine-tune (tweak) them so they work better with your custom data. That said, feature extraction models (only the top layers unfrozen) often work quite well on their own (see the sketch after this list).
  • Fine-tuning is not always necessary; it's usually used as a step to improve a pretrained model on your custom data, but it may not always result …
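
Here's a minimal sketch of what the two approaches can look like in Keras. The EfficientNetB0 base, the 224x224x3 input shape, the 10 output classes and the choice to unfreeze the last 10 layers are all placeholder assumptions for illustration, not a prescription:

```python
import tensorflow as tf

# --- Feature extraction: freeze the whole pretrained base ---
# (EfficientNetB0 is just an example base; any tf.keras.applications model works the same way)
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = False  # freeze ALL layers in the pretrained base

inputs = tf.keras.layers.Input(shape=(224, 224, 3), name="input_layer")
x = base_model(inputs, training=False)  # keep layers like BatchNorm in inference mode
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)  # assuming 10 classes
model = tf.keras.Model(inputs, outputs)

model.compile(loss="categorical_crossentropy",
              optimizer=tf.keras.optimizers.Adam(),
              metrics=["accuracy"])
# model.fit(train_data, epochs=5, validation_data=test_data)  # feature extraction phase

# --- Fine-tuning: unfreeze only the top few layers of the base ---
base_model.trainable = True
for layer in base_model.layers[:-10]:  # keep all but the last 10 base layers frozen
    layer.trainable = False

model.compile(loss="categorical_crossentropy",
              optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # lower learning rate for fine-tuning
              metrics=["accuracy"])
# model.fit(train_data, epochs=5, validation_data=test_data)  # fine-tuning phase
```

Recompiling with a lower learning rate after unfreezing is the usual precaution, so the unfrozen pretrained patterns are only nudged rather than overwritten.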
