
Are we accidentally "leaking performance" by using a common Embedding layer in all models in NLP Disaster classification? #204

Answered by mrdbourke
niazangels asked this question in Q&A


Updated this with a fix in commit 1673987.

The fix will also be live in notebook 08 - https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/08_introduction_to_nlp_in_tensorflow.ipynb

Each model now creates its own embedding layer at the top of the model creation code.

Example:

# Set random seed and create embedding layer (new embedding layer for each model)
tf.random.set_seed(42)
from tensorflow.keras import layers
model_2_embedding = layers.Embedding(input_dim=max_vocab_length,
                                     output_dim=128,
                                     embeddings_initializer="uniform",
                                     input_length=max_length,
                                     n…
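For context, a minimal sketch of the idea: two models, each constructing its own Embedding layer, so training one never updates the other's weights. The helper function, layer names, and the values for max_vocab_length / max_length below are illustrative assumptions, not taken verbatim from the notebook.

# Minimal sketch (assumed names and values): one fresh Embedding layer per model
import tensorflow as tf
from tensorflow.keras import layers

max_vocab_length = 10000  # assumed vocabulary size
max_length = 15           # assumed sequence length

def build_model(tag):
    # Fresh embedding layer created inside the function, so each model owns its own weights
    embedding = layers.Embedding(input_dim=max_vocab_length,
                                 output_dim=128,
                                 embeddings_initializer="uniform",
                                 input_length=max_length,
                                 name=f"embedding_{tag}")
    inputs = layers.Input(shape=(max_length,), dtype="int32")
    x = embedding(inputs)
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs, name=f"model_{tag}")

tf.random.set_seed(42)
model_1 = build_model("1")

tf.random.set_seed(42)
model_2 = build_model("2")

# The two embedding layers are separate objects with separate weight tensors,
# so there is no "performance leakage" from one model's training into the other.
assert model_1.get_layer("embedding_1") is not model_2.get_layer("embedding_2")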

Answer selected by mrdbourke