Additional inputs (even existing embedding layers) throwing shape errors #124

@dterg

Description

Adding additional embedding layers between existing layers (or any other input features) throws a shape-mismatch error on the inputs. This happens even when simply duplicating the existing embedding layers.

For example:

word_ids = Input(batch_shape=(None, None), dtype='int32', name='word_input')
word_ids_two = Input(batch_shape=(None, None), dtype='int32', name='word_input2')
char_ids = Input(batch_shape=(None, None, None), dtype='int32', name='char_input')
elmo_embeddings = Input(shape=(None, 1024), dtype='float32', name="elmo_input")

and then:

word_embeddings = Embedding(input_dim=self._embeddings.shape[0],
                            output_dim=self._embeddings.shape[1],
                            mask_zero=True,
                            weights=[self._embeddings],
                            name='word_embedding')(word_ids)
word_embeddings_two = Embedding(input_dim=self._embeddings.shape[0],
                                output_dim=self._embeddings.shape[1],
                                mask_zero=True,
                                weights=[self._embeddings],
                                name='word_embedding_two')(word_ids_two)

and:

word_embeddings = Concatenate()([word_embeddings, word_embeddings_two, char_embeddings, elmo_embeddings])
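
For context on the error: Keras's Concatenate merges along the last axis by default, so every input must agree on all other dimensions (batch and timestep). A minimal NumPy sketch of that constraint, using dummy shapes (batch of 2, 5 timesteps; these are illustrative values, not the actual model tensors):

```python
import numpy as np

# Dummy tensors standing in for the embedding outputs.
word_emb = np.zeros((2, 5, 100))      # word embeddings
word_emb_two = np.zeros((2, 5, 100))  # duplicated word embeddings
elmo_emb = np.zeros((2, 5, 1024))     # ELMo embeddings, last dim 1024

# Concatenation along the last axis succeeds only because the
# batch and timestep dimensions agree across all inputs.
merged = np.concatenate([word_emb, word_emb_two, elmo_emb], axis=-1)
print(merged.shape)  # (2, 5, 1224)
```

If any input's batch or timestep dimension differs (for example, the character input before it is reduced to a per-word vector), the same concatenation raises a shape-mismatch error.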


System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 10.14.2
  • TensorFlow/Keras version: 2.3.1
  • Python version: 3.6
