Description
Checklist
- Checked the issue tracker for similar issues to ensure this is not a duplicate
- Read the documentation to confirm the issue is not addressed there and your configuration is set correctly
- Tested with the latest version to ensure the issue hasn't been fixed
How often does this bug occur?
always
Expected behavior
As the title says, I create a project and run it, but an error occurs.
By the way, I would like to know which version of TensorFlow was used to convert the tflite models in these examples (helloworld/micro_speech/person_detection)?
Actual behavior (suspected bug)
The following is the code snippet from the ESP side:
```cpp
void setup() {
  // Map the model into a usable data structure. This doesn't involve any
  // copying or parsing, it's a very lightweight operation.
  model = tflite::GetModel(g_model);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    MicroPrintf("Model provided is schema version %d not equal to supported "
                "version %d.", model->version(), TFLITE_SCHEMA_VERSION);
    return;
  }

  // Pull in only the operation implementations we need.
  static tflite::MicroMutableOpResolver<2> resolver;
  if (resolver.AddUnidirectionalSequenceLSTM() != kTfLiteOk) {
    return;
  }
  if (resolver.AddFullyConnected() != kTfLiteOk) {
    return;
  }

  // Build an interpreter to run the model with.
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);
  interpreter = &static_interpreter;

  // Allocate memory from the tensor_arena for the model's tensors.
  TfLiteStatus allocate_status = interpreter->AllocateTensors();
  if (allocate_status != kTfLiteOk) {
    MicroPrintf("AllocateTensors() failed");
    return;
  }

  // Obtain pointers to the model's input and output tensors.
  input = interpreter->input(0);
  output = interpreter->output(0);
}
```
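For what it's worth, the "Didn't find op for builtin opcode 'SHAPE'" error usually means the converted model contains a SHAPE operator that was never registered with the resolver. A possible workaround (a sketch only, not verified against this exact model) is to grow the resolver's capacity and register the op explicitly:

```cpp
// Sketch: register the SHAPE builtin alongside the ops already used.
// The template parameter is bumped from 2 to 3 to make room for it.
static tflite::MicroMutableOpResolver<3> resolver;
if (resolver.AddUnidirectionalSequenceLSTM() != kTfLiteOk) {
  return;
}
if (resolver.AddFullyConnected() != kTfLiteOk) {
  return;
}
// AddShape() registers the builtin SHAPE kernel so that
// AllocateTensors() can find a registration for it.
if (resolver.AddShape() != kTfLiteOk) {
  return;
}
```

The model may still require further ops (for example SOFTMAX for the final activation layer), in which case AllocateTensors() should report the next missing one; each would need its own Add…() call and a matching capacity increase.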
The following is the Python code snippet used to build the model and save it as tflite:
```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def build_model():
    model = Sequential()
    model.add(LSTM(64, return_sequences=True, input_shape=(60, 6)))
    model.add(LSTM(32))
    model.add(Dense(8))
    # gesture_classes_num is defined elsewhere in the project
    model.add(Dense(gesture_classes_num, activation='softmax'))
    model.compile(optimizer='adam', loss='categorical_crossentropy')
    return model

def save_as_tflite(model, filename):
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                           tf.lite.OpsSet.SELECT_TF_OPS]
    converter.experimental_new_converter = True
    converter.allow_custom_ops = True
    tflite_model = converter.convert()
    with open(filename, 'wb') as f:
        f.write(tflite_model)
```
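Note that enabling SELECT_TF_OPS and allow_custom_ops lets the converter emit operators that esp-tflite-micro cannot execute. A stricter configuration (a sketch; it may make convert() raise instead, which at least surfaces any unsupported op at conversion time rather than on the device):

```python
import tensorflow as tf

def save_as_tflite_strict(model, filename):
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    # Restrict to built-in TFLite ops only; conversion fails if the model
    # needs anything outside this set, instead of deferring the failure
    # to AllocateTensors() on the target.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
    converter.allow_custom_ops = False
    tflite_model = converter.convert()
    with open(filename, 'wb') as f:
        f.write(tflite_model)
```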
Error logs or terminal output
```text
Didn't find op for builtin opcode 'SHAPE'
Failed to get registration from op code SHAPE
AllocateTensors() failed
```
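One way to check exactly which operators ended up in the converted model is TFLite's model analyzer. This is a sketch assuming TensorFlow >= 2.9 is installed and `model.tflite` is the converted file:

```python
import tensorflow as tf

# Print the operator list and tensor layout of the converted model.
# Any SELECT_TF_OPS (Flex) or custom operators will show up here, which
# is usually why tflite-micro cannot find a registration for them.
tf.lite.experimental.Analyzer.analyze(model_path="model.tflite")
```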
Steps to reproduce the behavior
The current environment is as follows:
- esp-idf: 5.2.0
- esp-tflite-micro: 1.3.2
- chip: ESP32-S3, 16 MB flash and 8 MB PSRAM
- python: 3.10.0
- tensorflow: 2.16.1
Project release version
1.3.2
System architecture
Intel/AMD 64-bit (modern PC, older Mac)
Operating system
Linux
Operating system version
Windows 11
Shell
ZSH
Additional context
No response