Description
I've trained a model for keypoint detection and exported it to an `mlpackage` on macOS:
```python
from ultralytics import YOLO

model = YOLO("foam_boxes_model.pt")

# Export the model to CoreML format
model.export(format="coreml")
```
Just to make sure it's working properly, I tested the generated mlpackage:
```python
from PIL import Image
from ultralytics import YOLO

image = Image.open("dual_foambox.jpg").convert("RGB")
coreml_model = YOLO("foam_boxes_model.mlpackage", task="pose")
results = coreml_model(image)
results[0].show()
```
and the result is correct:

But when the model is imported into Swift, it returns only the bounding box of one of the results (without the keypoints), and the label always shows "person" — my custom labels are not applied.
Here is the result on iOS:

What I noticed is that PoserEstimation.swift hardcodes the "person" class, ignoring the loaded labels.
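In case it helps, here is a rough sketch of a workaround I would expect to work: reading the class names from the model's own metadata instead of hardcoding "person". Note this is untested, and the `"names"` metadata key and its serialized format are assumptions based on how the Ultralytics exporter appears to write creator-defined metadata — please inspect your own mlpackage to confirm.

```swift
import Foundation
import CoreML

// Sketch: recover the exported class names from the CoreML model's
// creator-defined metadata rather than hardcoding "person".
// ASSUMPTION: the Ultralytics exporter stores a "names" entry,
// serialized like "{0: 'foam_box', 1: 'lid'}".
func classNames(from model: MLModel) -> [String] {
    let meta = model.modelDescription.metadata
    guard let creatorDefined = meta[MLModelMetadataKey.creatorDefinedKey] as? [String: String],
          let namesString = creatorDefined["names"] else {
        return ["person"]  // fall back to the current hardcoded default
    }
    // Pull out the single-quoted values from the dict-like string.
    let regex = try! NSRegularExpression(pattern: "'([^']*)'")
    let range = NSRange(namesString.startIndex..., in: namesString)
    return regex.matches(in: namesString, range: range).compactMap {
        Range($0.range(at: 1), in: namesString).map { String(namesString[$0]) }
    }
}
```

If PoserEstimation.swift instead used something like this to populate its label list, custom-trained models should display their own class names.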

Appreciate any help.