Custom Keypoints detection multiple labels #177

@wikyburns

Description

I've trained a model for keypoint detection and exported it to an mlpackage on my macOS machine:

```python
from ultralytics import YOLO

model = YOLO("foam_boxes_model.pt")

# Export the model to CoreML format
model.export(format="coreml")
```

Just to make sure it works properly, I tested the generated mlpackage:

```python
from PIL import Image
from ultralytics import YOLO

image = Image.open('dual_foambox.jpg').convert('RGB')
coreml_model = YOLO("foam_boxes_model.mlpackage", task="pose")

results = coreml_model(image)
results[0].show()
```
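A quick way to double-check the export itself is to inspect the class-name metadata inside the generated mlpackage. As far as I can tell, the Ultralytics export writes the id-to-name mapping into the package's user-defined metadata as a dict literal serialized to a string (readable in Python via coremltools' `MLModel.user_defined_metadata`, or in Swift via `modelDescription.metadata`). A minimal parsing sketch, with a hypothetical stand-in for that metadata string:

```python
import ast

# Hypothetical stand-in for the "names" entry that the Ultralytics
# CoreML export writes into the package's user-defined metadata.
names_metadata = "{0: 'foam_box'}"

# The mapping is stored as a dict literal in string form, so parse it
# safely with ast.literal_eval rather than eval.
names = ast.literal_eval(names_metadata)

print(names)  # {0: 'foam_box'}
```

If the custom names show up here, the export is fine and the problem is on the Swift side.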

and the result is correct:

(Screenshot: correct detections with keypoints and custom labels)

But when the model is imported into Swift, it returns only the bounding box of one of the detections (without the keypoints), the label always shows "person", and my custom labels are not applied.

Here is the result on iOS:

(Screenshots: iOS result showing a single bounding box labeled "person", no keypoints)

What I noticed is that PoseEstimation.swift hardcodes the "person" class, ignoring the loaded labels.

(Screenshot: the hardcoded "person" class in the Swift source)
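If the Swift side really does hardcode "person", the fix would be to resolve each detection's class index against the labels loaded from the model's metadata. A language-agnostic sketch of that lookup (the names and indices below are hypothetical, standing in for what the Swift code would read from the model):

```python
# Hypothetical id -> name mapping, as it would be loaded from the
# model's metadata instead of being hardcoded.
names = {0: 'foam_box_small', 1: 'foam_box_large'}

def label_for(class_index: int) -> str:
    # Resolve the detected class index against the loaded labels,
    # falling back to a generic name rather than a hardcoded "person".
    return names.get(class_index, f"class_{class_index}")

print(label_for(1))  # foam_box_large
```

The same pattern in Swift would replace the string literal with a lookup into the label array parsed from the model's metadata.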

Appreciate any help.

Metadata

Assignees

No one assigned

Labels

exports: Model exports (ONNX, TensorRT, TFLite, etc.)
pose: Pose/keypoint estimation models

Type

No type

Projects

No projects

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests
