
Clarification on ModelNet Dataset Categories and Sample Counts #3

@languangduan

Description


Hello,

I've come across a couple of interesting quirks while working with the ModelNet dataset:

Sample Shortfall Surprise: According to the paper, there should be 24630 training samples and 6204 test samples. However, even assuming each sample contains four images as stated, selecting the ten classes with the highest sample counts in ModelNet does not produce these totals (see the rough counting sketch at the end of this issue).

Accuracy Anomaly: When attempting to replicate the experiments locally, I pretrained on the same ten ModelNet classes with the parameters outlined in the paper. Surprisingly, the resulting accuracy exceeds 79%, rather than the ~70% range reported in the paper.

Could you clarify which specific ModelNet categories were used in your experiments? Any insight into the observed discrepancies in sample counts and accuracy would also be greatly appreciated.
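
For reference, here is a rough sketch of how I counted samples per class. It assumes the standard ModelNet40 directory layout (`ModelNet40/<class>/{train,test}/*.off`); the root path and the four-views-per-sample multiplier are my own assumptions, not taken from your released code:

```python
# Count meshes per class in a local ModelNet40 copy and sum the ten largest classes.
# Assumes the usual layout: ModelNet40/<class>/{train,test}/*.off
from pathlib import Path

ROOT = Path("ModelNet40")   # hypothetical local path to the dataset
VIEWS_PER_SAMPLE = 4        # "each sample contains four images", as stated in the paper

counts = {}
for class_dir in sorted(p for p in ROOT.iterdir() if p.is_dir()):
    n_train = len(list((class_dir / "train").glob("*.off")))
    n_test = len(list((class_dir / "test").glob("*.off")))
    counts[class_dir.name] = (n_train, n_test)

# Ten classes with the most meshes overall.
top10 = sorted(counts.items(), key=lambda kv: sum(kv[1]), reverse=True)[:10]

train_total = sum(n_train for _, (n_train, _) in top10)
test_total = sum(n_test for _, (_, n_test) in top10)

print("top-10 classes:", [name for name, _ in top10])
print(f"train meshes: {train_total}  (x{VIEWS_PER_SAMPLE} views = {train_total * VIEWS_PER_SAMPLE})")
print(f"test meshes:  {test_total}  (x{VIEWS_PER_SAMPLE} views = {test_total * VIEWS_PER_SAMPLE})")
print("paper reports 24630 train / 6204 test samples")
```

With this counting approach, neither the raw mesh totals nor the totals multiplied by four match the figures reported in the paper, which is what prompted the question above.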
