Testing on single image #7

@adithyagaurav

Description

Hi, I am trying to run your model with the ImageNet pre-trained weights provided in the repository, hoping to do inference on a single image. The problem I'm facing is that every time I run inference, the model outputs tensor(600), i.e. it predicts class 600 for the image. I have tried images from several different classes, and the model consistently labels every one of them 600.

I would like to know why this is happening. Am I doing something wrong? Here is my code:

import torch
from PIL import Image
from torchvision import transforms

model = darknet53(1000)
checkpoint = torch.load(checkpoint_path, map_location=torch.device('cpu'))
model.load_state_dict(checkpoint['state_dict'])
model.eval()  # inference mode: disable dropout, use BatchNorm running stats

test_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# ToTensor() means the transform returns a tensor, not a PIL image
input_tensor = test_transform(Image.open(image_path).convert('RGB'))
print(input_tensor.shape)          # torch.Size([3, 224, 224])
batch = input_tensor.unsqueeze(0)  # add the batch dimension
print(batch.shape)                 # torch.Size([1, 3, 224, 224])

with torch.no_grad():
    out = model(batch)
label = torch.argmax(out)
print(label)
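One common cause of a constant prediction like this is a silent key mismatch when loading the checkpoint, e.g. a `module.` prefix left on every key by `nn.DataParallel` during training, so that `load_state_dict` (with `strict=False`, or a wrapped model) quietly leaves weights uninitialized. A minimal sketch of how the keys could be normalized and compared, using plain dicts with hypothetical key names in place of real state dicts:

```python
def strip_module_prefix(state_dict):
    """Remove a leading 'module.' (added by nn.DataParallel) from every key."""
    return {k[len("module."):] if k.startswith("module.") else k: v
            for k, v in state_dict.items()}

# Hypothetical key sets standing in for the checkpoint and model state dicts.
checkpoint_keys = {"module.conv1.weight": 0, "module.bn1.running_mean": 1}
model_keys = {"conv1.weight": 0, "bn1.running_mean": 1}

cleaned = strip_module_prefix(checkpoint_keys)
missing = sorted(set(model_keys) - set(cleaned))
unexpected = sorted(set(cleaned) - set(model_keys))
print(missing, unexpected)  # → [] []
```

If `missing` or `unexpected` is non-empty for the real checkpoint, the cleaned dict (or a strict `load_state_dict` call, which raises on mismatches) would surface the problem immediately.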

Can you help me?
