Unable to perform inference on pretrained weights #38
I have written the inference code myself, but apart from the very first image in PASCAL_VOC, where it is able to detect a dog, there is no detection for any object in the rest of the many images I tried.

import config
import torch
import torch.optim as optim
from model import YOLOv3
from tqdm import tqdm
import cv2
from loss import YoloLoss
import warnings
warnings.filterwarnings("ignore")
from utils import (
    mean_average_precision,
    load_checkpoint, get_loaders, cells_to_bboxes, non_max_suppression, plot_image
)
torch.backends.cudnn.benchmark = True
import numpy as np


def plot_couple_examples(model, thresh, iou_thresh, anchors):
    model.eval()
    anchors = torch.tensor(anchors)
    anchors = anchors.to(config.DEVICE)
    x = cv2.imread('PASCAL_VOC/images/001931.jpg')  # ,cv2.COLOR_BGR2RGB 000001.jpg
    x = cv2.resize(x, (416, 416))
    x = np.rollaxis(x, 2, 0)
    x = np.expand_dims(x, 0)
    x = torch.tensor(x)
    x = x.type(torch.cuda.FloatTensor)
    x = x.to("cuda")
    with torch.no_grad():
        out = model(x)
        bboxes = [[] for _ in range(x.shape[0])]
        for i in range(1):
            batch_size, A, S, _, _ = out[i].shape
            anchor = anchors[i]
            boxes_scale_i = cells_to_bboxes(
                out[i], anchor, S=S, is_preds=True
            )
            for idx, (box) in enumerate(boxes_scale_i):
                bboxes[idx] += box
        model.train()
    for i in range(batch_size):
        print(bboxes)
        nms_boxes = non_max_suppression(
            bboxes[i], iou_threshold=iou_thresh, threshold=thresh, box_format="midpoint",
        )
        print(nms_boxes)
        x = cv2.imread('PASCAL_VOC/images/001931.jpg', cv2.COLOR_BGR2RGB)
        x = cv2.resize(x, (416, 416))
        plot_image(x, nms_boxes)


model = YOLOv3(num_classes=config.NUM_CLASSES).to(config.DEVICE)
optimizer = optim.Adam(
    model.parameters(), lr=config.LEARNING_RATE, weight_decay=config.WEIGHT_DECAY
)
load_checkpoint(
    config.CHECKPOINT_FILE, model, optimizer, config.LEARNING_RATE
)
anchors = config.ANCHORS
plot_couple_examples(model, 0.3, 0.3, anchors)

I am reading the image again before plotting because the original code was modifying the image in some odd ways.
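For comparison, here is a minimal inference sketch in the same spirit. It assumes the repo's test transforms essentially just convert BGR to RGB and divide the pixel values by 255 (A.Normalize with mean 0, std 1, max_pixel_value 255), and it collects boxes from all three output scales instead of only the first one; preprocess and run_inference are made-up helper names, and scaled_anchors stands for anchors already multiplied by the grid sizes, the way train.py builds them. Note also that cv2.imread's second argument is an imread flag, not a colour-conversion code, so passing cv2.COLOR_BGR2RGB there does not swap the channels; cv2.cvtColor does.

```python
import cv2
import numpy as np
import torch

import config
from utils import cells_to_bboxes, non_max_suppression, plot_image


def preprocess(path, image_size=416):
    # cv2.imread returns BGR uint8 in [0, 255]; convert to RGB and scale to [0, 1]
    # (assumption: the training transforms only divide by 255, i.e. mean=0, std=1).
    img = cv2.imread(path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (image_size, image_size))
    img = img.astype(np.float32) / 255.0
    return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).to(config.DEVICE)


@torch.no_grad()
def run_inference(model, path, thresh, iou_thresh, scaled_anchors):
    model.eval()
    x = preprocess(path)
    out = model(x)
    bboxes = []
    for i in range(3):  # gather predictions from all three output scales
        S = out[i].shape[2]
        bboxes += cells_to_bboxes(out[i], scaled_anchors[i], S=S, is_preds=True)[0]
    nms_boxes = non_max_suppression(
        bboxes, iou_threshold=iou_thresh, threshold=thresh, box_format="midpoint"
    )
    plot_image(x[0].permute(1, 2, 0).cpu(), nms_boxes)
```

Without the division by 255 the network sees inputs on a completely different scale than during training, which alone can push the objectness scores close to zero.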
Now I am getting some detections using the following code for inference; however, the bounding box height and width seem to be very small. Also, I was getting no detections when I used my previous code in the comment above. Can you please suggest a solution to this?

import config
import torch
import torch.optim as optim
from model import YOLOv3
from tqdm import tqdm
import cv2
from loss import YoloLoss
import warnings
warnings.filterwarnings("ignore")
from utils import (
    mean_average_precision,
    load_checkpoint, get_loaders, cells_to_bboxes, non_max_suppression, plot_image, plot_couple_examples
)
torch.backends.cudnn.benchmark = True
import numpy as np

model = YOLOv3(num_classes=config.NUM_CLASSES).to(config.DEVICE)
optimizer = optim.Adam(
    model.parameters(), lr=config.LEARNING_RATE, weight_decay=config.WEIGHT_DECAY
)
load_checkpoint(
    config.CHECKPOINT_FILE, model, optimizer, config.LEARNING_RATE
)
train_loader, test_loader, train_eval_loader = get_loaders(
    train_csv_path=config.DATASET + "/8examples.csv", test_csv_path=config.DATASET + "/8examples.csv"
)
anchors = config.ANCHORS
anchors = torch.tensor(anchors)
anchors = anchors.to(config.DEVICE)
plot_couple_examples(model, test_loader, 0.85, 0.85, anchors)
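One thing I suspect but have not verified: if I'm reading utils.cells_to_bboxes correctly, it divides the predicted width and height by the grid size S, so it expects anchors that have already been multiplied by the grid sizes (the scaled_anchors that train.py builds) rather than the raw fractions in config.ANCHORS; passing the unscaled anchors would shrink every box by a factor of S, which matches the very small widths and heights. A sketch of the call with scaled anchors, assuming config.S holds the three grid sizes as in the repo's config.py:

```python
import torch
import config

# Scale the relative anchors by the grid sizes, mirroring train.py's scaled_anchors
# (assumption: config.S = [IMAGE_SIZE // 32, IMAGE_SIZE // 16, IMAGE_SIZE // 8]).
scaled_anchors = (
    torch.tensor(config.ANCHORS)
    * torch.tensor(config.S).unsqueeze(1).unsqueeze(1).repeat(1, 3, 2)
).to(config.DEVICE)

plot_couple_examples(model, test_loader, 0.85, 0.85, scaled_anchors)
```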
Hi! Have you found a solution?
Hi Aladdin, great tutorials you have here. For the first time I was really able to understand how to code YOLOv3. But I couldn't find the code for inference, so I decided to write it on my own, and I stumbled across the following issues.