
Batch inference with RetinaFace is slower than single-image inference with InsightFace #5

@zeahmd

Description


As a minimal comparison, I run detection five times on the same 640x640 image, once with InsightFace (five single-image calls) and once with batch_face's RetinaFace (one batched call over the five copies):

```python
import time

import cv2
import matplotlib.pyplot as plt
import numpy as np
from insightface.app import FaceAnalysis

from batch_face import RetinaFace

# Load and preprocess the test image.
img = cv2.imread('/home/zeeshan/Downloads/musk.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (640, 640))
plt.imshow(img)

# InsightFace detector (detection module only) and batch_face RetinaFace detector.
model = FaceAnalysis(allowed_modules=['detection'],
                     providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
model.prepare(ctx_id=0)
detector = RetinaFace(gpu_id=0)

# Time five single-image calls with InsightFace.
tik = time.time()
faces = model.get(img)
faces = model.get(img)
faces = model.get(img)
faces = model.get(img)
faces = model.get(img)
print(f"time taken: {time.time()-tik}")

# Time one batched call over the same five images with batch_face.
tik = time.time()
faces = detector.detect([img, img, img, img, img])
print(f"time taken: {time.time()-tik}")
```
@elliottzheng could you please have a look at this code and help me solve this issue?
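For reference, here is a warmed-up, repeated variant of the timing above, in case one-time initialization costs (CUDA context creation, ONNX session setup) are being charged to the batched call. It reuses the `model`, `detector`, and `img` defined in the script; the warm-up pass and the repeat count are assumptions on my part, not part of the original measurement:

```python
import time

# Warm up both detectors once so first-call initialization is excluded from the timing.
_ = model.get(img)
_ = detector.detect([img])

n_runs = 10  # repeat count (assumed) to average out per-call jitter

# Five single-image calls per run with InsightFace.
tik = time.perf_counter()
for _ in range(n_runs):
    for _ in range(5):
        model.get(img)
print(f"insightface, 5 single calls x {n_runs}: {time.perf_counter() - tik:.3f}s")

# One batched call over five copies per run with batch_face.
tik = time.perf_counter()
for _ in range(n_runs):
    detector.detect([img, img, img, img, img])
print(f"batch_face, batch of 5 x {n_runs}: {time.perf_counter() - tik:.3f}s")
```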
