@@ -161,6 +166,12 @@ The neural net is equivalent to the **FaceRecognizerNet** used in [face-recognit
The size of the quantized model is roughly 6.2 MB (**face_recognition_model**).
+<a name="models-face-expression-recognition"></a>
+
+## Face Expression Recognition Model
+
+The face expression recognition model is lightweight, fast and provides reasonable accuracy. The model has a size of roughly 310 kb and employs depthwise separable convolutions and densely connected blocks. It has been trained on a variety of images from publicly available datasets, as well as images scraped from the web. Note that wearing glasses might decrease the accuracy of the prediction results.
+
All global neural network instances are exported via faceapi.nets:
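For context, a minimal sketch of loading and running the new expression model through these exports; the `/models` URI and the `input` media element are assumptions, not part of this diff:

```javascript
// load a face detector and the face expression recognition model
// (assumes the model weight files are served from '/models')
await faceapi.nets.ssdMobilenetv1.loadFromUri('/models')
await faceapi.nets.faceExpressionNet.loadFromUri('/models')

// detect all faces, then predict the expression of each detected face
// ('input' can be an img, video or canvas element)
const detectionsWithExpressions = await faceapi
  .detectAllFaces(input)
  .withFaceExpressions()
```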
@@ -319,13 +331,13 @@ You can tune the options of each face detector as shown [here](#usage-face-detec
**After face detection, we can furthermore predict the facial landmarks for each detected face as follows:**
-Detect all faces in an image + computes 68 Point Face Landmarks for each detected face. Returns **Array<[FaceDetectionWithLandmarks](#interface-face-detection-with-landmarks)>**:
+Detect all faces in an image + compute 68 Point Face Landmarks for each detected face. Returns **Array<[WithFaceLandmarks<WithFaceDetection<{}>>](#usage-utility-classes)>**:

-Detect the face with the highest confidence score in an image + computes 68 Point Face Landmarks for that face. Returns **[FaceDetectionWithLandmarks](#interface-face-detection-with-landmarks) | undefined**:
+Detect the face with the highest confidence score in an image + compute 68 Point Face Landmarks for that face. Returns **[WithFaceLandmarks<WithFaceDetection<{}>>](#usage-utility-classes) | undefined**:
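As an illustration of the two calls described above, a sketch assuming an `input` image, video or canvas element:

```javascript
// all faces + 68 point face landmarks for each of them
const detectionsWithLandmarks = await faceapi
  .detectAllFaces(input)
  .withFaceLandmarks()

// only the face with the highest confidence score (may be undefined)
const detectionWithLandmarks = await faceapi
  .detectSingleFace(input)
  .withFaceLandmarks()
```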
**After face detection and facial landmark prediction, the face descriptors for each face can be computed as follows:**

-Detect all faces in an image + computes 68 Point Face Landmarks for each detected face. Returns **Array<[FullFaceDescription](#interface-full-face-description)>**:
+Detect all faces in an image + compute 68 Point Face Landmarks and a face descriptor for each detected face. Returns **Array<[WithFaceDescriptor<WithFaceLandmarks<WithFaceDetection<{}>>>](#usage-utility-classes)>**:

-Detect the face with the highest confidence score in an image + computes 68 Point Face Landmarks and face descriptor for that face. Returns **[FullFaceDescription](#interface-full-face-description) | undefined**:
+Detect the face with the highest confidence score in an image + compute 68 Point Face Landmarks and a face descriptor for that face. Returns **[WithFaceDescriptor<WithFaceLandmarks<WithFaceDetection<{}>>>](#usage-utility-classes) | undefined**:
+
+Detect the face with the highest confidence score in an image + recognize the face expression for that face. Returns **[WithFaceExpressions<WithFaceDetection<{}>>](#usage-utility-classes) | undefined**:
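A corresponding sketch for the descriptor and expression variants, again assuming an `input` element:

```javascript
// all faces + landmarks + a face descriptor for each of them
const results = await faceapi
  .detectAllFaces(input)
  .withFaceLandmarks()
  .withFaceDescriptors()

// highest scoring face only, with landmarks and descriptor (may be undefined)
const result = await faceapi
  .detectSingleFace(input)
  .withFaceLandmarks()
  .withFaceDescriptor()

// highest scoring face only, with its predicted face expression (may be undefined)
const resultWithExpression = await faceapi
  .detectSingleFace(input)
  .withFaceExpressions()
```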
@@ -361,43 +409,43 @@ To perform face recognition, one can use faceapi.FaceMatcher to compare referenc
First, we initialize the FaceMatcher with the reference data. For example, we can simply detect faces in a **referenceImage** and match the descriptors of the detected faces to the faces of subsequent images:
```javascript
-const fullFaceDescriptions = await faceapi
+const results = await faceapi
  .detectAllFaces(referenceImage)
  .withFaceLandmarks()
  .withFaceDescriptors()

-if (!fullFaceDescriptions.length) {
+if (!results.length) {
  return
}

// create FaceMatcher with automatically assigned labels
// from the detection results for the reference image
const faceMatcher = new faceapi.FaceMatcher(results)
```
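A hypothetical follow-up, beyond what this hunk shows, matching a query image against the reference descriptors; `queryImage` is an assumption:

```javascript
// detect faces in the query image and compute their descriptors
const queryResults = await faceapi
  .detectAllFaces(queryImage)
  .withFaceLandmarks()
  .withFaceDescriptors()

queryResults.forEach(fd => {
  // find the reference label with the smallest euclidean distance
  const bestMatch = faceMatcher.findBestMatch(fd.descriptor)
  console.log(bestMatch.toString())
})
```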