
Commit b6e2867

Merge remote-tracking branch 'upstream/3.4' into merge-3.4
2 parents 2465def + 724b3c0 commit b6e2867

File tree

6 files changed: +140 -1 lines changed

Lines changed: 133 additions & 0 deletions
@@ -0,0 +1,133 @@
Background Subtraction {#tutorial_bgsegm_bg_subtraction}
======================

Goal
----

In this chapter,

- We will familiarize ourselves with the background subtraction methods available in OpenCV.

Basics
------

Background subtraction is a major preprocessing step in many vision-based applications. For
example, consider the case of a visitor counter where a static camera counts the number of visitors
entering or leaving a room, or a traffic camera extracting information about vehicles. In
all these cases, you first need to extract the people or vehicles alone. Technically, you need to
extract the moving foreground from the static background.

If you have an image of the background alone, such as an image of the room without visitors or of
the road without vehicles, the job is easy: just subtract the new image from the background and you
get the foreground objects alone. But in most cases you may not have such an image, so the
background has to be estimated from whatever images are available. It becomes more complicated when
there are shadows. Since shadows also move, simple subtraction marks them as foreground too, which
complicates things.
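
The easy case can be sketched in a few lines. This is only a minimal illustration of the idea, not part of the tutorial's own code; `background.jpg` and `frame.jpg` are placeholder file names and the threshold value is arbitrary:
@code{.py}
import cv2 as cv

# assumes a clean background image is available
background = cv.imread('background.jpg', cv.IMREAD_GRAYSCALE)
frame = cv.imread('frame.jpg', cv.IMREAD_GRAYSCALE)

diff = cv.absdiff(frame, background)                       # per-pixel absolute difference
_, fgmask = cv.threshold(diff, 30, 255, cv.THRESH_BINARY)  # rough foreground mask
@endcode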

Several background subtraction algorithms have been introduced for this purpose.
In the following, we will have a look at two algorithms from the `bgsegm` module.

### BackgroundSubtractorMOG

It is a Gaussian Mixture-based Background/Foreground Segmentation Algorithm. It was introduced in
the paper "An improved adaptive background mixture model for real-time tracking with shadow
detection" by P. KadewTraKuPong and R. Bowden in 2001. It models each background pixel as a
mixture of K Gaussian distributions (K = 3 to 5). The weights of the mixture represent the
proportions of time that those colours stay in the scene. The probable background colours are the
ones which stay longer and are more static.

While coding, we need to create a background subtractor object using the function
**cv.bgsegm.createBackgroundSubtractorMOG()**. It has some optional parameters, such as the length of
the history, the number of Gaussian mixtures, and the threshold, all of which are set to sensible
default values. Then, inside the video loop, use the backgroundsubtractor.apply() method to get the
foreground mask.

See a simple example below:
@code{.py}
import numpy as np
import cv2 as cv

cap = cv.VideoCapture('vtest.avi')

fgbg = cv.bgsegm.createBackgroundSubtractorMOG()

while True:
    ret, frame = cap.read()
    if not ret:                    # stop when the video ends
        break

    fgmask = fgbg.apply(frame)     # foreground mask for the current frame

    cv.imshow('frame', fgmask)
    k = cv.waitKey(30) & 0xff
    if k == 27:                    # Esc key
        break

cap.release()
cv.destroyAllWindows()
@endcode
(All the results are shown at the end for comparison.)
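
The optional parameters mentioned above can also be passed explicitly. This is only a sketch; the parameter names and default values below are assumptions taken from the bgsegm API reference and may vary between OpenCV versions:
@code{.py}
# explicit values for the optional parameters (assumed names and defaults)
fgbg = cv.bgsegm.createBackgroundSubtractorMOG(history=200, nmixtures=5,
                                               backgroundRatio=0.7, noiseSigma=0)
@endcode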

@note Documentation on the newer method **cv.createBackgroundSubtractorMOG2()** can be found here: @ref tutorial_background_subtraction
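
As a brief, hedged illustration of that newer interface (parameter names as documented for the main modules; defaults may differ between versions):
@code{.py}
# MOG2 lives in the main cv namespace rather than cv.bgsegm
fgbg2 = cv.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                          detectShadows=True)
# with detectShadows=True, shadows are marked in gray (value 127) in the mask
@endcode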

### BackgroundSubtractorGMG

This algorithm combines statistical background image estimation and per-pixel Bayesian segmentation.
It was introduced by Andrew B. Godbehere, Akihiro Matsukawa, and Ken Goldberg in their paper "Visual
Tracking of Human Visitors under Variable-Lighting Conditions for a Responsive Audio Art
Installation" in 2012. As per the paper, the system ran a successful interactive audio art
installation called “Are We There Yet?” from March 31 to July 31, 2011 at the Contemporary Jewish
Museum in San Francisco, California.

It uses the first few frames (120 by default) for background modelling. It employs a probabilistic
foreground segmentation algorithm that identifies possible foreground objects using Bayesian
inference. The estimates are adaptive; newer observations are weighted more heavily than older
observations to accommodate variable illumination. Several morphological filtering operations, such
as closing and opening, are applied to remove unwanted noise. You will get a black window during the
first few frames.
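
The number of initialization frames and the decision threshold can be set when creating the subtractor. A minimal sketch, with parameter names and default values assumed from the bgsegm API reference:
@code{.py}
# pass the initialization-frame count and decision threshold explicitly
# (assumed parameter names and defaults; check your OpenCV build)
fgbg = cv.bgsegm.createBackgroundSubtractorGMG(initializationFrames=120,
                                               decisionThreshold=0.8)
@endcode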

It is a good idea to apply morphological opening to the result to remove the noise:
@code{.py}
import numpy as np
import cv2 as cv

cap = cv.VideoCapture('vtest.avi')

kernel = cv.getStructuringElement(cv.MORPH_ELLIPSE, (3, 3))
fgbg = cv.bgsegm.createBackgroundSubtractorGMG()

while True:
    ret, frame = cap.read()
    if not ret:                    # stop when the video ends
        break

    fgmask = fgbg.apply(frame)
    fgmask = cv.morphologyEx(fgmask, cv.MORPH_OPEN, kernel)   # remove small noise blobs

    cv.imshow('frame', fgmask)
    k = cv.waitKey(30) & 0xff
    if k == 27:                    # Esc key
        break

cap.release()
cv.destroyAllWindows()
@endcode

Results
-------

**Original Frame**

The image below shows the 200th frame of a video.

![image](images/resframe.jpg)

**Result of BackgroundSubtractorMOG**

![image](images/resmog.jpg)

**Result of BackgroundSubtractorGMG**

Noise is removed with morphological opening.

![image](images/resgmg.jpg)

Additional Resources
--------------------

Exercises
---------
Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
Tutorials for bgsegm module {#tutorial_table_of_content_bgsegm}
===============================================================

- @subpage tutorial_bgsegm_bg_subtraction

In several applications, we need to extract the foreground for further operations such as object tracking. Background subtraction is a well-known method for those cases.

modules/sfm/src/libmv_light/libmv/correspondence/CMakeLists.txt

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ FILE(GLOB CORRESPONDENCE_HDRS *.h)

 ADD_LIBRARY(correspondence STATIC ${CORRESPONDENCE_SRC} ${CORRESPONDENCE_HDRS})

-TARGET_LINK_LIBRARIES(correspondence LINK_PRIVATE multiview)
+TARGET_LINK_LIBRARIES(correspondence LINK_PRIVATE ${GLOG_LIBRARY} multiview)
 IF(TARGET Eigen3::Eigen)
   TARGET_LINK_LIBRARIES(correspondence LINK_PUBLIC Eigen3::Eigen)
 ENDIF()
