If an import modifies 2000 products and then changes their category
allocation, etc., the changelog may contain many duplicate ids, but
duplicates of the same id tend to fall within a range of 1000
changelog entries, so de-duplicating within that window is enough.
This loads all the ids in one batch (which should be relatively cheap
in terms of memory and time), and then runs over them with the indexer
in smaller chunks. This way indexers continue to see the small chunk
sizes, in case they would fail when given too many ids at once.
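The batch-then-chunk approach described above can be sketched roughly as follows. This is a minimal illustration, not the actual implementation; the `fetch_changelog_ids` and `indexer` names are hypothetical stand-ins for whatever the real codebase uses:

```python
def process_changelog(fetch_changelog_ids, indexer, chunk_size=100):
    """Load all changed ids in one batch, de-duplicate them in order,
    then feed the indexer in small chunks.

    `fetch_changelog_ids` and `indexer` are hypothetical callables:
    the former yields ids from the changelog (possibly with duplicates),
    the latter indexes one small list of ids at a time.
    """
    seen = set()
    ids = []
    # One cheap batch load of all ids; duplicates in the changelog
    # are dropped here so the indexer never sees the same id twice.
    for id_ in fetch_changelog_ids():
        if id_ not in seen:
            seen.add(id_)
            ids.append(id_)
    # The indexer still only ever sees small chunks, so it behaves
    # the same as it would for a small incremental update.
    for start in range(0, len(ids), chunk_size):
        indexer(ids[start:start + chunk_size])
```

Keeping the de-duplication in the batch step (rather than inside the indexer) means the chunking logic stays trivial, and an indexer that would fail on a very large id list is never exposed to one.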