Please use the following questions as a guideline to help me answer
your issue/question without further inquiry. Thank you.
Which version of Elastic are you using?
- elastic.v7 (for Elasticsearch 7.x)
- elastic.v6 (for Elasticsearch 6.x)
- elastic.v5 (for Elasticsearch 5.x)
- elastic.v3 (for Elasticsearch 2.x)
- elastic.v2 (for Elasticsearch 1.x)
Please describe the expected behavior
When an individual request fails in a bulk processor, it should not retry when the backoff returns false.
Please describe the actual behavior
When the backoff returns false and an individual request has a retryable error, the failing request is added back to the bulk processor before the commit function returns. This causes indefinite retries.
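For context, the backoff contract in the library looks roughly like this (paraphrased from backoff.go); the boolean return value of Next is the "backoff flag" referred to below, and StopBackoff is the built-in policy that always returns false:

```go
// Backoff decides how long to wait before the next retry attempt.
type Backoff interface {
	// Next returns the wait time before attempt number "retry", and
	// whether a retry should happen at all (false = stop retrying).
	Next(retry int) (time.Duration, bool)
}

// StopBackoff never retries.
type StopBackoff struct{}

func (b StopBackoff) Next(retry int) (time.Duration, bool) { return 0, false }
```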
The following code in bulk_processor.go is responsible for this behaviour.
```go
// commitFunc will commit bulk requests and, on failure, be retried
// via exponential backoff
commitFunc := func() error {
	var err error
	// Save requests because they will be reset in service.Do
	reqs := w.service.requests
	res, err = w.service.Do(ctx)
	if err == nil {
		// Overall bulk request was OK. But each bulk response item also has a status
		if w.p.retryItemStatusCodes != nil && len(w.p.retryItemStatusCodes) > 0 {
			// Check res.Items since some might be soft failures
			if res.Items != nil && res.Errors {
				// res.Items will be 1 to 1 with reqs in same order
				for i, item := range res.Items {
					for _, result := range item {
						if _, found := w.p.retryItemStatusCodes[result.Status]; found {
							// Here the failing request is added back to the
							// processor regardless of the backoff flag.
							w.service.Add(reqs[i])
							if err == nil {
								err = ErrBulkItemRetry
							}
						}
					}
				}
			}
		}
	}
	return err
}
```
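One possible direction for a fix, sketched here rather than taken from the library: consult the backoff before re-adding a failed item, so that a backoff that refuses further retries also stops the requeueing. The attempt counter is hypothetical, and this assumes a stateless backoff (a stateful one would be advanced by the extra Next call):

```go
// Sketch only: the attempt counter and the backoff consultation are not
// in the current bulk_processor.go.
attempt := 0
commitFunc := func() error {
	attempt++
	var err error
	// Save requests because they will be reset in service.Do
	reqs := w.service.requests
	res, err = w.service.Do(ctx)
	if err == nil && len(w.p.retryItemStatusCodes) > 0 && res.Items != nil && res.Errors {
		// Ask the configured backoff whether another attempt will happen.
		_, retry := w.p.backoff.Next(attempt)
		for i, item := range res.Items {
			for _, result := range item {
				if _, found := w.p.retryItemStatusCodes[result.Status]; found {
					if retry {
						// Requeue only if a retry is actually coming.
						w.service.Add(reqs[i])
					}
					if err == nil {
						err = ErrBulkItemRetry
					}
				}
			}
		}
	}
	return err
}
```

Whether it is better to gate the Add call like this, or to drain the re-added requests once the retry loop gives up, is a design question for the maintainers.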
Any steps to reproduce the behavior?
One easy way to reproduce this is to configure the processor to retry on client-side errors and then index a document with the wrong data type for one of the index's fields, as in the sketch below.
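A minimal, self-contained sketch of that reproduction for elastic.v7. The index name, the "age" field, and status code 400 are assumptions for illustration; it presumes a local Elasticsearch where "age" is mapped as an integer in "test-index":

```go
package main

import (
	"context"
	"log"

	"github.com/olivere/elastic/v7"
)

func main() {
	ctx := context.Background()

	// Assumes a local Elasticsearch on the default URL.
	client, err := elastic.NewClient()
	if err != nil {
		log.Fatal(err)
	}

	p, err := client.BulkProcessor().
		Name("repro-worker").
		Workers(1).
		BulkActions(1).                  // commit after every single request
		Backoff(elastic.NewStopBackoff()). // backoff always returns false: no retries expected
		RetryItemStatusCodes(400).       // treat client-side errors as retryable
		Do(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer p.Close()

	// Assuming "age" is mapped as an integer, indexing a string produces a
	// per-item 400 response; the worker then re-adds the request on every
	// commit and never stops retrying.
	p.Add(elastic.NewBulkIndexRequest().
		Index("test-index").
		Doc(map[string]interface{}{"age": "not-a-number"}))

	// Flush to force the commit that triggers the loop.
	if err := p.Flush(); err != nil {
		log.Println("flush:", err)
	}
}
```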