Batch commit throws DEADLINE_EXCEEDED but it actually creates the documents #2854
Comments
Hi @MrLibya, thank you for raising this issue. Quick confirmation: any batch that has been committed successfully will be removed from the retry logic (this.batches), is that right?
Hello @milaGGL, yes, it is removed automatically. You can also see that I log the batch id on commit, so it is the same id each time; in that case I only had 3 batches and all 3 were failing, so no batch was removed. Note that I only encountered this with a slow internet connection. I still haven't dived deep into the Firestore code, but is there a client-side limit of 60s that throws the error, or did the error actually come back from the server?
A timeout doesn't necessarily mean the operation failed, so the operation might still be running on the server side when the client times out. I assume you can see the docs successfully added if you check the database. A workaround could be to implement exponential backoff for retries, i.e. increase the delay between retries exponentially, giving the server more time to complete the batch operation (see the sketch below). Or, if it is acceptable, instead of creating documents, use set() with merge. Or, you might want to consider making smaller batches, so that each one can complete within the timeout limit.
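For illustration, a minimal sketch of that exponential-backoff idea, assuming a generic retry wrapper (`withBackoff` is a hypothetical helper, not part of the Firestore SDK):

```ts
// Hypothetical helper: retry an async operation with exponentially growing delays.
async function withBackoff<T>(
  op: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 1000
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      // Wait 1s, 2s, 4s, 8s, ... before the next attempt, giving the server more time.
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

// Usage with a batch commit:
// await withBackoff(() => batch.commit());
```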
Have you considered using BulkWriter instead? It should handle errors and retries on your behalf.
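For reference, a minimal BulkWriter sketch; the collection name, document shape and retry policy below are illustrative, not taken from the issue:

```ts
import { getFirestore } from "firebase-admin/firestore";

async function writeAll(documents: Array<{ id: string; data: Record<string, unknown> }>) {
  const db = getFirestore();
  const writer = db.bulkWriter();

  // Illustrative retry policy: retry each failed write up to 3 times.
  writer.onWriteError((error) => error.failedAttempts < 3);

  // Queue writes; BulkWriter batches and throttles them internally.
  for (const doc of documents) {
    writer.create(db.collection("items").doc(doc.id), doc.data);
  }

  // Flush all pending writes and wait for them to finish.
  await writer.close();
}
```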
@milaGGL The Firestore documentation states that the remote server will terminate the request after 60s, so the operation shouldn't keep going. I can't use set with merge because it's a batch, so it might contain different operations (update, delete or create); only in this test were all documents created with create. There are many possible workarounds, like reducing the operations per batch from 500 to 300 or so when the internet connection is slow, but that doesn't solve the original issue, which is that if the remote server returns a timeout it shouldn't persist the data and should abort. About
Environment
- Operating System version: Mac M1 arm64
- Firebase SDK version: 13.0.1
- Firebase Product: firestore
- Node.js version: v20.18.0
- NPM version: 10.9.0
The problem
I have a class that handles more than 500 operations by simply creating a new batch whenever it reaches 500. On commit, I run commit on each of the batches, and whenever any commit fails, that failed batch goes back through the retry logic (a rough sketch of the pattern follows below).
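As a rough sketch of that pattern (illustrative only, not the author's actual class, which is linked under "Relevant Code" below; the `BatchManager` name is made up):

```ts
import { getFirestore, WriteBatch, DocumentReference } from "firebase-admin/firestore";

// Illustrative sketch of the pattern described above.
class BatchManager {
  private db = getFirestore();
  private batches: WriteBatch[] = [];
  private opCount = 0;

  private current(): WriteBatch {
    // Start a new batch every 500 operations (the per-batch write limit).
    if (this.opCount % 500 === 0) {
      this.batches.push(this.db.batch());
    }
    this.opCount++;
    return this.batches[this.batches.length - 1];
  }

  create(ref: DocumentReference, data: Record<string, unknown>): void {
    this.current().create(ref, data);
  }

  async commitAll(): Promise<void> {
    for (const batch of [...this.batches]) {
      try {
        await batch.commit();
        // Successfully committed batches leave the retry list.
        this.batches = this.batches.filter((b) => b !== batch);
      } catch (err) {
        // Failed batches stay in this.batches so commitAll() can be called again to retry.
        console.error("Batch commit failed, keeping it for retry:", err);
      }
    }
  }
}
```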
I had a batch commit fail with the error:
Deadline exceeded after 60.001s,metadata filters: 0.085s,LB pick: 0.001s,remote_addr=142.250.181.234:443
Then, after a few retries, it was failing with a new reason: ALREADY_EXISTS: Document already exists
and the data was created in Firestore! If a batch commit throws an error, it shouldn't write to Firestore, but it seems commit threw the 60s timeout error while the data was actually committed successfully. Full log
Steps to reproduce:
I had a bad internet connection and was writing the 1300 documents with the create operation.
Relevant Code:
Code of my class