Commit 169aafc (1 parent: d761b55)

fix: bump GS datastore save_bytes timeout (#2397)

GS datastore uploads time out with large files because the default timeout is 60 seconds and does not account for ongoing activity on the connection. This PR bumps the timeout value to match that of the Azure storage implementation.

File tree

1 file changed (+3, -1 lines)


metaflow/plugins/datastores/gs_storage.py (3 additions, 1 deletion)

@@ -119,7 +119,9 @@ def save_bytes_single(
             blob.metadata = {"metaflow-user-attributes": json.dumps(metadata)}
             from google.cloud.storage.retry import DEFAULT_RETRY

-            blob.upload_from_filename(tmpfile, retry=DEFAULT_RETRY)
+            blob.upload_from_filename(
+                tmpfile, retry=DEFAULT_RETRY, timeout=(14400, 60)
+            )  # generous timeout for massive uploads. Use the same values as for Azure (connection_timeout, read_timeout)
         except Exception as e:
             process_gs_exception(e)