Do not load the entire artifact in memory when uploading (#618) #677
Conversation
This compiles, but is wholly untested. I might try to actually run it tomorrow on a fork or something. It would also be nice to be able to inject errors, somehow...
@konstin has all the context on this pattern in uv, I'll delegate review :)
let result = request.send().await.map_err(|e| e.into());

if retryable_strategy.handle(&result) == Some(Retryable::Transient) {
    let retry_decision = retry_policy.should_retry(start_time, n_past_retries);
fwiw we have a more extensive retry policy in uv, but reqwest-retry may be sufficient here.
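For anyone not familiar with reqwest-retry, the loop above splits responsibilities between a retry policy (when to retry) and a retryable strategy (whether a result is worth retrying). A minimal sketch of how those pieces are typically constructed; the backoff values and the use of DefaultRetryableStrategy here are assumptions, not the PR's actual GitHub-specific strategy:

use std::time::SystemTime;

use reqwest_retry::policies::ExponentialBackoff;
use reqwest_retry::{DefaultRetryableStrategy, RetryPolicy};

fn build_retry_pieces() {
    // Assumed values: up to 5 retries, with exponentially growing waits in between.
    let retry_policy = ExponentialBackoff::builder().build_with_max_retries(5);
    // Stand-in for the PR's GitHub-specific strategy; the default one retries
    // connection failures and status codes reqwest-retry considers transient.
    let _retryable_strategy = DefaultRetryableStrategy;

    // The upload loop asks the policy whether (and when) to try again, given
    // when the operation started and how many retries have already happened.
    let _decision = retry_policy.should_retry(SystemTime::now(), 0);
}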
}
}
break result?;
};

if !response.status().is_success() {
This means we don't retry on status errors, such as a 500 (only the 403 we handle in the GitHub upload retry strategy).
This (and the above) is intended to match the existing logic using reqwest-retry... we ought to retry on a 500, though; if we don't, let me fix that while we're here.
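For illustration, "also retry on 5xx" could be expressed as a custom strategy along these lines; the type name and match arms are hypothetical (this is not the PR's actual strategy) and assume reqwest-retry's RetryableStrategy trait:

use reqwest_retry::{Retryable, RetryableStrategy};

// Hypothetical strategy: treat 5xx responses and outright request failures
// (connection reset, timeout, ...) as transient; leave everything else alone.
struct RetryOnServerErrors;

impl RetryableStrategy for RetryOnServerErrors {
    fn handle(
        &self,
        res: &Result<reqwest::Response, reqwest_middleware::Error>,
    ) -> Option<Retryable> {
        match res {
            Ok(response) if response.status().is_server_error() => Some(Retryable::Transient),
            Ok(_) => None,
            Err(_) => Some(Retryable::Transient),
        }
    }
}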
src/github.rs (outdated)
// reqwest wants to take ownership of the body, so it's hard for us to do anything
// clever with reading the file once and calculating the sha256sum while we read.
// So we open and read the file again.
let mut file = tokio::fs::File::open(local_filename).await?;
nit: Wrapping the file in a BufReader gives better performance.
Does it? My understanding was that BufReader is helpful if you're doing lots of small reads, but here sha256 doesn't really have an opinion about the size of the buffer we pass in for each chunk, and I'm intentionally doing very large reads (1 MB). I think BufReader defaults to tokio::io::util::DEFAULT_BUF_SIZE = 8192 bytes, and it looks like the implementation bypasses the internal buffer if you ask for a larger read.
You're right, with the manual large buffer this is actually not necessary.
dry_run,
));

// reqwest wants to take ownership of the body, so it's hard for us to do anything
I think it's totally fine to read the file twice.
src/github.rs (outdated)
if len == 0 {
    break;
};
hasher.update(&buf);
Do we need to slice the buf to the actual read size? I don't know about the properties of SHA-512, but we're currently passing a variable number of trailing zero bytes.
Oof, yes. Not even zero bytes, whatever was at the end of the previous megabyte.
I feel like this is not the first time I have made this mistake in Rust, sigh. (This is also basically tedu's "heartbleed in Rust" from pre-1.0.)
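The minimal fix is to hash only the bytes actually read on each iteration, i.e. slice the buffer to the returned length. A sketch of the corrected loop, using the same crates as the surrounding code (the PR later replaces this whole loop with ReaderStream, see below):

use sha2::{Digest, Sha256};
use tokio::io::AsyncReadExt;

async fn sha256_of_file(path: &std::path::Path) -> std::io::Result<String> {
    let mut file = tokio::fs::File::open(path).await?;
    let mut hasher = Sha256::new();
    let mut buf = vec![0u8; 1_048_576];
    loop {
        let len = file.read(&mut buf).await?;
        if len == 0 {
            break;
        }
        // Feed only the bytes read this iteration, not the stale tail of the buffer.
        hasher.update(&buf[..len]);
    }
    Ok(hex::encode(hasher.finalize()))
}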
I was surprised the API doesn't have a way to pass a whole Read or AsyncRead; that's way more idiomatic and prevents such mistakes. I find myself avoiding the &mut buf APIs usually.
src/github.rs (outdated)
format!("{}.sha256", dest), | ||
Bytes::copy_from_slice(format!("{}\n", digest).as_bytes()), | ||
format!("{dest}.sha256"), | ||
UploadSource::Data(Bytes::copy_from_slice(format!("{digest}\n").as_bytes())), |
Suggested change:
- UploadSource::Data(Bytes::copy_from_slice(format!("{digest}\n").as_bytes())),
+ UploadSource::Data(Bytes::from(format!("{digest}\n"))),
It was like that when I found it :) thanks
Marking as draft since I'm working on a mock GitHub HTTP server to do some fault injection to test things properly and I want to do those tests, but I'm mildly confident about this code at the moment.
This lets me test the release scripts against a custom, fault-injected Python server, but I suppose it might also be useful for downstream users who have GHES, maybe. Patches welcome if anyone is using this and it doesn't quite work right!
Force-pushed from 19e271d to ed37a6a
Sorry about the rebase; the relevant diff since the last push is

diff --git a/src/github.rs b/src/github.rs
index cc5c4e4..676104f 100644
--- a/src/github.rs
+++ b/src/github.rs
@@ -117,6 +117,7 @@ async fn upload_release_artifact(
     let response = loop {
         let request = client
             .put(url.clone())
+            .timeout(Duration::from_secs(60))
             .header("Authorization", format!("Bearer {auth_token}"))
             .header("Content-Type", "application/octet-stream");
         let request = match body {
@@ -134,7 +135,7 @@ async fn upload_release_artifact(
         if retryable_strategy.handle(&result) == Some(Retryable::Transient) {
             let retry_decision = retry_policy.should_retry(start_time, n_past_retries);
             if let reqwest_retry::RetryDecision::Retry { execute_after } = retry_decision {
-                println!("retrying {url}: {result:?}");
+                println!("retrying upload to {url} after {result:?}");
                 let duration = execute_after
                     .duration_since(SystemTime::now())
                     .unwrap_or_else(|_| Duration::default());

plus the addition of src/github_api_tester.py, which is a little Flask app to try to do some fault injection. The tests there pass, but it doesn't seem helpful to run them on every push, especially since they're kind of slow (the uploader is Rust and the tests are Python, so I can't take advantage of either language's async framework's autojump clock).

Adding the 60-second timeout was necessary to get the tests to pass, because for some reason the retry on 401 is hanging: the client believes it's sent the request and the server doesn't seem to be doing anything. While this is probably a bug in my test server (or its dependencies), it does bring up the point that we have no timeout on uploads and maybe one would be good. I'm not sure if 60 seconds is too small.

Oh, and if you missed it in the prior push, the issue above with mishandled buffer sizes was solved by adding tokio-util and using its ReaderStream:

@@ -540,19 +539,15 @@ pub async fn command_upload_release_distributions(args: &ArgMatches) -> Result<(
         // reqwest wants to take ownership of the body, so it's hard for us to do anything
         // clever with reading the file once and calculating the sha256sum while we read.
         // So we open and read the file again.
-        let mut file = tokio::fs::File::open(local_filename).await?;
-        let mut hasher = Sha256::new();
-        let mut buf = vec![0; 1048576];
-        loop {
-            let len = file.read(&mut buf).await?;
-            if len == 0 {
-                break;
-            };
-            hasher.update(&buf);
-        }
-        drop(file);
-
-        let digest = hex::encode(hasher.finalize());
+        let digest = {
+            let file = tokio::fs::File::open(local_filename).await?;
+            let mut stream = tokio_util::io::ReaderStream::with_capacity(file, 1048576);
+            let mut hasher = Sha256::new();
+            while let Some(chunk) = stream.next().await {
+                hasher.update(&chunk?);
+            }
+            hex::encode(hasher.finalize())
+        };
         digests.insert(dest.clone(), digest.clone());
     }
Now that we're not loading every artifact in memory, hopefully the normal runner will work.
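For reference, the usual way to avoid buffering the whole artifact with reqwest is to wrap the file in a stream and hand that to the request body. A sketch assuming reqwest's stream feature and tokio-util; the helper name and wiring are illustrative, not necessarily how this PR's upload path is structured:

use reqwest::Body;
use tokio_util::io::ReaderStream;

// Hypothetical helper: stream the artifact from disk in 1 MB chunks instead of
// reading it all into memory before the PUT.
async fn streaming_put(
    client: &reqwest::Client,
    url: reqwest::Url,
    path: &std::path::Path,
) -> Result<reqwest::Response, Box<dyn std::error::Error>> {
    let file = tokio::fs::File::open(path).await?;
    let stream = ReaderStream::with_capacity(file, 1_048_576);
    let response = client
        .put(url)
        .header("Content-Type", "application/octet-stream")
        .body(Body::wrap_stream(stream))
        .send()
        .await?;
    Ok(response)
}

One caveat: a streamed body can only be sent once, so a retry loop has to reopen the file and rebuild the request on every attempt, consistent with the request being constructed inside the loop in the diff above. GitHub's asset upload endpoint may also want an explicit Content-Length header, which a streaming body doesn't provide on its own.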
Didn't check the test server in detail, but the Rust part looks good.