
Commit b140bb1

Auto merge of #653 - Mark-Simulacrum:watch-disk-often, r=Mark-Simulacrum
Check disk space more often

We are currently seeing a high rate of `git clone` failures caused by exhausted disk space, presumably because this check does not run often enough, though we don't have strong evidence for that. With the current instance disk size (100 GB), usage would have to grow by less than 15 GB in 5 minutes (roughly 3 GB/minute) for the old check to catch it; that seems like a reasonable bound, but apparently isn't. The new interval is 30 seconds, and usage would have to grow by 20 GB between checks to slip past the watcher, which seems very unlikely.

It is likely also worth exploring a retry strategy for errored crates, but that will take more design work to make sure we do eventually complete.
2 parents 7c9db09 + 96e63b8 commit b140bb1
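The arithmetic in the commit message can be sketched as follows. This is an illustration, not crater code; the 100 GB disk size comes from the message, and the helper name is ours:

```rust
// Free-space headroom left when the watcher threshold fires, assuming
// the 100 GB instance disk mentioned in the commit message.
const DISK_GB: f32 = 100.0;

fn headroom_gb(threshold: f32) -> f32 {
    // Space still free when usage reaches `threshold` of the disk.
    DISK_GB * (1.0 - threshold)
}

fn main() {
    // Old settings: threshold 0.85, checked every 300 s (5 min).
    let old = headroom_gb(0.85);
    println!("old: {:.0} GB headroom, {:.1} GB/min growth budget", old, old / 5.0);

    // New settings: threshold 0.80, checked every 30 s.
    let new = headroom_gb(0.80);
    println!("new: {:.0} GB headroom per 30 s check", new);
}
```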

File tree

2 files changed: +3 −2 lines


src/db/migrations.rs

Lines changed: 1 addition & 0 deletions

@@ -5,6 +5,7 @@ use std::collections::HashSet;
 
 enum MigrationKind {
     SQL(&'static str),
+    #[allow(clippy::type_complexity)]
     Code(Box<dyn Fn(&Transaction) -> ::rusqlite::Result<()>>),
 }
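The `#[allow(clippy::type_complexity)]` attribute silences clippy's lint on the boxed closure type. An alternative the commit does not take is a type alias, sketched here with stand-in types (`Transaction` and `SqlResult` below are placeholders for the rusqlite types, not the real ones):

```rust
struct Transaction; // stand-in for rusqlite::Transaction
type SqlResult = Result<(), String>; // stand-in for rusqlite::Result<()>

// Naming the closure type keeps the enum variant readable and
// avoids triggering clippy::type_complexity in the first place.
type MigrationFn = Box<dyn Fn(&Transaction) -> SqlResult>;

#[allow(dead_code)]
enum MigrationKind {
    Sql(&'static str),
    Code(MigrationFn),
}

fn main() {
    let m = MigrationKind::Code(Box::new(|_tx: &Transaction| Ok(())));
    if let MigrationKind::Code(f) = m {
        let tx = Transaction;
        assert!(f(&tx).is_ok());
        println!("code migration ran");
    }
}
```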

src/runner/mod.rs

Lines changed: 2 additions & 2 deletions

@@ -16,8 +16,8 @@ use std::collections::HashMap;
 use std::sync::Mutex;
 use std::time::Duration;
 
-const DISK_SPACE_WATCHER_INTERVAL: Duration = Duration::from_secs(300);
-const DISK_SPACE_WATCHER_THRESHOLD: f32 = 0.85;
+const DISK_SPACE_WATCHER_INTERVAL: Duration = Duration::from_secs(30);
+const DISK_SPACE_WATCHER_THRESHOLD: f32 = 0.80;
 
 #[derive(Debug, Fail)]
 #[fail(display = "overridden task result to {}", _0)]
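How a watcher might consume these constants can be sketched as follows. The two constants match the diff above; everything else (the predicate, the byte figures) is a hypothetical illustration, not crater's actual watcher:

```rust
use std::time::Duration;

// Constants as in src/runner/mod.rs after this commit.
const DISK_SPACE_WATCHER_INTERVAL: Duration = Duration::from_secs(30);
const DISK_SPACE_WATCHER_THRESHOLD: f32 = 0.80;

// Hypothetical predicate: has usage crossed the watcher threshold?
fn disk_usage_critical(used: u64, total: u64) -> bool {
    used as f32 / total as f32 >= DISK_SPACE_WATCHER_THRESHOLD
}

fn main() {
    let gb = 1024 * 1024 * 1024u64;
    let total = 100 * gb; // 100 GB instance disk

    assert!(!disk_usage_critical(79 * gb, total)); // below threshold
    assert!(disk_usage_critical(85 * gb, total)); // above threshold

    // A watcher thread would sleep DISK_SPACE_WATCHER_INTERVAL between
    // checks and react (purge caches, stop work) when this returns true.
    println!("checking every {:?}", DISK_SPACE_WATCHER_INTERVAL);
}
```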

0 commit comments
