Commit a31be59

Update readme
1 parent 8b8c74e commit a31be59

File tree: BUILD.main, README.md, WORKSPACE, main.cc

4 files changed (+33, -5 lines)

BUILD.main

Lines changed: 1 addition & 1 deletion
@@ -3,5 +3,5 @@ load("@rules_cc//cc:defs.bzl", "cc_binary")
 cc_binary(
     name = "hello_minio",
     srcs = ["main.cc"],
-    data = ["@archive_minio//:files"],
+    data = ["@archive_gcloud//:files"],
 )
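
The `@archive_gcloud//:files` label consumed here is defined by the `BUILD.archive` file that the WORKSPACE rules attach to the downloaded archive (see the WORKSPACE diff below); that file is not part of this commit. Purely as an assumption about its contents, a minimal sketch of such a build file would expose everything extracted from the archive as a single filegroup:

# Hypothetical sketch of BUILD.archive (not included in this diff).
filegroup(
    name = "files",
    srcs = glob(["**"]),
    visibility = ["//visibility:public"],
)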

README.md

Lines changed: 21 additions & 2 deletions
@@ -1,13 +1,20 @@
 # Cloud archive
 
-This `WORKSPACE` rule for Google Bazel lets you securely download private
-workspace dependencies from S3 or Minio.
+This set of `WORKSPACE` rules for Bazel lets you securely download private
+workspace dependencies from Google Cloud Storage, S3, Minio, or Backblaze B2.
+This can be useful when pulling data, code, and binary dependencies from these
+storage backends into your Bazel build.
 
 ## Requirements
 
 This currently only works on Linux, although adapting it to macOS and Windows
 shouldn't be difficult.
 
+### Google Cloud Storage
+
+The `gsutil` command must be installed and authenticated using `gcloud auth
+login`.
+
 ### S3
 
 AWS CLI is required to be in the path for S3 support, and must be set
@@ -20,11 +27,23 @@ multiple profiles.
 Likewise for Minio, the `mc` command should be in the path, and Minio should be
 set up such that `mc cp` is able to download the referenced files.
 
+### Backblaze
+
+The `b2` command line utility must be installed and configured to access the
+account as per [the
+instructions](https://www.backblaze.com/b2/docs/quick_command_line.html).
+
 ## Usage
 
 Please refer to the `WORKSPACE` file in this repository for an example of how
 to use this.
 
+## How to test
+
+To test, you will need to point the workspace targets to your own cloud
+storage, and set up the corresponding command line tools on your machine to the
+point where the typical `cp` command works.
+
 ## Future work
 
 Quite obviously this can also be adapted to other cloud storage providers,

WORKSPACE

Lines changed: 10 additions & 1 deletion
@@ -1,6 +1,6 @@
 workspace(name = "cloud_archive")
 
-load(":cloud_archive.bzl", "minio_archive", "s3_archive")
+load(":cloud_archive.bzl", "minio_archive", "s3_archive", "gs_archive")
 
 s3_archive(
     name = "archive_s3",
@@ -18,3 +18,12 @@ minio_archive(
     sha256 = "bf4dd5304180561a745e816ee6a8db974a3fcf5b9d706a493776d77202c48bc9",
     strip_prefix = "cloud_archive_test",
 )
+
+gs_archive(
+    name = "archive_gcloud",
+    build_file = "//:BUILD.archive",
+    bucket = "depthwise-temp",
+    file_path = "cloud_archive_test.tar.gz",
+    sha256 = "bf4dd5304180561a745e816ee6a8db974a3fcf5b9d706a493776d77202c48bc9",
+    strip_prefix = "cloud_archive_test",
+)
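
The new `gs_archive` rule is loaded from `cloud_archive.bzl`, which this commit does not touch, so its implementation is not visible here. As a hedged illustration only, a gsutil-backed repository rule consuming the attributes above could look roughly like the sketch below; the structure and the omission of checksum verification are assumptions, not the actual code.

# Hypothetical sketch of a gsutil-backed repository rule; the real gs_archive
# in cloud_archive.bzl may differ.
def _gs_archive_impl(ctx):
    # Download the archive with gsutil, which must already be authenticated
    # via `gcloud auth login` (see the README above).
    url = "gs://{}/{}".format(ctx.attr.bucket, ctx.attr.file_path)
    result = ctx.execute(["gsutil", "cp", url, "downloaded.tar.gz"])
    if result.return_code != 0:
        fail("gsutil cp failed: " + result.stderr)

    # The declared sha256 would presumably be verified at this point; checksum
    # checking is omitted from this sketch.

    # Unpack into the external repository root, dropping the prefix directory,
    # and expose the contents through the user-supplied BUILD file.
    ctx.extract("downloaded.tar.gz", stripPrefix = ctx.attr.strip_prefix)
    ctx.symlink(ctx.attr.build_file, "BUILD.bazel")

gs_archive = repository_rule(
    implementation = _gs_archive_impl,
    attrs = {
        "build_file": attr.label(allow_single_file = True),
        "bucket": attr.string(mandatory = True),
        "file_path": attr.string(mandatory = True),
        "sha256": attr.string(),
        "strip_prefix": attr.string(),
    },
)

With a repository like `archive_gcloud` declared this way, targets such as the `hello_minio` binary in `BUILD.main` above can depend on its contents via `@archive_gcloud//:files`.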

main.cc

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ int main() {
   printf("File contents:\n");
   size_t bytes_read = 0;
   char buf[256];
-  FILE* f = fopen("external/archive_minio/cloud_archive_test.txt", "r");
+  FILE* f = fopen("external/archive_gcloud/cloud_archive_test.txt", "r");
   if (f == nullptr) {
     fprintf(stderr, "Failed to open file.");
     return 1;
