Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

--------------------------------------------------------End of License---------------------------------------------------------------------------------------

Minio Client SDK for Go

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
>Note: It is recommended to perform this operation from a VM running on the cloud. This is a network bound operation where data is uploaded as it is received from the source.

### Upload from an HTTP/HTTPS source to Azure Blob Storage
You can use this approach to transfer data between Azure Storage accounts and blob types - e.g. transfer a blob from one account to another or from a page blob to a block blob.

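To make this concrete, here is a hedged sketch of such a transfer using the documented `-f`, `-c` and `-t` options; the source SAS URL and target container name are hypothetical placeholders, not real values:

```bash
# Hypothetical example: copy a blob exposed via a SAS URL into a block blob.
# The account, container and SAS token below are placeholders.
blobporter -f "https://sourceaccount.blob.core.windows.net/sourcecontainer/myblob?sv=...&sig=..." -c targetcontainer -t http-blockblob
```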
### Synchronously Copy data in Azure Blob Storage

You can synchronously transfer data between Azure Storage accounts, containers and blob types.

First, you must set the account key of the source storage account.

```bash
export SOURCE_ACCOUNT_KEY=<YOUR KEY>
```

Then you can specify the URI of the source. Prefixes are supported.
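As an illustrative sketch of the two steps above (the account, container and prefix names are hypothetical placeholders):

```bash
# Set the key of the source storage account (placeholder value).
export SOURCE_ACCOUNT_KEY=<YOUR KEY>

# Copy every blob under the given prefix into block blobs in the target container.
# Account, container and prefix names here are hypothetical.
blobporter -f "https://sourceaccount.blob.core.windows.net/sourcecontainer/myprefix" -c targetcontainer -t blob-blockblob
```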

>Note: In HTTP/HTTPS to blob transfers, data is downloaded and uploaded as it is received, without disk IO.

>Note: It is recommended to perform this operation from a VM running in the same region as the source or the target. As with all HTTP based transfers, data is uploaded as it is downloaded from the source, therefore the transfer is primarily network bound.

### Download from Azure Blob Storage

By default, files are downloaded to the same directory where you are running BlobPorter.

## Command Options

-`-f`, `--source_file`*string* URL, Azure Blob or S3 Endpoint, file or files (e.g. /data/*.gz) to upload.

-`-c`, `--container_name`*string* container name (e.g. `mycontainer`).
-`-d`, `--dup_check_level`*string* desired level of effort to detect duplicate data blocks to minimize upload size. Must be one of None, ZeroOnly, Full (default "None")
-`-t`, `--transfer_type`*string* defines the source and target of the transfer. Must be one of file-blockblob, file-pageblob, http-blockblob, http-pageblob, blob-file, pageblock-file (alias of blob-file), blockblob-file (alias of blob-file), http-file, blob-pageblob, blob-blockblob, s3-pageblob and s3-blockblob.

-`-m`, `--compute_blockmd5`*bool* if present or true, the block level MD5 hash will be computed and included as a header when the block is sent to blob storage. Default is false.

By default, BlobPorter creates 5 readers and 8 workers for each core on the computer.

- For transfers from fast disks (SSD) or HTTP sources, reducing the number of readers or workers could provide better performance than the default values. Reduce these values if you want to minimize resource utilization. Lowering these numbers reduces contention and the likelihood of experiencing throttling conditions.
- Transfers can be batched. Each batch transfer will concurrently read and transfer up to 500 files (default value) from the source. The batch size can be modified using the -x option.
- Blobs smaller than the block size are transferred in a single operation. With relatively small files (<32MB) performance may be better if you set a block size equal to the size of the files. Setting the number of workers and readers to the number of files could yield performance gains.
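As one sketch of this tuning, a batched upload could reduce the batch size with the documented `-x` option; the file pattern and container name below are hypothetical placeholders:

```bash
# Hypothetical example: upload a set of small files as block blobs,
# limiting each batch to 200 concurrently transferred files instead
# of the default 500. Pattern and container name are placeholders.
blobporter -f "/data/small/*.gz" -c mycontainer -t file-blockblob -x 200
```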