LICENSE (8 additions, 0 deletions)
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for specific language governing permissions and limitations under the License.

--------------------------------------------------------End of License---------------------------------------------------------------------------------------
Minio Client SDK for Go
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
>Note: For better performance, consider running this transfer from a VM in the same region as the source or the target. Data is uploaded as it is downloaded from the source, so the transfer is bound by the bandwidth of the VM.
### Synchronously copy data between Azure Blob Storage targets and sources
You can synchronously transfer data between Azure Storage accounts, containers and blob types.
First, you must set the account key of the source storage account.
```bash
export SOURCE_ACCOUNT_KEY=<YOUR KEY>
```
Then you can specify the URI of the source. Prefixes are supported.
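For instance, a blob-to-blob copy might look like the following sketch. The account and container names are hypothetical; `blob-blockblob` is one of the transfer types listed under Command Options below.

```bash
# Hypothetical example: copy every blob under "sourcecontainer" in the
# source account (whose key is exported in SOURCE_ACCOUNT_KEY) to
# "mycontainer" in the target account, writing them as block blobs.
blobporter -f "https://sourceaccount.blob.core.windows.net/sourcecontainer/" -c mycontainer -t blob-blockblob
```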
>Note: For better performance, consider running this transfer from a VM in the same region as the source or the target. Data is uploaded as it is downloaded from the source, so the transfer is bound by the bandwidth of the VM.
### Upload from an HTTP/HTTPS source to Azure Blob Storage
You can use this approach to transfer data between Azure Storage accounts and blob types, e.g. transfer a blob from one account to another, or from a page blob to a block blob.
The source is a page blob with a SAS token and the target is a block blob:
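A sketch of such a transfer, assuming a hypothetical SAS URL and container name, using the documented `http-blockblob` transfer type:

```bash
# Hypothetical example: the source is a page blob exposed via a SAS URL;
# the target is written as a block blob in "mycontainer".
blobporter -f "https://sourceaccount.blob.core.windows.net/container/disk.vhd?sv=2017-04-17&sig=..." -c mycontainer -t http-blockblob
```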
>Note: In HTTP/HTTPS to blob transfers, data is downloaded and uploaded as it is received without disk IO.
>Note: For better performance, consider running this transfer from a VM in the same region as the source or the target. Data is uploaded as it is downloaded from the source, so the transfer is bound by the bandwidth of the VM.
### Download from Azure Blob Storage
By default, files are downloaded to the same directory where you are running blobporter.
## Command Options
- `-f`, `--source_file` *string* URL, Azure Blob or S3 Endpoint, file or files (e.g. /data/*.gz) to upload.
- `-c`, `--container_name` *string* container name (e.g. `mycontainer`).
- `-d`, `--dup_check_level` *string* desired level of effort to detect duplicate data blocks to minimize upload size. Must be one of None, ZeroOnly, Full (default "None").
- `-t`, `--transfer_type` *string* defines the source and target of the transfer. Must be one of file-blockblob, file-pageblob, http-blockblob, http-pageblob, blob-file, pageblock-file (alias of blob-file), blockblob-file (alias of blob-file), http-file, blob-pageblob, blob-blockblob, s3-pageblob and s3-blockblob.
- `-m`, `--compute_blockmd5` *bool* if present or true, a block-level MD5 hash will be computed and included as a header when the block is sent to blob storage. Default is false.
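Putting several of the options above together, a minimal upload invocation might look like this sketch (the path and container name are examples only):

```bash
# Hypothetical example: upload all .gz files under /data to "mycontainer"
# as block blobs, computing block-level MD5 hashes along the way (-m).
blobporter -f "/data/*.gz" -c mycontainer -t file-blockblob -m
```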
By default, BlobPorter creates 5 readers and 8 workers for each core on the computer.
- For transfers from fast disks (SSD) or HTTP sources, reducing the number of readers or workers could provide better performance than the default values. Reduce these values if you want to minimize resource utilization; lowering them reduces contention and the likelihood of throttling.
- Transfers can be batched. Each batch transfer will concurrently read and transfer up to 500 files (default value) from the source. The batch size can be modified using the -x option.
- Blobs smaller than the block size are transferred in a single operation. With relatively small files (<32MB) performance may be better if you set a block size equal to the size of the files. Setting the number of workers and readers to the number of files could yield performance gains.