@@ -147,14 +147,15 @@ The following examples show how to start InfluxDB 3 with different object store
``` bash
# Memory object store
# Stores data in RAM; doesn't persist data
- influxdb3 serve --node-id=local01 --object-store=memory
+ influxdb3 serve --node-id=host01 --cluster-id=cluster01 --object-store=memory
```

``` bash
# Filesystem object store
# Provide the filesystem directory
influxdb3 serve \
- --node-id=local01 \
+ --node-id=host01 \
+ --cluster-id=cluster01 \
--object-store=file \
--data-dir ~/.influxdb3
```
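
The serve commands above are plain CLI invocations; in practice it can help to wrap them in a small launcher script. Below is a minimal, hypothetical sketch (the variable names and defaults are assumptions for illustration, not part of influxdb3 itself) that builds the filesystem-store command from environment variables and prints it:

``` bash
#!/usr/bin/env sh
# Hypothetical launcher sketch -- variable names and defaults are
# illustrative, not part of influxdb3 itself.
NODE_ID="${NODE_ID:-host01}"
CLUSTER_ID="${CLUSTER_ID:-cluster01}"
DATA_DIR="${DATA_DIR:-$HOME/.influxdb3}"

# Build the filesystem-store serve command from the variables above.
CMD="influxdb3 serve --node-id=${NODE_ID} --cluster-id=${CLUSTER_ID} --object-store=file --data-dir ${DATA_DIR}"
echo "$CMD"
# To actually start the server, run: exec $CMD
```

Overriding any variable (for example, `NODE_ID=host02 ./serve.sh`) reuses the same script across nodes.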
@@ -178,6 +179,7 @@ docker run -it \
-v /path/on/host:/path/in/container \
quay.io/influxdb/influxdb3-enterprise:latest serve \
--node-id my_host \
+ --cluster-id my_cluster \
--object-store file \
--data-dir /path/in/container
```
@@ -188,7 +190,8 @@ docker run -it \
``` bash
influxdb3 serve \
- --node-id=local01 \
+ --node-id=host01 \
+ --cluster-id=cluster01 \
--object-store=s3 \
--bucket=BUCKET \
--aws-access-key=AWS_ACCESS_KEY \
@@ -201,7 +204,7 @@ influxdb3 serve \
# Specify the object store type and associated options

``` bash
- influxdb3 serve --node-id=local01 --object-store=s3 --bucket=BUCKET \
+ influxdb3 serve --node-id=host01 --cluster-id=cluster01 --object-store=s3 --bucket=BUCKET \
--aws-access-key=AWS_ACCESS_KEY \
--aws-secret-access-key=AWS_SECRET_ACCESS_KEY \
--aws-endpoint=ENDPOINT \
@@ -844,104 +847,92 @@ In a basic HA setup:
> Compacted data is meant for a single writer, and many readers.

The following examples show how to configure and start two nodes
- for a basic HA setup.
- The example commands pass the following options:
-
- - `--read-from-node-ids`: makes the node a _read replica_, which checks the Object store for data arriving from other nodes
- - `--compactor-id`: activates the Compactor for a node. Only one node can run compaction
- - `--run-compactions`: ensures the Compactor runs the compaction process
+ for a basic HA setup. Node 1 is configured as the compactor because it's also assigned the `compact` mode.
``` bash
## NODE 1

# Example variables
# node-id: 'host01'
+ # cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
- # compactor-id: 'c01'

- influxdb3 serve --node-id=host01 --read-from-node-ids=host02 --compactor-id=c01 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://{{< influxdb/host >}} --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+ influxdb3 serve --node-id=host01 --cluster-id=cluster01 --mode=ingest,query,compact --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://{{< influxdb/host >}} --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
```
## NODE 2

# Example variables
# node-id: 'host02'
+ # cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'

- influxdb3 serve --node-id=host02 --read-from-node-ids=host01 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8282
+ influxdb3 serve --node-id=host02 --cluster-id=cluster01 --mode=ingest,query --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8282
--aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
After the nodes have started, querying either node returns data for both nodes, and `NODE 1` runs compaction.
- To add nodes to this setup, start more read replicas:
-
- ```bash
- influxdb3 serve --read-from-node-ids=host01,host02 [...OPTIONS]
- ```
+ To add nodes to this setup, start more read replicas with the same cluster ID:

> [!Note]
> To run this setup for testing, you can start nodes in separate terminals and pass a different `--http-bind` value for each. For example:
>
> ``` bash
> # In terminal 1
- > influxdb3 serve --node-id=host01 --http-bind=http://{{< influxdb/host >}} [...OPTIONS]
+ > influxdb3 serve --node-id=host01 --cluster-id=cluster01 --http-bind=http://{{< influxdb/host >}} [...OPTIONS]
> ```
>
> ``` bash
> # In terminal 2
- > influxdb3 serve --node-id=host01 --http-bind=http://{{< influxdb/host >}} [...OPTIONS]
+ > influxdb3 serve --node-id=host02 --cluster-id=cluster01 --http-bind=http://localhost:8282 [...OPTIONS]
> ```
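
After starting both nodes, a quick way to confirm each is reachable is to probe its HTTP endpoint. The sketch below assumes each node serves `GET /health` on its `--http-bind` address (verify against your version's HTTP API reference); the host:port values mirror the example nodes above and are illustrative:

``` bash
# Hypothetical smoke check, assuming each node serves GET /health on its
# --http-bind address (check your version's HTTP API reference).
health_url() {
  # Build the health endpoint URL for a given host:port.
  printf 'http://%s/health' "$1"
}

for HOST in localhost:8181 localhost:8282; do
  echo "checking $(health_url "$HOST")"
  # Uncomment to probe a running node:
  # curl -fsS "$(health_url "$HOST")" >/dev/null && echo "ok: $HOST"
done
```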
### High availability with a dedicated Compactor

Data compaction in InfluxDB 3 is one of the more computationally expensive operations.
- To ensure that your read-write node doesn't slow down due to compaction work, set up a compactor-only node for consistent and high performance across all nodes.
+ To ensure that your read-write nodes don't slow down due to compaction work, set up a compactor-only node for consistent and high performance across all nodes.

{{< img-hd src="/img/influxdb/influxdb-3-enterprise-dedicated-compactor.png" alt="Dedicated Compactor setup" />}}

The following examples show how to set up HA with a dedicated Compactor node:

- 1. Start two read-write nodes as read replicas, similar to the previous example,
-    and pass the `--compactor-id` option with a dedicated compactor ID (which you'll configure in the next step).
+ 1. Start two read-write nodes as read replicas, similar to the previous example.
```
## NODE 1 — Writer/Reader Node #1

# Example variables
# node-id: 'host01'
+ # cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'

- influxdb3 serve --node-id=host01 --compactor-id=c01 --read-from-node-ids=host02 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://{{< influxdb/host >}} --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+ influxdb3 serve --node-id=host01 --cluster-id=cluster01 --mode=ingest,query --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://{{< influxdb/host >}} --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
``` bash
## NODE 2 — Writer/Reader Node #2

# Example variables
# node-id: 'host02'
+ # cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'

- influxdb3 serve --node-id=host02 --compactor-id=c01 --read-from-node-ids=host01 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+ influxdb3 serve --node-id=host02 --cluster-id=cluster01 --mode=ingest,query --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
- 2. Start the dedicated compactor node, which uses the following options:
-
-    - `--mode=compactor`: Ensures the node **only** runs compaction.
-    - `--compaction-hosts`: Specifies a comma-delimited list of hosts to run compaction for.
-
-    _**Don't include the replicas (`--read-from-node-ids`) parameter because this node doesn't replicate data.**_
+ 2. Start the dedicated compactor node with the `--mode=compact` option. This ensures the node **only** runs compaction.
``` bash
## NODE 3 — Compactor Node

# Example variables
# node-id: 'host03'
+ # cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
- # compactor-id: 'c01'

- influxdb3 serve --node-id=host03 --mode=compactor --compactor-id=c01 --compaction-hosts=host01,host02 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+ influxdb3 serve --node-id=host03 --cluster-id=cluster01 --mode=compact --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
### High availability with read replicas and a dedicated Compactor
@@ -950,18 +941,18 @@ For a very robust and effective setup for managing time-series data, you can run

{{< img-hd src="/img/influxdb/influxdb-3-enterprise-workload-isolation.png" alt="Workload Isolation Setup" />}}

- 1. Start writer nodes for ingest. Enterprise doesn't designate a write-only mode, so assign them **`read_write`** mode.
-    To achieve the benefits of workload isolation, you'll send _only write requests_ to these read-write nodes. Later, you'll configure the _read-only_ nodes.
+ 1. Start ingest nodes by assigning them the **`ingest`** mode.
+    To achieve the benefits of workload isolation, you'll send _only write requests_ to these ingest nodes. Later, you'll configure the _read-only_ nodes.
```
## NODE 1 — Writer Node #1

# Example variables
# node-id: 'host01'
+ # cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'

- influxdb3 serve --node-id=host01 --mode=read_write --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://{{< influxdb/host >}} --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
-
+ influxdb3 serve --node-id=host01 --cluster-id=cluster01 --mode=ingest --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://{{< influxdb/host >}} --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```

<!-- The following examples use different ports for different nodes. Don't use the influxdb/host shortcode below. -->
@@ -971,47 +962,45 @@ For a very robust and effective setup for managing time-series data, you can run
# Example variables
# node-id: 'host02'
+ # cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'

- Usage: $ influxdb3 serve --node-id=host02 --mode=read_write --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+ influxdb3 serve --node-id=host02 --cluster-id=cluster01 --mode=ingest --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
- 2. Start the dedicated Compactor node (`--mode=compactor`) and ensure it runs compactions on the specified `compaction-hosts`.
+ 2. Start the dedicated Compactor node with `--mode=compact`.
```
## NODE 3 — Compactor Node

# Example variables
# node-id: 'host03'
+ # cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'

- influxdb3 serve --node-id=host03 --mode=compactor --compaction-hosts=host01,host02 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+ influxdb3 serve --node-id=host03 --cluster-id=cluster01 --mode=compact --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
- 3. Finally, start the query nodes as _read-only_.
-    Include the following options:
-
-    - `--mode=read`: Sets the node to _read-only_
-    - `--read-from-node-ids=host01,host02`: A comma-delimited list of host IDs to read data from
+ 3. Finally, start the query nodes as _read-only_ with `--mode=query`.

- ```bash
+ ```
## NODE 4 — Read Node #1

# Example variables
# node-id: 'host04'
+ # cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'

- influxdb3 serve --node-id=host04 --mode=read --object-store=s3 --read-from-node-ids=host01,host02 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8383 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+ influxdb3 serve --node-id=host04 --cluster-id=cluster01 --mode=query --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8383 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
```
## NODE 5 — Read Node #2

# Example variables
# node-id: 'host05'
+ # cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'

- influxdb3 serve --node-id=host05 --mode=read --object-store=s3 --read-from-node-ids=host01,host02 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8484 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+ influxdb3 serve --node-id=host05 --cluster-id=cluster01 --mode=query --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8484 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
Congratulations, you have a robust setup for workload isolation using {{% product-name %}}.
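
Scaling this pattern further follows the same shape: each additional query node gets a unique `--node-id` and `--http-bind` port while reusing the same `--cluster-id` so it joins the existing cluster. Below is a hypothetical sketch that prints the commands for two more read replicas; the node IDs and ports are illustrative, not prescribed values:

``` bash
# Hypothetical sketch: print serve commands for two extra read replicas.
# Node IDs and ports are illustrative; the cluster ID must match the
# existing cluster so the new nodes join it.
CLUSTER_ID="cluster01"
BUCKET="influxdb-3-enterprise-storage"
PORT=8585
for NODE in host06 host07; do
  echo "influxdb3 serve --node-id=${NODE} --cluster-id=${CLUSTER_ID} --mode=query --object-store=s3 --bucket=${BUCKET} --http-bind=http://localhost:${PORT}"
  PORT=$((PORT + 101))
done
```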