@@ -130,21 +130,21 @@ To start your InfluxDB instance, use the `influxdb3 serve` command
and provide the following:

- `--object-store`: Specifies the type of object store to use. InfluxDB supports the following: local file system (`file`), `memory`, S3 (and compatible services like Ceph or MinIO) (`s3`), Google Cloud Storage (`google`), and Azure Blob Storage (`azure`).
- - `--writer-id`: A string identifier that determines the server's storage path within the configured storage location and, in a multi-node setup, is used to reference the node
+ - `--node-id`: A string identifier that determines the server's storage path within the configured storage location and, in a multi-node setup, is used to reference the node

The following examples show how to start InfluxDB 3 with different object store configurations:

```bash
# MEMORY
# Stores data in RAM; doesn't persist data
- influxdb3 serve --writer-id=local01 --object-store=memory
+ influxdb3 serve --node-id=local01 --object-store=memory
```

```bash
# FILESYSTEM
# Provide the filesystem directory
influxdb3 serve \
-   --writer-id=local01 \
+   --node-id=local01 \
  --object-store=file \
  --data-dir ~/.influxdb3
```
@@ -161,21 +161,21 @@ To run the [Docker image](/influxdb3/enterprise/install/#docker-image) and persist
docker run -it \
  -v /path/on/host:/path/in/container \
  quay.io/influxdb/influxdb3-enterprise:latest serve \
-   --writer-id my_host \
+   --node-id my_host \
  --object-store file \
  --data-dir /path/in/container
```

```bash
# S3 (defaults to us-east-1 for region)
# Specify the object store type and associated options
- influxdb3 serve --writer-id=local01 --object-store=s3 --bucket=[BUCKET] --aws-access-key=[AWS ACCESS KEY] --aws-secret-access-key=[AWS SECRET ACCESS KEY]
+ influxdb3 serve --node-id=local01 --object-store=s3 --bucket=[BUCKET] --aws-access-key=[AWS ACCESS KEY] --aws-secret-access-key=[AWS SECRET ACCESS KEY]
```

```bash
# MinIO/open source object store (uses the AWS S3 API, with additional parameters)
# Specify the object store type and associated options
- influxdb3 serve --writer-id=local01 --object-store=s3 --bucket=[BUCKET] --aws-access-key=[AWS ACCESS KEY] --aws-secret-access-key=[AWS SECRET ACCESS KEY] --aws-endpoint=[ENDPOINT] --aws-allow-http
+ influxdb3 serve --node-id=local01 --object-store=s3 --bucket=[BUCKET] --aws-access-key=[AWS ACCESS KEY] --aws-secret-access-key=[AWS SECRET ACCESS KEY] --aws-endpoint=[ENDPOINT] --aws-allow-http
```

_For more information about server options, run `influxdb3 serve --help`._
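After starting the server with any of the configurations above, a quick smoke test can confirm it is reachable. The following is a sketch, not part of the documented steps; it assumes the server is listening on the default local port `8181` and exposes a `/health` endpoint:

```shell
# Probe the server's health endpoint (assumed: /health on localhost:8181).
# curl's -w prints the HTTP status code; "000" means the server is unreachable.
status=$(curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:8181/health || true)
echo "health status: ${status}"
```

A `200` status indicates the server is up and ready to accept writes and queries.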
@@ -783,51 +783,51 @@ The following examples show how to configure and start two nodes
for a basic HA setup.
The example commands pass the following options:

- - `--read-from-writer-ids`: makes the node a _read replica_, which checks the object store for data arriving from other nodes
+ - `--read-from-node-ids`: makes the node a _read replica_, which checks the object store for data arriving from other nodes
- `--compactor-id`: activates the Compactor for a node. Only one node can run compaction
- `--run-compactions`: ensures the Compactor runs the compaction process

```bash
## NODE 1

# Example variables
- # writer-id: 'host01'
+ # node-id: 'host01'
# bucket: 'influxdb-3-enterprise-storage'
# compactor-id: 'c01'

- influxdb3 serve --writer-id=host01 --read-from-writer-ids=host02 --compactor-id=c01 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8181 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+ influxdb3 serve --node-id=host01 --read-from-node-ids=host02 --compactor-id=c01 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8181 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```

```bash
## NODE 2

# Example variables
- # writer-id: 'host02'
+ # node-id: 'host02'
# bucket: 'influxdb-3-enterprise-storage'

- influxdb3 serve --writer-id=host02 --read-from-writer-ids=host01 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+ influxdb3 serve --node-id=host02 --read-from-node-ids=host01 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```

After the nodes have started, querying either node returns data for both nodes, and `NODE 1` runs compaction.
To add nodes to this setup, start more read replicas:

```bash
- influxdb3 serve --read-from-writer-ids=host01,host02 [...OPTIONS]
+ influxdb3 serve --read-from-node-ids=host01,host02 [...OPTIONS]
```

> [!Note]
> To run this setup for testing, you can start the nodes in separate terminals and pass a different `--http-bind` value for each. For example:
>
> ```bash
> # In terminal 1
- > influxdb3 serve --writer-id=host01 --http-bind=http://127.0.0.1:8181 [...OPTIONS]
+ > influxdb3 serve --node-id=host01 --http-bind=http://127.0.0.1:8181 [...OPTIONS]
> ```
>
> ```bash
> # In terminal 2
- > influxdb3 serve --writer-id=host02 --http-bind=http://127.0.0.1:8282 [...OPTIONS]
+ > influxdb3 serve --node-id=host02 --http-bind=http://127.0.0.1:8282 [...OPTIONS]
> ```
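To spot-check that replication works in a two-node setup like the one above, you can write to one node and read the data back from the other. This is a sketch, not part of the documented steps; it assumes a database named `mydb` and that the nodes expose the v3 line-protocol write and SQL query endpoints (`/api/v3/write_lp` and `/api/v3/query_sql`):

```shell
# Write a point to NODE 1 (port 8181), then query it from NODE 2 (port 8282).
# `|| true` keeps the check non-fatal when a node isn't running yet.
curl -s -X POST "http://127.0.0.1:8181/api/v3/write_lp?db=mydb" \
  --data-binary "home,room=kitchen temp=22.5" || true
curl -s "http://127.0.0.1:8282/api/v3/query_sql?db=mydb&q=SELECT+*+FROM+home" || true
```

If replication is working, the second command returns the point written to the first node (replicas poll the shared object store, so allow a short delay).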
### High availability with a dedicated Compactor
@@ -845,39 +845,39 @@ The following examples show how to set up HA with a dedicated Compactor node:
## NODE 1 — Writer/Reader Node #1

# Example variables
- # writer-id: 'host01'
+ # node-id: 'host01'
# bucket: 'influxdb-3-enterprise-storage'

- influxdb3 serve --writer-id=host01 --compactor-id=c01 --read-from-writer-ids=host02 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8181 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+ influxdb3 serve --node-id=host01 --compactor-id=c01 --read-from-node-ids=host02 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8181 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```

```bash
## NODE 2 — Writer/Reader Node #2

# Example variables
- # writer-id: 'host02'
+ # node-id: 'host02'
# bucket: 'influxdb-3-enterprise-storage'

- influxdb3 serve --writer-id=host02 --compactor-id=c01 --read-from-writer-ids=host01 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+ influxdb3 serve --node-id=host02 --compactor-id=c01 --read-from-node-ids=host01 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```

2. Start the dedicated compactor node, which uses the following options:

   - `--mode=compactor`: Ensures the node **only** runs compaction.
   - `--compaction-hosts`: Specifies a comma-delimited list of hosts to run compaction for.

-    _**Don't include the replicas (`--read-from-writer-ids`) parameter because this node doesn't replicate data.**_
+    _**Don't include the replicas (`--read-from-node-ids`) parameter because this node doesn't replicate data.**_

```bash
## NODE 3 — Compactor Node

# Example variables
- # writer-id: 'host03'
+ # node-id: 'host03'
# bucket: 'influxdb-3-enterprise-storage'
# compactor-id: 'c01'

- influxdb3 serve --writer-id=host03 --mode=compactor --compactor-id=c01 --compaction-hosts=host01,host02 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+ influxdb3 serve --node-id=host03 --mode=compactor --compactor-id=c01 --compaction-hosts=host01,host02 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```

### High availability with read replicas and a dedicated Compactor
@@ -893,21 +893,21 @@ For a very robust and effective setup for managing time-series data, you can run
## NODE 1 — Writer Node #1

# Example variables
- # writer-id: 'host01'
+ # node-id: 'host01'
# bucket: 'influxdb-3-enterprise-storage'

- influxdb3 serve --writer-id=host01 --mode=read_write --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8181 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+ influxdb3 serve --node-id=host01 --mode=read_write --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8181 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```

```bash
## NODE 2 — Writer Node #2

# Example variables
- # writer-id: 'host02'
+ # node-id: 'host02'
# bucket: 'influxdb-3-enterprise-storage'

- influxdb3 serve --writer-id=host02 --mode=read_write --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+ influxdb3 serve --node-id=host02 --mode=read_write --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```

2. Start the dedicated Compactor node (`--mode=compactor`) and ensure it runs compactions on the specified `compaction-hosts`.
@@ -916,36 +916,36 @@ For a very robust and effective setup for managing time-series data, you can run
## NODE 3 — Compactor Node

# Example variables
- # writer-id: 'host03'
+ # node-id: 'host03'
# bucket: 'influxdb-3-enterprise-storage'

- influxdb3 serve --writer-id=host03 --mode=compactor --compaction-hosts=host01,host02 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+ influxdb3 serve --node-id=host03 --mode=compactor --compaction-hosts=host01,host02 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```

3. Finally, start the query nodes as _read-only_.
   Include the following options:

   - `--mode=read`: Sets the node to _read-only_
-    - `--read-from-writer-ids=host01,host02`: A comma-delimited list of host IDs to read data from
+    - `--read-from-node-ids=host01,host02`: A comma-delimited list of host IDs to read data from

```bash
## NODE 4 — Read Node #1

# Example variables
- # writer-id: 'host04'
+ # node-id: 'host04'
# bucket: 'influxdb-3-enterprise-storage'

- influxdb3 serve --writer-id=host04 --mode=read --object-store=s3 --read-from-writer-ids=host01,host02 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8383 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+ influxdb3 serve --node-id=host04 --mode=read --object-store=s3 --read-from-node-ids=host01,host02 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8383 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```

```bash
## NODE 5 — Read Node #2

# Example variables
- # writer-id: 'host05'
+ # node-id: 'host05'
# bucket: 'influxdb-3-enterprise-storage'

- influxdb3 serve --writer-id=host05 --mode=read --object-store=s3 --read-from-writer-ids=host01,host02 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8484 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+ influxdb3 serve --node-id=host05 --mode=read --object-store=s3 --read-from-node-ids=host01,host02 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8484 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```

Congratulations, you have a robust setup for workload isolation using {{% product-name %}}.