
Commit ef40b6a

peterbarnett03 authored and jstirnaman committed
update: adjust for new parameters, cluster configuration, modes, and some grammar
1 parent 8526d7d commit ef40b6a

6 files changed: +48, -59 lines changed

api-docs/influxdb3/core/v3/ref.yml

Lines changed: 1 addition & 1 deletion
@@ -118,7 +118,7 @@ tags:
     InfluxDB 3 Core provides the InfluxDB 3 Processing engine, an embedded Python VM that can dynamically load and trigger Python plugins in response to events in your database.
     Use Processing engine plugins and triggers to run code and perform tasks for different database events.
 
-    To get started with the Processing engine, see the [Processing engine and Python plugins](/influxdb3/core/processing-engine/) guide.
+    To get started with the Processing Engine, see the [Processing Engine and Python plugins](/influxdb3/core/processing-engine/) guide.
   - name: Quick start
     description: |
       1. [Check the status](#section/Server-information) of the InfluxDB server.
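
The Quick start above begins with a server status check. As a concrete companion, a minimal sketch of that first step; the default port 8181 and the `/health` path are assumptions here, not something this diff states:

```bash
# Expect HTTP 200 when the server is up and healthy.
# Port 8181 and the /health endpoint are assumptions, not part of this diff.
curl -i http://localhost:8181/health
```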

api-docs/influxdb3/enterprise/v3/ref.yml

Lines changed: 1 addition & 1 deletion
@@ -118,7 +118,7 @@ tags:
     InfluxDB 3 Enterprise provides the InfluxDB 3 Processing engine, an embedded Python VM that can dynamically load and trigger Python plugins in response to events in your database.
     Use Processing engine plugins and triggers to run code and perform tasks for different database events.
 
-    To get started with the Processing engine, see the [Processing engine and Python plugins](/influxdb3/enterprise/processing-engine/) guide.
+    To get started with the Processing Engine, see the [Processing Engine and Python plugins](/influxdb3/enterprise/processing-engine/) guide.
   - name: Quick start
     description: |
       1. [Check the status](#section/Server-information) of the InfluxDB server.

content/influxdb3/core/plugins.md

Lines changed: 2 additions & 2 deletions
@@ -1,9 +1,9 @@
 ---
-title: Processing engine and Python plugins
+title: Processing Engine and Python plugins
 description: Use the Python processing engine to trigger and execute custom code on different events in an {{< product-name >}} instance.
 menu:
   influxdb3_core:
-    name: Processing engine and Python plugins
+    name: Processing Engine and Python plugins
 weight: 4
 influxdb3/core/tags: []
 related:

content/influxdb3/enterprise/plugins.md

Lines changed: 2 additions & 2 deletions
@@ -1,9 +1,9 @@
 ---
-title: Processing engine and Python plugins
+title: Processing Engine and Python plugins
 description: Use the Python processing engine to trigger and execute custom code on different events in an {{< product-name >}} instance.
 menu:
   influxdb3_enterprise:
-    name: Processing engine and Python plugins
+    name: Processing Engine and Python plugins
 weight: 4
 influxdb3/core/tags: []
 related:
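
Both plugins.md pages describe the Processing Engine loading Python plugins in response to database events. A hypothetical sketch of wiring one up with the `influxdb3` CLI; the `create trigger` subcommand shape, its flags, and the `writes.py` filename are illustrative assumptions, not taken from this commit:

```bash
# Hypothetical: run the plugin in writes.py whenever rows are written
# to any table in the `sensors` database. Flag names are assumptions.
influxdb3 create trigger \
  --database sensors \
  --plugin-filename writes.py \
  --trigger-spec "all_tables" \
  write_logger
```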

content/shared/v3-core-get-started/_index.md

Lines changed: 4 additions & 4 deletions
@@ -156,14 +156,14 @@ The following examples show how to start InfluxDB 3 with different object store
 ```bash
 # Memory object store
 # Stores data in RAM; doesn't persist data
-influxdb3 serve --node-id=local01 --object-store=memory
+influxdb3 serve --node-id=host01 --object-store=memory
 ```
 
 ```bash
 # Filesystem object store
 # Provide the filesystem directory
 influxdb3 serve \
-  --node-id=local01 \
+  --node-id=host01 \
   --object-store=file \
   --data-dir ~/.influxdb3
 ```
@@ -198,7 +198,7 @@ docker run -it \
 
 ```bash
 influxdb3 serve \
-  --node-id=local01 \
+  --node-id=host01 \
   --object-store=s3 \
   --bucket=BUCKET \
   --aws-access-key=AWS_ACCESS_KEY \
@@ -211,7 +211,7 @@ influxdb3 serve \
 # Specify the object store type and associated options
 
 ```bash
-influxdb3 serve --node-id=local01 --object-store=s3 --bucket=BUCKET \
+influxdb3 serve --node-id=host01 --object-store=s3 --bucket=BUCKET \
   --aws-access-key=AWS_ACCESS_KEY \
   --aws-secret-access-key=AWS_SECRET_ACCESS_KEY \
  --aws-endpoint=ENDPOINT \
content/shared/v3-enterprise-get-started/_index.md

Lines changed: 38 additions & 49 deletions
@@ -147,14 +147,15 @@ The following examples show how to start InfluxDB 3 with different object store
 ```bash
 # Memory object store
 # Stores data in RAM; doesn't persist data
-influxdb3 serve --node-id=local01 --object-store=memory
+influxdb3 serve --node-id=host01 --cluster-id=cluster01 --object-store=memory
 ```
 
 ```bash
 # Filesystem object store
 # Provide the filesystem directory
 influxdb3 serve \
-  --node-id=local01 \
+  --node-id=host01 \
+  --cluster-id=cluster01 \
   --object-store=file \
   --data-dir ~/.influxdb3
 ```
@@ -178,6 +179,7 @@ docker run -it \
   -v /path/on/host:/path/in/container \
   quay.io/influxdb/influxdb3-enterprise:latest serve \
   --node-id my_host \
+  --cluster-id my_cluster \
   --object-store file \
   --data-dir /path/in/container
 ```
@@ -188,7 +190,8 @@ docker run -it \
 
 ```bash
 influxdb3 serve \
-  --node-id=local01 \
+  --node-id=host01 \
+  --cluster-id=cluster01 \
   --object-store=s3 \
   --bucket=BUCKET \
   --aws-access-key=AWS_ACCESS_KEY \
@@ -201,7 +204,7 @@ influxdb3 serve \
 # Specify the object store type and associated options
 
 ```bash
-influxdb3 serve --node-id=local01 --object-store=s3 --bucket=BUCKET \
+influxdb3 serve --node-id=host01 --cluster-id=cluster01 --object-store=s3 --bucket=BUCKET \
   --aws-access-key=AWS_ACCESS_KEY \
   --aws-secret-access-key=AWS_SECRET_ACCESS_KEY \
  --aws-endpoint=ENDPOINT \
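
A possible hardening of the S3 examples above, sketched on the assumption that the server's object store client also honors the standard AWS environment variables; this diff doesn't confirm that, so treat it as a sketch to verify:

```bash
# Assumption: standard AWS env vars are honored by the S3 client,
# keeping credentials out of shell history and process listings.
export AWS_ACCESS_KEY_ID=AWS_ACCESS_KEY
export AWS_SECRET_ACCESS_KEY=AWS_SECRET_ACCESS_KEY
influxdb3 serve --node-id=host01 --cluster-id=cluster01 \
  --object-store=s3 --bucket=BUCKET --aws-endpoint=ENDPOINT
```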
@@ -844,104 +847,92 @@ In a basic HA setup:
 > Compacted data is meant for a single writer, and many readers.
 
 The following examples show how to configure and start two nodes
-for a basic HA setup.
-The example commands pass the following options:
-
-- `--read-from-node-ids`: makes the node a _read replica_, which checks the Object store for data arriving from other nodes
-- `--compactor-id`: activates the Compactor for a node. Only one node can run compaction
-- `--run-compactions`: ensures the Compactor runs the compaction process
+for a basic HA setup. Node 1 is configured as the compactor because it's also passed the `compact` mode.
 
 ```bash
 ## NODE 1
 
 # Example variables
 # node-id: 'host01'
+# cluster-id: 'cluster01'
 # bucket: 'influxdb-3-enterprise-storage'
-# compactor-id: 'c01'
 
 
-influxdb3 serve --node-id=host01 --read-from-node-ids=host02 --compactor-id=c01 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://{{< influxdb/host >}} --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+influxdb3 serve --node-id=host01 --cluster-id=cluster01 --mode=ingest,query,compact --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://{{< influxdb/host >}} --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```
 
 ```
 ## NODE 2
 
 # Example variables
 # node-id: 'host02'
+# cluster-id: 'cluster01'
 # bucket: 'influxdb-3-enterprise-storage'
 
-influxdb3 serve --node-id=host02 --read-from-node-ids=host01 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8282
+influxdb3 serve --node-id=host02 --cluster-id=cluster01 --mode=ingest,query --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8282
 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```
 
 After the nodes have started, querying either node returns data for both nodes, and `NODE 1` runs compaction.
-To add nodes to this setup, start more read replicas:
-
-```bash
-influxdb3 serve --read-from-node-ids=host01,host02 [...OPTIONS]
-```
+To add nodes to this setup, start more read replicas with the same cluster ID.
 
 > [!Note]
 > To run this setup for testing, you can start nodes in separate terminals and pass a different `--http-bind` value for each--for example:
 >
 > ```bash
 > # In terminal 1
-> influxdb3 serve --node-id=host01 --http-bind=http://{{< influxdb/host >}} [...OPTIONS]
+> influxdb3 serve --node-id=host01 --cluster-id=cluster01 --http-bind=http://{{< influxdb/host >}} [...OPTIONS]
 > ```
 >
 > ```bash
 > # In terminal 2
-> influxdb3 serve --node-id=host01 --http-bind=http://{{< influxdb/host >}} [...OPTIONS]
+> influxdb3 serve --node-id=host02 --cluster-id=cluster01 --http-bind=http://localhost:8282 [...OPTIONS]
 > ```
 
 ### High availability with a dedicated Compactor
 
 Data compaction in InfluxDB 3 is one of the more computationally expensive operations.
-To ensure that your read-write node doesn’t slow down due to compaction work, set up a compactor-only node for consistent and high performance across all nodes.
+To ensure that your read-write nodes don't slow down due to compaction work, set up a compactor-only node for consistent and high performance across all nodes.
 
 {{< img-hd src="/img/influxdb/influxdb-3-enterprise-dedicated-compactor.png" alt="Dedicated Compactor setup" />}}
 
 The following examples show how to set up HA with a dedicated Compactor node:
 
-1. Start two read-write nodes as read replicas, similar to the previous example,
-   and pass the `--compactor-id` option with a dedicated compactor ID (which you'll configure in the next step).
+1. Start two read-write nodes as read replicas, similar to the previous example.
 
 ```
 ## NODE 1 — Writer/Reader Node #1
 
 # Example variables
 # node-id: 'host01'
+# cluster-id: 'cluster01'
 # bucket: 'influxdb-3-enterprise-storage'
 
-influxdb3 serve --node-id=host01 --compactor-id=c01 --read-from-node-ids=host02 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://{{< influxdb/host >}} --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+influxdb3 serve --node-id=host01 --cluster-id=cluster01 --mode=ingest,query --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://{{< influxdb/host >}} --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```
 
 ```bash
 ## NODE 2 — Writer/Reader Node #2
 
 # Example variables
 # node-id: 'host02'
+# cluster-id: 'cluster01'
 # bucket: 'influxdb-3-enterprise-storage'
 
-influxdb3 serve --node-id=host02 --compactor-id=c01 --read-from-node-ids=host01 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+influxdb3 serve --node-id=host02 --cluster-id=cluster01 --mode=ingest,query --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```
 
-2. Start the dedicated compactor node, which uses the following options:
-
-   - `--mode=compactor`: Ensures the node **only** runs compaction.
-   - `--compaction-hosts`: Specifies a comma-delimited list of hosts to run compaction for.
-
-   _**Don't include the replicas (`--read-from-node-ids`) parameter because this node doesn't replicate data._
+2. Start the dedicated compactor node with the `--mode=compact` option. This ensures the node **only** runs compaction.
 
 ```bash
 
 ## NODE 3 — Compactor Node
 
 # Example variables
 # node-id: 'host03'
+# cluster-id: 'cluster01'
 # bucket: 'influxdb-3-enterprise-storage'
-# compactor-id: 'c01'
 
-influxdb3 serve --node-id=host03 --mode=compactor --compactor-id=c01 --compaction-hosts=host01,host02 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+influxdb3 serve --node-id=host03 --cluster-id=cluster01 --mode=compact --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```
 
 ### High availability with read replicas and a dedicated Compactor
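
The hunk above claims that querying either node returns data for both nodes. A sketch of verifying that by writing through NODE 1 and reading through NODE 2; the API paths, default port 8181, and the `mydb` database are assumptions here:

```bash
# Write through NODE 1 (default port 8181 assumed)...
curl -X POST "http://localhost:8181/api/v3/write_lp?db=mydb" \
  --data-raw "cpu,host=host01 usage=0.5"

# ...then query through NODE 2, bound to port 8282 in the examples above.
curl -G "http://localhost:8282/api/v3/query_sql" \
  --data-urlencode "db=mydb" \
  --data-urlencode "q=SELECT * FROM cpu"
```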
@@ -950,18 +941,18 @@ For a very robust and effective setup for managing time-series data, you can run
 
 {{< img-hd src="/img/influxdb/influxdb-3-enterprise-workload-isolation.png" alt="Workload Isolation Setup" />}}
 
-1. Start writer nodes for ingest. Enterprise doesn’t designate a write-only mode, so assign them **`read_write`** mode.
-   To achieve the benefits of workload isolation, you'll send _only write requests_ to these read-write nodes. Later, you'll configure the _read-only_ nodes.
+1. Start ingest nodes by assigning them the **`ingest`** mode.
+   To achieve the benefits of workload isolation, you'll send _only write requests_ to these ingest nodes. Later, you'll configure the _read-only_ nodes.
 
 ```
 ## NODE 1 — Writer Node #1
 
 # Example variables
 # node-id: 'host01'
+# cluster-id: 'cluster01'
 # bucket: 'influxdb-3-enterprise-storage'
 
-influxdb3 serve --node-id=host01 --mode=read_write --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://{{< influxdb/host >}} --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
-
+influxdb3 serve --node-id=host01 --cluster-id=cluster01 --mode=ingest --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://{{< influxdb/host >}} --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```
 
 <!-- The following examples use different ports for different nodes. Don't use the influxdb/host shortcode below. -->
@@ -971,47 +962,45 @@ For a very robust and effective setup for managing time-series data, you can run
 
 # Example variables
 # node-id: 'host02'
+# cluster-id: 'cluster01'
 # bucket: 'influxdb-3-enterprise-storage'
 
-Usage: $ influxdb3 serve --node-id=host02 --mode=read_write --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+influxdb3 serve --node-id=host02 --cluster-id=cluster01 --mode=ingest --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```
 
-2. Start the dedicated Compactor node (`--mode=compactor`) and ensure it runs compactions on the specified `compaction-hosts`.
+2. Start the dedicated Compactor node with `--mode=compact`.
 
 ```
 ## NODE 3 — Compactor Node
 
 # Example variables
 # node-id: 'host03'
+# cluster-id: 'cluster01'
 # bucket: 'influxdb-3-enterprise-storage'
 
-influxdb3 serve --node-id=host03 --mode=compactor --compaction-hosts=host01,host02 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+influxdb3 serve --node-id=host03 --cluster-id=cluster01 --mode=compact --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```
 
-3. Finally, start the query nodes as _read-only_.
-   Include the following options:
-
-   - `--mode=read`: Sets the node to _read-only_
-   - `--read-from-node-ids=host01,host02`: A comma-delimited list of host IDs to read data from
+3. Finally, start the query nodes as _read-only_ with `--mode=query`.
 
-```bash
+```
 ## NODE 4 — Read Node #1
 
 # Example variables
 # node-id: 'host04'
+# cluster-id: 'cluster01'
 # bucket: 'influxdb-3-enterprise-storage'
 
-influxdb3 serve --node-id=host04 --mode=read --object-store=s3 --read-from-node-ids=host01,host02 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8383 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+influxdb3 serve --node-id=host04 --cluster-id=cluster01 --mode=query --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8383 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```
 
 ```
 ## NODE 5 — Read Node #2
 
 # Example variables
 # node-id: 'host05'
-# bucket: 'influxdb-3-enterprise-storage'
 
-influxdb3 serve --node-id=host05 --mode=read --object-store=s3 --read-from-node-ids=host01,host02 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8484 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+influxdb3 serve --node-id=host05 --cluster-id=cluster01 --mode=query --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8484 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```
 
 Congratulations, you have a robust setup for workload isolation using {{% product-name %}}.
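
To realize the workload isolation described above, clients route writes to the ingest nodes and queries to the query nodes; a sketch using the ports from the examples, with the API paths and the `mydb` database assumed:

```bash
# Writes go only to the ingest nodes (host01 and host02).
curl -X POST "http://localhost:8282/api/v3/write_lp?db=mydb" \
  --data-raw "cpu,host=a usage=0.5"

# Queries go only to the query nodes (host04 on 8383, host05 on 8484).
curl -G "http://localhost:8383/api/v3/query_sql" \
  --data-urlencode "db=mydb" \
  --data-urlencode "q=SELECT COUNT(*) FROM cpu"
```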
