Commit 9a73763: "Apply suggestions from code review"
1 parent: 41ecca8
2 files changed: +21 −13 lines changed
content/shared/v3-core-plugins/_index.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -468,14 +468,14 @@ the trigger-spec you define must be unique across all plugins configured for a s
 regardless of which database they are associated with.
 
 
-## In-Memory Cache
+## In-memory cache
 
 The Processing Engine provides a powerful in-memory cache system that enables plugins to persist and retrieve data between executions. This cache system is essential for maintaining state, tracking metrics over time, and optimizing performance when working with external data sources.
 
 ### Key Benefits
 
-- **State Persistence**: Maintain counters, timestamps, and other state variables across plugin executions.
-- **Performance and Cost Optimization**: Store frequently used data to avoid expensive recalculations, and minimize external API calls by caching responses and avoiding rate limits.
+- **State persistence**: Maintain counters, timestamps, and other state variables across plugin executions.
+- **Performance and cost optimization**: Store frequently used data to avoid expensive recalculations. Minimize external API calls by caching responses and avoiding rate limits.
 - **Data Enrichment**: Cache lookup tables, API responses, or reference data to enrich data efficiently.
 
 ### Cache API
@@ -607,7 +607,7 @@ Prime the cache at startup for critical data. This can be especially useful for
 
 ```python
 # Check if cache needs to be initialized
-if not influxdb3_local.cache.get("lookup_table"):
+if not influxdb3_local.cache.get("lookup_table"):
     influxdb3_local.cache.put("lookup_table", load_lookup_data())
 ```
````
content/shared/v3-enterprise-get-started/_index.md

Lines changed: 17 additions & 9 deletions
````diff
@@ -204,7 +204,11 @@ influxdb3 serve \
 # Specify the object store type and associated options
 
 ```bash
-influxdb3 serve --node-id=host01 --cluster-id=cluster01 --object-store=s3 --bucket=BUCKET \
+influxdb3 serve \
+  --node-id=host01 \
+  --cluster-id=cluster01 \
+  --object-store=s3 \
+  --bucket=BUCKET \
   --aws-access-key=AWS_ACCESS_KEY \
   --aws-secret-access-key=AWS_SECRET_ACCESS_KEY \
   --aws-endpoint=ENDPOINT \
@@ -847,7 +851,7 @@ In a basic HA setup:
 > Compacted data is meant for a single writer, and many readers.
 
 The following examples show how to configure and start two nodes
-for a basic HA setup. Node 1 is configured as the compactor as it's also passed the `compact` mode.
+for a basic HA setup. _Node 1_ is configured as the compactor (`--mode` includes `compact`).
 
 ```bash
 ## NODE 1
@@ -873,20 +877,24 @@ influxdb3 serve --node-id=host02 --cluster-id=cluster01 --mode=ingest,query --ob
   --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```
 
-After the nodes have started, querying either node returns data for both nodes, and `NODE 1` runs compaction.
+After the nodes have started, querying either node returns data for both nodes, and _NODE 1_ runs compaction.
 To add nodes to this setup, start more read replicas with the same cluster ID:
 
 > [!Note]
 > To run this setup for testing, you can start nodes in separate terminals and pass a different `--http-bind` value for each--for example:
 >
 > ```bash
 > # In terminal 1
-> influxdb3 serve --node-id=host01 --cluster-id=cluster01 --http-bind=http://{{< influxdb/host >}} [...OPTIONS]
+> influxdb3 serve --node-id=host01 \
+>   --cluster-id=cluster01 \
+>   --http-bind=http://{{< influxdb/host >}} [...OPTIONS]
 > ```
 >
 > ```bash
 > # In terminal 2
-> influxdb3 serve --node-id=host01 --cluster-id=cluster01 --http-bind=http://{{< influxdb/host >}} [...OPTIONS]
+> influxdb3 serve --node-id=host01 \
+>   --cluster-id=cluster01 \
+>   --http-bind=http://{{< influxdb/host >}} [...OPTIONS]
 
 ### High availability with a dedicated Compactor
 
@@ -944,7 +952,7 @@ For a very robust and effective setup for managing time-series data, you can run
 1. Start ingest nodes by assigning them the **`ingest`** mode.
    To achieve the benefits of workload isolation, you'll send _only write requests_ to these ingest nodes. Later, you'll configure the _read-only_ nodes.
 
-   ```
+   ```bash
    ## NODE 1 — Writer Node #1
 
    # Example variables
@@ -970,7 +978,7 @@ For a very robust and effective setup for managing time-series data, you can run
 
 2. Start the dedicated Compactor node with `--mode=compact`.
 
-   ```
+   ```bash
    ## NODE 3 — Compactor Node
 
    # Example variables
@@ -983,7 +991,7 @@ For a very robust and effective setup for managing time-series data, you can run
 
 3. Finally, start the query nodes as _read-only_ with `--mode=query`.
 
-   ```
+   ```bash
    ## NODE 4 — Read Node #1
 
    # Example variables
@@ -994,7 +1002,7 @@ For a very robust and effective setup for managing time-series data, you can run
    influxdb3 serve --node-id=host04 --cluster-id=cluster01 --mode=query --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8383 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
    ```
 
-   ```
+   ```bash
    ## NODE 5 — Read Node #2
 
    # Example variables
````
