content/shared/v3-core-get-started/_index.md (26 additions, 42 deletions)
@@ -135,8 +135,12 @@ source ~/.zshrc
 To start your InfluxDB instance, use the `influxdb3 serve` command
 and provide the following:

-- `--object-store`: Specifies the type of Object store to use. InfluxDB supports the following: local file system (`file`), `memory`, S3 (and compatible services like Ceph or Minio) (`s3`), Google Cloud Storage (`google`), and Azure Blob Storage (`azure`).
-- `--node-id`: A string identifier that determines the server's storage path within the configured storage location
+- `--object-store`: Specifies the type of object store to use.
+  InfluxDB supports the following: local file system (`file`), `memory`,
+  S3 (and compatible services like Ceph or Minio) (`s3`),
+  Google Cloud Storage (`google`), and Azure Blob Storage (`azure`).
+- `--node-id`: A string identifier that determines the server's storage path
+  within the configured storage location and, in a multi-node setup, is used to reference the node.

 The following examples show how to start InfluxDB 3 with different object store configurations:
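As a quick illustration of the options above, the simplest configuration is a local file-system object store. This is a sketch only; the data directory path is an assumption for illustration:

```shell
# Start InfluxDB 3 with a file-system object store.
# --node-id determines the storage path; --data-dir is where files are kept.
influxdb3 serve \
  --node-id host01 \
  --object-store file \
  --data-dir ~/.influxdb3
```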
@@ -216,7 +220,7 @@ InfluxDB is a schema-on-write database. You can start writing data and InfluxDB
 After a schema is created, InfluxDB validates future write requests against it before accepting the data.
 Subsequent requests can add new fields on-the-fly, but can't add new tags.

-InfluxDB 3 Core is optimized for recent data, but accepts writes from any time period. It persists that data in Parquet files for access by third-party systems for longer term historical analysis and queries. If you require longer historical queries with a compactor that optimizes data organization, consider using [InfluxDB 3 Enterprise](/influxdb3/enterprise/get-started/).
+{{% product-name %}} is optimized for recent data, but accepts writes from any time period. It persists that data in Parquet files for access by third-party systems for longer-term historical analysis and queries. If you require longer historical queries with a compactor that optimizes data organization, consider using [InfluxDB 3 Enterprise](/influxdb3/enterprise/get-started/).

 The database has three write API endpoints that respond to HTTP `POST` requests:
@@ -278,7 +282,7 @@ With `accept_partial=true`:
 ```

 Line `1` is written and queryable.
-The response is an HTTP error (`400`) status, and the response body contains `partial write of line protocol occurred` and details about the problem line.
+The response is an HTTP error (`400`) status, and the response body contains the error message `partial write of line protocol occurred` with details about the problem line.

 ##### Parsing failed for write_lp endpoint
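The partial-write behavior can be exercised with a raw HTTP request. A sketch, assuming a local server on the default port `8181` and a `servers` database; the second line deliberately writes a string to a float field:

```shell
# Line 1 is valid; line 2 has a type mismatch. With accept_partial=true,
# the server writes line 1 and returns a 400 partial-write error for line 2.
curl -v "http://localhost:8181/api/v3/write_lp?db=servers&accept_partial=true" \
  --data-raw 'cpu,host=Alpha usage_percent=82.5
cpu,host=Beta usage_percent="high"'
```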
@@ -323,7 +327,7 @@ For more information, see [diskless architecture](#diskless-architecture).
 > Because InfluxDB sends a write response after the WAL file has been flushed to the configured object store (default is every second), individual write requests might not complete quickly, but you can make many concurrent requests to achieve higher total throughput.
 > Future enhancements will include an API parameter that lets requests return without waiting for the WAL flush.

-#### Create a database or Table
+#### Create a database or table

 To create a database without writing data, use the `create` subcommand--for example:
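A minimal sketch of creating a database with the `create` subcommand (the database name is illustrative):

```shell
# Create an empty database named "servers" without writing any data.
influxdb3 create database servers
```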
@@ -340,9 +344,10 @@ influxdb3 create -h
 ### Query the database

 InfluxDB 3 now supports native SQL for querying, in addition to InfluxQL, an
-SQL-like language customized for time series queries. {{< product-name >}} limits
-query time ranges to 72 hours (both recent and historical) to ensure query performance.
+SQL-like language customized for time series queries.

+{{< product-name >}} limits
+query time ranges to 72 hours (both recent and historical) to ensure query performance.
 For more information about the 72-hour limitation, see the
 [update on InfluxDB 3 Core’s 72-hour limitation](https://www.influxdata.com/blog/influxdb3-open-source-public-alpha-jan-27/).
@@ -400,7 +405,7 @@ $ influxdb3 query --database=servers "SELECT DISTINCT usage_percent, time FROM c

 ### Querying using the CLI for InfluxQL

-[InfluxQL](/influxdb3/core/reference/influxql/) is an SQL-like language developed by InfluxData with specific features tailored for leveraging and working with InfluxDB. It’s compatible with all versions of InfluxDB, making it a good choice for interoperability across different InfluxDB installations.
+[InfluxQL](/influxdb3/version/reference/influxql/) is an SQL-like language developed by InfluxData with specific features tailored for leveraging and working with InfluxDB. It’s compatible with all versions of InfluxDB, making it a good choice for interoperability across different InfluxDB installations.

 To query using InfluxQL, enter the `influxdb3 query` subcommand and specify `influxql` in the language option--for example:
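A sketch of an InfluxQL query, assuming a running local server with the `servers` database from the earlier examples; the `--language` option name is taken from the CLI convention and should be verified against `influxdb3 query -h`:

```shell
# Query the cpu table with InfluxQL instead of the default SQL.
influxdb3 query \
  --database=servers \
  --language=influxql \
  "SELECT usage_percent FROM cpu WHERE time > now() - 1h"
```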
@@ -499,7 +504,7 @@ You can use the `influxdb3` CLI to create a last value cache.
 The InfluxDB 3 Processing engine is an embedded Python VM for running code inside the database to process and transform data.

-To use the Processing engine, you create [plugins](#plugin) and [triggers](#trigger).
+To activate the Processing engine, pass the `--plugin-dir <PLUGIN_DIR>` option when starting the {{% product-name %}} server.
+`PLUGIN_DIR` is your filesystem location for storing [plugin](#plugin) files for the Processing engine to run.

 #### Plugin

-A plugin is a Python function that has a signature compatible with one of the [trigger types](#trigger-types).
-The [`influxdb3 create plugin`](/influxdb3/core/reference/cli/influxdb3/create/plugin/) command loads a Python plugin file into the server.
+A plugin is a Python function that has a signature compatible with a Processing engine [trigger](#trigger).

 #### Trigger

-After you load a plugin into an InfluxDB 3 server, you can create one or more
-triggers associated with the plugin.
-When you create a trigger, you specify a plugin, a database, optional runtime arguments,
-and a trigger-spec, which specifies `all_tables` or `table:my_table_name` (for filtering data sent to the plugin).
-When you _enable_ a trigger, the server executes the plugin code according to the
-plugin signature.
+When you create a trigger, you specify a [plugin](#plugin), a database, optional arguments,
+and a _trigger-spec_, which defines when the plugin is executed and what data it receives.
 ##### Trigger types

-InfluxDB 3 provides the following types of triggers:
-
-- **On WAL flush**: Sends the batch of write data to a plugin once a second (configurable).
+InfluxDB 3 provides the following types of triggers, each with specific trigger-specs:

-> [!Note]
-> Currently, only the **WAL flush** trigger is supported, but more are on the way:
->
-> - **On Snapshot**: Sends metadata to a plugin for further processing against the Parquet data or to send the information elsewhere (for example, to an Iceberg Catalog). _Not yet available._
-> - **On Schedule**: Executes a plugin on a user-configured schedule, useful for data collection and deadman monitoring. _Not yet available._
-> - **On Request**: Binds a plugin to an HTTP endpoint at `/api/v3/plugins/<name>`. _Not yet available._
+- **On WAL flush**: Sends a batch of written data (for a specific table or all tables) to a plugin (by default, every second).
+- **On Schedule**: Executes a plugin on a user-configured schedule (using a crontab or a duration); useful for data collection and deadman monitoring.
+- **On Request**: Binds a plugin to a custom HTTP API endpoint at `/api/v3/engine/<PATH>`.
+  The plugin receives the HTTP request headers and content, and can then parse, process, and send the data into the database or to third-party services.
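Each trigger type is created with `influxdb3 create trigger` and a matching trigger-spec. A sketch for the schedule type; the plugin filename, database name, and `every:<duration>` spec format are assumptions to verify against the CLI help:

```shell
# Create a scheduled trigger that runs a plugin every 10 seconds.
influxdb3 create trigger \
  --trigger-spec "every:10s" \
  --plugin-filename "examples/deadman_check.py" \
  --database mydb deadman-check
```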
 ### Test, create, and trigger plugin code

@@ -686,7 +682,7 @@ Test your InfluxDB 3 plugin safely without affecting written data. During a plug
 To test a plugin, do the following:

 1. Create a _plugin directory_--for example, `/path/to/.influxdb/plugins`
-2. [Start the InfluxDB server](#start-influxdb) and include the `--plugin-dir` option with your plugin directory path.
+2. [Start the InfluxDB server](#start-influxdb) and include the `--plugin-dir <PATH>` option.
 3. Save the [preceding example code](#example-python-plugin) to a plugin file inside of the plugin directory. If you haven't yet written data to the table in the example, comment out the lines where it queries.
 4. To run the test, enter the following command with the following options:
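A sketch of such a test run; the plugin filename, database name, sample line protocol, and the `--lp` option name are assumptions to check against `influxdb3 test wal_plugin -h`:

```shell
# Run the plugin against sample line protocol without writing any data.
influxdb3 test wal_plugin \
  --database mydb \
  --lp "cpu,host=Alpha usage_percent=82.5" \
  test.py
```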
@@ -706,7 +702,7 @@ You can quickly see how the plugin behaves, what data it would have written to t
 You can then edit your Python code in the plugins directory, and rerun the test.
 The server reloads the file for every request to the `test` API.

-For more information, see [`influxdb3 test wal_plugin`](/influxdb3/core/reference/cli/influxdb3/test/wal_plugin/) or run `influxdb3 test wal_plugin -h`.
+For more information, see [`influxdb3 test wal_plugin`](/influxdb3/version/reference/cli/influxdb3/test/wal_plugin/) or run `influxdb3 test wal_plugin -h`.

 With the plugin code inside the server plugin directory, and a successful test,
 you're ready to create a plugin and a trigger to run on the server.
content/shared/v3-core-plugins/_index.md (27 additions, 21 deletions)
@@ -25,7 +25,8 @@ A _trigger_ is an InfluxDB 3 resource you create to associate a database
 event (for example, a WAL flush) with the plugin that should run.
 When an event occurs, the trigger passes configuration details, optional arguments, and event data to the plugin.

-The Processing engine provides four types of triggers—each type corresponds to an event type with event-specific configuration to let you handle events with targeted logic.
+The Processing engine provides four types of triggers--each type corresponds to
+an event type with event-specific configuration to let you handle events with targeted logic.

 - **WAL Flush**: Triggered when the write-ahead log (WAL) is flushed to the object store (default is every second).
 - **Scheduled Tasks**: Triggered on a schedule you specify using cron syntax.
@@ -36,15 +37,15 @@ The Processing engine provides four types of triggers—each type corresponds to

 ### Activate the Processing engine

-To enable the Processing engine, start the {{% product-name %}} server with the `--plugin-dir` option and a path to your plugins directory. If the directory doesn’t exist, the server creates it.
+To enable the Processing engine, start the {{% product-name %}} server with the
+`--plugin-dir` option and a path to your plugins directory.
+If the directory doesn’t exist, the server creates it.

 ```bash
 influxdb3 serve --node-id node0 --object-store [OBJECT STORE TYPE] --plugin-dir /path/to/plugins
 ```

-
-
-## The Shared API
+## Shared API

 All plugin types provide the InfluxDB 3 _shared API_ for interacting with the database.
 The shared API provides access to the following:
@@ -194,11 +195,11 @@ The shared API `query` function executes an SQL query with optional parameters (
 The following examples show how to use the `query` function:

 ```python
-influxdb3_local.query("SELECT * from foo where bar = 'baz' and time > now() - 'interval 1 hour'")
+influxdb3_local.query("SELECT * from foo where bar = 'baz' and time > now() - INTERVAL '1 hour'")

 # Or using parameterized queries
 args = {"bar": "baz"}
-influxdb3_local.query("SELECT * from foo where bar = $bar and time > now() - 'interval 1 hour'", args)
+influxdb3_local.query("SELECT * from foo where bar = $bar and time > now() - INTERVAL '1 hour'", args)
 ```
 ### Logging

@@ -220,13 +221,20 @@ influxdb3_local.info("This is an info message with an object", obj_to_log)
 ### Trigger arguments

-Every plugin type can receive arguments from the configuration of the trigger that runs it.
+A plugin can receive arguments from the trigger that runs it.
 You can use this to provide runtime configuration and drive behavior of a plugin—for example:

 - threshold values for monitoring
 - connection properties for connecting to third-party services

-The arguments are passed as a `Dict[str, str]` where the key is the argument name and the value is the argument value.
+To pass arguments to a plugin, specify argument key-value pairs in the trigger--for example, using the CLI:
+
+```bash
+influxdb3 create trigger \
+  --trigger-arguments <TRIGGER_ARGUMENTS>
+```
+
+`<TRIGGER_ARGUMENTS>` is a comma-separated list of key/value pairs to use as trigger arguments--for example: `key1=val1,key2=val2`.
+The arguments are passed to the plugin as a `Dict[str, str]` where the key is
+the argument name and the value is the argument value.

 The following example shows how to use an argument in a WAL plugin:
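For illustration, a WAL-flush plugin that reads a `threshold` trigger argument might be sketched as follows. The `process_writes` signature and the `table_name`/`rows` batch shape follow the plugin convention described above, but treat the exact field names as assumptions:

```python
def process_writes(influxdb3_local, table_batches, args=None):
    # Trigger arguments arrive as Dict[str, str]; parse the threshold value
    threshold = float(args["threshold"]) if args and "threshold" in args else 90.0
    alerts = 0
    for table_batch in table_batches:
        table_name = table_batch["table_name"]
        for row in table_batch["rows"]:
            usage = row.get("usage_percent")
            if usage is not None and usage > threshold:
                # The shared API provides logging functions such as warn()
                influxdb3_local.warn(f"{table_name}: usage {usage} exceeds {threshold}")
                alerts += 1
    return alerts
```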
@@ -369,11 +377,14 @@ influxdb3 create trigger \

 ### On Request trigger

-On Request plugins are triggered by a request to a specific endpoint under `/api/v3/engine`. The plugin receives the shared API, query parameters `Dict[str, str]`, request headers `Dict[str, str]`, the request body (as bytes), and any arguments passed in the trigger definition.
+On Request plugins are triggered by a request to an endpoint that you define
+under `/api/v3/engine`.
+When triggered, the plugin receives the shared API, query parameters `Dict[str, str]`,
+request headers `Dict[str, str]`, the request body (as bytes),
+and any arguments passed in the trigger definition.

-**On Request** plugins are defined using the `request:<endpoint>` trigger-spec.
+Define an On Request plugin using the `request:<endpoint>` trigger-spec.

-For example, the following command creates an `/api/v3/engine/my_plugin` endpoint that runs a `<plugin-directory>/examples/my-on-request.py` plugin:
+For example, the following command creates an `/api/v3/engine/my_plugin` endpoint
+that runs a `<plugin-directory>/examples/my-on-request.py` plugin:

 ```bash
 influxdb3 create trigger \
   --trigger-spec "request:my_plugin" \
   --plugin-filename "examples/my-on-request.py" \
   --database mydb my-plugin
 ```

-Because all On Request plugins share the same root URL, trigger specs must be unique across all plugins configured for a server, regardless of which database they are associated with.
-
-```shell
-influxdb3 create trigger \
-  --trigger-spec "request:hello-world" \
-  --plugin-filename "hello/hello_world.py" \
-  --database mydb hello-world
-```
+Because all On Request plugins share the same root URL, trigger specs must be
+unique across all plugins configured for a server, regardless of which database they are associated with.
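An On Request plugin body could be sketched as follows. The `process_request` function name and parameter order mirror the inputs listed above (shared API, query parameters, headers, body, trigger arguments), but treat the exact signature as an assumption to verify against the plugin documentation:

```python
import json

def process_request(influxdb3_local, query_parameters, request_headers,
                    request_body, args=None):
    # Parse the JSON body posted to /api/v3/engine/<PATH>
    data = json.loads(request_body) if request_body else {}
    # Log what arrived; a real plugin might write rows via the shared API
    # or forward the data to a third-party service
    influxdb3_local.info("request received", data)
    return {"status": "ok", "fields_received": len(data)}
```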
0 commit comments