Export Teleport Events to Splunk HEC from Fluentd #56918
programmerq started this conversation in Show and tell
Replies: 1 comment
Hi @programmerq. I'm curious whether either of these has been considered:

Many orgs do have logging infrastructure in place, but do not have Fluentd. In this case, Teleport Audit Event Logging requires us to install and manage Fluentd in addition to our existing infra.
The approach that we previously recommended was to skip running fluentd altogether and instead configure an HEC listener on the Splunk Universal Forwarder. Splunk is deprecating HEC listeners on the Universal Forwarder. Teleport's event handler isn't really a Splunk HEC client; it just POSTs JSON data to an endpoint following the Fluentd protocol. Splunk's HEC listener accepts arbitrary data and can parse from there.

Teleport's event handler doesn't support token auth. Splunk's best practice is to use an `Authorization: Splunk <token>` header. For the Teleport event handler's fluentd client to work with a Splunk HEC endpoint, passing the token as a GET parameter is the only option. This is explicitly disabled by default, and can't be enabled in Splunk Cloud. The two styles are sketched below.

This new approach will use a fluentd component that sits between the Teleport Event Handler and Splunk. This is the same role that the Universal Forwarder played. Instead of a Universal Forwarder HEC listener, we'll have a full fluentd instance.
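To make the difference concrete, here is what the two authentication styles look like against a generic HEC endpoint (the hostname and token are placeholders):

```sh
# Splunk's recommended style: token in the Authorization header.
curl https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 11111111-1111-1111-1111-111111111111" \
  -d '{"event": {"hello": "world"}}'

# The only style available to the event handler's fluentd client: token
# as a query parameter. Disabled by default, unavailable in Splunk Cloud.
curl "https://splunk.example.com:8088/services/collector/event?token=11111111-1111-1111-1111-111111111111" \
  -d '{"event": {"hello": "world"}}'
```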
Start by following the Export Events with Fluentd guide: https://goteleport.com/docs/admin-guides/management/export-audit-events/fluentd/
Run through steps 1-5. That will give you an example `fluent.conf` that only writes to `stdout`; we'll adjust that later.
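For orientation, the stdout output in the guide's example config is a standard Fluentd `<match>` block along these lines (the `test.log` tag is taken from the guide's examples; yours may differ):

```
<match test.log>
  # Print each received event to stdout so you can verify delivery.
  @type stdout
</match>
```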
Step x/y: Configure Splunk
Create an index for your Teleport audit logs by visiting the home page of the Splunk UI and navigating to Settings > Indexes. Click New Index. Name your index `teleport-audit-logs` and set the Index Data Type field to "Events".

The values of the remaining fields, Max raw data size and Searchable retention (days), depend on your organization's resources and practices for log management.

Click Save.
(Optional) Create a source type for Teleport's Audit Events.
By default, Splunk's `_json` source type expects the `"time"` field to be in a different format than Teleport generates. This means that the event will show up in Splunk at the time it was ingested, not at the time it was generated. Parsing the timestamp keeps searches consistent, and pauses or lags in ingestion won't cause "gaps" in Splunk.

Navigate to Settings > Source Types. Find the `_json` source type and click "Clone". Name it `_json-gotime`. Under "Timestamp", click "Advanced". Input `%Y-%m-%dT%H:%M:%S.%3NZ` and click Save. (For example, an illustrative timestamp such as `2025-07-17T20:08:29.334Z` matches this pattern.)

Create a token for the HTTP Event Collector
To generate a token, visit the home page of the Splunk UI. Navigate to Settings > Data inputs. In the Local inputs table, find the HTTP Event Collector row and click Add new.
Enter a name you can use to recognize the token later so you can manage it, e.g., Teleport Audit Events. Click Next.
In the Input Settings view, next to the Source type field, click Select. In the Select Source Type dropdown menu, click Structured, then choose the `_json-gotime` source type you created earlier. If you skipped that optional step, choose `_json`. Splunk will index incoming logs as JSON, which is the format the Event Handler uses to send logs to Splunk.

In the Index section, select the `teleport-audit-logs` index you created earlier. Click Review, then view the summary and click Submit. Copy the Token Value field and keep it somewhere safe so you can use it later in this guide.
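Before wiring up fluentd, you can sanity-check the new token directly. Assuming a reachable HEC endpoint on the default port 8088 (add `-k` only if Splunk presents a self-signed certificate), a request like this should return `{"text":"Success","code":0}`:

```sh
# Send a throwaway event using the token you just created.
curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <your-token-value>" \
  -d '{"event": "HEC smoke test"}'
```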
Connect Fluentd to your Splunk HTTP Event Collector
If you are still running your `fluentd` container from step 5, go ahead and stop it. Make the following changes to `fluent.conf`.

Add `keep_time_key true` to the `<parse>` section, as in the sketch below. This will preserve the `"time"` field in the JSON when it is sent to Splunk. Some fluentd output plugins require having a parsed time available.
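A minimal sketch of that change, assuming the JSON parser from the guide's example `fluent.conf`:

```
<parse>
  @type json
  # Keep the parsed "time" key in the record instead of dropping it,
  # so the original timestamp is forwarded to Splunk.
  keep_time_key true
</parse>
```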
Remove the `@type stdout` output, and add the following section in its place:
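The exact snippet isn't reproduced here, so below is a sketch using Fluentd's built-in `http` output plugin. It assumes the guide's `test.log` tag, a HEC endpoint on the default port 8088, and Splunk's raw collector endpoint (which accepts arbitrary payloads and applies the source type configured on the token); adjust the endpoint, tag, and TLS settings for your environment:

```
<match test.log>
  @type http
  endpoint https://splunk.example.com:8088/services/collector/raw
  # HEC authentication, following Splunk's recommended header form.
  headers {"Authorization":"Splunk 11111111-1111-1111-1111-111111111111"}
  # Uncomment if Splunk presents a self-signed certificate:
  # tls_verify_mode none
  <format>
    @type json
  </format>
  <buffer>
    flush_interval 5s
  </buffer>
</match>
```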
Replace the `11111111-1111-1111-1111-111111111111` placeholder with the actual token that you created in a previous step.

Note: Splunk used to maintain a dedicated output plugin at https://github.com/splunk/fluent-plugin-splunk-hec. This has been deprecated. The HEC protocol is very simple, and we can still connect to it using Fluentd's built-in HTTP output plugin.
Once you've made those changes, you can test them by running the same command from the end of "Step 5/6. Start the Fluentd forwarder".
Complete Step 6/6. Run the Event Handler plugin from the Export Events with Fluentd guide like normal.
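Once events are flowing, a quick way to confirm ingestion is a search against the new index (the sourcetype filter only applies if you created the optional `_json-gotime` clone):

```
index="teleport-audit-logs" sourcetype="_json-gotime" | head 20
```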
That's the end of this draft.
Some thoughts on implementation before reworking the current (at time of writing, 17 July 2025) Splunk guide.
Timestamp format
I opted to create a new Source Type in Splunk to handle the timestamp format. In theory, Fluentd would be able to do this transformation; a sketch follows below. I believe Splunk wants to see the `"time"` field in Unix epoch format with milliseconds, `<sec>.<ms>`, for example `"time": 1752782909.334`. I opted to preserve Teleport's audit event format, since Splunk is able to parse timestamps with a custom Source Type, which is standard practice.

I don't believe that the old instructions took the time parsing into account, which is why I was inclined to mark that step as "optional".
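For illustration only (untested), the fluentd-side conversion could be a `record_transformer` filter with Ruby evaluation enabled; the `test.log` tag and the top-level `"time"` field are assumptions based on the guide's examples:

```
<filter test.log>
  @type record_transformer
  enable_ruby true
  <record>
    # Rewrite the RFC3339 "time" string as epoch seconds with
    # millisecond precision, e.g. 1752782909.334.
    time ${Time.parse(record["time"]).to_f.round(3)}
  </record>
</filter>
```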
Deprecated fluent Splunk HEC plugin
I considered including instructions for the now-deprecated plugin, but decided to go with the built-in `http` output plugin instead. I did not test the deprecated plugin and don't know how well it would have worked.