4. Alerts
ServerPulse includes a robust alert system that can notify you about important server events through Discord. This guide covers setting up and customizing alerts.
- Open your Discord server settings
- Navigate to "Integrations" → "Webhooks"
- Click "Create Webhook"
- Choose a name and channel for alerts
- Copy the webhook URL
- Open `infra/grafana/provisioning/alerting/discord_contact.yml`
- Replace the example webhook URL with yours:
```yaml
apiVersion: 1
contactPoints:
  - orgId: 1
    name: Discord contact point
    receivers:
      - uid: deiz0m4w2afpcb
        type: discord
        settings:
          url: https://discord.com/api/webhooks/your-webhook # Replace this
          message: '{{ template "discord.default.message" . }}'
          title: '{{ template "default.title" . }}'
```
- Restart Docker containers to apply changes:

```bash
docker compose down
docker compose up -d
```
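If you want to confirm that the contact point was provisioned, one option is to follow Grafana's container logs after the restart. The service name `grafana` is an assumption here; check your `docker-compose.yml` for the actual name:

```bash
# Follow Grafana's logs and watch for provisioning/alerting messages.
docker compose logs -f grafana
```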
ServerPulse comes with pre-configured alerts. The default alert triggers when TPS drops below 18; it is evaluated every 10 seconds against a 5-minute window of historical data.
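A query of roughly this shape could drive such a rule. The bucket, measurement, and field names match the ones used elsewhere in this guide; the final aggregation step is an illustrative assumption, not necessarily what the shipped rule uses:

```flux
from(bucket: "metrics_db")
  |> range(start: -5m)  // 5-minute historical window
  |> filter(fn: (r) => r._measurement == "minecraft_stats")
  |> filter(fn: (r) => r._field == "tps_1m")
  |> filter(fn: (r) => r.server == "your-server-name")
  |> mean()  // assumed reduction; the rule fires when the result drops below 18
```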
Most server administrators will find it easier to create and manage alerts directly through the Grafana user interface.
- Log in to your Grafana instance (typically http://localhost:3000)
- In the left sidebar, click on the bell icon (Alerting)
- This opens the Alerting page where you can manage all your alerts
- From the Alerting page, click on "Alert rules" in the sidebar
- Click the "New alert rule" button
- Configure your alert in the three sections that follow: the query and condition, the alert details, and the notification settings:
- Select your InfluxDB data source
- Write your Flux query or use the query builder:
```flux
from(bucket: "metrics_db")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r._measurement == "minecraft_stats")
  |> filter(fn: (r) => r._field == "used_memory")
  |> filter(fn: (r) => r.server == "your-server-name")
```
- In the "Threshold" section, set the alert trigger condition:
- Select "Is above" for memory alerts or "Is below" for TPS alerts
- Enter your threshold value (e.g., 18 for TPS, 80% for memory usage)
- Give your alert a descriptive name (e.g., "High Memory Usage")
- Set an appropriate evaluation interval (e.g., 10s for critical metrics, 1m for less critical ones)
- Optionally add a summary and description to provide more context
- Select your Discord contact point
- Configure the message template, or use the default (a template sketch follows below)
- Set notification timing:
- Group interval: How long to wait before sending an updated notification (e.g., 30s)
- Auto resolve: Toggle if alerts should automatically resolve
- Resolve timeout: How long before considering an alert resolved if no longer triggering
- Click "Save and exit" to activate your alert
Here are some useful alerts you might want to set up:
- Query:

```flux
from(bucket: "metrics_db")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r._measurement == "minecraft_stats")
  |> filter(fn: (r) => r._field == "tps_1m")
  |> filter(fn: (r) => r.server == "your-server-name")
```
- Condition: Is below 18
- Name: "Low TPS Alert"
- Evaluation: Every 10s
- Query:

```flux
from(bucket: "metrics_db")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r._measurement == "minecraft_stats")
  |> filter(fn: (r) => r._field == "used_memory")
  |> filter(fn: (r) => r.server == "your-server-name")
```
- Condition: Is above your threshold (e.g., 80% of your server's allocated memory)
- Name: "High Memory Usage"
- Evaluation: Every 30s
- Query:

```flux
from(bucket: "metrics_db")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r._measurement == "minecraft_stats")
  |> filter(fn: (r) => r._field == "usable_disk_space")
  |> filter(fn: (r) => r.server == "your-server-name")
```
- Condition: Is below your threshold (e.g., 10GB)
- Name: "Low Disk Space"
- Evaluation: Every 5m
- Simulate trigger conditions:
- For TPS: Use a plugin or command to stress test the server
- For memory: Load a lot of chunks or spawn many entities
- For disk space: Create large temporary files in your server directory (an example command follows below)
- Verify Discord Integration:
- Check Discord channel for alert messages
- Confirm formatting and content
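For the disk-space case, one concrete approach on a Linux host is to create a throwaway file large enough to cross your threshold and remove it afterwards. The path and size below are placeholders:

```bash
# Create a 5 GB placeholder file to push usable disk space below the alert threshold
# (path and size are examples; adjust them for your server directory and threshold).
fallocate -l 5G /path/to/server/alert-test.tmp

# Remove it once you have confirmed the alert fired and resolved.
rm /path/to/server/alert-test.tmp
```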
If alerts aren't working:
- Check Discord webhook URL for typos
- Verify Grafana can reach the Discord API (you can also test the webhook directly with the curl check below)
- Confirm your alert conditions are correctly configured
- Look for error messages in Grafana's alert history
- Test the contact point by sending a test notification
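To rule out the webhook itself, you can post a test message to it directly from the host running Grafana. Replace the URL with the one from `discord_contact.yml`; Discord normally answers a successful webhook post with HTTP 204:

```bash
# Send a minimal test message straight to the Discord webhook.
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Content-Type: application/json" \
  -d '{"content": "ServerPulse alert webhook test"}' \
  "https://discord.com/api/webhooks/your-webhook"
# Expect 204 if Discord accepted the message.
```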