4. Alerts
ServerPulse includes a robust alert system that can notify you about important server events through Discord or Telegram. This guide covers setting up and customizing alerts.
ServerPulse supports multiple notification channels. You can choose either Discord or Telegram (or both) for your alerts.
Creating a Discord Webhook
- Open your Discord server settings
- Navigate to "Integrations" → "Webhooks"
- Click "Create Webhook"
- Choose a name and channel for alerts
- Copy the webhook URL
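Before wiring the webhook into Grafana, it can be worth confirming that the URL actually works by posting a message to it directly. Below is a minimal Python sketch (it assumes the `requests` package and uses a placeholder webhook URL):

```python
# Post a throwaway test message to the Discord webhook.
# pip install requests
import requests

WEBHOOK_URL = "https://discord.com/api/webhooks/your-webhook"  # placeholder: paste your URL

resp = requests.post(WEBHOOK_URL, json={"content": "ServerPulse webhook test"})
resp.raise_for_status()  # Discord normally answers 204 No Content on success
print("Webhook reachable, HTTP status:", resp.status_code)
```

If the test message shows up in your Discord channel, the webhook is ready to be used in the configuration below.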
Configuring Discord Integration
- Open `infra/grafana/provisioning/alerting/discord_contact.yml`
- Replace the example webhook URL with yours:
```yaml
apiVersion: 1
contactPoints:
  - orgId: 1
    name: Discord contact point
    receivers:
      - uid: deiz0m4w2afpcb
        type: discord
        settings:
          url: https://discord.com/api/webhooks/your-webhook # Replace this
          message: '{{ template "discord.default.message" . }}'
          title: '{{ template "default.title" . }}'
```
Creating a Telegram Bot
- Open Telegram and search for "@BotFather"
- Start a chat and send the command `/newbot`
- Follow the instructions to create your bot
- Copy the bot token provided by BotFather
Getting Your Chat ID
- Add your bot to a group or start a private conversation with it
- Send any message to the bot
- Visit `https://api.telegram.org/bot<YourBOTToken>/getUpdates`
- Look for the "chat" object and copy the "id" value
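If you prefer not to read the raw JSON, a small script can list the chat IDs for you. This is just a sketch (placeholder bot token, `requests` package assumed):

```python
# List the chat IDs seen in the bot's recent updates.
# pip install requests
import requests

BOT_TOKEN = "123456:ABC-your-bot-token"  # placeholder: token from BotFather

resp = requests.get(f"https://api.telegram.org/bot{BOT_TOKEN}/getUpdates")
resp.raise_for_status()

for update in resp.json().get("result", []):
    message = update.get("message") or update.get("channel_post") or {}
    chat = message.get("chat")
    if chat:
        print(chat["id"], chat.get("title") or chat.get("username"))
```

Remember to send at least one message to the bot (or in the group) first, otherwise `getUpdates` returns an empty result.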
Configuring Telegram Integration
- Open `infra/grafana/provisioning/alerting/telegram_contact.yml`
- Replace the example values with yours:
```yaml
apiVersion: 1
contactPoints:
  - orgId: 1
    name: Telegram contact point
    receivers:
      - uid: eejlr7re61og0e
        type: telegram
        settings:
          bottoken: your_bot_token # Replace with your bot token
          chatid: "your_chat_id" # Replace with your chat ID
          disable_notification: false
          disable_web_page_preview: false
          protect_content: false
        disableResolveMessage: false
```
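Before restarting the stack, you can check that the bot token and chat ID work together by sending a message outside of Grafana. A minimal sketch with placeholder values (`requests` package assumed):

```python
# Send a test message using the same bot token and chat ID configured in telegram_contact.yml.
# pip install requests
import requests

BOT_TOKEN = "123456:ABC-your-bot-token"  # placeholder
CHAT_ID = "-1001234567890"               # placeholder; group chat IDs are usually negative

resp = requests.post(
    f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
    data={"chat_id": CHAT_ID, "text": "ServerPulse alert channel test"},
)
resp.raise_for_status()
print("Message delivered:", resp.json()["ok"])
```

If the message appears in your chat, the same token and chat ID should work once Grafana is restarted.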
After setting up your desired contact points (Discord, Telegram, or both), you need to select which one to use for your alerts:
Edit the Contact Policy
- Open `infra/grafana/provisioning/alerting/contact_policy.yml`
- Change the `receiver` value to your preferred contact point:
```yaml
apiVersion: 1
policies:
  - orgId: 1
    receiver: Discord contact point # Change to "Telegram contact point" if preferred
    group_wait: 0s
    group_interval: 30s
    repeat_interval: 3m
```
Update Alert Rules
- Open `infra/grafana/provisioning/alerting/metrics.yml`
- For each alert rule, update the `receiver` in the notification settings:
```yaml
notification_settings:
  receiver: Discord contact point # Change to "Telegram contact point" if preferred
```
Apply Changes
```bash
docker compose down
docker compose up -d
```
If you want to use different contact points for different alerts, you can create multiple policies:
Edit the Contact Policy
```yaml
apiVersion: 1
policies:
  - orgId: 1
    name: Critical Alerts # Add a name for clarity
    receiver: Discord contact point
    group_wait: 0s
    group_interval: 30s
    repeat_interval: 3m
    matcher:
      - name: severity
        value: critical
  - orgId: 1
    name: Warning Alerts # Add a name for clarity
    receiver: Telegram contact point
    group_wait: 0s
    group_interval: 1m
    repeat_interval: 5m
    matcher:
      - name: severity
        value: warning
```
Set Alert Severity Labels
When creating alerts, add a `severity` label with the value `critical` or `warning` to route them to the correct contact point. With the policy above, for example, an alert labeled `severity=critical` is delivered to Discord, while one labeled `severity=warning` goes to Telegram.
ServerPulse comes with pre-configured alerts:
The default alert fires when TPS drops below 18; it is evaluated every 10 seconds against the last 5 minutes of data.
Most server administrators will find it easier to create and manage alerts directly through the Grafana user interface.
- Log in to your Grafana instance (typically http://localhost:3000)
- In the left sidebar, click on the bell icon (Alerting)
- This opens the Alerting page where you can manage all your alerts
- From the Alerting page, click on "Alert rules" in the sidebar
- Click the "New alert rule" button
- Configure your alert in the three sections:
- Select your InfluxDB data source
- Write your Flux query or use the query builder (a standalone way to verify the query is sketched after these steps):
```flux
from(bucket: "metrics_db")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r._measurement == "minecraft_stats")
  |> filter(fn: (r) => r._field == "used_memory")
  |> filter(fn: (r) => r.server == "your-server-name")
```
- In the "Threshold" section, set the alert trigger condition:
- Select "Is above" for memory alerts or "Is below" for TPS alerts
- Enter your threshold value (e.g., 18 for TPS, 80% for memory usage)
- Give your alert a descriptive name (e.g., "High Memory Usage")
- Set an appropriate evaluation interval (e.g., 10s for critical metrics, 1m for less critical ones)
- Optionally add a summary and description to provide more context
- Add any labels needed for routing (e.g., `severity=critical`)
- Select your preferred contact point (Discord or Telegram)
- Configure the message template (or use the default)
- Set notification timing:
- Group interval: How long to wait before sending an updated notification (e.g., 30s)
- Auto resolve: Toggle if alerts should automatically resolve
- Resolve timeout: How long before considering an alert resolved if no longer triggering
- Click "Save and exit" to activate your alert
Here are some useful alerts you might want to set up:
Low TPS Alert
- Query:
```flux
from(bucket: "metrics_db")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r._measurement == "minecraft_stats")
  |> filter(fn: (r) => r._field == "tps_1m")
  |> filter(fn: (r) => r.server == "your-server-name")
```
- Condition: Is below 18
- Evaluation: Every 10s
- Label: severity=critical
High Memory Usage
- Query:
```flux
from(bucket: "metrics_db")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r._measurement == "minecraft_stats")
  |> filter(fn: (r) => r._field == "used_memory")
  |> filter(fn: (r) => r.server == "your-server-name")
```
- Condition: Is above your threshold (e.g., 80% of your server's allocated memory)
- Evaluation: Every 30s
- Label: severity=critical
Low Disk Space
- Query:
```flux
from(bucket: "metrics_db")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r._measurement == "minecraft_stats")
  |> filter(fn: (r) => r._field == "usable_disk_space")
  |> filter(fn: (r) => r.server == "your-server-name")
```
- Condition: Is below your threshold (e.g., 10GB)
- Evaluation: Every 5m
- Label: severity=warning
- Simulate trigger conditions:
- For TPS: Use a plugin or command to stress test the server
- For memory: Load a lot of chunks or spawn many entities
- For disk space: Create large temporary files in your server directory
- Verify integration:
- Check Discord channel or Telegram chat for alert messages
- Confirm formatting and content
If alerts aren't working:
- Check webhook URL or bot token for typos
- Verify Grafana can reach the Discord/Telegram API (a quick reachability check is sketched after this list)
- Confirm your alert conditions are correctly configured
- Look for error messages in Grafana's alert history
- Test the contact point by sending a test notification
- Ensure the correct contact point is selected in your alert rules and policies
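For the connectivity check in particular, a quick script run from the host can rule out DNS or firewall problems. Keep in mind that Grafana runs in its own container, so this only shows that the host can reach the APIs, not necessarily the container; still, it is a useful first test. A minimal sketch:

```python
# Check that the Discord and Telegram API endpoints answer at all from this machine.
# pip install requests
import requests

ENDPOINTS = {
    "Discord": "https://discord.com/api",
    "Telegram": "https://api.telegram.org",
}

for name, url in ENDPOINTS.items():
    try:
        resp = requests.get(url, timeout=5)
        print(f"{name}: reachable (HTTP {resp.status_code})")
    except requests.RequestException as exc:
        print(f"{name}: NOT reachable ({exc})")
```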