Alerts

The Alerts resource allows you to define automatic notifications based on real-time application metrics. You can choose any tracked resource (requests, queries, jobs, etc.), specify conditions (e.g., average duration, error counts), and select notification channels (Slack, Email, Webhook, Discord, Teams). Once configured, Laritor continuously evaluates your criteria and dispatches alerts when thresholds are met.

Navigation: Access Alerts via the sidebar under the “Alerts” section.


The Alerts index displays all configured alerts in a table. Each row shows:

  • Alert Name
  • Resource (e.g., Requests, Queries, Queued Jobs)
  • Status (ENABLED or DISABLED)
  • Notification Channel (e.g., Slack, Email, Webhook)
  • Edit Icon (✏️)
  • Delete Icon (🗑️)

In the top-right corner, click Create Alert to open the Alert Creation form.


Click Create Alert to define a new alert. The form includes the following fields:

  • Status: A switch to turn the alert ON (ENABLED) or OFF (DISABLED). Disabled alerts do not evaluate or send notifications until re-enabled.
  • Name: A descriptive title for the alert (required).
  • Resource: Select the resource on which this alert is based. Choices match those available for Charts:
    • Requests
    • Queries
    • Queued Jobs
    • Exceptions
    • Outbound Requests
    • Scheduled Tasks
    • Health Checks
    • Servers

Depending on the selected Resource, the When field lets you choose the metric or event to monitor:

  • Requests:

    • Request Average Duration
    • Request Count
    • 4xx Request Count
    • 5xx Request Count
    • 4xx or 5xx Request Count
  • Queries:

    • Query Average Duration
    • Query Count
    • N+1 Query Count
  • Queued Jobs:

    • Pending Jobs Count
    • Failed Jobs Count
    • Average Job Duration
    • Average Wait Time
  • Exceptions:

    • Exception Count
  • Outbound Requests:

    • Request Average Duration
    • Request Count
    • 4xx Request Count
    • 5xx Request Count
    • 4xx or 5xx Request Count
  • Scheduled Tasks:

    • Scheduled Task Status (alerts on a FAILED or SKIPPED run)
  • Health Checks:

    • Health Check Status (alerts on any FAILED check)
  • Servers:

    • CPU Usage
    • Memory Usage
    • Disk Usage

The Is field consists of:

  1. A comparator dropdown:

    • >=
    • <=
  2. A value input:

    • For numeric metrics (Request Count, Query Average Duration, etc.), a numeric input (integer or decimal).
    • For status-based metrics (Scheduled Task Status, Health Check Status), a dropdown of the allowed status values (e.g., FAILED, SKIPPED).
    • For percentage-based server metrics (e.g., Servers → CPU Usage), a numeric input representing a percentage (0–100).

This combination defines the trigger condition: e.g., “Request Count >= 100” or “CPU Usage >= 90.”
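
To make the comparator-and-value pairing concrete, here is a minimal sketch of how such a condition could be represented and checked against a single metric sample. It is purely illustrative; the class and field names are assumptions, not Laritor's internal model.

```python
# Purely illustrative: how the Is field's comparator and threshold could be
# modeled and checked against one metric sample. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Condition:
    metric: str        # e.g. "Request Count" or "CPU Usage"
    comparator: str    # ">=" or "<="
    value: float       # threshold, e.g. 100 or 90

    def holds(self, sample: float) -> bool:
        # A sample satisfies the condition when it lies on the chosen side of the threshold.
        return sample >= self.value if self.comparator == ">=" else sample <= self.value

# "Request Count >= 100" and "CPU Usage >= 90" from the examples above:
print(Condition("Request Count", ">=", 100).holds(125))  # True
print(Condition("CPU Usage", ">=", 90).holds(72))        # False
```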

  • The For field sets how long the condition must hold before the alert fires:
    • 1 Minute
    • 5 Minutes

Laritor evaluates the metric continuously; the condition must be true for the entire duration you specify. For example, choosing 5 Minutes means the alert triggers only if the threshold is breached continuously for five minutes.
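
As a rough mental model (the sampling interval and names below are assumptions for illustration, not documented Laritor behavior), this amounts to a rolling window in which every sample must breach the threshold before the alert fires:

```python
# Illustrative sketch of the "true for the entire duration" rule, assuming
# metrics are sampled at a fixed interval.
from collections import deque

SAMPLE_INTERVAL_SECONDS = 15                          # assumed sampling rate
WINDOW_SECONDS = 5 * 60                               # the "5 Minutes" option
WINDOW_SIZE = WINDOW_SECONDS // SAMPLE_INTERVAL_SECONDS

recent = deque(maxlen=WINDOW_SIZE)                    # rolling record of pass/fail checks

def should_fire(sample_breaches_threshold: bool) -> bool:
    """Record one check; fire only when every check in a full window breached."""
    recent.append(sample_breaches_threshold)
    return len(recent) == WINDOW_SIZE and all(recent)
```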

Under Notify On, toggle one or more notification channels to receive alerts:

  • Slack
  • Email
  • Webhook
  • Discord
  • Microsoft Teams

When the alert fires, Laritor sends a notification payload to all enabled channels according to your organization’s integration settings.
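
For the Webhook channel, the notification body might look something like the sketch below. The field names and structure are an illustrative guess, not Laritor's documented schema; consult your integration settings and Laritor's webhook documentation for the actual format.

```python
# Hypothetical shape of a webhook notification payload when an alert fires.
# Field names are assumptions for illustration only, not Laritor's schema.
example_payload = {
    "alert": "High 5xx Rate",
    "resource": "Requests",
    "metric": "5xx Request Count",
    "condition": ">= 10",
    "actual_value": 27,
    "fired_at": "2024-05-01T12:34:56Z",
}
```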

  • Click Save to create the alert.
  • The new alert appears on the Alerts index and begins evaluating immediately (if enabled).

  1. Evaluation Loop

    • Laritor continuously samples the selected resource’s metric at regular intervals.
    • If the When condition remains satisfied for the entire For window, the alert transitions to FIRING.
  2. Notification Dispatch

    • Upon firing, Laritor sends a notification to all channels enabled under Notify On.
    • Notifications include:
      • Alert Name
      • Resource & Metric
      • Condition (e.g., Request Count >= 100)
      • Actual Value (e.g., Request Count = 125)
      • Timestamp
  3. Recovery & Repeat

    • Once the metric falls below (or above, depending on comparator) the threshold for an equivalent For window, Laritor marks the alert as RESOLVED and, if supported, sends a recovery notification.
    • The alert then resets, ready to fire again if the condition reoccurs.
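
The lifecycle above can be summarized as a small state machine: the alert stays quiet, fires after a fully breached window, and resolves after an equally long clear window. The sketch below is a simplification for intuition only; the class, state names, and sampling model are assumptions rather than Laritor internals.

```python
# Simplified sketch of the alert lifecycle described above. Everything here
# (class, method names, sampling model) is illustrative, not Laritor internals.
from collections import deque

class AlertLifecycle:
    def __init__(self, window_size: int):
        self.window = deque(maxlen=window_size)   # rolling pass/fail checks
        self.state = "OK"

    def observe(self, condition_met: bool) -> str:
        """Feed one evaluation result and return the resulting state."""
        self.window.append(condition_met)
        full = len(self.window) == self.window.maxlen
        if self.state == "OK" and full and all(self.window):
            self.state = "FIRING"                 # breached for the whole For window -> notify channels
        elif self.state == "FIRING" and full and not any(self.window):
            self.state = "OK"                     # clear for an equivalent window -> RESOLVED,
            self.window.clear()                   # recovery notification, ready to fire again
        return self.state
```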

  • Choose Appropriate Resources

    • Use Requests → 5xx Request Count to catch server errors quickly.
    • Monitor Servers → CPU Usage to detect resource saturation before it degrades performance.
    • Leverage Queries → N+1 Query Count to identify problematic database patterns early.
  • Set Realistic Thresholds

    • Analyze historical data (via Charts) before configuring. A threshold that is low relative to your normal traffic (e.g., >= 100 when your average daily request count is 500) will trigger on ordinary fluctuations rather than real problems; set it closer to peak load (e.g., >= 5000).
  • Balance Window Duration

    • Short evaluation windows (1 minute) catch quick spikes but can generate noise. Longer windows (5 minutes) reduce false positives but may delay alerting.
  • Use Multiple Channels

    • Combine Slack for immediate team notifications and Email for archiving. Webhooks can integrate with incident-management platforms (PagerDuty, Opsgenie); a sketch of one such integration follows this list.
  • Name Alerts Clearly

    • Include both resource and condition in the name (e.g., “High 5xx Rate (Requests ≥ 10 for 5m)” or “DB Job Failures > 5 (Queued Jobs)”). This helps stakeholders understand context at a glance.
  • Review & Tune Periodically

    • As traffic patterns evolve, thresholds may become obsolete. Revisit alert definitions monthly to adjust thresholds, add new metrics, or disable unnecessary alerts.
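
As an example of the webhook integration mentioned under Use Multiple Channels, the sketch below forwards an incoming Laritor webhook notification to PagerDuty's Events API v2. The incoming payload keys are assumptions about Laritor's webhook body (its real schema may differ); the outgoing request follows PagerDuty's public Events API.

```python
# Sketch: forward a Laritor webhook alert to PagerDuty (Events API v2).
# The incoming payload keys ("alert", "condition", "actual_value") are
# assumptions about Laritor's webhook body, not a documented schema.
import requests

PAGERDUTY_ROUTING_KEY = "your-integration-routing-key"  # placeholder

def forward_to_pagerduty(laritor_payload: dict) -> None:
    event = {
        "routing_key": PAGERDUTY_ROUTING_KEY,
        "event_action": "trigger",
        "payload": {
            "summary": "{alert}: {condition} (actual: {actual})".format(
                alert=laritor_payload.get("alert", "Laritor alert"),
                condition=laritor_payload.get("condition", "condition met"),
                actual=laritor_payload.get("actual_value", "n/a"),
            ),
            "source": "laritor",
            "severity": "error",
        },
    }
    response = requests.post(
        "https://events.pagerduty.com/v2/enqueue", json=event, timeout=10
    )
    response.raise_for_status()
```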

By leveraging the Alerts resource, you can proactively monitor critical application metrics, automatically detect anomalies, and ensure the right team members are notified promptly—keeping your application resilient and performant.