Update index.md (#2513)

Add a small note about the trailing slash for the OnCall Integration
URL.

# What this PR does
A user contacted Support because they were confused by the need for a
trailing slash in the Alertmanager OnCall integration URL. This PR
briefly calls out that the trailing slash is required, in an effort to
prevent similar confusion in the future.

## Which issue(s) this PR fixes

## Checklist

- [ ] Unit, integration, and e2e (if applicable) tests updated
- [ ] Documentation added (or `pr:no public docs` PR label added if not
required)
- [ ] `CHANGELOG.md` updated (or `pr:no changelog` PR label added if not
required)

---------

Co-authored-by: Ildar Iskhakov <Ildar.iskhakov@grafana.com>
Co-authored-by: GitHub Actions <actions@github.com>
Co-authored-by: Vadim Stepanov <vadimkerr@gmail.com>
Co-authored-by: mallettjared <110853992+mallettjared@users.noreply.github.com>
Co-authored-by: Joey Orlando <joey.orlando@grafana.com>
Co-authored-by: Wei-Chin Call <wei-chin.call@grafana.com>
Co-authored-by: Joey Orlando <joseph.t.orlando@gmail.com>
Zach Day 2023-07-31 10:35:40 -05:00 committed by GitHub
parent 9c13acb9f5
commit 655ecd3aef
5 changed files with 42 additions and 42 deletions


@@ -12,7 +12,7 @@ weight: 300
# Get started with Grafana OnCall
Grafana OnCall was built to help DevOps and SRE teams improve their on-call management process and resolve incidents faster. With OnCall, users can create and manage on-call schedules, automate escalations, and monitor incident response from a central view, right within the Grafana UI. Teams no longer have to manage separate alerts from Grafana, Prometheus, and Alertmanager, lowering the risk of missing an important update and limiting the time spent receiving and responding to notifications.
With a centralized view of all your alerts and alert groups, automated escalations and grouping, and on-call scheduling, Grafana
OnCall helps ensure that alert notifications reach the right people, at the right time using the right notification method.


@@ -30,7 +30,7 @@ Read more about Jinja2 templating used in OnCall [here][jinja2-templating].
[Behaviour Templates][behavioral-template]
1. The Alert Group is available in the web UI and can be published to messengers, based on the Route's **Publish to Chatops** configuration.
It is rendered using [Appearance Templates][appearance-template]
1. The Alert Group is escalated to users based on the Escalation Chains selected for the Route
1. Users can perform actions listed in the [Learn Alert Workflow][learn-alert-workflow] section
## Configure and manage integrations


@@ -28,15 +28,16 @@ This integration is the recommended way to send alerts from Prometheus deployed
2. Select **Alertmanager Prometheus** from the list of available integrations.
3. Enter a name and description for the integration, then click **Create**.
4. A new page will open with the integration details. Copy the **OnCall Integration URL** from the **HTTP Endpoint** section.
You will need it when configuring Alertmanager.
<!--![123](../_images/connect-new-monitoring.png)-->
## Configuring Alertmanager to Send Alerts to Grafana OnCall
1. Add a new [Webhook](https://prometheus.io/docs/alerting/latest/configuration/#webhook_config) receiver to the `receivers`
section of your Alertmanager configuration
2. Set `url` to the **OnCall Integration URL** from the previous section
- **Note:** The URL includes a trailing slash, which is required for the integration to work properly.
3. Set `send_resolved` to `true`, so Grafana OnCall can autoresolve alert groups when they are resolved in Alertmanager
4. It is recommended to set `max_alerts` to less than `300` to avoid rate-limiting issues
5. Use this receiver in your route configuration
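Put together, a minimal sketch of the receiver entry described in the steps above might look like the following. The receiver name and integration URL here are placeholders, not real values; paste the URL copied from your integration page:

```yaml
receivers:
  # Placeholder name; use whatever receiver name your routes reference
  - name: 'grafana-oncall'
    webhook_configs:
      # Placeholder URL; use your own OnCall Integration URL,
      # keeping the required trailing slash
      - url: https://oncall-prod-us-central-0.grafana.net/oncall/integrations/v1/alertmanager/<token>/
        send_resolved: true   # lets OnCall auto-resolve alert groups
        max_alerts: 300       # recommended to avoid rate-limiting issues

route:
  receiver: 'grafana-oncall'
```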
@@ -71,7 +72,7 @@ Grafana OnCall will notify you about that.
1. Go to the **Integration Page**, click the three dots in the top right, and click **Heartbeat settings**
2. Copy the **OnCall Heartbeat URL**; you will need it when configuring Alertmanager
3. Set the **Heartbeat Interval**, the time period after which Grafana OnCall will start a new alert group if it
doesn't receive a heartbeat request
### Configuring Alertmanager to send heartbeats to Grafana OnCall Heartbeat
@@ -80,37 +81,36 @@ generator to `prometheus.yaml`. It will always return true and act like always f
Grafana OnCall once in a given period of time:
```yaml
groups:
  - name: meta
    rules:
      - alert: heartbeat
        expr: vector(1)
        labels:
          severity: none
        annotations:
          description: This is a heartbeat alert for Grafana OnCall
          summary: Heartbeat for Grafana OnCall
```
Add receiver configuration to `prometheus.yaml` with the **OnCall Heartbeat URL**:
```yaml
...
route:
  ...
  routes:
    - match:
        alertname: heartbeat
      receiver: 'grafana-oncall-heartbeat'
      group_wait: 0s
      group_interval: 1m
      repeat_interval: 50s
receivers:
  - name: 'grafana-oncall-heartbeat'
    webhook_configs:
      - url: https://oncall-dev-us-central-0.grafana.net/oncall/integrations/v1/alertmanager/1234567890/heartbeat/
        send_resolved: false
```
{{% docs/reference %}}


@@ -39,13 +39,13 @@ This integration is available for Grafana Cloud OnCall. You must have an Admin r
-d zabbix/zabbix-appliance:latest
```
2. Establish an ssh connection to a Zabbix server.
```bash
docker exec -it zabbix-appliance bash
```
3. Place the [grafana_oncall.sh](#grafana_oncallsh-script) script in the `AlertScriptsPath` directory specified within
the Zabbix server configuration file (zabbix_server.conf).
```bash
@@ -66,10 +66,10 @@ Within Zabbix web interface, do the following:
1. In a browser, open localhost:80.
2. Navigate to **Administration > Media Types > Create Media Type**.
<!--![](../_images/zabbix-1.png)-->
3. Create a Media Type with the following fields.
- Name: Grafana OnCall
- Type: script
@@ -86,13 +86,13 @@ To send alerts to Grafana OnCall, the {ALERT.SEND_TO} value must be set in the [
1. In the web UI, navigate to **Administration > Users** and open the **user properties** form.
2. In the **Media** tab, click **Add** and paste the link from Grafana OnCall into the `Send to` field.
<!--![](../_images/zabbix-7.png)-->
3. Click **Test** in the last column to send a test alert to Grafana OnCall.
<!--![](../_images/zabbix-3.png)-->
4. In the testing window that opens, specify **Send to** using the unique OnCall integration URL from the step above.
Create a test message with a body and optional subject and click **Test**.
<!--![](../_images/zabbix-4.png)
@@ -106,11 +106,11 @@ Use the following procedure to configure grouping and auto-resolve.
1. Provide a parameter as an identifier for group differentiation to Grafana OnCall.
2. Append that variable to the subject of the action as `ONCALL_GROUP: ID`, where `ID` is any of the Zabbix [macros](https://www.zabbix.com/documentation/4.2/manual/appendix/macros/supported_by_location).
For example, `{EVENT.ID}`. The Grafana OnCall script [grafana_oncall.sh](#grafana_oncallsh-script) extracts this event
and passes the `alert_uid` to Grafana OnCall.
3. To enable auto-resolve within Grafana OnCall, the "Resolved" keyword is required in the **Default subject** field
in **Recovered operations**.
<!--![](../_images/zabbix-6.png)-->
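The steps above rely on `grafana_oncall.sh` extracting the `ONCALL_GROUP` ID from the subject and treating a "Resolved" subject as an auto-resolve signal. As a hedged sketch only, a script with that behavior could look like the following; the payload field names, the `state` values, and the argument order are assumptions, not the shipped implementation:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of grafana_oncall.sh; not the actual script.

# Extract the group ID that Zabbix appends to the subject as "ONCALL_GROUP: <ID>".
extract_group_id() {
  printf '%s' "$1" | sed -n 's/.*ONCALL_GROUP: \([^ ]*\).*/\1/p'
}

# Map the subject to a state: a "Resolved" prefix should auto-resolve the group.
subject_to_state() {
  case "$1" in
    Resolved*) printf 'ok' ;;
    *)         printf 'alerting' ;;
  esac
}

# Usage: grafana_oncall.sh <integration-url> <subject> <message>
if [ "$#" -eq 3 ]; then
  curl -s -X POST -H 'Content-Type: application/json' \
    -d "{\"alert_uid\": \"$(extract_group_id "$2")\", \"state\": \"$(subject_to_state "$2")\", \"title\": \"$2\", \"message\": \"$3\"}" \
    "$1"
fi
```

Zabbix invokes the script with the recipient, subject, and message as positional arguments, which is why the integration URL copied from OnCall goes in the user's `Send to` field.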


@@ -2,8 +2,8 @@ apiVersion: v2
name: oncall
description: Developer-friendly incident response with brilliant Slack integration
type: application
version: 1.3.20
appVersion: v1.3.20
dependencies:
- name: cert-manager
version: v1.8.0