Merge pull request #2368 from grafana/dev

v1.3.2
This commit is contained in:
Ildar Iskhakov 2023-06-28 09:50:10 +08:00 committed by GitHub
commit cb2351d17d
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
90 changed files with 2585 additions and 784 deletions


@ -5,6 +5,18 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## v1.3.2
### Changed
- Change permissions used during setup to better represent actions being taken by @mderynck ([#2242](https://github.com/grafana/oncall/pull/2242))
- Display 100000+ in stats when there are more than 100000 alert groups in the result ([#1901](https://github.com/grafana/oncall/pull/1901))
### Fixed
- For "You're Going OnCall" push notifications, show shift times in the user's configured timezone, otherwise UTC
by @joeyorlando ([#2351](https://github.com/grafana/oncall/pull/2351))
## v1.3.1 (2023-06-26)
### Fixed


@ -6,79 +6,99 @@ weight: 600
# Escalation Chains and Routes
Escalation chains and routes for Grafana OnCall
Administrators can create escalation policies to automatically send alert group notifications to recipients.
These policies define how, where, and when to send notifications.
Escalation policies dictate how users and groups are notified when an alert notification is created. They can be very
simple, or very complex. You can define as many escalation configurations for an integration as you need, and you can
send notifications for certain alerts to a designated place when certain conditions are met, or not met.
Escalation policies have three main parts:
- User settings, where a user sets up their preferred or required notification method.
- An **escalation chain**, which can have one or more steps that are followed in order when a notification is triggered.
- A **route**, that allows administrators to manage notifications by flagging expressions in an alert payload.
## Escalation chains
An escalation chain can have many steps, or only one step. For example, steps can be configured to notify multiple users
in some order, notify users that are scheduled for on-call shifts, ping groups in Slack, use outgoing webhooks to
integrate with other services, such as JIRA, and do a number of other automated notification tasks.
Often alerts from monitoring systems need to be sent to different escalation chains and messaging channels, based on their severity, or other alert content.
## Routes
An escalation workflow can employ **routes** that administrators can configure to filter alerts by regular expressions
(deprecated) or Jinja2 templates applied to their payloads. Notifications for these alerts can be sent to individuals,
or they can make use of a new or existing escalation chain.
Routes are used to determine which escalation chain should be used for a specific alert
group. A route's ["Routing Templates"]({{< relref "jinja2-templating#routing-template" >}})
are evaluated for each alert and **the first matching route** is used to determine the
escalation chain and chatops channels.
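The first-match evaluation described above can be sketched in Python (a minimal illustration; `Route` and `select_route` are hypothetical names, not the OnCall API):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Route:
    name: str
    matches: Callable[[dict], bool]  # stands in for an evaluated Routing Template
    escalation_chain: str

def select_route(payload: dict, routes: "list[Route]") -> Optional[Route]:
    """Return the first route whose template evaluates to True for the alert payload."""
    for route in routes:  # routes are checked in their configured order
        if route.matches(payload):
            return route
    return None  # no route matched
```

Because only the first matching route is used, a specific route (e.g. matching `severity == "critical"`) must be ordered above any catch-all route.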
## Configure escalation chains
> **Example:**
>
> * Trigger an escalation chain called `Database Critical` for alerts with `{{ payload.severity == "critical" and payload.service == "database" }}` in the payload
> * Create a different route for alerts with the payload `{{ "synthetic-monitoring-dev-" in payload.namespace }}` and select an escalation chain called `Security`
You can create and edit escalation chains in two places: within **Integrations**, by clicking on an integration tile,
and in **Escalation Chains**. The following steps are for the **Integrations** workflow, but are generally applicable
in both situations.
### Manage routes
You can use **escalation chains** and **routes** to determine ordered escalation procedures. Escalation chains allow
you to set up a series of alert group notification actions that trigger if certain conditions that you specify are
met or not met.
1. Open the Integration page
2. Click the **Add route** button to create a new route
3. Click the **Edit** button to edit the `Routing Template`. The routing template must evaluate to `True` for the route to apply
4. Select channels in the **Publish to Chatops** section
   > **Note:** If the **Publish to Chatops** section does not exist, connect Chatops integrations first; see more in the [docs]({{< relref notify >}})
5. Select an **Escalation Chain** from the list
6. If the **Escalation Chain** does not exist, click the **Add new escalation chain** button to create a new one; it will open in a new tab
7. Once created, click **Reload list**, and select the new escalation chain
8. Click **Arrow Up** and **Arrow Down** on the right to change the order of routes
9. Click **Three dots** and **Delete Route** to delete a route
## Escalation Chains
The **Escalations** section for the notification is in the pane to the right of the list of notifications.
You can click **Change alert template and grouping** to customize the look of the alert. You can also do this by
clicking the **Settings** (gear) icon in the integration tile.
Once an alert group is created and assigned to a route with an escalation chain, the
escalation chain is executed. It will continue to execute until a user performs an action
that stops it (e.g. acknowledge, resolve, or silence).
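The behaviour above can be sketched as a small loop (illustrative only; the step and status names are hypothetical, not OnCall internals):

```python
def run_escalation_chain(steps, alert_group):
    """Execute escalation steps in order, stopping as soon as a user
    action (acknowledge, resolve, silence) ends the escalation."""
    for step in steps:
        if alert_group["status"] != "firing":
            break  # a user action stopped the chain
        step(alert_group)
```

If a user acknowledges the alert group partway through, the remaining steps are never executed.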
1. Create an escalation chain.
Users can create escalation chains to configure different types of escalation workflows.
For example, you can create a chain that notifies on-call users with high priority, and
another chain that only sends a message to a Slack channel.
In the escalation pane, click **Escalate to** to choose from previously added escalation chains, or create a new one
by clicking **Make a copy** or **Create a new chain**. This will be the name of the escalation policy you define.
Escalation chains determine who to notify and when. [How to notify]({{< relref notify >}}) is set by each user, based on their own preferences.
1. Add escalation steps.
### Types of escalation steps
Click **Add escalation step** to choose from a set of actions and specify their triggering conditions. By default, the
first step is to notify a Slack channel or user. Specify users or channels, or toggle the switch to turn this step off.
* `Wait` - wait for a specified amount of time before proceeding to the next step. If you
need a larger time interval, use multiple wait steps in a row.
* `Notify users` - send a notification to a user or a group of users.
* `Notify users from on-call schedule` - send a notification to a user or a group of users
from an on-call schedule.
* `Resolve incident automatically` - resolve the alert group right now with status
`Resolved automatically`.
* `Notify whole Slack channel` - send a notification to a Slack channel (not recommended,
  as it may spam the channel).
* `Notify Slack User Group` - send a notification to a Slack user group.
* `Trigger outgoing webhook` - trigger an [outgoing webhook]({{< relref outgoing-webhooks >}}).
* `Notify users one by one (round robin)` - each notification will be sent to a group of
  users one by one, in sequential order, in [round robin fashion](https://en.wikipedia.org/wiki/Round-robin_item_allocation).
* `Continue escalation if current time is in range` - continue escalation only if the current
  time is in the specified range. It will wait for the specified time to continue escalation.
  Useful when you want to be escalated to only during working hours.
* `Continue escalation if >X alerts per Y minutes (beta)` - continue escalation only if it
  passes some threshold.
* `Repeat escalation from beginning (5 times max)` - loop the escalation chain.

To mark an escalation step as **Important**, select the option from the step's **Start** dropdown menu. User notification
policies can be separately defined for **Important** and **Default** escalations.

## Create a route

To add a route, click **Add Route**.
You can set up a single route and specify notification escalation steps, or you can add multiple routes, each with
its own configuration.

Each route added to an escalation policy follows an `IF`, `ELSE IF`, or `ELSE` path and depends on the type of alert you
specify using a Jinja template that matches content in the payload body of the first alert in the alert group. You can also
specify where to send the notification for each route.

For example, you can send notifications for alerts with `{{ payload.severity == "critical" and payload.service ==
"database" }}` [(check the Jinja2 reference)]({{< relref "jinja2-templating" >}}) in the payload to an escalation chain
called `Bob_OnCall`. You can create a different route for alerts
with the payload `{{ "synthetic-monitoring-dev-" in payload.namespace }}` and select an escalation chain called
`NotifySecurity`.

> **NOTE:** When you modify an escalation chain or a route, it will modify that escalation chain across
> all integrations that use it.

### Notification types

Each escalation step that notifies a user does so by triggering their personal notification steps. These are configured on the Grafana
OnCall users page (by clicking "View my profile"), and are executed for each user in the escalation step.

Users can configure two types of personal notification chains:

* **Default Notifications**
* **Important Notifications**

In the escalation step, users can select which type of notification to use.
Find more information on the [Personal Notification Preferences]({{< relref notify >}}) page.
### Manage Escalation Chains
1. Open the **Escalation Chains** page
2. Click the **New escalation chain** button to create a new escalation chain
3. Enter a name and assign it to a team
> **Note:** The name must be unique across the organization
> **Note:** Alert Groups inherit the team from the Integration, not from the Escalation Chain
4. Click the **Add escalation step** button to add a new step
5. Click **Delete** to delete the Escalation Chain, or **Edit** to edit its name or team.
> **Important:** Linked Integrations and Routes are displayed in the right panel. Any change in the Escalation Chain will
> affect all linked Integrations and Routes.


@ -12,10 +12,9 @@ weight: 300
# Get started with Grafana OnCall
Grafana OnCall is an incident response tool built to help DevOps and SRE teams improve their collaboration, and resolve incidents faster.
With a centralized view of all your alerts and alert groups, automated escalations and grouping, and on-call scheduling, Grafana
OnCall helps ensure that alert notifications reach the right people, at the right time using the right notification method.
The following diagram details an example alert workflow with Grafana OnCall:
@ -25,23 +24,34 @@ The following diagram details an example alert workflow with Grafana OnCall:
These procedures introduce you to initial Grafana OnCall configuration steps, including monitoring system integration,
how to set up escalation chains, and how to set up calendar for on-call scheduling.
## Grafana Cloud OnCall vs Open Source Grafana OnCall

Grafana OnCall is available both in Grafana Cloud and Grafana Open Source.

OnCall is available in Grafana Cloud automatically:

1. Create or log in to your [Grafana Cloud account](https://grafana.com/auth/sign-up/create-user)
2. Sign in to your Grafana stack
3. Choose **Alerts and IRM** from the left menu
4. Click **OnCall** to access Grafana OnCall

Otherwise, you'll need to install [Open Source Grafana OnCall]({{< relref "../open-source" >}}) on your own.
## How to configure Grafana OnCall

* Users with the [Admin role]({{< relref "user-and-team-management" >}}) can configure alert rules (Integrations, Routes, etc.)
  to define **when and which users to notify**
* OnCall users with the [Editor role]({{< relref "user-and-team-management" >}}) can work with Alert Groups and set up personal settings,
  e.g. **how to notify**.

> **Note:** If your role is **Editor**, you can skip to the [**Learn about the Alert Workflow**]({{< relref "#learn-about-the-alert-workflow" >}}) section
> of this doc

## Get alerts into Grafana OnCall and configure rules

Once you've installed Grafana OnCall, or accessed it from your Grafana Cloud instance, you can begin integrating with
monitoring systems to get alerts into Grafana OnCall. Additionally, you can configure when, and which, users get notified by setting up templates, routes,
escalation chains, etc.
### Integrate with a monitoring system
@ -51,47 +61,31 @@ send a demo alert.
#### Configure your first integration
1. In Grafana OnCall, navigate to the **Integrations** tab and click **+ New integration**.
2. Select an integration from the provided options; if the integration you're looking for isn't listed, select Webhook.
3. Click **How to connect** to view the instructions specific to your monitoring system
#### Send a demo alert
1. In the integration tab, click **Send demo alert**, review and modify the alert payload as needed, and click **Send**
2. Navigate to the **Alert Groups** tab to see your test alert firing
3. Explore the Alert Group by clicking on the title
4. Acknowledge and resolve the test alert group
For more information on Grafana OnCall integrations and further configuration guidance, refer to
[Grafana OnCall integrations]({{< relref "../integrations" >}})
### Review and modify alert templates

Review and customize templates to interpret monitoring alerts and minimize noise. Group alerts, enable auto-resolution,
customize visualizations and notifications by extracting data from alerts. See more details in the
[Jinja2 templating]({{< relref "../jinja2-templating" >}}) section.
### Configure Escalation Chains

Escalation chains are a set of steps that define who to notify, and when.

See more details in the [Escalation Chains]({{< relref "../escalation-chains-and-routes#escalation-chains" >}}) section.
Escalation Chains are customizable automated alert routing steps that enable you to specify who is notified for a certain
alert. In addition to escalation chains, you can configure Routes to send alerts to different escalation chains depending
@ -111,8 +105,40 @@ To configure Escalation Chains:
Alerts from this integration will now follow the escalation steps configured in your Escalation Chain.
For more information on Escalation Chains and more ways to customize them, refer to
[Configure and manage Escalation Chains]({{< relref "escalation-chains-and-routes" >}})
Routes define which messenger channels and escalation chains to use for notifications. See more details in
the [Routes]({{< relref "../escalation-chains-and-routes#routes" >}}) section.
### Learn about the Alert Workflow

All Alerts in OnCall are grouped into Alert Groups ([read more about Grouping ID]({{< relref "../jinja2-templating" >}})).
An Alert Group can have the following, mutually exclusive states:

* **Firing:** Once an Alert Group is registered, the Escalation Policy associated with it is started.
  The escalation policy will run while the Alert Group is in this state.
* **Acknowledged:** The ongoing Escalation Chain will be interrupted. Unacknowledging will move the Alert Group to
  the "Firing" state and re-launch the Escalation Chain.
* **Silenced:** Similar to "Acknowledged", but designed to be temporary, with a timeout. Once the time is out, the
  Alert Group will move to the "Firing" state and the Escalation Chain will re-launch.
* **Resolved:** Similar to "Acknowledged".

Possible transitions:

* Firing -> Acknowledged
* Firing -> Silenced
* Firing -> Resolved
* Silenced -> Firing
* Silenced -> Acknowledged
* Silenced -> Resolved
* Acknowledged -> Silenced
* Acknowledged -> Firing
* Acknowledged -> Resolved
* Resolved -> Firing

Transition changes trigger Escalation Chains to launch, with a few-second delay (to avoid unexpected notifications).
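The transition list above can be captured as a lookup table (a sketch for illustration, not OnCall code):

```python
# Allowed Alert Group state transitions, mirroring the list above.
ALLOWED_TRANSITIONS = {
    "firing": {"acknowledged", "silenced", "resolved"},
    "silenced": {"firing", "acknowledged", "resolved"},
    "acknowledged": {"silenced", "firing", "resolved"},
    "resolved": {"firing"},
}

def can_transition(current: str, new: str) -> bool:
    """Return True if an Alert Group may move from `current` to `new`."""
    return new in ALLOWED_TRANSITIONS.get(current, set())
```

Note that "Resolved" only transitions back to "Firing"; there is no direct Resolved -> Acknowledged path.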
## Get notified of an alert
In order for Grafana OnCall to notify you of an alert, you must configure how you want to be notified. Personal notification
@ -121,7 +147,8 @@ policies, chatops integrations, and on-call schedules allow you to automate how
### Configure personal notification policies
Personal notification policies determine how a user is notified for a certain type of alert. Get notified by SMS,
phone call, Slack mentions, or mobile push notification. Administrators can configure how users receive notifications
for certain types of alerts.
For more information on personal notification policies, refer to
[Manage users and teams for Grafana OnCall]({{< relref "user-and-team-management" >}})


@ -14,22 +14,24 @@ weight: 500
# Integrations
"Integration" is a main entry point for alerts being consumed by OnCall. Rendering, grouping and routing are configured
within integrations.
"Integration" is a set of Jinja2 templates which is transforming alert payload to the format suitable to OnCall.
You could check pre-configured templates in the list of avaliable integrations (Integrations ->
"New integration to receive alerts"), create your own or adjust existing.
An "Integration" is a main entry point for alerts being consumed by Grafana OnCall.
Integrations receive alerts on a unique API URL, interprets them using a set of templates tailored for the monitoring system, and starts
escalations.
Read more about Jinja2 templating used in OnCall [here]({{< relref "../jinja2-templating" >}}).
## Learn Alert Flow Within Integration

1. An Alert is received on an integration's **Unique URL** as an HTTP POST request with a JSON payload (or via
   [e-mail]({{< relref "inbound-email" >}}), for inbound e-mail integrations)
1. Routing is determined for the incoming alert by applying the [Routing Template]({{< relref "jinja2-templating#routing-template" >}})
1. Alert Grouping is determined based on the [Grouping Id Template]({{< relref "jinja2-templating#behavioral-template" >}})
1. An Alert Group may be acknowledged or resolved with status `_ by source` based on
   [Behaviour Templates]({{< relref "jinja2-templating#behavioral-template" >}})
1. The Alert Group is available in Web, and can be published to messengers, based on the Route's **Publish to Chatops** configuration.
   It is rendered using [Appearance Templates]({{< relref "jinja2-templating#appearance-template" >}})
1. The Alert Group is escalated to users based on the Escalation Chains selected for the Route
1. Users can perform actions listed in the [Learn Alert Workflow]({{< relref "get-started#learn-alert-workflow" >}}) section
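The grouping step above works as follows: an alert joins an existing non-resolved Alert Group with the same grouping id; otherwise a new group is opened. A minimal sketch (illustrative names, not OnCall internals):

```python
from dataclasses import dataclass, field

@dataclass
class AlertGroup:
    grouping_id: str
    resolved: bool = False
    alerts: list = field(default_factory=list)

def ingest(alert: dict, grouping_id: str, groups: "list[AlertGroup]") -> AlertGroup:
    """Attach the alert to a non-resolved group with the same grouping id,
    or open a new Alert Group."""
    for group in groups:
        if group.grouping_id == grouping_id and not group.resolved:
            group.alerts.append(alert)  # join the existing group
            return group
    new_group = AlertGroup(grouping_id, alerts=[alert])  # open a new group
    groups.append(new_group)
    return new_group
```

Once a group is resolved, a later alert with the same grouping id opens a fresh Alert Group rather than re-joining the old one.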
## Configure and manage integrations
@ -40,19 +42,69 @@ describe how to configure and customize your integrations to ensure alerts are t
To configure an integration for Grafana OnCall:
1. In Grafana OnCall, navigate to the **Integrations** tab and click **+ New integration**.
1. Select an integration type from the [list of available integrations]({{< relref "#list-of-available-integrations" >}}).
If the integration you want isnt listed, then select **Webhook**.
1. Fill in a title and a description for your integration, assign it to a team, and click **Create Integration**.
1. The Integration page will open, showing details about the Integration.
   You can use the **HTTP Endpoint** URL to send events from an external monitoring system.
   Click the **How to connect** link for more information.
1. Complete any necessary configurations in your tool to send alerts to Grafana OnCall.
1. Click **Send demo alert** to send a test alert to Grafana OnCall.
### Complete the integration configuration
- Review and customize the grouping, autoresolution, and autoacknowledge [templates]({{< relref "../jinja2-templating" >}})
  if you want to customize alert behaviour for your team
- Review and customize [other templates]({{< relref "../jinja2-templating" >}}) to change how alert groups are displayed
  in different parts of Grafana OnCall: the UI, messengers, emails, notifications, etc.
- Add routes to your integration to route alerts to different users and teams based on labels or other data
- Connect your escalation chains to routes to notify the right people, at the right time
- Learn [how to start Maintenance Mode]({{< relref "#maintenance-mode" >}}) for an integration
- Send demo alerts to an integration to make sure routes, templates, and escalations, are working as expected. Consider using
`Debug Maintenance mode` to avoid sending real notifications to your team
### Manage integrations
To manage existing integrations, navigate to the **Integrations** tab in Grafana OnCall and select the integration
you want to manage.
#### Maintenance Mode
Start maintenance mode when performing scheduled maintenance or updates on your infrastructure, which may trigger false alarms.
There are two possible maintenance modes:
- **Debug** - test routing and escalations without real notifications. Alerts will be processed as usual, but no notifications
will be sent to users.
- **Maintenance** - group alerts into one during infrastructure work.
##### Manage Maintenance Mode
1. Go to the Integration page and click **Three dots**
1. Select **Start Maintenance Mode**
1. Select **Debug** or **Maintenance** mode
1. Set the **Duration** of Maintenance Mode
1. Click **Start**
1. If you want to stop maintenance mode before it ends, click **Three dots** and select **Stop Maintenance Mode**
#### Heartbeat monitoring
An OnCall heartbeat acts as a healthcheck for alert group monitoring. You can configure your monitoring to regularly send alerts
to the heartbeat endpoint. If OnCall doesn't receive one of these alerts, it will create a new alert group and escalate it.

1. Go to the Integration page and click **Three dots**
1. Select **Heartbeat Settings**
1. Set the **Heartbeat interval**
1. Copy the **Endpoint** into your monitoring system.
More specific instructions can be found in a specific integration's documentation.
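Conceptually, the heartbeat check can be sketched as a watchdog (an illustration of the idea, not OnCall's implementation):

```python
import time
from typing import Optional

def heartbeat_expired(last_heartbeat_ts: float, interval_s: float,
                      now: Optional[float] = None) -> bool:
    """Return True when no heartbeat arrived within the configured interval,
    i.e. when OnCall would open a new alert group and escalate it."""
    if now is None:
        now = time.time()
    return (now - last_heartbeat_ts) > interval_s
```

With a 60-second interval, a heartbeat last seen 90 seconds ago is overdue, while one seen 30 seconds ago is not.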
#### Behaviour and rendering templates example
"Integration templates" are Jinja2 templates which are applied to each alert to define it's rendering and behaviour.
Read more in [Templates guide]({{< relref jinja2-templating>}})
For templates editor:
1. Navigate to the **Integrations** tab, select an integration from the list.
@ -80,14 +132,14 @@ template: `{{ payload.region }}}`
It should point to the most specific place in the alert source related to the alert group. The rendering result will
also be available in other templates as the variable `{{ source_link }}`.
#### Edit integration name, description and assigned team

To edit the name, description, or assigned team of an integration:

1. Navigate to the **Integrations** tab and select an integration from the list of enabled integrations.
1. Click the **three dots** next to the integration name and select **Integration settings**.
1. Provide a new name, description, and team, and click **Save**.
## List of available integrations
{{< section >}}


@ -15,24 +15,33 @@ weight: 300
# Alertmanager integration for Grafana OnCall
The Alertmanager integration handles alerts from [Prometheus Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/).
This integration is the recommended way to send alerts from Prometheus deployed in your infrastructure to Grafana OnCall.

> **Pro tip:** Create one integration per team, and configure an Alertmanager label selector to send only alerts related to that team.

You must have an Admin role to create integrations in Grafana OnCall.

## Configuring Grafana OnCall to Receive Alerts from Prometheus Alertmanager
1. In the **Integrations** tab, click **+ New integration**.
2. Select **Alertmanager Prometheus** from the list of available integrations.
3. Enter a name and description for the integration, and click **Create**.
4. A new page will open with the integration details. Copy the **OnCall Integration URL** from the **HTTP Endpoint** section.
   You will need it when configuring Alertmanager.
<!--![123](../_images/connect-new-monitoring.png)-->
## Configuring Alertmanager to Send Alerts to Grafana OnCall
1. Add a new [Webhook](https://prometheus.io/docs/alerting/latest/configuration/#webhook_config) receiver to the `receivers`
   section of your Alertmanager configuration
2. Set `url` to the **OnCall Integration URL** from the previous section
3. Set `send_resolved` to `true`, so Grafana OnCall can auto-resolve alert groups when they are resolved in Alertmanager
4. It is recommended to set `max_alerts` to less than `300` to avoid rate-limiting issues
5. Use this receiver in your route configuration
Here is an example of the final configuration:
```yaml
route:
@ -44,28 +53,63 @@ receivers:
webhook_configs:
- url: <integration-url>
send_resolved: true
max_alerts: 300
```
## Complete the Integration Configuration

Complete the configuration by setting up routes, templates, maintenances, etc. Read more in
[this section]({{< relref "../../integrations/#complete-the-integration-configuration" >}})

## Configuring OnCall Heartbeats (optional)

An OnCall heartbeat acts as monitoring for monitoring systems. If your monitoring goes down and stops sending alerts,
Grafana OnCall will notify you about it.

### Configuring Grafana OnCall Heartbeat

1. Go to the **Integration Page**, click the three dots on the top right, and click **Heartbeat settings**
2. Copy the **OnCall Heartbeat URL**; you will need it when configuring Alertmanager
3. Set the **Heartbeat Interval**, the time period after which Grafana OnCall will start a new alert group if it
   doesn't receive a heartbeat request

### Configuring Alertmanager to send heartbeats to Grafana OnCall Heartbeat

You can configure Alertmanager to regularly send alerts to the heartbeat endpoint. Add `vector(1)` as a heartbeat
generator to `prometheus.yaml`. It will always return true and act like an always-firing alert, which will be sent to
Grafana OnCall once in a given period of time:
```yaml
groups:
- name: meta
rules:
- alert: heartbeat
expr: vector(1)
labels:
severity: none
annotations:
description: This is a heartbeat alert for Grafana OnCall
summary: Heartbeat for Grafana OnCall
```
Add a receiver configuration for the heartbeat to your Alertmanager configuration, using the **OnCall Heartbeat URL**:
```yaml
...
route:
...
routes:
- match:
alertname: heartbeat
receiver: 'grafana-oncall-heartbeat'
group_wait: 0s
group_interval: 1m
repeat_interval: 50s
receivers:
- name: 'grafana-oncall-heartbeat'
webhook_configs:
- url: https://oncall-dev-us-central-0.grafana.net/oncall/integrations/v1/alertmanager/1234567890/heartbeat/
send_resolved: false
```
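The interval logic above can be sketched in a few lines — a minimal illustration (not OnCall's actual implementation) of how a missing heartbeat within the configured interval opens a new alert group; the two-minute interval is a hypothetical value chosen to leave headroom over the 50s `repeat_interval` above:

```python
from datetime import datetime, timedelta

def heartbeat_expired(last_heartbeat: datetime, interval: timedelta, now: datetime) -> bool:
    # No heartbeat within the configured interval means the monitoring
    # system is presumed down, and OnCall would open a new alert group.
    return now - last_heartbeat > interval

# Alertmanager above repeats the heartbeat every 50s (repeat_interval),
# so a hypothetical 2-minute Heartbeat Interval leaves comfortable headroom.
last = datetime(2023, 6, 28, 12, 0, 0)
interval = timedelta(minutes=2)

assert not heartbeat_expired(last, interval, last + timedelta(seconds=50))
assert heartbeat_expired(last, interval, last + timedelta(minutes=3))
```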
---
aliases:
- add-amazon-sns/
- /docs/oncall/latest/integrations/available-integrations/configure-amazon-sns/
canonical: https://grafana.com/docs/oncall/latest/integrations/available-integrations/configure-amazon-sns/
keywords:
- Grafana Cloud
- Alerts
- Notifications
- on-call
- amazon-sns
title: Amazon SNS
weight: 500
---
# Amazon SNS integration for Grafana OnCall
The Amazon SNS integration for Grafana OnCall handles ticket events sent from Amazon SNS webhooks.
The integration provides grouping, auto-acknowledge and auto-resolve logic via customizable alert templates.
> You must have the [role of Admin]({{< relref "user-and-team-management" >}}) to be able to create integrations
in Grafana OnCall.
## Configuring Grafana OnCall to Receive Alerts from Amazon SNS
1. In the **Integrations** tab, click **+ New integration**.
2. Select **Amazon SNS** from the list of available integrations.
3. Enter a name and description for the integration, click **Create**
4. A new page will open with the integration details. Copy the **OnCall Integration URL** from
**HTTP Endpoint** section.
## Configuring Amazon SNS to Send Alerts to Grafana OnCall
1. Create a new Topic in <https://console.aws.amazon.com/sns>
2. Open this topic, then create a new subscription
3. Choose the protocol HTTPS
4. Add the **OnCall Integration URL** to the Amazon SNS Endpoint
The AppDynamics integration for Grafana OnCall handles health rule violation events sent from AppDynamics actions.
The integration provides grouping and auto-resolve logic via customizable alert templates.
You must have an Admin role to create integrations in Grafana OnCall.
## Configuring Grafana OnCall to Receive Alerts from AppDynamics
1. In the **Integrations** tab, click **+ New integration**.
2. Select **AppDynamics** from the list of available integrations.
3. Enter a name and description for the integration, then click **Create**.
4. A new page opens with the integration details. Copy the **OnCall Integration URL** from the **HTTP Endpoint**
   section. You will need it when configuring AppDynamics.
## Configuring AppDynamics to Send Alerts to Grafana OnCall
Create a new HTTP Request Template in AppDynamics to send events to Grafana OnCall using the integration URL above.
Refer to the
[AppDynamics documentation](https://docs.appdynamics.com/appd/23.x/latest/en/appdynamics-essentials/alert-and-respond/actions/http-request-actions-and-templates)
for more information on how to create HTTP Request Templates.

Use the following values when configuring a new HTTP Request Template:
* Request URL:
* Method: POST
* Raw URL: **OnCall Integration URL** from previous section
* Authentication:
* Type: None
* Payload:
* MIME Type: application/json
* Template:
```json
{
"event": {
"eventType": "${latestEvent.eventType}",
"id": "${latestEvent.id}",
"guid": "${latestEvent.guid}",
"eventTypeKey": "${latestEvent.eventTypeKey}",
"eventTime": "${latestEvent.eventTime}",
"displayName": "${latestEvent.displayName}",
"summaryMessage": "${latestEvent.summaryMessage}",
"eventMessage": "${latestEvent.eventMessage}",
"application": {
"name": "${latestEvent.application.name}"
},
"node": {
"name": "${latestEvent.node.name}"
},
"severity": "${latestEvent.severity}",
"deepLink": "${latestEvent.deepLink}"
}
}
```
* Response Handling Criteria:
* Success Criteria: Status Code 200
* Settings:
* One Request Per Event: Enabled
After setting up a template, create a new action in AppDynamics and select the template you created earlier.
Now you can configure policies to trigger the action when certain events occur in AppDynamics.
When configuring a policy, select the following events to trigger the action:
```plain
Health Rule Violation Started - Warning
Health Rule Violation Started - Critical
Health Rule Violation Continues - Warning
Health Rule Violation Continues - Critical
Health Rule Violation Upgraded - Warning to Critical
Health Rule Violation Downgraded - Critical to Warning
Health Rule Violation Ended - Warning
Health Rule Violation Ended - Critical
Health Rule Violation Canceled - Warning
Health Rule Violation Canceled - Critical
```
After setting up the connection, you can test it by sending a test request from the AppDynamics UI.
## Understanding How Alerts Are Grouped and Auto-resolved

Grafana OnCall provides grouping and auto-resolve logic for the AppDynamics integration:

* Alerts created from health rule violation events are grouped by application and node name
* Alert groups are auto-resolved when the health rule violation is ended or canceled

To customize this behaviour, consider modifying alert templates in the integration settings.
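The default grouping rule can be sketched as follows — a hedged illustration (not OnCall's implementation) that keys alerts by application and node name, using the field paths from the HTTP Request Template payload above; the `eventType` strings mirror the policy event names:

```python
# Sketch of the grouping rule: alerts with the same application and node
# land in the same alert group, regardless of severity transitions.
def grouping_key(event: dict) -> tuple:
    e = event["event"]
    return (e["application"]["name"], e["node"]["name"])

warning = {"event": {"eventType": "Health Rule Violation Started - Warning",
                     "application": {"name": "checkout"},
                     "node": {"name": "node-1"}}}
critical = {"event": {"eventType": "Health Rule Violation Upgraded - Warning to Critical",
                      "application": {"name": "checkout"},
                      "node": {"name": "node-1"}}}

# Both events land in the same alert group.
assert grouping_key(warning) == grouping_key(critical)
```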
## Complete the Integration Configuration
Complete configuration by setting routes, templates, maintenances, etc. Read more in
[this section]({{< relref "../../integrations/#complete-the-integration-configuration" >}})
---
aliases:
- add-datadog/
- /docs/oncall/latest/integrations/available-integrations/configure-datadog/
canonical: https://grafana.com/docs/oncall/latest/integrations/available-integrations/configure-datadog/
keywords:
- Grafana Cloud
- Alerts
- Notifications
- on-call
- datadog
title: Datadog
weight: 500
---
# Datadog integration for Grafana OnCall
The Datadog integration for Grafana OnCall handles ticket events sent from Datadog webhooks.
The integration provides grouping, auto-acknowledge and auto-resolve logic via customizable alert templates.
> You must have the [role of Admin]({{< relref "user-and-team-management" >}}) to be able to create integrations in Grafana OnCall.
## Configuring Grafana OnCall to Receive Alerts from Datadog
1. In the **Integrations** tab, click **+ New integration**.
2. Select **Datadog** from the list of available integrations.
3. Enter a name and description for the integration, click **Create**
4. A new page will open with the integration details. Copy the **OnCall Integration URL** from **HTTP Endpoint** section.
## Configuring Datadog to Send Alerts to Grafana OnCall
1. Navigate to the Integrations page from the sidebar
2. Search for webhook in the search bar
3. Enter a name for the integration, for example: grafana-oncall-alerts
4. Paste the **OnCall Integration URL**, then save
5. Navigate to the Events page from the sidebar to send the test alert
6. Type `@webhook-grafana-oncall-alerts test alert`
7. Click the post button
---
aliases:
- add-elastalert/
- /docs/oncall/latest/integrations/available-integrations/configure-elastalert/
canonical: https://grafana.com/docs/oncall/latest/integrations/available-integrations/configure-elastalert/
keywords:
- Grafana Cloud
- Alerts
- Notifications
- on-call
- elastalert
title: ElastAlert
weight: 500
---
# ElastAlert integration for Grafana OnCall
The ElastAlert integration for Grafana OnCall handles ticket events sent from ElastAlert webhooks.
The integration provides grouping, auto-acknowledge and auto-resolve logic via customizable alert templates.
> You must have the [role of Admin]({{< relref "user-and-team-management" >}}) to be able to create integrations in Grafana OnCall.
## Configuring Grafana OnCall to Receive Alerts from ElastAlert
1. In the **Integrations** tab, click **+ New integration**.
2. Select **ElastAlert** from the list of available integrations.
3. Enter a name and description for the integration, click **Create**
4. A new page will open with the integration details. Copy the **OnCall Integration URL** from **HTTP Endpoint** section.
## Configuring ElastAlert to Send Alerts to Grafana OnCall
To send an alert from ElastAlert to a webhook, follow these steps:
> Refer to [ElastAlert http-post docs](https://elastalert.readthedocs.io/en/latest/ruletypes.html#http-post) for more details
1. Open your ElastAlert configuration file (e.g., `config.yaml`).
2. Locate the `alert` section.
3. Add the following configuration for the webhook alert:
```yaml
alert: post
http_post_url: "http://example.com/api"
http_post_static_payload:
title: abc123
```
Replace `abc123` with a suitable name for your alert, and `http://example.com/api` with the **OnCall Integration URL**.
4. Save the configuration file.
After configuring the webhook, ElastAlert will send alerts to the specified endpoint when triggered.
Make sure your webhook endpoint is configured to receive and process the incoming alerts.
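The body ElastAlert posts can be approximated as a plain merge — per the `http-post` docs, the matched document's fields are sent as JSON with any `http_post_static_payload` keys added on top. A hedged sketch (the sample match fields are hypothetical):

```python
def build_post_body(match: dict, static_payload: dict) -> dict:
    # Approximate ElastAlert's `post` alerter body: the matched document
    # plus any http_post_static_payload keys layered on top.
    body = dict(match)
    body.update(static_payload)
    return body

match = {"@timestamp": "2023-06-28T12:00:00Z", "host": "web-1"}
body = build_post_body(match, {"title": "abc123"})

assert body["title"] == "abc123"
assert body["host"] == "web-1"
```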
## Grouping, auto-acknowledge and auto-resolve
Grafana OnCall provides grouping, auto-acknowledge and auto-resolve logic for the ElastAlert integration:
- Alerts created from ticket events are grouped by ticket ID
- Alert groups are auto-acknowledged when the ticket status is set to "Pending"
- Alert groups are auto-resolved when the ticket status is set to "Solved"
To customize this behaviour, consider modifying alert templates in integration settings.
### Configuring ElastAlert to send heartbeats to Grafana OnCall Heartbeat

Add the following rule to ElastAlert:
```yaml
index: elastalert_status
type: any
alert: post
http_post_url: {{ heartbeat_url }}
realert:
minutes: 1
alert_text: elastalert is still running
alert_text_type: alert_text_only
```
---
aliases:
- add-fabric/
- /docs/oncall/latest/integrations/available-integrations/configure-fabric/
canonical: https://grafana.com/docs/oncall/latest/integrations/available-integrations/configure-fabric/
keywords:
- Grafana Cloud
- Alerts
- Notifications
- on-call
- fabric
title: Fabric
weight: 500
---
# Fabric integration for Grafana OnCall
The Fabric integration for Grafana OnCall handles ticket events sent from Fabric webhooks.
The integration provides grouping, auto-acknowledge and auto-resolve logic via customizable alert templates.
> You must have the [role of Admin]({{< relref "user-and-team-management" >}}) to be able to create integrations in Grafana OnCall.
## Configuring Grafana OnCall to Receive Alerts from Fabric
1. In the **Integrations** tab, click **+ New integration**.
2. Select **Fabric** from the list of available integrations.
3. Enter a name and description for the integration, click **Create**
4. A new page will open with the integration details. Copy the **OnCall Integration URL** from **HTTP Endpoint** section.
## Configuring Fabric to Send Alerts to Grafana OnCall
1. Go to <https://www.fabric.io/settings/apps>
2. Choose your application
3. Navigate to Service Hooks -> WebHook
4. Enter URL: **OnCall Integration URL**
5. Click Verify
6. Choose "SEND IMPACT CHANGE ALERTS" and "ALSO SEND NON-FATAL ALERTS"
Grafana Alerting for Grafana OnCall can be set up using two methods:
- Grafana Alerting: Grafana OnCall is connected to the same Grafana instance being used to manage Grafana OnCall.
- Grafana (Other Grafana): Grafana OnCall is connected to one or more Grafana instances, separate from the one being used to manage Grafana OnCall.
## Configure Grafana Alerting for Grafana OnCall
You must have an Admin role to create integrations in Grafana OnCall.
1. In the **Integrations** tab, click **+ New integration to receive alerts**.
2. Select **Inbound Email** from the list of available integrations.
3. Get your dedicated email address in the **Integration email** section and use it to send your emails.
## Grouping and auto-resolve
Alert groups will be grouped by email subject and auto-resolved if the email message text equals "OK".
This behaviour can be modified via [custom templates]({{< relref "jinja2-templating" >}}).
Alerts from the Inbound Email integration have the following payload:
```json
{
The Jira integration for Grafana OnCall handles issue events sent from Jira webhooks.
The integration provides grouping, auto-acknowledge and auto-resolve logic via customizable alert templates.
You must have an Admin role to create integrations in Grafana OnCall.
## Configuring Grafana OnCall to Receive Alerts from Jira
1. In the **Integrations** tab, click **+ New integration**.
2. Select **Jira** from the list of available integrations.
3. Enter a name and description for the integration, then click **Create**.
4. A new page opens with the integration details. Copy the **OnCall Integration URL** from the **HTTP Endpoint**
   section. You will need it when configuring Jira.
## Configuring Jira to Send Alerts to Grafana OnCall
Create a new webhook connection in Jira to send events to Grafana OnCall using the integration URL above.
Refer to the [Jira documentation](https://developer.atlassian.com/server/jira/platform/webhooks/) for more information on how to create and manage webhooks.
When creating a webhook in Jira, select the following events to be sent to Grafana OnCall:
1. Issue - created
2. Issue - updated
3. Issue - deleted
After setting up the connection, you can test it by creating a new issue in Jira. You should see a new alert group in Grafana OnCall.
## Grouping, auto-acknowledge and auto-resolve
---
aliases:
- add-kapacitor/
- /docs/oncall/latest/integrations/available-integrations/configure-kapacitor/
canonical: https://grafana.com/docs/oncall/latest/integrations/available-integrations/configure-kapacitor/
keywords:
- Grafana Cloud
- Alerts
- Notifications
- on-call
- kapacitor
title: Kapacitor
weight: 500
---
# Kapacitor integration for Grafana OnCall
The Kapacitor integration for Grafana OnCall handles ticket events sent from Kapacitor webhooks.
The integration provides grouping, auto-acknowledge and auto-resolve logic via customizable alert templates.
> You must have the [role of Admin]({{< relref "user-and-team-management" >}}) to be able to create integrations in Grafana OnCall.
## Configuring Grafana OnCall to Receive Alerts from Kapacitor
1. In the **Integrations** tab, click **+ New integration**.
2. Select **Kapacitor** from the list of available integrations.
3. Enter a name and description for the integration, click **Create**
4. A new page will open with the integration details. Copy the **OnCall Integration URL** from **HTTP Endpoint** section.
## Configuring Kapacitor to Send Alerts to Grafana OnCall
To send an alert from Kapacitor, you can follow these steps:
1. Create a Kapacitor TICKscript or modify an existing one to define the conditions for triggering the alert.
The TICKscript specifies the data source, data processing, and the alert rule. Here's an example of a simple TICKscript:
```tickscript
stream
|from()
.measurement('measurement_name')
.where(lambda: <condition>)
|alert()
.webhook('<webhook_url>')
```
Replace `'measurement_name'` with the name of the measurement you want to monitor, `<condition>`
with the condition that triggers the alert, and `'<webhook_url>'` with **OnCall Integration URL**
2. Save the TICKscript file with a `.tick` extension, for example, `alert_script.tick`.
3. Define and enable the alert task from the TICKscript:
```bash
kapacitor define <alert_name> -tick /path/to/alert_script.tick
kapacitor enable <alert_name>
kapacitor reload
```
Replace `<alert_name>` with a suitable name for your alert.
4. Ensure that the Kapacitor service is running and actively monitoring the data.
When the condition defined in the TICKscript is met, Kapacitor will trigger the alert and send
a POST request to the specified webhook URL with the necessary information. Make sure your webhook
endpoint is configured to receive and process the incoming alerts from Kapacitor.
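As a sanity check for the receiving side, the shape of Kapacitor's alert POST body can be sketched like this — treat the exact keys as an assumption (confirm against your Kapacitor version); the values are hypothetical:

```python
import json

# Core fields of Kapacitor's alert event JSON (assumed shape; verify
# against your Kapacitor version's alert POST body).
sample = json.loads("""
{
  "id": "cpu:nil",
  "message": "cpu usage is high",
  "time": "2023-06-28T12:00:00Z",
  "level": "CRITICAL"
}
""")

def is_firing(alert: dict) -> bool:
    # Kapacitor reports recovery with level "OK", which maps naturally
    # to auto-resolve behaviour in Grafana OnCall templates.
    return alert.get("level") != "OK"

assert is_firing(sample)
assert not is_firing({"level": "OK"})
```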
---
aliases:
- add-newrelic/
- /docs/oncall/latest/integrations/available-integrations/configure-newrelic/
canonical: https://grafana.com/docs/oncall/latest/integrations/available-integrations/configure-newrelic/
keywords:
- Grafana Cloud
- Alerts
- Notifications
- on-call
- newrelic
title: New Relic
weight: 500
---
# New Relic integration for Grafana OnCall
The New Relic integration for Grafana OnCall handles ticket events sent from New Relic webhooks.
The integration provides grouping, auto-acknowledge and auto-resolve logic via customizable alert templates.
> You must have the [role of Admin]({{< relref "user-and-team-management" >}}) to be able to create integrations in Grafana OnCall.
## Configuring Grafana OnCall to Receive Alerts from New Relic
1. In the **Integrations** tab, click **+ New integration**.
2. Select **New Relic** from the list of available integrations.
3. Enter a name and description for the integration, click **Create**
4. A new page will open with the integration details. Copy the **OnCall Integration URL** from **HTTP Endpoint** section.
## Configuring New Relic to Send Alerts to Grafana OnCall
1. Go to "Alerts".
2. Go to "Notification Channels".
3. Create "Webhook" notification channel.
4. Set the following URL: **OnCall Integration URL**
5. Check "Payload type" is JSON.
---
aliases:
- add-pingdom/
- /docs/oncall/latest/integrations/available-integrations/configure-pingdom/
canonical: https://grafana.com/docs/oncall/latest/integrations/available-integrations/configure-pingdom/
keywords:
- Grafana Cloud
- Alerts
- Notifications
- on-call
- pingdom
title: Pingdom
weight: 500
---
# Pingdom integration for Grafana OnCall
The Pingdom integration for Grafana OnCall handles ticket events sent from Pingdom webhooks.
The integration provides grouping, auto-acknowledge and auto-resolve logic via customizable alert templates.
> You must have the [role of Admin]({{< relref "user-and-team-management" >}}) to be able to create integrations in Grafana OnCall.
## Configuring Grafana OnCall to Receive Alerts from Pingdom
1. In the **Integrations** tab, click **+ New integration**.
2. Select **Pingdom** from the list of available integrations.
3. Enter a name and description for the integration, click **Create**
4. A new page will open with the integration details. Copy the **OnCall Integration URL** from **HTTP Endpoint** section.
## Configuring Pingdom to Send Alerts to Grafana OnCall
1. Go to <https://my.pingdom.com/integrations/settings>
2. Click "Add Integration".
3. Type: Webhook. Name: `Grafana OnCall`. URL: **OnCall Integration URL**
4. Go to "Reports" -> "Uptime" -> "Edit Check".
5. Select `Grafana OnCall` integration in the bottom.
6. Click "Modify Check" to save.
---
aliases:
- add-prtg/
- /docs/oncall/latest/integrations/available-integrations/configure-prtg/
canonical: https://grafana.com/docs/oncall/latest/integrations/available-integrations/configure-prtg/
keywords:
- Grafana Cloud
- Alerts
- Notifications
- on-call
- prtg
title: PRTG
weight: 500
---
# PRTG integration for Grafana OnCall
The PRTG integration for Grafana OnCall handles ticket events sent from PRTG webhooks.
The integration provides grouping, auto-acknowledge and auto-resolve logic via customizable alert templates.
> You must have the [role of Admin]({{< relref "user-and-team-management" >}}) to be able to create integrations in Grafana OnCall.
## Configuring Grafana OnCall to Receive Alerts from PRTG
1. In the **Integrations** tab, click **+ New integration**.
2. Select **PRTG** from the list of available integrations.
3. Enter a name and description for the integration, click **Create**
4. A new page will open with the integration details. Copy the **OnCall Integration URL** from **HTTP Endpoint** section.
## Configuring PRTG to Send Alerts to Grafana OnCall
PRTG can use a custom notification script to send alerts to Grafana OnCall. Use the format below.

Body Fields Format:
```plaintext
alert_uid [char][not required] - unique alert ID for grouping;
title [char][not required] - title;
image_url [char][not required] - url for image attached to alert;
state [char][not required] - could be "ok" or "alerting", helpful for auto-resolving;
link_to_upstream_details [char][not required] - link back to your monitoring system;
message [char][not required] - alert details;
```
PowerShell (`.ps1`) script example:
```ps1
# This script sends alerts from PRTG to Grafana OnCall
Param(
[string]$sensorid,
[string]$date,
[string]$device,
[string]$shortname,
[string]$status,
[string]$message,
[string]$datetime,
[string]$linksensor,
[string]$url
)
# PRTG Server
$PRTGServer = "localhost:8080"
$PRTGUsername = "amixr"
$PRTGPasshash = *****
#Directory for logging
$LogDirectory = "C:\temp\prtg-notifications-msteam.log"
#Acknowledgement Message for alerts ack'd via Teams
$ackmessage = "Problem has been acknowledged via Amixr."
# the acknowledgement URL
$ackURL = [string]::Format("{0}/api/acknowledgealarm.htm?id={1}&ackmsg={2}&username={3}&passhash={4}",
$PRTGServer,$sensorID,$ackmessage,$PRTGUsername,$PRTGPasshash);
# Autoresolve an alert in Amixr
if($status -eq "Up")
{ $state = "ok" }
ElseIf($status -match "now: Up")
{ $state = "ok" }
ElseIf($status -like "*Up (was:*")  # -like avoids the unescaped "(" breaking -match's regex
{ $state = "ok" }
Else
{ $state = "alerting" }
$image_datetime = [datetime]::parse($datetime)
$sdate = $image_datetime.AddHours(-1).ToString("yyyy-MM-dd-HH-mm-ss")
$edate = $image_datetime.ToString("yyyy-MM-dd-HH-mm-ss")
$image_url = "$PRTGServer/chart.png?type=graph&graphid=-1&avg=0&width=1000&height=400
&username=$PRTGUsername&passhash=$PRTGPasshash&id=$sensorid&sdate=$sdate&edate=$edate"
$Body = @{
"alert_uid"="$sensorid $date";
"title"="$device $shortname $status at $datetime ";
"image_url"=$image_url;
"state"=$state;
"link_to_upstream_details"="$linksensor";
"message"="$message";
"ack_url_get"="$ackURL"
} | ConvertTo-Json
$Body
try
{ Invoke-RestMethod -uri $url -Method Post -body $Body -ContentType 'application/json; charset=utf-8'; exit 0; }
Catch
{
$ErrorMessage = $_.Exception.Message
(Get-Date).ToString() +" - "+ $ErrorMessage | Out-File -FilePath $LogDirectory -Append
exit 2;
}
```
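The script's state mapping and body format can be restated compactly — a sketch of the same logic in Python, with hypothetical sensor values, useful for testing the endpoint from another tool:

```python
def prtg_state(status: str) -> str:
    # Mirrors the auto-resolve branch in the PowerShell script above:
    # any "Up" transition maps to "ok", everything else keeps alerting.
    if status == "Up" or "now: Up" in status or "Up (was:" in status:
        return "ok"
    return "alerting"

def build_body(sensorid: str, date: str, title: str, message: str,
               state: str, link: str) -> dict:
    # Matches the documented body fields; every field is optional.
    return {
        "alert_uid": f"{sensorid} {date}",
        "title": title,
        "state": state,
        "link_to_upstream_details": link,
        "message": message,
    }

assert prtg_state("Up (was: Down)") == "ok"
assert prtg_state("Down (was: Up)") == "alerting"
body = build_body("1234", "2023-06-28", "Device Down", "timeout",
                  prtg_state("Down"), "https://prtg.example/sensor/1234")
assert body["state"] == "alerting"
```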
---
aliases:
- add-sentry/
- /docs/oncall/latest/integrations/available-integrations/configure-Sentry/
canonical: https://grafana.com/docs/oncall/latest/integrations/available-integrations/configure-sentry/
keywords:
- Grafana Cloud
- Alerts
- Notifications
- on-call
- sentry
title: Sentry
weight: 500
---
# Sentry integration for Grafana OnCall
The Sentry integration for Grafana OnCall handles ticket events sent from Sentry webhooks.
The integration provides grouping, auto-acknowledge and auto-resolve logic via customizable alert templates.
> You must have the [role of Admin]({{< relref "user-and-team-management" >}}) to be able to create integrations in Grafana OnCall.
## Configuring Grafana OnCall to Receive Alerts from Sentry
1. In the **Integrations** tab, click **+ New integration**.
2. Select **Sentry** from the list of available integrations.
3. Enter a name and description for the integration, click **Create**
4. A new page will open with the integration details. Copy the **OnCall Integration URL** from **HTTP Endpoint** section.
## Configuring Sentry to Send Alerts to Grafana OnCall
To send a webhook alert from Sentry, you can follow these steps:
1. Log in to your Sentry account.
2. Navigate to your project's settings.
3. Click on "Alerts" in the sidebar menu.
4. Click on "New Alert Rule" to create a new alert rule.
5. Configure the conditions for the alert rule based on your requirements. For example, you can set conditions based on issue
level, event frequency, or specific tags.
6. In the "Actions" section, select "Webhook" as the action type.
7. Provide the necessary details for the webhook:
- **URL**: **OnCall Integration URL**
- **Method**: POST
- **Payload**: Define the payload structure and content that you want to send to the webhook endpoint. You can use Sentry's
dynamic variables to include relevant information in the payload.
8. Save the alert rule.
Once the alert conditions are met, Sentry will trigger the webhook action and send a request to Grafana OnCall.
---
aliases:
- add-stackdriver/
- /docs/oncall/latest/integrations/available-integrations/configure-stackdriver/
canonical: https://grafana.com/docs/oncall/latest/integrations/available-integrations/configure-stackdriver/
keywords:
- Grafana Cloud
- Alerts
- Notifications
- on-call
- stackdriver
title: Stackdriver
weight: 500
---
# Stackdriver integration for Grafana OnCall
The Stackdriver integration for Grafana OnCall handles ticket events sent from Stackdriver webhooks.
The integration provides grouping, auto-acknowledge and auto-resolve logic via customizable alert templates.
> You must have the [role of Admin]({{< relref "user-and-team-management" >}}) to be able to create integrations in Grafana OnCall.
## Configuring Grafana OnCall to Receive Alerts from Stackdriver
1. In the **Integrations** tab, click **+ New integration**.
2. Select **Stackdriver** from the list of available integrations.
3. Enter a name and description for the integration, click **Create**
4. A new page will open with the integration details. Copy the **OnCall Integration URL** from **HTTP Endpoint** section.
## Configuring Stackdriver to Send Alerts to Grafana OnCall
1. Create a notification channel in Stackdriver by navigating to Workspace Settings -> WEBHOOKS -> Add Webhook, and enter the **OnCall Integration URL**
2. Create an alert in Stackdriver by navigating to Alerting -> Policies -> Add Policy -> Choose Notification Channel, using the channel set up in step 1
---
aliases:
- add-uptimerobot/
- /docs/oncall/latest/integrations/available-integrations/configure-uptimerobot/
canonical: https://grafana.com/docs/oncall/latest/integrations/available-integrations/configure-uptimerobot/
keywords:
- Grafana Cloud
- Alerts
- Notifications
- on-call
- uptimerobot
title: UptimeRobot
weight: 500
---
# UptimeRobot integration for Grafana OnCall
The UptimeRobot integration for Grafana OnCall handles ticket events sent from UptimeRobot webhooks.
The integration provides grouping, auto-acknowledge and auto-resolve logic via customizable alert templates.
> You must have the [role of Admin]({{< relref "user-and-team-management" >}}) to be able to create integrations in Grafana OnCall.
## Configuring Grafana OnCall to Receive Alerts from UptimeRobot
1. In the **Integrations** tab, click **+ New integration**.
2. Select **UptimeRobot** from the list of available integrations.
3. Enter a name and description for the integration, click **Create**
4. A new page will open with the integration details. Copy the **OnCall Integration URL** from **HTTP Endpoint** section.
## Configuring UptimeRobot to Send Alerts to Grafana OnCall
1. Open <https://uptimerobot.com> and log in
1. Go to My Settings > Add Alert Contact and set the following fields:
1. Alert Contact Type: Webhook
1. Friendly Name: Grafana OnCall
1. URL to Notify: **OnCall Integration URL**
POST Value (JSON Format):
```json
{
"monitorURL": "monitorURL",
"monitorFriendlyName": "monitorFriendlyName",
"alertType": "alertType",
"alertTypeFriendlyName": "alertTypeFriendlyName",
"alertDetails": "alertDetails",
"alertDuration": "alertDuration",
"sslExpiryDate": "sslExpiryDate",
"sslExpiryDaysLeft": "sslExpiryDaysLeft"
}
```
1. Flag Send as JSON
1. Click Save Changes and Close
1. Send Test Alert to Grafana OnCall
1. Click Add New Monitor
1. Monitor Type HTTP(s)
1. Friendly Name Test Amixr
1. Set URL to <http://devnull.amixr.io> or any other non-existent domain
1. Click Checkbox next to Amixr Alert Contact (created in the previous step)
1. Click Create Monitor
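The JSON payload above can also be sent manually to verify the endpoint before wiring up UptimeRobot — a sketch that builds the POST request without sending it; the URL and field values are hypothetical placeholders:

```python
import json
import urllib.request

# Hypothetical values; substitute your real OnCall Integration URL.
url = "https://example.grafana.net/oncall/integrations/v1/webhook/XXXXXX/"
payload = {
    "monitorURL": "https://my-service.example.com",
    "monitorFriendlyName": "My Service",
    "alertType": "1",
    "alertTypeFriendlyName": "Down",
    "alertDetails": "Connection timeout",
}
request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request) would perform the actual POST.

assert request.get_method() == "POST"
```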
The Zendesk integration for Grafana OnCall handles ticket events sent from Zendesk webhooks.
The integration provides grouping, auto-acknowledge and auto-resolve logic via customizable alert templates.
You must have an Admin role to create integrations in Grafana OnCall.
## Configuring Grafana OnCall to Receive Alerts from Zendesk
1. In the **Integrations** tab, click **+ New integration**.
2. Select **Zendesk** from the list of available integrations.
3. Enter a name and description for the integration, then click **Create**.
4. A new page opens with the integration details. Copy the **OnCall Integration URL** from the **HTTP Endpoint** section.
## Configuring Zendesk to Send Alerts to Grafana OnCall
Create a new "Trigger or automation" webhook connection in Zendesk to send events to Grafana OnCall using the integration URL above.
Refer to the [Zendesk documentation](https://support.zendesk.com/hc/en-us/articles/4408839108378-Creating-webhooks-to-interact-with-third-party-systems) for more information on how to create and manage webhooks.
After setting up a webhook in Zendesk, create a new trigger with the following condition:
`Meet ANY of the following conditions: "Ticket Is Created", "Ticket status Changed"`
Set `Notify webhook` as the trigger action and select the webhook you created earlier.
In the JSON body field, use the following JSON template:
```json
{
"ticket": {
"id": "{{ticket.id}}",
"url": "{{ticket.url}}",
"status": "{{ticket.status}}",
"title": "{{ticket.title}}",
"description": "{{ticket.description}}"
}
}
```
After setting up the connection, you can test it by creating a new ticket in Zendesk. You should see a new alert group in Grafana OnCall.
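The events produced by the JSON template above can be grouped per ticket — a hedged sketch assuming the default templates group by ticket ID and resolve on a "Solved" status (adjust to your integration settings):

```python
def grouping_id(event: dict) -> str:
    # Assumption: updates to one ticket land in one alert group,
    # keyed by the ticket ID from the JSON template.
    return str(event["ticket"]["id"])

def is_resolved(event: dict) -> bool:
    # Assumption: a "Solved" ticket status auto-resolves the group.
    return event["ticket"]["status"].lower() == "solved"

created = {"ticket": {"id": "42", "status": "open", "title": "Printer on fire"}}
solved = {"ticket": {"id": "42", "status": "Solved", "title": "Printer on fire"}}

assert grouping_id(created) == grouping_id(solved)
assert is_resolved(solved) and not is_resolved(created)
```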
## Grouping, auto-acknowledge and auto-resolve
## Jinja2 templating
Grafana OnCall can integrate with any monitoring system that can send alerts via
webhooks with JSON payloads. By default, webhooks deliver raw JSON payloads. When Grafana
OnCall receives an alert and parses its payload, a default pre-configured alert template
is applied to modify the alert payload to be more human-readable. These alert templates
are customizable for any integration. Templates are also used to notify different
escalation chains based on the content of the alert payload.
<iframe width="560" height="315" src="https://www.youtube.com/embed/S6Is8hhyCos" title="YouTube video player"
frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture;
@ -17,8 +19,9 @@ web-share" allowfullscreen></iframe>
## Alert payload
Alerts received by Grafana OnCall contain metadata as keys and values in a JSON object. The following is an example of
an alert received by Grafana OnCall initiated by Grafana Alerting:
Alerts received by Grafana OnCall contain metadata as keys and values in a JSON object.
The following is an example of an alert which was initiated by Grafana Alerting, and
received by Grafana OnCall:
```json
{
@ -45,67 +48,106 @@ an alert received by Grafana OnCall initiated by Grafana Alerting:
}
```
In Grafana OnCall every alert and alert group has the following fields:
In Grafana OnCall, every alert and alert group has the following fields:
- `Title`, `message` and `image url`
- `Grouping Id`
- `Resolve Signal`
- `Title`, `Message`, and `Image Url` for each notification method (Web, Slack, MS Teams, SMS, Phone, Email, etc.)
- `Grouping Id` - unique identifier for each non-resolved alert group
- `Resolved by source`
- `Acknowledged by source`
- `Source link`
The JSON payload is converted. For example:
The JSON payload is converted to OnCall fields. For example:
- `{{ payload.title }}` -> Title
- `{{ payload.message }}` -> Message
- `{{ payload.imageUrl }}` -> Image Url
- `{{ payload.title }}` -> `Title`
- `{{ payload.message }}` -> `Message`
- `{{ payload.imageUrl }}` -> `Image Url`
The result is that each field of the alert in OnCall is now mapped to the JSON payload keys. This also true for the
The result is that each field of the alert in OnCall is now mapped to the JSON payload
keys. This is also true for the
alert behavior:
- `{{ payload.ruleId }}` -> Grouping Id
- `{{ 1 if payload.state == 'OK' else 0 }}` -> Resolve Signal
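The two behavior templates above are easy to check by hand. In plain Python, the equivalent logic looks like this (the payload fields are illustrative, loosely modeled on the Grafana Alerting webhook format):

```python
# Two sample payloads from the same alert rule, in different states.
firing_payload = {"ruleId": 42, "state": "alerting", "title": "High CPU"}
resolved_payload = {"ruleId": 42, "state": "OK", "title": "High CPU"}

def grouping_id(payload: dict) -> str:
    # Mirrors `{{ payload.ruleId }}`: alerts from the same rule group together.
    return str(payload["ruleId"])

def resolve_signal(payload: dict) -> int:
    # Mirrors `{{ 1 if payload.state == 'OK' else 0 }}`:
    # a truthy output resolves the alert group.
    return 1 if payload["state"] == "OK" else 0

# Both alerts share a grouping id, so they land in the same alert group;
# the second one also carries the resolve signal.
print(grouping_id(firing_payload), resolve_signal(firing_payload))      # 42 0
print(grouping_id(resolved_payload), resolve_signal(resolved_payload))  # 42 1
```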
Grafana OnCall provides a pre configured default Jinja template for supported integrations. If your monitoring system is
not in the Grafana OnCall integrations list, you can create a generic `webhook` integration, send an alert, and configure
Grafana OnCall provides pre-configured default Jinja templates for supported
integrations. If your monitoring system is
not in the Grafana OnCall integrations list, you can create a generic `webhook`
integration, send an alert, and configure
your templates.
## Customize alerts with alert templates
## Types of templates
Alert templates allow you to format any alert fields recognized by Grafana OnCall. You can customize default alert
templates for all the different ways you receive your alerts such as web, slack, SMS, and email. For more advanced
Alert templates allow you to format any alert fields recognized by Grafana OnCall. You can
customize default alert
templates for all the different notification methods. For more advanced
customization, use Jinja templates.
As a best practice, add _Playbooks_, _Useful links_, or _Checklists_ to the alert message.
### Routing template
To customize alert templates in Grafana OnCall:
- `Routing Template` - used to route alerts to different Escalation Chains based on alert content (conditional template, output should be `True`)
1. Navigate to the **Integrations** tab, select the integration, then click **Change alert template and grouping**.
> **Note:** For conditional templates, the output should be `True` to be applied, for example `{{ True if payload.state == 'OK' else False }}`
2. In Alert Templates, select a template from the **Edit template for** dropdown.
#### Appearance templates
3. Edit the Appearances template as needed:
How alerts are displayed in the UI, messengers, and notifications.
- `Title`, `Message`, `Image url` for Web
- `Title`, `Message`, `Image url` for Slack
- `Title` used for SMS
- `Title` used for Phone
- `Title`, `Message` used for Email
- `Title`, `Message`, `Image url` for Web
- `Title`, `Message`, `Image url` for Slack
- `Title`, `Message`, `Image url` for MS Teams
- `Title`, `Message`, `Image url` for Telegram
- `Title` for SMS
- `Title` for Phone Call
- `Title`, `Message` for Email
4. Edit the alert behavior as needed:
- `Grouping Id` - Alerts with the same `Grouping Id` will be grouped into the same Alert Group
if Alert Group in the state "Firing", "Acked" or "Silenced" exists. If previous Alert Group is in the state "Resolved",
a new Alert Group will be issued.
- `Acknowledge Condition` - The output should be `ok`, `true`, or `1` to auto-acknowledge the alert group.
For example, `{{ 1 if payload.state == 'OK' else 0 }}`.
- `Resolve Condition` - The output should be `ok`, `true` or `1` to auto-resolve the alert group.
For example, `{{ 1 if payload.state == 'OK' else 0 }}`.
- `Source Link` - Used to customize the URL link to provide as the "source" of the alert.
#### Behavioral templates
- `Grouping Id` - applied to every incoming alert payload after the `Routing Template`. It
can be based on time, alert content, or both. If the resulting grouping id matches the
grouping id of an existing non-resolved alert group, the alert will be grouped accordingly.
Otherwise, a new alert group will be created.
- `Autoresolution` - used to auto-resolve alert groups with status `Resolved by source`
(Conditional template, output should be `True`)
- `Auto acknowledge` - used to auto-acknowledge alert groups with status `Acknowledged by
source` (Conditional template, output should be `True`)
- `Source link` - used to customize the URL provided as the "source" of the alert.
> **Note:** For conditional templates, the output should be `True` for the template to be
applied, for example `{{ True if payload.state == 'OK' else False }}`.
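The `Grouping Id` behavior described above amounts to a small state machine: an alert whose grouping id matches a non-resolved alert group joins that group, and anything else opens a new one. A minimal sketch of that logic (illustrative only, not OnCall's actual implementation):

```python
from dataclasses import dataclass, field

@dataclass
class AlertGroup:
    grouping_id: str
    resolved: bool = False
    alerts: list = field(default_factory=list)

groups: list[AlertGroup] = []

def route_alert(grouping_id: str, payload: dict) -> AlertGroup:
    # Reuse an existing group only while it is not resolved
    # (firing/acknowledged/silenced); a resolved group never reopens.
    for group in groups:
        if group.grouping_id == grouping_id and not group.resolved:
            group.alerts.append(payload)
            return group
    new_group = AlertGroup(grouping_id, alerts=[payload])
    groups.append(new_group)
    return new_group

first = route_alert("cpu-host-1", {"state": "alerting"})
second = route_alert("cpu-host-1", {"state": "alerting"})  # grouped with `first`
first.resolved = True
third = route_alert("cpu-host-1", {"state": "alerting"})   # resolved -> new group
```

The same grouping id produces a second alert group only after the first one resolves, which matches the behavior documented above.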
> **Pro Tip:** As a best practice, add _Playbooks_, _Useful links_, or _Checklists_ to the
alert message.
#### How to edit templates
1. Open the **Integration** page for the integration you want to edit
1. Click the **Edit** button for the Templates section. Now you can see previews of all
templates for the integration.
1. Select the template you want to edit and click the **Edit** button to the right of the
template name. The template editor will open. The first column shows an example alert
payload, the second column contains the template itself, and the third column shows the
rendered result.
1. Select one of the **Recent Alert groups** for the integration to see its latest alert
payload. If you want to edit this payload, click the **Edit** button to the right of the
Alert Group name.
1. Alternatively, you can click **Use custom payload** and write your own payload to see
how it will be rendered.
1. Press `Control + Enter` in the editor to see suggestions.
1. Click **Cheatsheet** in the second column to get some inspiration.
1. If you edit Messenger templates, click **Save and open Alert Group in ChatOps** to see
how the alert will be rendered in the messenger itself (this only works for an Alert Group
that already exists in the messenger).
1. Click **Save** to save the template.
## Advanced Jinja templates
Grafana OnCall uses [Jinja templating language](http://jinja.pocoo.org/docs/2.10/) to format alert groups for the Web,
Slack, phone calls, SMS messages, and more because the JSON format is not easily readable by humans. As a result, you
can decide what you want to see when an alert group is triggered as well as how it should be presented.
Grafana OnCall uses the [Jinja templating language](http://jinja.pocoo.org/docs/2.10/) to
format alert groups for the Web,
Slack, phone calls, SMS messages, and more. As a result, you
can decide what you want to see when an alert group is triggered, as well as how it should
be presented.
Jinja2 offers simple but multi-faceted functionality by using loops, conditions, functions, and more.
Jinja2 offers simple but multi-faceted functionality by using loops, conditions,
functions, and more.
> **NOTE:** Every alert from a monitoring system comes in the key/value format.
@ -113,7 +155,8 @@ Grafana OnCall has rules about which of the keys match to: `__title`, `message`,
### Loops
Monitoring systems can send an array of values. In this example, you can use Jinja to iterate and format the alert
Monitoring systems can send an array of values. In this example, you can use Jinja to
iterate and format the alert
using a Grafana example:
```.jinja2


@ -1,6 +1,6 @@
import datetime
import logging
from typing import Optional
import typing
import pytz
from celery import uuid as celery_uuid
@ -17,6 +17,9 @@ from apps.alerts.escalation_snapshot.snapshot_classes import (
)
from apps.alerts.tasks import escalate_alert_group
if typing.TYPE_CHECKING:
from apps.alerts.models import ChannelFilter
logger = logging.getLogger(__name__)
# This is a delay to prevent intermediate system activity while the user performs a multi-step action.
@ -29,6 +32,11 @@ class EscalationSnapshotMixin:
Mixin for AlertGroup. It contains methods related to alert group escalation
"""
# TODO: add stricter typing
# TODO: should this class actually be an AbstractBaseClass instead?
raw_escalation_snapshot: dict | None
channel_filter: typing.Optional["ChannelFilter"]
def build_raw_escalation_snapshot(self) -> dict:
"""
Builds new escalation chain in a json serializable format (dict).
@ -91,7 +99,7 @@ class EscalationSnapshotMixin:
data = {}
if self.escalation_chain_exists:
channel_filter = self.channel_filter
channel_filter: "ChannelFilter" = self.channel_filter
escalation_chain = channel_filter.escalation_chain
escalation_policies = escalation_chain.escalation_policies.all()
@ -116,7 +124,7 @@ class EscalationSnapshotMixin:
return self.escalation_chain_snapshot or (self.channel_filter.escalation_chain if self.channel_filter else None)
@cached_property
def channel_filter_snapshot(self) -> Optional[ChannelFilterSnapshot]:
def channel_filter_snapshot(self) -> typing.Optional[ChannelFilterSnapshot]:
"""
in some cases we need only channel filter and don't want to serialize whole escalation
"""
@ -132,7 +140,7 @@ class EscalationSnapshotMixin:
return ChannelFilterSnapshot(**channel_filter_snapshot)
@cached_property
def escalation_chain_snapshot(self) -> Optional[EscalationChainSnapshot]:
def escalation_chain_snapshot(self) -> typing.Optional[EscalationChainSnapshot]:
"""
in some cases we need only escalation chain and don't want to serialize whole escalation
escalation_chain_snapshot_object = None
@ -149,7 +157,7 @@ class EscalationSnapshotMixin:
return EscalationChainSnapshot(**escalation_chain_snapshot)
@cached_property
def escalation_snapshot(self) -> Optional[EscalationSnapshot]:
def escalation_snapshot(self) -> typing.Optional[EscalationSnapshot]:
raw_escalation_snapshot = self.raw_escalation_snapshot
if raw_escalation_snapshot:
try:
@ -207,7 +215,7 @@ class EscalationSnapshotMixin:
return self.raw_escalation_snapshot.get("pause_escalation", False)
@property
def next_step_eta(self) -> Optional[datetime.datetime]:
def next_step_eta(self) -> typing.Optional[datetime.datetime]:
"""
get next_step_eta field directly to avoid serialization overhead
"""


@ -117,7 +117,7 @@ class EscalationPolicySnapshot:
return next_user
def execute(self, alert_group: "AlertGroup", reason) -> StepExecutionResultData:
action_map: typing.Dict[typing.Union[int, None], EscalationPolicySnapshot.StepExecutionFunc] = {
action_map: typing.Dict[typing.Optional[int], EscalationPolicySnapshot.StepExecutionFunc] = {
EscalationPolicy.STEP_WAIT: self._escalation_step_wait,
EscalationPolicy.STEP_FINAL_NOTIFYALL: self._escalation_step_notify_all,
EscalationPolicy.STEP_REPEAT_ESCALATION_N_TIMES: self._escalation_step_repeat_escalation_n_times,


@ -92,7 +92,7 @@ class EscalationSnapshot:
return [self.escalation_policies_snapshots[0]]
return self.escalation_policies_snapshots[: self.last_active_escalation_policy_order]
def next_step_eta_is_valid(self) -> typing.Union[None, bool]:
def next_step_eta_is_valid(self) -> typing.Optional[bool]:
"""
`next_step_eta` should never be less than the current time (with a 5 minute buffer provided)
as this field should be updated as the escalation policy is executed over time. If it is, this means that
@ -109,7 +109,8 @@ class EscalationSnapshot:
self.alert_group.raw_escalation_snapshot = self.convert_to_dict()
self.alert_group.save(update_fields=["raw_escalation_snapshot"])
def convert_to_dict(self) -> dict:
# TODO: update the typing here, be more strict about what this returns
def convert_to_dict(self):
return self.serializer(self).data
def execute_actual_escalation_step(self) -> None:


@ -11,7 +11,8 @@ from apps.grafana_plugin.helpers import GrafanaAPIClient
logger = logging.getLogger(__name__)
if TYPE_CHECKING:
from apps.alerts.models import GrafanaAlertingContactPoint
from apps.alerts.models import AlertReceiveChannel, GrafanaAlertingContactPoint
from apps.user_management.models import Organization
class GrafanaAlertingSyncManager:
@ -24,7 +25,7 @@ class GrafanaAlertingSyncManager:
ALERTING_DATASOURCE = "alertmanager"
IS_GRAFANA_VERSION_GRE_9 = None
def __init__(self, alert_receive_channel):
def __init__(self, alert_receive_channel: "AlertReceiveChannel") -> None:
self.alert_receive_channel = alert_receive_channel
self.client = GrafanaAPIClient(
api_url=self.alert_receive_channel.organization.grafana_url,
@ -33,7 +34,7 @@ class GrafanaAlertingSyncManager:
self.receiver_name = self.alert_receive_channel.emojized_verbal_name
@classmethod
def check_for_connection_errors(cls, organization) -> Optional[str]:
def check_for_connection_errors(cls, organization: "Organization") -> Optional[str]:
"""Check if it is possible to connect to alerting, otherwise return error message"""
client = GrafanaAPIClient(api_url=organization.grafana_url, api_token=organization.api_token)
recipient = cls.GRAFANA_CONTACT_POINT
@ -561,7 +562,7 @@ class GrafanaAlertingSyncManager:
break
return name_in_alerting
def get_datasource_name(self, contact_point) -> str:
def get_datasource_name(self, contact_point: "GrafanaAlertingContactPoint") -> str:
datasource_id = contact_point.datasource_id
datasource_uid = contact_point.datasource_uid
datasource, response_info = self.client.get_datasource(datasource_uid)


@ -65,10 +65,10 @@ class TemplateLoader:
@dataclass
class TemplatedAlert:
title: str = None
message: str = None
image_url: str = None
source_link: str = None
title: str | None = None
message: str | None = None
image_url: str | None = None
source_link: str | None = None
class AlertTemplater(ABC):
@ -160,7 +160,7 @@ class AlertTemplater(ABC):
return templated_alert
def _render_attribute_with_template(self, attr, data, channel, templated_alert):
def _render_attribute_with_template(self, attr, data, channel, templated_alert: TemplatedAlert) -> str | None:
"""
Get attr template and then apply it.
If attr template is None or invalid will return None.
@ -212,5 +212,5 @@ class AlertTemplater(ABC):
return None
@abstractmethod
def _render_for(self):
def _render_for(self) -> str:
raise NotImplementedError


@ -1,10 +1,10 @@
import datetime
import logging
import typing
import urllib
from collections import namedtuple
from typing import Optional, TypedDict
from urllib.parse import urljoin
from uuid import uuid1
from uuid import UUID, uuid1
from celery import uuid as celery_uuid
from django.apps import apps
@ -33,6 +33,11 @@ from common.utils import clean_markup, str_or_backup
from .alert_group_counter import AlertGroupCounter
if typing.TYPE_CHECKING:
from django.db.models.manager import RelatedManager
from apps.alerts.models import AlertGroupLogRecord
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
@ -51,9 +56,9 @@ def generate_public_primary_key_for_alert_group():
return new_public_primary_key
class Permalinks(TypedDict):
slack: Optional[str]
telegram: Optional[str]
class Permalinks(typing.TypedDict):
slack: typing.Optional[str]
telegram: typing.Optional[str]
web: str
@ -133,6 +138,8 @@ class AlertGroupSlackRenderingMixin:
class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.Model):
log_records: "RelatedManager['AlertGroupLogRecord']"
all_objects = AlertGroupQuerySet.as_manager()
unarchived_objects = UnarchivedAlertGroupQuerySet.as_manager()
@ -324,7 +331,9 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
cached_render_for_web = models.JSONField(default=dict)
active_cache_for_web_calculation_id = models.CharField(max_length=100, null=True, default=None)
last_unique_unacknowledge_process_id = models.CharField(max_length=100, null=True, default=None)
# NOTE: we should probably migrate this field to models.UUIDField as it's ONLY ever being
# set to the result of uuid.uuid1
last_unique_unacknowledge_process_id: UUID | None = models.CharField(max_length=100, null=True, default=None)
is_archived = models.BooleanField(default=False)
wiped_at = models.DateTimeField(null=True, default=None)
@ -457,11 +466,11 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
raise NotImplementedError
@property
def slack_permalink(self) -> Optional[str]:
def slack_permalink(self) -> typing.Optional[str]:
return None if self.slack_message is None else self.slack_message.permalink
@property
def telegram_permalink(self) -> Optional[str]:
def telegram_permalink(self) -> typing.Optional[str]:
"""
This property will attempt to access an attribute, `prefetched_telegram_messages`, representing a list of
prefetched telegram messages. If this attribute does not exist, it falls back to performing a query.
@ -529,7 +538,7 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
started_at=self.started_at,
)
def acknowledge_by_user(self, user: User, action_source: Optional[str] = None) -> None:
def acknowledge_by_user(self, user: User, action_source: typing.Optional[str] = None) -> None:
AlertGroupLogRecord = apps.get_model("alerts", "AlertGroupLogRecord")
initial_state = self.state
logger.debug(f"Started acknowledge_by_user for alert_group {self.pk}")
@ -611,7 +620,7 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
for dependent_alert_group in self.dependent_alert_groups.all():
dependent_alert_group.acknowledge_by_source()
def un_acknowledge_by_user(self, user: User, action_source: Optional[str] = None) -> None:
def un_acknowledge_by_user(self, user: User, action_source: typing.Optional[str] = None) -> None:
AlertGroupLogRecord = apps.get_model("alerts", "AlertGroupLogRecord")
initial_state = self.state
logger.debug(f"Started un_acknowledge_by_user for alert_group {self.pk}")
@ -639,7 +648,7 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
dependent_alert_group.un_acknowledge_by_user(user, action_source=action_source)
logger.debug(f"Finished un_acknowledge_by_user for alert_group {self.pk}")
def resolve_by_user(self, user: User, action_source: Optional[str] = None) -> None:
def resolve_by_user(self, user: User, action_source: typing.Optional[str] = None) -> None:
AlertGroupLogRecord = apps.get_model("alerts", "AlertGroupLogRecord")
initial_state = self.state
@ -786,7 +795,7 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
for dependent_alert_group in self.dependent_alert_groups.all():
dependent_alert_group.resolve_by_disable_maintenance()
def un_resolve_by_user(self, user: User, action_source: Optional[str] = None) -> None:
def un_resolve_by_user(self, user: User, action_source: typing.Optional[str] = None) -> None:
AlertGroupLogRecord = apps.get_model("alerts", "AlertGroupLogRecord")
if self.wiped_at is None:
@ -815,7 +824,9 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
for dependent_alert_group in self.dependent_alert_groups.all():
dependent_alert_group.un_resolve_by_user(user, action_source=action_source)
def attach_by_user(self, user: User, root_alert_group: "AlertGroup", action_source: Optional[str] = None) -> None:
def attach_by_user(
self, user: User, root_alert_group: "AlertGroup", action_source: typing.Optional[str] = None
) -> None:
AlertGroupLogRecord = apps.get_model("alerts", "AlertGroupLogRecord")
if root_alert_group.root_alert_group is None and not root_alert_group.resolved:
@ -891,10 +902,10 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
action_source=action_source,
)
def un_attach_by_user(self, user: User, action_source: Optional[str] = None) -> None:
def un_attach_by_user(self, user: User, action_source: typing.Optional[str] = None) -> None:
AlertGroupLogRecord = apps.get_model("alerts", "AlertGroupLogRecord")
root_alert_group = self.root_alert_group
root_alert_group: AlertGroup = self.root_alert_group
self.root_alert_group = None
self.save(update_fields=["root_alert_group"])
@ -963,7 +974,9 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
action_source=None,
)
def silence_by_user(self, user: User, silence_delay: Optional[int], action_source: Optional[str] = None) -> None:
def silence_by_user(
self, user: User, silence_delay: typing.Optional[int], action_source: typing.Optional[str] = None
) -> None:
AlertGroupLogRecord = apps.get_model("alerts", "AlertGroupLogRecord")
initial_state = self.state
@ -1020,7 +1033,7 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
for dependent_alert_group in self.dependent_alert_groups.all():
dependent_alert_group.silence_by_user(user, silence_delay, action_source)
def un_silence_by_user(self, user: User, action_source: Optional[str] = None) -> None:
def un_silence_by_user(self, user: User, action_source: typing.Optional[str] = None) -> None:
AlertGroupLogRecord = apps.get_model("alerts", "AlertGroupLogRecord")
initial_state = self.state
@ -1322,7 +1335,10 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
if not root_alert_groups_to_resolve.exists():
return
organization = root_alert_groups_to_resolve.first().channel.organization
# we know this is an AlertGroup because of the .exists() check just above
first_alert_group: AlertGroup = root_alert_groups_to_resolve.first()
organization = first_alert_group.channel.organization
if organization.is_resolution_note_required:
root_alert_groups_to_resolve = root_alert_groups_to_resolve.filter(
Q(resolution_notes__isnull=False, resolution_notes__deleted_at=None)


@ -1,4 +1,5 @@
import logging
import typing
from functools import cached_property
from urllib.parse import urljoin
@ -37,6 +38,11 @@ from common.insight_log import EntityEvent, write_resource_insight_log
from common.jinja_templater import jinja_template_env
from common.public_primary_keys import generate_public_primary_key, increase_public_primary_key_length
if typing.TYPE_CHECKING:
from django.db.models.manager import RelatedManager
from apps.alerts.models import GrafanaAlertingContactPoint
logger = logging.getLogger(__name__)
@ -108,6 +114,8 @@ class AlertReceiveChannel(IntegrationOptionsMixin, MaintainableObject):
Channel generated by user to receive Alerts to.
"""
contact_points: "RelatedManager['GrafanaAlertingContactPoint']"
objects = AlertReceiveChannelManager()
objects_with_maintenance = AlertReceiveChannelManagerWithMaintenance()
objects_with_deleted = models.Manager()
@ -609,7 +617,9 @@ class AlertReceiveChannel(IntegrationOptionsMixin, MaintainableObject):
@receiver(post_save, sender=AlertReceiveChannel)
def listen_for_alertreceivechannel_model_save(sender, instance, created, *args, **kwargs):
def listen_for_alertreceivechannel_model_save(
sender: AlertReceiveChannel, instance: AlertReceiveChannel, created: bool, *args, **kwargs
) -> None:
ChannelFilter = apps.get_model("alerts", "ChannelFilter")
IntegrationHeartBeat = apps.get_model("heartbeat", "IntegrationHeartBeat")


@ -1,7 +1,8 @@
import datetime
from django.conf import settings
from django.core.validators import MinLengthValidator
from django.db import models
from django.utils import timezone
from ordered_model.models import OrderedModel
from common.public_primary_keys import generate_public_primary_key, increase_public_primary_key_length
@ -271,13 +272,13 @@ class EscalationPolicy(OrderedModel):
null=True,
)
ONE_MINUTE = timezone.timedelta(minutes=1)
FIVE_MINUTES = timezone.timedelta(minutes=5)
FIFTEEN_MINUTES = timezone.timedelta(minutes=15)
THIRTY_MINUTES = timezone.timedelta(minutes=30)
HOUR = timezone.timedelta(minutes=60)
ONE_MINUTE = datetime.timedelta(minutes=1)
FIVE_MINUTES = datetime.timedelta(minutes=5)
FIFTEEN_MINUTES = datetime.timedelta(minutes=15)
THIRTY_MINUTES = datetime.timedelta(minutes=30)
HOUR = datetime.timedelta(minutes=60)
DEFAULT_WAIT_DELAY = timezone.timedelta(minutes=5)
DEFAULT_WAIT_DELAY = datetime.timedelta(minutes=5)
DURATION_CHOICES = (
(ONE_MINUTE, "1 min"),


@ -1,3 +1,4 @@
import datetime
from uuid import uuid4
import humanize
@ -14,11 +15,11 @@ class MaintainableObject(models.Model):
class Meta:
abstract = True
DURATION_ONE_HOUR = timezone.timedelta(hours=1)
DURATION_THREE_HOURS = timezone.timedelta(hours=3)
DURATION_SIX_HOURS = timezone.timedelta(hours=6)
DURATION_TWELVE_HOURS = timezone.timedelta(hours=12)
DURATION_TWENTY_FOUR_HOURS = timezone.timedelta(hours=24)
DURATION_ONE_HOUR = datetime.timedelta(hours=1)
DURATION_THREE_HOURS = datetime.timedelta(hours=3)
DURATION_SIX_HOURS = datetime.timedelta(hours=6)
DURATION_TWELVE_HOURS = datetime.timedelta(hours=12)
DURATION_TWENTY_FOUR_HOURS = datetime.timedelta(hours=24)
MAINTENANCE_DURATION_CHOICES = (
(DURATION_ONE_HOUR, "1 hour"),
@ -97,7 +98,7 @@ class MaintainableObject(models.Model):
maintenance_uuid = _self.start_disable_maintenance_task(maintenance_duration)
_self.maintenance_duration = timezone.timedelta(seconds=maintenance_duration)
_self.maintenance_duration = datetime.timedelta(seconds=maintenance_duration)
_self.maintenance_uuid = maintenance_uuid
_self.maintenance_mode = mode
_self.maintenance_started_at = timezone.now()


@ -1,7 +1,7 @@
from django.apps import apps
from django.conf import settings
from django.db import transaction
from kombu import uuid as celery_uuid
from kombu.utils.uuid import uuid as celery_uuid
from common.custom_celery_tasks import shared_dedicated_queue_retry_task


@ -4,7 +4,7 @@ from django.apps import apps
from django.conf import settings
from django.db import transaction
from django.utils import timezone
from kombu import uuid as celery_uuid
from kombu.utils.uuid import uuid as celery_uuid
from apps.alerts.constants import NEXT_ESCALATION_DELAY
from apps.alerts.signals import user_notification_action_triggered_signal


@ -2,6 +2,7 @@ import enum
import typing
from django.conf import settings
from django.contrib.auth.models import AbstractUser
from rest_framework import permissions
from rest_framework.authentication import BasicAuthentication, SessionAuthentication
from rest_framework.request import Request
@ -10,6 +11,9 @@ from rest_framework.viewsets import ViewSet, ViewSetMixin
from common.utils import getattrd
if typing.TYPE_CHECKING:
from apps.user_management.models import User
ACTION_PREFIX = "grafana-oncall-app"
RBAC_PERMISSIONS_ATTR = "rbac_permissions"
RBAC_OBJECT_PERMISSIONS_ATTR = "rbac_object_permissions"
@ -17,6 +21,31 @@ RBAC_OBJECT_PERMISSIONS_ATTR = "rbac_object_permissions"
ViewSetOrAPIView = typing.Union[ViewSet, APIView]
class AuthenticatedRequest(Request):
"""
Use this for typing, instead of rest_framework.request.Request, when you KNOW that the user is authenticated.
ex. In the RBACPermission class below, we know that the user is authenticated because this is handled by the
`authentication_classes` attribute on views.
https://github.com/typeddjango/django-stubs#how-can-i-create-a-httprequest-thats-guaranteed-to-have-an-authenticated-user
"""
# see comment above, this is safe. without the type-ignore comment, mypy complains
# expression has type "User", base class "Request" defined the type as "Union[AbstractBaseUser, AnonymousUser]"
user: "User" # type: ignore[assignment]
class AuthenticatedDjangoAdminRequest(Request):
"""
Use this for typing, instead of rest_framework.request.Request, when you KNOW that the user is authenticated via
Django admin user authentication.
https://github.com/typeddjango/django-stubs#how-can-i-create-a-httprequest-thats-guaranteed-to-have-an-authenticated-user
"""
user: AbstractUser
class GrafanaAPIPermission(typing.TypedDict):
action: str
@ -62,9 +91,12 @@ class LegacyAccessControlCompatiblePermission:
self.fallback_role = fallback_role
def get_most_authorized_role(
permissions: typing.List[LegacyAccessControlCompatiblePermission],
) -> LegacyAccessControlRole:
LegacyAccessControlCompatiblePermissions = typing.List[LegacyAccessControlCompatiblePermission]
RBACPermissionsAttribute = typing.Dict[str, LegacyAccessControlCompatiblePermissions]
RBACObjectPermissionsAttribute = typing.Dict[permissions.BasePermission, typing.List[str]]
def get_most_authorized_role(permissions: LegacyAccessControlCompatiblePermissions) -> LegacyAccessControlRole:
if not permissions:
return LegacyAccessControlRole.VIEWER
@ -72,22 +104,18 @@ def get_most_authorized_role(
return min({p.fallback_role for p in permissions}, key=lambda r: r.value)
def user_is_authorized(user, required_permissions: typing.List[LegacyAccessControlCompatiblePermission]) -> bool:
def user_is_authorized(user: "User", required_permissions: LegacyAccessControlCompatiblePermissions) -> bool:
"""
This function checks whether `user` has all permissions in `required_permissions`. RBAC permissions are used
if RBAC is enabled for the organization, otherwise the fallback basic role is checked.
Parameters
----------
user : apps.user_management.models.user.User
The user to check permissions for
required_permissions : typing.List[LegacyAccessControlCompatiblePermission]
A list of permissions that a user must have to be considered authorized
user - The user to check permissions for
required_permissions - A list of permissions that a user must have to be considered authorized
"""
if user.organization.is_rbac_permissions_enabled:
user_permissions = [u["action"] for u in user.permissions]
required_permissions = [p.value for p in required_permissions]
return all(permission in user_permissions for permission in required_permissions)
required_permission_values = [p.value for p in required_permissions]
return all(permission in user_permissions for permission in required_permission_values)
return user.role <= get_most_authorized_role(required_permissions).value
@ -187,15 +215,18 @@ class RBACPermission(permissions.BasePermission):
)
@staticmethod
def _get_view_action(request: Request, view: ViewSetOrAPIView) -> str:
def _get_view_action(request: AuthenticatedRequest, view: ViewSetOrAPIView) -> str:
"""
For right now this needs to support being used in both a ViewSet as well as an APIView; we use both interchangeably
Note: `request.method` is returned uppercase
"""
return view.action if isinstance(view, ViewSetMixin) else request.method.lower()
return view.action if isinstance(view, ViewSetMixin) else (request.method or "").lower()
def has_permission(self, request: Request, view: ViewSetOrAPIView) -> bool:
# mypy complains about "Liskov substitution principle" here because request is `AuthenticatedRequest` object
# and not rest_framework.request.Request
# https://mypy.readthedocs.io/en/stable/common_issues.html#incompatible-overrides
def has_permission(self, request: AuthenticatedRequest, view: ViewSetOrAPIView) -> bool: # type: ignore[override]
# the django-debug-toolbar UI makes OPTIONS calls. Without this statement the debug UI can't gather the
# necessary info it needs to work properly
if settings.DEBUG and request.method == "OPTIONS":
@ -203,14 +234,14 @@ class RBACPermission(permissions.BasePermission):
action = self._get_view_action(request, view)
rbac_permissions: RBACPermissionsAttribute = getattr(view, RBAC_PERMISSIONS_ATTR, None)
rbac_permissions: typing.Optional[RBACPermissionsAttribute] = getattr(view, RBAC_PERMISSIONS_ATTR, None)
# first check that the rbac_permissions dict attribute is defined
assert (
rbac_permissions is not None
), f"Must define a {RBAC_PERMISSIONS_ATTR} dict on the ViewSet that is consuming the RBACPermission class"
action_required_permissions: typing.Union[None, typing.List] = rbac_permissions.get(action, None)
action_required_permissions: typing.Optional[typing.List] = rbac_permissions.get(action, None)
# next check that the action in question is defined within the rbac_permissions dict attribute
assert (
@@ -220,8 +251,13 @@ class RBACPermission(permissions.BasePermission):
return user_is_authorized(request.user, action_required_permissions)
def has_object_permission(self, request: Request, view: ViewSetOrAPIView, obj: typing.Any) -> bool:
rbac_object_permissions: RBACObjectPermissionsAttribute = getattr(view, RBAC_OBJECT_PERMISSIONS_ATTR, None)
# mypy complains about "Liskov substitution principle" here because request is `AuthenticatedRequest` object
# and not rest_framework.request.Request
# https://mypy.readthedocs.io/en/stable/common_issues.html#incompatible-overrides
def has_object_permission(self, request: AuthenticatedRequest, view: ViewSetOrAPIView, obj: typing.Any) -> bool: # type: ignore[override]
rbac_object_permissions: typing.Optional[RBACObjectPermissionsAttribute] = getattr(
view, RBAC_OBJECT_PERMISSIONS_ATTR, None
)
if rbac_object_permissions:
action = self._get_view_action(request, view)
@@ -250,35 +286,45 @@ def get_permission_from_permission_string(perm: str) -> typing.Optional[LegacyAc
for permission_class in ALL_PERMISSION_CLASSES:
if permission_class.value == perm:
return permission_class
return None
class IsOwner(permissions.BasePermission):
def __init__(self, ownership_field: typing.Optional[str] = None) -> None:
self.ownership_field = ownership_field
def has_object_permission(self, request: Request, _view: ViewSet, obj: typing.Any) -> bool:
# mypy complains about "Liskov substitution principle" here because request is `AuthenticatedRequest` object
# and not rest_framework.request.Request
# https://mypy.readthedocs.io/en/stable/common_issues.html#incompatible-overrides
def has_object_permission(self, request: AuthenticatedRequest, _view: ViewSetOrAPIView, obj: typing.Any) -> bool: # type: ignore[override]
owner = obj if self.ownership_field is None else getattrd(obj, self.ownership_field)
return owner == request.user
class HasRBACPermissions(permissions.BasePermission):
def __init__(self, required_permissions: typing.List[LegacyAccessControlCompatiblePermission]) -> None:
def __init__(self, required_permissions: LegacyAccessControlCompatiblePermissions) -> None:
self.required_permissions = required_permissions
def has_object_permission(self, request: Request, _view: ViewSetOrAPIView, _obj: typing.Any) -> bool:
# mypy complains about "Liskov substitution principle" here because request is `AuthenticatedRequest` object
# and not rest_framework.request.Request
# https://mypy.readthedocs.io/en/stable/common_issues.html#incompatible-overrides
def has_object_permission(self, request: AuthenticatedRequest, _view: ViewSetOrAPIView, _obj: typing.Any) -> bool: # type: ignore[override]
return user_is_authorized(request.user, self.required_permissions)
class IsOwnerOrHasRBACPermissions(permissions.BasePermission):
def __init__(
self,
required_permissions: typing.List[LegacyAccessControlCompatiblePermission],
required_permissions: LegacyAccessControlCompatiblePermissions,
ownership_field: typing.Optional[str] = None,
) -> None:
self.IsOwner = IsOwner(ownership_field)
self.HasRBACPermissions = HasRBACPermissions(required_permissions)
def has_object_permission(self, request: Request, view: ViewSetOrAPIView, obj: typing.Any) -> bool:
# mypy complains about "Liskov substitution principle" here because request is `AuthenticatedRequest` object
# and not rest_framework.request.Request
# https://mypy.readthedocs.io/en/stable/common_issues.html#incompatible-overrides
def has_object_permission(self, request: AuthenticatedRequest, view: ViewSetOrAPIView, obj: typing.Any) -> bool: # type: ignore[override]
return self.IsOwner.has_object_permission(request, view, obj) or self.HasRBACPermissions.has_object_permission(
request, view, obj
)
@@ -287,14 +333,13 @@ class IsOwnerOrHasRBACPermissions(permissions.BasePermission):
class IsStaff(permissions.BasePermission):
STAFF_AUTH_CLASSES = [BasicAuthentication, SessionAuthentication]
def has_permission(self, request: Request, _view: ViewSet) -> bool:
# mypy complains about "Liskov substitution principle" here because request is `AuthenticatedRequest` object
# and not rest_framework.request.Request
# https://mypy.readthedocs.io/en/stable/common_issues.html#incompatible-overrides
def has_permission(self, request: AuthenticatedDjangoAdminRequest, _view: ViewSet) -> bool: # type: ignore[override]
user = request.user
if not any(isinstance(request._authenticator, x) for x in self.STAFF_AUTH_CLASSES):
return False
if user and user.is_authenticated:
return user.is_staff
return False
RBACPermissionsAttribute = typing.Dict[str, typing.List[LegacyAccessControlCompatiblePermission]]
RBACObjectPermissionsAttribute = typing.Dict[permissions.BasePermission, typing.List[str]]
@@ -101,7 +101,7 @@ class EscalationPolicySerializer(EagerLoadingMixin, serializers.ModelSerializer)
"notify_to_group",
"important",
]
read_only_fields = ("order",)
read_only_fields = ["order"]
SELECT_RELATED = [
"escalation_chain",
@@ -199,7 +199,7 @@ class EscalationPolicySerializer(EagerLoadingMixin, serializers.ModelSerializer)
class EscalationPolicyCreateSerializer(EscalationPolicySerializer):
class Meta(EscalationPolicySerializer.Meta):
read_only_fields = ("order",)
read_only_fields = ["order"]
extra_kwargs = {"escalation_chain": {"required": True, "allow_null": False}}
def create(self, validated_data):
@@ -212,7 +212,7 @@ class EscalationPolicyUpdateSerializer(EscalationPolicySerializer):
escalation_chain = serializers.CharField(read_only=True, source="escalation_chain.public_primary_key")
class Meta(EscalationPolicySerializer.Meta):
read_only_fields = ("order", "escalation_chain")
read_only_fields = ["order", "escalation_chain"]
def update(self, instance, validated_data):
step = validated_data.get("step", instance.step)
@@ -213,7 +213,7 @@ class OnCallShiftUpdateSerializer(OnCallShiftSerializer):
type = serializers.ReadOnlyField()
class Meta(OnCallShiftSerializer.Meta):
read_only_fields = ("schedule", "type")
read_only_fields = ["schedule", "type"]
def update(self, instance, validated_data):
validated_data = self._correct_validated_data(instance.type, validated_data)
@@ -16,9 +16,9 @@ class TeamSerializer(serializers.ModelSerializer):
"is_sharing_resources_to_all",
)
read_only_fields = (
read_only_fields = [
"id",
"name",
"email",
"avatar_url",
)
]
@@ -100,7 +100,7 @@ class UserNotificationPolicyUpdateSerializer(UserNotificationPolicyBaseSerialize
)
class Meta(UserNotificationPolicyBaseSerializer.Meta):
read_only_fields = ("order", "user", "important")
read_only_fields = ["order", "user", "important"]
def update(self, instance, validated_data):
self_or_admin = instance.user.self_or_admin(
@@ -400,20 +400,13 @@ class AlertGroupView(
@action(detail=False)
def stats(self, *args, **kwargs):
alert_groups = self.filter_queryset(self.get_queryset())
# Only the count field is used; other fields are left for backward compatibility
MAX_COUNT = 100001
alert_groups = self.filter_queryset(self.get_queryset())[:MAX_COUNT]
count = alert_groups.count()
count = f"{MAX_COUNT-1}+" if count == MAX_COUNT else str(count)
return Response(
{
"count": alert_groups.filter().count(),
"count_previous_same_period": 0,
"alert_group_rate_to_previous_same_period": 1,
"count_escalations": 0,
"count_escalations_previous_same_period": 0,
"escalation_rate_to_previous_same_period": 1,
"average_response_time": None,
"average_response_time_to_previous_same_period": None,
"average_response_time_rate_to_previous_same_period": 0,
"prev_period_in_days": 1,
"count": count,
}
)
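The stats rewrite above avoids an unbounded count by slicing the queryset to `MAX_COUNT` rows and rendering `100000+` once the cap is hit (this backs the "Display 100000+ in stats" changelog entry). The formatting logic in isolation, with a plain integer standing in for the queryset count:

```python
MAX_COUNT = 100_001  # fetch one row more than the display cap

def format_capped_count(total_matching: int) -> str:
    """Mimic the stats endpoint: count at most MAX_COUNT rows and render
    '100000+' when the cap is hit, the exact number otherwise."""
    # stands in for `alert_groups[:MAX_COUNT].count()` on a real queryset
    count = min(total_matching, MAX_COUNT)
    return f"{MAX_COUNT - 1}+" if count == MAX_COUNT else str(count)

print(format_capped_count(42))       # "42"
print(format_capped_count(100_000))  # "100000"
print(format_capped_count(250_000))  # "100000+"
```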
@@ -1,4 +1,4 @@
from typing import Tuple
import typing
from django.db import models
@@ -25,8 +25,8 @@ class ScheduleExportAuthToken(BaseAuthToken):
@classmethod
def create_auth_token(
cls, user: User, organization: Organization, schedule: OnCallSchedule = None
) -> Tuple["ScheduleExportAuthToken", str]:
cls, user: User, organization: Organization, schedule: typing.Optional[OnCallSchedule] = None
) -> typing.Tuple["ScheduleExportAuthToken", str]:
token_string = crypto.generate_schedule_token_string()
digest = crypto.hash_token_string(token_string)
@@ -163,7 +163,7 @@ class UserNotificationPolicy(OrderedModel):
return f"{self.pk}: {self.short_verbal}"
@classmethod
def get_short_verbals_for_user(cls, user: User) -> Tuple[Tuple[str], Tuple[str]]:
def get_short_verbals_for_user(cls, user: User) -> Tuple[Tuple[str, ...], Tuple[str, ...]]:
is_wait_step = Q(step=cls.Step.WAIT)
is_wait_step_configured = Q(wait_delay__isnull=False)
@@ -1,20 +1,19 @@
import json
import logging
import time
from typing import Dict, List, Optional, Tuple, TypedDict
import typing
from urllib.parse import urljoin
import requests
from django.conf import settings
from rest_framework import status
from rest_framework.response import Response
from apps.api.permissions import ACTION_PREFIX, GrafanaAPIPermission
logger = logging.getLogger(__name__)
class GrafanaUser(TypedDict):
class GrafanaUser(typing.TypedDict):
orgId: int
userId: int
email: str
@@ -27,18 +26,22 @@ class GrafanaUser(TypedDict):
class GrafanaUserWithPermissions(GrafanaUser):
permissions: List[GrafanaAPIPermission]
permissions: typing.List[GrafanaAPIPermission]
class GCOMInstanceInfoConfigFeatureToggles(TypedDict):
GrafanaUsersWithPermissions = typing.List[GrafanaUserWithPermissions]
UserPermissionsDict = typing.Dict[str, typing.List[GrafanaAPIPermission]]
class GCOMInstanceInfoConfigFeatureToggles(typing.TypedDict):
accessControlOnCall: str
class GCOMInstanceInfoConfig(TypedDict):
class GCOMInstanceInfoConfig(typing.TypedDict):
feature_toggles: GCOMInstanceInfoConfigFeatureToggles
class GCOMInstanceInfo(TypedDict):
class GCOMInstanceInfo(typing.TypedDict):
id: int
orgId: int
slug: str
@@ -47,26 +50,66 @@ class GCOMInstanceInfo(TypedDict):
url: str
status: str
clusterSlug: str
config: Optional[GCOMInstanceInfoConfig]
config: GCOMInstanceInfoConfig | None
class ApiClientResponseCallStatus(typing.TypedDict):
url: str
connected: bool
status_code: int
message: str
# TODO: come back and make the typing.Dict strongly typed once we switch to Python 3.12
# which has better support for generics
_APIClientResponse = typing.Optional[typing.Dict | typing.List]
APIClientResponse = typing.Tuple[_APIClientResponse, ApiClientResponseCallStatus]
# can't define this using class syntax because one of the keys contains a dash
# https://docs.python.org/3/library/typing.html#typing.TypedDict:~:text=The%20functional%20syntax%20should%20also%20be%20used%20when%20any%20of%20the%20keys%20are%20not%20valid%20identifiers%2C%20for%20example%20because%20they%20are%20keywords%20or%20contain%20hyphens.%20Example%3A
APIRequestHeaders = typing.TypedDict(
"APIRequestHeaders",
{
"User-Agent": str,
"Authorization": str,
},
)
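As the comment above notes, a `TypedDict` key containing a dash forces the functional syntax, since `User-Agent` is not a valid Python identifier and cannot be a class attribute. A small self-contained illustration (the `make_headers` helper is hypothetical, not from the codebase):

```python
import typing

# Functional TypedDict syntax: required because "User-Agent" cannot be
# written as a class attribute in the class-based form.
Headers = typing.TypedDict(
    "Headers",
    {
        "User-Agent": str,
        "Authorization": str,
    },
)

def make_headers(user_agent: str, token: str) -> Headers:
    return {"User-Agent": user_agent, "Authorization": f"Bearer {token}"}

h = make_headers("oncall/1.3.2", "secret")
print(h["Authorization"])  # "Bearer secret"
```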
class HttpMethod(typing.Protocol):
"""
TODO: can probably replace this with something from the requests library?
https://github.com/psf/requests/blob/main/requests/api.py#L14
"""
@property
def __name__(self) -> str:
...
def __call__(self, *args, **kwargs) -> requests.Response:
...
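The `HttpMethod` protocol above relies on structural typing: `requests.get`, `requests.post`, etc. satisfy it without inheriting from anything. A stdlib-only sketch of the same idea — `fake_get` is a hypothetical stand-in for `requests.get`, returning a string instead of a `requests.Response`:

```python
import typing

class HttpMethod(typing.Protocol):
    """Structural type: any callable with a __name__ matches, no inheritance
    required (simplified version of the protocol above)."""
    @property
    def __name__(self) -> str: ...
    def __call__(self, *args, **kwargs) -> str: ...  # str stands in for requests.Response

def call_api(http_method: HttpMethod, url: str) -> str:
    # The protocol lets us log the callable's name generically
    return f"{http_method.__name__.upper()} {http_method(url)}"

def fake_get(url: str) -> str:  # hypothetical stand-in for requests.get
    return f"200 OK from {url}"

print(call_api(fake_get, "https://example.com"))  # "FAKE_GET 200 OK from https://example.com"
```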
class APIClient:
def __init__(self, api_url: str, api_token: str):
def __init__(self, api_url: str, api_token: str) -> None:
self.api_url = api_url
self.api_token = api_token
def api_head(self, endpoint: str, body: dict = None, **kwargs) -> Tuple[Optional[Response], dict]:
def api_head(self, endpoint: str, body: typing.Optional[typing.Dict] = None, **kwargs) -> APIClientResponse:
return self.call_api(endpoint, requests.head, body, **kwargs)
def api_get(self, endpoint: str, **kwargs) -> Tuple[Optional[Response], dict]:
def api_get(self, endpoint: str, **kwargs) -> APIClientResponse:
return self.call_api(endpoint, requests.get, **kwargs)
def api_post(self, endpoint: str, body: dict = None, **kwargs) -> Tuple[Optional[Response], dict]:
def api_post(self, endpoint: str, body: typing.Optional[typing.Dict] = None, **kwargs) -> APIClientResponse:
return self.call_api(endpoint, requests.post, body, **kwargs)
def call_api(self, endpoint: str, http_method, body: dict = None, **kwargs) -> Tuple[Optional[Response], dict]:
def call_api(
self, endpoint: str, http_method: HttpMethod, body: typing.Optional[typing.Dict] = None, **kwargs
) -> APIClientResponse:
request_start = time.perf_counter()
call_status = {
call_status: ApiClientResponseCallStatus = {
"url": urljoin(self.api_url, endpoint),
"connected": False,
"status_code": status.HTTP_503_SERVICE_UNAVAILABLE,
@@ -108,20 +151,20 @@ class APIClient:
return None, call_status
@property
def request_headers(self) -> dict:
def request_headers(self) -> APIRequestHeaders:
return {"User-Agent": settings.GRAFANA_COM_USER_AGENT, "Authorization": f"Bearer {self.api_token}"}
class GrafanaAPIClient(APIClient):
USER_PERMISSION_ENDPOINT = f"api/access-control/users/permissions/search?actionPrefix={ACTION_PREFIX}"
def __init__(self, api_url: str, api_token: str):
def __init__(self, api_url: str, api_token: str) -> None:
super().__init__(api_url, api_token)
def check_token(self) -> Tuple[Optional[Response], dict]:
def check_token(self) -> APIClientResponse:
return self.api_head("api/org")
def get_users_permissions(self, rbac_is_enabled_for_org: bool) -> Dict[str, List[GrafanaAPIPermission]]:
def get_users_permissions(self, rbac_is_enabled_for_org: bool) -> UserPermissionsDict:
"""
It is possible that this endpoint may not be available for certain Grafana orgs.
Ex: for Grafana Cloud orgs who have pinned their Grafana version to an earlier version
@@ -141,11 +184,15 @@ class GrafanaAPIClient(APIClient):
"""
if not rbac_is_enabled_for_org:
return {}
data, _ = self.api_get(self.USER_PERMISSION_ENDPOINT)
if data is None:
response, _ = self.api_get(self.USER_PERMISSION_ENDPOINT)
if response is None:
return {}
elif isinstance(response, list):
return {}
all_users_permissions = {}
data: typing.Dict[str, typing.Dict[str, typing.List[str]]] = response
all_users_permissions: UserPermissionsDict = {}
for user_id, user_permissions in data.items():
all_users_permissions[user_id] = [GrafanaAPIPermission(action=key) for key, _ in user_permissions.items()]
@@ -155,11 +202,15 @@ class GrafanaAPIClient(APIClient):
_, resp_status = self.api_head(self.USER_PERMISSION_ENDPOINT)
return resp_status["connected"]
def get_users(self, rbac_is_enabled_for_org: bool, **kwargs) -> List[GrafanaUserWithPermissions]:
users, _ = self.api_get("api/org/users", **kwargs)
def get_users(self, rbac_is_enabled_for_org: bool, **kwargs) -> GrafanaUsersWithPermissions:
users_response, _ = self.api_get("api/org/users", **kwargs)
if not users:
if not users_response:
return []
elif isinstance(users_response, dict):
return []
users: GrafanaUsersWithPermissions = users_response
user_permissions = self.get_users_permissions(rbac_is_enabled_for_org)
@@ -168,32 +219,32 @@ class GrafanaAPIClient(APIClient):
user["permissions"] = user_permissions.get(str(user["userId"]), [])
return users
def get_teams(self, **kwargs):
def get_teams(self, **kwargs) -> APIClientResponse:
return self.api_get("api/teams/search?perpage=1000000", **kwargs)
def get_team_members(self, team_id):
def get_team_members(self, team_id: int) -> APIClientResponse:
return self.api_get(f"api/teams/{team_id}/members")
def get_datasources(self):
def get_datasources(self) -> APIClientResponse:
return self.api_get("api/datasources")
def get_datasource_by_id(self, datasource_id):
def get_datasource_by_id(self, datasource_id) -> APIClientResponse:
# This endpoint is deprecated for Grafana version >= 9. Use get_datasource instead
return self.api_get(f"api/datasources/{datasource_id}")
def get_datasource(self, datasource_uid):
def get_datasource(self, datasource_uid) -> APIClientResponse:
return self.api_get(f"api/datasources/uid/{datasource_uid}")
def get_alertmanager_status_with_config(self, recipient):
def get_alertmanager_status_with_config(self, recipient) -> APIClientResponse:
return self.api_get(f"api/alertmanager/{recipient}/api/v2/status")
def get_alerting_config(self, recipient):
def get_alerting_config(self, recipient: str) -> APIClientResponse:
return self.api_get(f"api/alertmanager/{recipient}/config/api/v1/alerts")
def update_alerting_config(self, recipient, config):
def update_alerting_config(self, recipient, config) -> APIClientResponse:
return self.api_post(f"api/alertmanager/{recipient}/config/api/v1/alerts", config)
def get_grafana_plugin_settings(self, recipient):
def get_grafana_plugin_settings(self, recipient: str) -> APIClientResponse:
return self.api_get(f"api/plugins/{recipient}/settings")
@@ -203,10 +254,12 @@ class GcomAPIClient(APIClient):
STACK_STATUS_DELETED = "deleted"
STACK_STATUS_ACTIVE = "active"
def __init__(self, api_token: str):
def __init__(self, api_token: str) -> None:
super().__init__(settings.GRAFANA_COM_API_URL, api_token)
def get_instance_info(self, stack_id: str, include_config_query_param: bool = False) -> Optional[GCOMInstanceInfo]:
def get_instance_info(
self, stack_id: str, include_config_query_param: bool = False
) -> typing.Optional[GCOMInstanceInfo]:
"""
NOTE: in order to use ?config=true, an "Admin" GCOM token must be used to make the API call
"""
@@ -222,7 +275,11 @@ class GcomAPIClient(APIClient):
there are two ways that feature toggles can be enabled; this method takes both into account
https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#enable
"""
instance_feature_toggles = instance_info.get("config", {}).get("feature_toggles", {})
instance_info_config = instance_info.get("config", {})
if not instance_info_config:
return False
instance_feature_toggles = instance_info_config.get("feature_toggles", {})
if not instance_feature_toggles:
return False
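The guard added above matters because `dict.get(key, default)` only applies the default when the key is *absent*; GCOM can return an explicit `"config": null`, in which case the old chained lookup raised. A minimal demonstration:

```python
instance_info = {"config": None}  # the key exists, but its value is None

# Chained gets break: the {} default is NOT used when the key is present as None.
try:
    instance_info.get("config", {}).get("feature_toggles", {})
except AttributeError as e:
    print(f"broken: {e}")  # 'NoneType' object has no attribute 'get'

# The guarded version from the diff above:
config = instance_info.get("config", {})
toggles = config.get("feature_toggles", {}) if config else {}
print(toggles)  # {}
```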
@@ -251,8 +308,8 @@ class GcomAPIClient(APIClient):
instance_infos, _ = self.api_get(url)
return instance_infos["items"] and instance_infos["items"][0].get("status") == self.STACK_STATUS_DELETED
def post_active_users(self, body):
def post_active_users(self, body) -> APIClientResponse:
return self.api_post("app-active-users", body)
def get_stack_regions(self):
def get_stack_regions(self) -> APIClientResponse:
return self.api_get("stack-regions")
@@ -5,7 +5,7 @@ from rest_framework.response import Response
from rest_framework.views import APIView
from apps.grafana_plugin.helpers import GrafanaAPIClient
from apps.user_management.models.organization import Organization, ProvisionedPlugin
from apps.user_management.models.organization import Organization
from apps.user_management.sync import sync_organization
from common.api_helpers.mixins import GrafanaHeadersMixin
@@ -23,7 +23,7 @@ class SelfHostedInstallView(GrafanaHeadersMixin, APIView):
grafana_url = settings.SELF_HOSTED_SETTINGS["GRAFANA_API_URL"]
grafana_api_token = self.instance_context["grafana_token"]
provisioning_info: ProvisionedPlugin = {"error": None}
provisioning_info = {"error": None}
if settings.LICENSE != settings.OPEN_SOURCE_LICENSE_NAME:
provisioning_info["error"] = "License type not authorized"
@@ -1,7 +1,6 @@
import datetime
import typing
from django.utils import timezone
class AlertGroupsTotalMetricsDict(typing.TypedDict):
integration_name: str
@@ -39,7 +38,7 @@ class RecalculateOrgMetricsDict(typing.TypedDict):
ALERT_GROUPS_TOTAL = "oncall_alert_groups_total"
ALERT_GROUPS_RESPONSE_TIME = "oncall_alert_groups_response_time_seconds"
METRICS_RESPONSE_TIME_CALCULATION_PERIOD = timezone.timedelta(days=7)
METRICS_RESPONSE_TIME_CALCULATION_PERIOD = datetime.timedelta(days=7)
METRICS_CACHE_LIFETIME = 93600 # 26 hours. Should be higher than METRICS_RECALCULATE_CACHE_TIMEOUT
@@ -1,3 +1,4 @@
import datetime
import random
import typing
@@ -20,6 +21,9 @@ from apps.metrics_exporter.constants import (
AlertGroupsTotalMetricsDict,
)
if typing.TYPE_CHECKING:
from apps.alerts.models import AlertReceiveChannel
def get_organization_ids_from_db():
AlertReceiveChannel = apps.get_model("alerts", "AlertReceiveChannel")
@@ -42,12 +46,12 @@ def get_organization_ids():
return organizations_ids
def get_response_time_period():
def get_response_time_period() -> datetime.datetime:
"""Returns period for response time calculation"""
return timezone.now() - METRICS_RESPONSE_TIME_CALCULATION_PERIOD
def get_metrics_recalculation_timeout():
def get_metrics_recalculation_timeout() -> int:
"""
Returns timeout when metrics should be recalculated.
Add some dispersion to avoid starting recalculation tasks for all organizations at the same time.
@@ -66,7 +70,7 @@ def get_metrics_cache_timeout(organization_id):
return metrics_cache_timeout
def get_metrics_cache_timer_key(organization_id):
def get_metrics_cache_timer_key(organization_id) -> str:
return f"{METRICS_CACHE_TIMER}_{organization_id}"
@@ -75,15 +79,15 @@ def get_metrics_cache_timer_for_organization(organization_id):
return cache.get(key)
def get_metric_alert_groups_total_key(organization_id):
def get_metric_alert_groups_total_key(organization_id) -> str:
return f"{ALERT_GROUPS_TOTAL}_{organization_id}"
def get_metric_alert_groups_response_time_key(organization_id):
def get_metric_alert_groups_response_time_key(organization_id) -> str:
return f"{ALERT_GROUPS_RESPONSE_TIME}_{organization_id}"
def metrics_update_integration_cache(integration):
def metrics_update_integration_cache(integration: "AlertReceiveChannel") -> None:
"""Update integration data in metrics cache"""
metrics_cache_timeout = get_metrics_cache_timeout(integration.organization_id)
metric_alert_groups_total_key = get_metric_alert_groups_total_key(integration.organization_id)
@@ -105,7 +109,7 @@ def metrics_update_integration_cache(integration):
cache.set(metric_key, metric_cache, timeout=metrics_cache_timeout)
def metrics_remove_deleted_integration_from_cache(integration):
def metrics_remove_deleted_integration_from_cache(integration: "AlertReceiveChannel"):
"""Remove data related to deleted integration from metrics cache"""
metrics_cache_timeout = get_metrics_cache_timeout(integration.organization_id)
metric_alert_groups_total_key = get_metric_alert_groups_total_key(integration.organization_id)
@@ -118,7 +122,7 @@ def metrics_remove_deleted_integration_from_cache(integration):
cache.set(metric_key, metric_cache, timeout=metrics_cache_timeout)
def metrics_add_integration_to_cache(integration):
def metrics_add_integration_to_cache(integration: "AlertReceiveChannel"):
"""Add new integration data to metrics cache"""
metrics_cache_timeout = get_metrics_cache_timeout(integration.organization_id)
metric_alert_groups_total_key = get_metric_alert_groups_total_key(integration.organization_id)
@@ -6,6 +6,7 @@ import typing
from enum import Enum
import humanize
import pytz
import requests
from celery.utils.log import get_task_logger
from django.conf import settings
@@ -25,6 +26,7 @@ from apps.user_management.models import User
from common.api_helpers.utils import create_engine_url
from common.custom_celery_tasks import shared_dedicated_queue_retry_task
from common.l10n import format_localized_datetime, format_localized_time
from common.timezones import is_valid_timezone
if typing.TYPE_CHECKING:
from apps.mobile_app.models import MobileAppUserSettings
@@ -227,23 +229,33 @@ def _get_alert_group_escalation_fcm_message(
def _get_youre_going_oncall_notification_title(seconds_until_going_oncall: int) -> str:
time_until_going_oncall = humanize.naturaldelta(seconds_until_going_oncall)
return f"Your on-call shift starts in {time_until_going_oncall}"
return f"Your on-call shift starts in {humanize.naturaldelta(seconds_until_going_oncall)}"
def _get_youre_going_oncall_notification_subtitle(
schedule: OnCallSchedule,
schedule_event: ScheduleEvent,
mobile_app_user_settings: "MobileAppUserSettings",
user_timezone: typing.Optional[str],
) -> str:
shift_start = schedule_event["start"]
shift_end = schedule_event["end"]
shift_starts_and_ends_on_same_day = shift_start.date() == shift_end.date()
dt_formatter_func = format_localized_time if shift_starts_and_ends_on_same_day else format_localized_datetime
def _format_datetime(dt):
return dt_formatter_func(dt, mobile_app_user_settings.locale)
def _format_datetime(dt: datetime.datetime) -> str:
"""
1. Convert the shift datetime to the user's configured timezone, if set, otherwise fallback to UTC
2. Display the timezone aware datetime as a formatted string that is based on the user's configured mobile
app locale, otherwise fallback to "en"
"""
if user_timezone is None or not is_valid_timezone(user_timezone):
user_tz = "UTC"
else:
user_tz = user_timezone
localized_dt = dt.astimezone(pytz.timezone(user_tz))
return dt_formatter_func(localized_dt, mobile_app_user_settings.locale)
formatted_shift = f"{_format_datetime(shift_start)} - {_format_datetime(shift_end)}"
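The `_format_datetime` change above is the v1.3.2 fix for "You're Going OnCall" push notifications: convert shift times to the user's configured timezone and fall back to UTC when it is missing or invalid. A stdlib-only sketch of the conversion (using `zoneinfo` in place of `pytz` so the example is self-contained; `localize` is a hypothetical helper name):

```python
import datetime
import typing
from zoneinfo import ZoneInfo

def localize(dt: datetime.datetime, user_timezone: typing.Optional[str]) -> datetime.datetime:
    """Fall back to UTC when the user has no (or an invalid) timezone,
    mirroring the `_format_datetime` helper in the diff above."""
    try:
        tz = ZoneInfo(user_timezone) if user_timezone else ZoneInfo("UTC")
    except Exception:  # unknown timezone string, e.g. "asdfasdfasdf"
        tz = ZoneInfo("UTC")
    return dt.astimezone(tz)

shift_start = datetime.datetime(2023, 7, 8, 9, 0, tzinfo=ZoneInfo("UTC"))
print(localize(shift_start, "Europe/Amsterdam").strftime("%H:%M"))  # 11:00 (CEST, UTC+2)
print(localize(shift_start, "asdfasdfasdf").strftime("%H:%M"))      # 09:00 (UTC fallback)
```

This matches the parametrized test added later in the diff (9:00AM UTC rendered as 11:00AM for Europe/Amsterdam, UTC for an invalid timezone).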
@@ -266,7 +278,7 @@ def _get_youre_going_oncall_fcm_message(
notification_title = _get_youre_going_oncall_notification_title(seconds_until_going_oncall)
notification_subtitle = _get_youre_going_oncall_notification_subtitle(
schedule, schedule_event, mobile_app_user_settings
schedule, schedule_event, mobile_app_user_settings, user.timezone
)
data: FCMMessageData = {
@@ -96,9 +96,11 @@ def test_get_youre_going_oncall_notification_title(make_organization_and_user, m
# same day shift
##################
same_day_shift_title = tasks._get_youre_going_oncall_notification_title(seconds_until_going_oncall)
same_day_shift_subtitle = tasks._get_youre_going_oncall_notification_subtitle(schedule, same_day_shift, maus)
same_day_shift_subtitle = tasks._get_youre_going_oncall_notification_subtitle(
schedule, same_day_shift, maus, user.timezone
)
same_day_shift_no_locale_subtitle = tasks._get_youre_going_oncall_notification_subtitle(
schedule, same_day_shift, maus_no_locale
schedule, same_day_shift, maus_no_locale, user2.timezone
)
assert same_day_shift_title == f"Your on-call shift starts in {humanized_time_until_going_oncall}"
@@ -110,10 +112,10 @@ def test_get_youre_going_oncall_notification_title(make_organization_and_user, m
##################
multiple_day_shift_title = tasks._get_youre_going_oncall_notification_title(seconds_until_going_oncall)
multiple_day_shift_subtitle = tasks._get_youre_going_oncall_notification_subtitle(
schedule, multiple_day_shift, maus
schedule, multiple_day_shift, maus, user.timezone
)
multiple_day_shift_no_locale_subtitle = tasks._get_youre_going_oncall_notification_subtitle(
schedule, multiple_day_shift, maus_no_locale
schedule, multiple_day_shift, maus_no_locale, user2.timezone
)
assert multiple_day_shift_title == f"Your on-call shift starts in {humanized_time_until_going_oncall}"
@@ -124,6 +126,47 @@ def test_get_youre_going_oncall_notification_title(make_organization_and_user, m
)
@pytest.mark.parametrize(
"user_timezone,expected_shift_times",
[
(None, "9:00AM - 5:00PM"),
("Europe/Amsterdam", "11:00AM - 7:00PM"),
("asdfasdfasdf", "9:00AM - 5:00PM"),
],
)
@pytest.mark.django_db
def test_get_youre_going_oncall_notification_subtitle(
make_organization, make_user_for_organization, make_schedule, user_timezone, expected_shift_times
):
schedule_name = "asdfasdfasdfasdf"
organization = make_organization()
user = make_user_for_organization(organization, _timezone=user_timezone)
user_pk = user.public_primary_key
maus = MobileAppUserSettings.objects.create(user=user)
schedule = make_schedule(organization, name=schedule_name, schedule_class=OnCallScheduleWeb)
shift_start = timezone.datetime(2023, 7, 8, 9, 0, 0)
shift_end = timezone.datetime(2023, 7, 8, 17, 0, 0)
shift = _create_schedule_event(
shift_start,
shift_end,
"asdfasdfasdf",
[
{
"pk": user_pk,
},
],
)
assert (
tasks._get_youre_going_oncall_notification_subtitle(schedule, shift, maus, user.timezone)
== f"{expected_shift_times}\nSchedule {schedule_name}"
)
@mock.patch("apps.mobile_app.tasks._get_youre_going_oncall_notification_subtitle")
@mock.patch("apps.mobile_app.tasks._get_youre_going_oncall_notification_title")
@mock.patch("apps.mobile_app.tasks._construct_fcm_message")
@@ -140,7 +183,8 @@ def test_get_youre_going_oncall_fcm_message(
mock_construct_fcm_message,
mock_get_youre_going_oncall_notification_title,
mock_get_youre_going_oncall_notification_subtitle,
make_organization_and_user,
make_organization,
make_user_for_organization,
make_schedule,
):
mock_fcm_message = "mncvmnvcmnvcnmvcmncvmn"
@@ -153,7 +197,9 @@ def test_get_youre_going_oncall_fcm_message(
mock_get_youre_going_oncall_notification_title.return_value = mock_notification_title
mock_get_youre_going_oncall_notification_subtitle.return_value = mock_notification_subtitle
organization, user = make_organization_and_user()
organization = make_organization()
user_tz = "Europe/Amsterdam"
user = make_user_for_organization(organization, _timezone=user_tz)
user_pk = user.public_primary_key
schedule = make_schedule(organization, schedule_class=OnCallScheduleWeb)
notification_thread_id = f"{schedule.public_primary_key}:{user_pk}:going-oncall"
@@ -203,7 +249,7 @@ def test_get_youre_going_oncall_fcm_message(
)
mock_apns_payload.assert_called_once_with(aps=mock_aps.return_value)
mock_get_youre_going_oncall_notification_subtitle.assert_called_once_with(schedule, schedule_event, maus)
mock_get_youre_going_oncall_notification_subtitle.assert_called_once_with(schedule, schedule_event, maus, user_tz)
mock_get_youre_going_oncall_notification_title.assert_called_once_with(seconds_until_going_oncall)
mock_construct_fcm_message.assert_called_once_with(
@@ -1,4 +1,5 @@
import logging
import typing
from urllib.parse import urljoin
import requests
@@ -51,7 +52,7 @@ class CloudConnector(models.Model):
return sync_status, error_msg
def sync_users_with_cloud(self) -> tuple[bool, str]:
def sync_users_with_cloud(self) -> typing.Tuple[bool, typing.Optional[str]]:
sync_status = False
error_msg = None
@@ -276,7 +276,7 @@ class EscalationPolicyUpdateSerializer(EscalationPolicySerializer):
type = EscalationPolicyTypeField(required=False, source="step", allow_null=True)
class Meta(EscalationPolicySerializer.Meta):
read_only_fields = ("route_id",)
read_only_fields = ["route_id"]
def update(self, instance, validated_data):
if "step" in validated_data:
@@ -175,7 +175,7 @@ class ChannelFilterSerializer(BaseChannelFilterSerializer):
"telegram",
"manual_order",
]
read_only_fields = ("is_the_last_route",)
read_only_fields = ["is_the_last_route"]
def create(self, validated_data):
validated_data = self._correct_validated_data(validated_data)
@@ -13,6 +13,7 @@ from django.apps import apps
from django.db.models import Q
from django.utils import timezone
from icalendar import Calendar
from icalendar import Event as IcalEvent
from apps.api.permissions import RBACPermission
from apps.schedules.constants import (
@@ -37,7 +38,8 @@ This is a hack to allow us to load models for type checking without circular dep
This module likely needs to be refactored to be part of the OnCallSchedule module.
"""
if TYPE_CHECKING:
from apps.schedules.models import OnCallSchedule
from apps.schedules.models import CustomOnCallShift, OnCallSchedule
from apps.schedules.models.on_call_schedule import OnCallScheduleQuerySet
from apps.user_management.models import Organization, User
from apps.user_management.models.user import UserQuerySet
@@ -45,14 +47,26 @@ logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
EmptyShift = namedtuple(
"EmptyShift",
["start", "end", "summary", "description", "attendee", "all_day", "calendar_type", "calendar_tz", "shift_pk"],
)
EmptyShifts = typing.List[EmptyShift]
DatetimeInterval = namedtuple("DatetimeInterval", ["start", "end"])
DatetimeIntervals = typing.List[DatetimeInterval]
IcalEvents = typing.List[IcalEvent]
def users_in_ical(
usernames_from_ical: typing.List[str],
organization: Organization,
organization: "Organization",
include_viewers=False,
users_to_filter: typing.Optional[UserQuerySet] = None,
) -> UserQuerySet:
users_to_filter: typing.Optional["UserQuerySet"] = None,
) -> typing.Sequence["User"]:
"""
This method returns a `UserQuerySet`, filtered by users whose username, or case-insensitive e-mail,
This method returns a sequence of `User` objects, filtered by users whose username, or case-insensitive e-mail,
is present in `usernames_from_ical`. If `include_viewers` is set to `True`, users are further filtered down
based on their granted permissions.
@@ -95,21 +109,23 @@ def users_in_ical(
@timed_lru_cache(timeout=100)
def memoized_users_in_ical(usernames_from_ical: typing.List[str], organization: Organization) -> UserQuerySet:
def memoized_users_in_ical(
usernames_from_ical: typing.List[str], organization: "Organization"
) -> typing.Sequence["User"]:
# using in-memory cache instead of redis to avoid pickling python objects
return users_in_ical(usernames_from_ical, organization)
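`memoized_users_in_ical` leans on a `timed_lru_cache(timeout=100)` decorator so results live in process memory instead of being pickled into Redis. The decorator itself is not shown in this diff; one common way to build such a timeout-bounded cache (a sketch under that assumption, not necessarily OnCall's implementation) is to wrap `functools.lru_cache` and thread through a "time bucket" argument that changes every `timeout` seconds:

```python
import functools
import time

def timed_lru_cache(timeout: int, maxsize: int = 128):
    """LRU cache whose entries implicitly expire after `timeout` seconds."""
    def decorator(func):
        @functools.lru_cache(maxsize=maxsize)
        def cached(_time_bucket, *args, **kwargs):
            # _time_bucket only shards the cache by time window; once the
            # window rolls over, old entries are simply never hit again
            return func(*args, **kwargs)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return cached(int(time.time() // timeout), *args, **kwargs)

        return wrapper
    return decorator

calls = []

@timed_lru_cache(timeout=100)
def expensive(x):
    calls.append(x)  # record real invocations to observe caching
    return x * 2

expensive(2)
expensive(2)  # second call within the window is served from the cache
```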
# used for display schedule events on web
def list_of_oncall_shifts_from_ical(
schedule,
date,
user_timezone="UTC",
with_empty_shifts=False,
with_gaps=False,
days=1,
filter_by=None,
from_cached_final=False,
schedule: "OnCallSchedule",
date: datetime.date,
user_timezone: str = "UTC",
with_empty_shifts: bool = False,
with_gaps: bool = False,
days: int = 1,
filter_by: str | None = None,
from_cached_final: bool = False,
):
"""
Parse the ical file and return list of events with users
@@ -130,14 +146,19 @@ def list_of_oncall_shifts_from_ical(
# get list of iCalendars from current iCal files. If there is more than one calendar, primary calendar will always
# be the first
calendars: typing.Tuple[typing.Optional[Calendar], ...]
if from_cached_final:
calendars = [Calendar.from_ical(schedule.cached_ical_final_schedule)]
calendars = (Calendar.from_ical(schedule.cached_ical_final_schedule),)
else:
calendars = schedule.get_icalendars()
# TODO: Review offset usage
pytz_tz = pytz.timezone(user_timezone)
user_timezone_offset = datetime.datetime.now().astimezone(pytz_tz).utcoffset()
# utcoffset can technically return None, but we're confident it is a timedelta here
user_timezone_offset: datetime.timedelta = datetime.datetime.now().astimezone(pytz_tz).utcoffset() # type: ignore[assignment]
datetime_min = datetime.datetime.combine(date, datetime.time.min) + datetime.timedelta(milliseconds=1)
datetime_start = (datetime_min - user_timezone_offset).astimezone(pytz.UTC)
datetime_end = datetime_start + datetime.timedelta(days=days - 1, hours=23, minutes=59, seconds=59)
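The changed lines build a day-aligned lookup window: take midnight of `date`, shift it by the user timezone's current UTC offset, then extend by `days`. A self-contained sketch of the same arithmetic — the source uses `pytz`; stdlib `zoneinfo` is shown here as its equivalent, and the timezone is an arbitrary example:

```python
import datetime
from zoneinfo import ZoneInfo  # stdlib counterpart of the pytz calls in the diff

def day_window_utc(date: datetime.date, user_timezone: str, days: int = 1):
    """Return a (start, end) UTC window covering `days` days starting at
    midnight of `date` in the user's timezone, mirroring the diff's arithmetic."""
    tz = ZoneInfo(user_timezone)
    # offset of "now" in the user's timezone, as in the changed lines
    offset = datetime.datetime.now(tz).utcoffset()
    datetime_min = datetime.datetime.combine(date, datetime.time.min) + datetime.timedelta(milliseconds=1)
    start = (datetime_min - offset).replace(tzinfo=datetime.timezone.utc)
    end = start + datetime.timedelta(days=days - 1, hours=23, minutes=59, seconds=59)
    return start, end

# Asia/Kolkata has a fixed +05:30 offset (no DST), so the result is deterministic
start, end = day_window_utc(datetime.date(2023, 6, 1), "Asia/Kolkata")
```

Note the source computes the offset from `now()` rather than from `date` itself, which the sketch reproduces faithfully.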
@@ -147,6 +168,8 @@ def list_of_oncall_shifts_from_ical(
for idx, calendar in enumerate(calendars):
if calendar is not None:
calendar_type: str | int
if from_cached_final:
calendar_type = CALENDAR_TYPE_FINAL
elif idx == 0:
@@ -193,7 +216,14 @@ def list_of_oncall_shifts_from_ical(
return result or None
def get_shifts_dict(calendar, calendar_type, schedule, datetime_start, datetime_end, with_empty_shifts=False):
def get_shifts_dict(
calendar: Calendar,
calendar_type: str | int,
schedule: "OnCallSchedule",
datetime_start: datetime.datetime,
datetime_end: datetime.datetime,
with_empty_shifts: bool = False,
):
events = ical_events.get_events_from_ical_between(calendar, datetime_start, datetime_end)
result_datetime = []
result_date = []
@@ -244,22 +274,15 @@ def get_shifts_dict(calendar, calendar_type, schedule, datetime_start, datetime_
return result_datetime, result_date
EmptyShift = namedtuple(
"EmptyShift",
["start", "end", "summary", "description", "attendee", "all_day", "calendar_type", "calendar_tz", "shift_pk"],
)
def list_of_empty_shifts_in_schedule(schedule, start_date, end_date):
"""
Parse the ical file and return list of EmptyShift.
"""
def list_of_empty_shifts_in_schedule(
schedule: "OnCallSchedule", start_date: datetime.date, end_date: datetime.date
) -> EmptyShifts:
# Calculate lookup window in schedule's tz
# If we can't get tz from ical use UTC
OnCallSchedule = apps.get_model("schedules", "OnCallSchedule")
calendars = schedule.get_icalendars()
empty_shifts = []
empty_shifts: EmptyShifts = []
for idx, calendar in enumerate(calendars):
if calendar is not None:
if idx == 0:
@@ -269,7 +292,9 @@ def list_of_empty_shifts_in_schedule(schedule, start_date, end_date):
calendar_tz = get_icalendar_tz_or_utc(calendar)
schedule_timezone_offset = datetime.datetime.now().astimezone(calendar_tz).utcoffset()
# utcoffset can technically return None, but we're confident it is a timedelta here
schedule_timezone_offset: datetime.timedelta = datetime.datetime.now().astimezone(calendar_tz).utcoffset() # type: ignore[assignment]
start_datetime = datetime.datetime.combine(start_date, datetime.time.min) + datetime.timedelta(
milliseconds=1
)
@@ -322,8 +347,11 @@ def list_of_empty_shifts_in_schedule(schedule, start_date, end_date):
def list_users_to_notify_from_ical(
schedule, events_datetime=None, include_viewers=False, users_to_filter=None
) -> UserQuerySet:
schedule: "OnCallSchedule",
events_datetime: typing.Optional[datetime.datetime] = None,
include_viewers: bool = False,
users_to_filter: typing.Optional["UserQuerySet"] = None,
) -> typing.Sequence["User"]:
"""
Retrieve on-call users for the current time
"""
@@ -338,24 +366,25 @@ def list_users_to_notify_from_ical(
def list_users_to_notify_from_ical_for_period(
schedule,
start_datetime,
end_datetime,
schedule: "OnCallSchedule",
start_datetime: datetime.datetime,
end_datetime: datetime.datetime,
include_viewers=False,
users_to_filter=None,
) -> UserQuerySet:
) -> typing.Sequence["User"]:
# get list of iCalendars from current iCal files. If there is more than one calendar, primary calendar will always
# be the first
calendars = schedule.get_icalendars()
# reverse calendars to make overrides calendar the first, if schedule is iCal
calendars = calendars[::-1]
users_found_in_ical = []
users_found_in_ical: typing.Sequence["User"] = []
# at first check overrides calendar and return users from it if it exists and on-call users are found
for calendar in calendars:
if calendar is None:
continue
events = ical_events.get_events_from_ical_between(calendar, start_datetime, end_datetime)
parsed_ical_events = {} # event info where key is event priority and value list of found usernames {0:["alex"]}
parsed_ical_events: typing.Dict[int, typing.List[str]] = {}
for event in events:
current_usernames, current_priority = get_usernames_from_ical_event(event)
parsed_ical_events.setdefault(current_priority, []).extend(current_usernames)
@@ -373,8 +402,8 @@ def list_users_to_notify_from_ical_for_period(
def get_oncall_users_for_multiple_schedules(
schedules, events_datetime=None
) -> typing.Dict[OnCallSchedule, typing.List[User]]:
schedules: "OnCallScheduleQuerySet", events_datetime=None
) -> typing.Dict["OnCallSchedule", typing.List[User]]:
from apps.user_management.models import User
if events_datetime is None:
@@ -418,7 +447,7 @@ def get_oncall_users_for_multiple_schedules(
return oncall_users
def parse_username_from_string(string):
def parse_username_from_string(string: str) -> str:
"""
Parse on-call shift user from the given string
Example input:
@@ -429,7 +458,7 @@ def parse_username_from_string(string):
return re.sub(RE_PRIORITY, "", string.strip(), 1).strip()
def parse_priority_from_string(string):
def parse_priority_from_string(string: str) -> int:
"""
Parse on-call shift priority from the given string
Example input:
@@ -437,17 +466,16 @@ def parse_priority_from_string(string):
Example output:
1
"""
priority = re.findall(RE_PRIORITY, string.strip())
if len(priority) > 0:
priority = int(priority[0])
priority = 0
priority_matches = re.findall(RE_PRIORITY, string.strip())
if len(priority_matches) > 0:
priority = int(priority_matches[0])
if priority < 1:
priority = 0
else:
priority = 0
return priority
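The rewrite above removes the re-assignment of `priority` from list to int, which mypy rejects. `RE_PRIORITY` itself is not shown in this hunk; the runnable sketch below assumes a pattern that captures the digits of an `[L<n>]` prefix — treat that regex as illustrative, not as the project's definition:

```python
import re

RE_PRIORITY = re.compile(r"\[L(\d+)\]")  # assumed pattern, not taken from the diff

def parse_priority_from_string(string: str) -> int:
    """Return the shift priority parsed from `string`, or 0 when absent or below 1."""
    priority = 0
    priority_matches = re.findall(RE_PRIORITY, string.strip())
    if len(priority_matches) > 0:
        priority = int(priority_matches[0])
        if priority < 1:
            priority = 0
    return priority
```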
def parse_event_uid(string):
def parse_event_uid(string: str):
pk = None
source = None
source_verbal = None
@@ -467,8 +495,8 @@ def parse_event_uid(string):
if source is not None:
source = int(source)
CustomOnCallShift = apps.get_model("schedules", "CustomOnCallShift")
source_verbal = CustomOnCallShift.SOURCE_CHOICES[source][1]
OnCallShift: "CustomOnCallShift" = apps.get_model("schedules", "CustomOnCallShift")
source_verbal = OnCallShift.SOURCE_CHOICES[source][1]
return pk, source_verbal
@@ -489,7 +517,7 @@ def get_usernames_from_ical_event(event):
return usernames_found, priority
def get_missing_users_from_ical_event(event, organization):
def get_missing_users_from_ical_event(event, organization: "Organization"):
all_usernames, _ = get_usernames_from_ical_event(event)
users = list(get_users_from_ical_event(event, organization))
found_usernames = [u.username for u in users]
@@ -497,7 +525,7 @@ def get_missing_users_from_ical_event(event, organization):
return [u for u in all_usernames if u != "" and u not in found_usernames and u.lower() not in found_emails]
def get_users_from_ical_event(event, organization):
def get_users_from_ical_event(event, organization: "Organization") -> typing.Sequence["User"]:
usernames_from_ical, _ = get_usernames_from_ical_event(event)
users = []
if len(usernames_from_ical) != 0:
@@ -587,9 +615,9 @@ def get_icalendar_tz_or_utc(icalendar):
return pytz.timezone(converted_timezone)
def fetch_ical_file_or_get_error(ical_url):
cached_ical_file = None
ical_file_error = None
def fetch_ical_file_or_get_error(ical_url: str) -> typing.Tuple[str | None, str | None]:
cached_ical_file: str | None = None
ical_file_error: str | None = None
try:
new_ical_file = fetch_ical_file(ical_url)
Calendar.from_ical(new_ical_file)
@@ -602,13 +630,12 @@ def fetch_ical_file_or_get_error(ical_url):
return cached_ical_file, ical_file_error
def fetch_ical_file(ical_url):
def fetch_ical_file(ical_url: str) -> str:
# without user-agent header google calendar sometimes returns text/html instead of text/calendar
headers = {"User-Agent": "Grafana OnCall"}
r = requests.get(ical_url, headers=headers, timeout=10)
logger.info(f"fetch_ical_file: content-type={r.headers.get('Content-Type')}")
ical_file = r.text
return ical_file
return r.text
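`fetch_ical_file_or_get_error` pairs the fetch with a validity check and returns exactly one of `(payload, error)`. A sketch of that shape with the fetcher and validator injected, so it runs offline — the real code uses `requests.get` and `Calendar.from_ical`; the callables below stand in for them:

```python
import typing

def fetch_or_get_error(
    url: str,
    fetch: typing.Callable[[str], str],
    validate: typing.Callable[[str], None],
) -> typing.Tuple[typing.Optional[str], typing.Optional[str]]:
    """Exactly one of (payload, error) is non-None, as in the diff's function."""
    payload: typing.Optional[str] = None
    error: typing.Optional[str] = None
    try:
        body = fetch(url)
        validate(body)  # raises on malformed payloads (Calendar.from_ical in the real code)
        payload = body
    except Exception as e:
        error = str(e)
    return payload, error

def _reject(body: str) -> None:
    raise ValueError("invalid ics")

ok, err = fetch_or_get_error("https://example.com/cal.ics", lambda u: "BEGIN:VCALENDAR", lambda b: None)
bad, err2 = fetch_or_get_error("https://example.com/cal.ics", lambda u: "<html>", _reject)
```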
def create_base_icalendar(name: str) -> Calendar:
@@ -624,77 +651,56 @@ def create_base_icalendar(name: str) -> Calendar:
return cal
def get_events_from_calendars(ical_obj: Calendar, calendars: tuple) -> None:
for calendar in calendars:
if calendar:
for component in calendar.walk():
if component.name == "VEVENT":
def get_user_events_from_calendars(
ical_obj: Calendar, calendar: Calendar, user: User, name: typing.Optional[str] = None
) -> None:
if calendar:
for component in calendar.walk():
if component.name == "VEVENT":
event_user = get_usernames_from_ical_event(component)
event_user_value = event_user[0][0]
if event_user_value == user.username or event_user_value.lower() == user.email.lower():
if name:
component["SUMMARY"] = "{}: {}".format(name, component["SUMMARY"])
ical_obj.add_component(component)
def get_user_events_from_calendars(ical_obj: Calendar, calendars: tuple, user: User, name: str = None) -> None:
for calendar in calendars:
if calendar:
for component in calendar.walk():
if component.name == "VEVENT":
event_user = get_usernames_from_ical_event(component)
event_user_value = event_user[0][0]
if event_user_value == user.username or event_user_value.lower() == user.email.lower():
if name:
component["SUMMARY"] = "{}: {}".format(name, component["SUMMARY"])
ical_obj.add_component(component)
def _is_final_export_enabled(schedule: OnCallSchedule) -> bool:
DynamicSetting = apps.get_model("base", "DynamicSetting")
enabled_final_export = DynamicSetting.objects.get_or_create(
name="enabled_final_schedule_export",
defaults={
"json_value": {
"schedule_ids": [],
}
},
)[0]
return schedule.public_primary_key in enabled_final_export.json_value["schedule_ids"]
def _get_ical_data_final_schedule(schedule: OnCallSchedule) -> str:
def _get_ical_data_final_schedule(schedule: "OnCallSchedule") -> str | None:
ical_data = schedule.cached_ical_final_schedule
if ical_data is None:
schedule.refresh_ical_final_schedule()
ical_data = schedule.cached_ical_final_schedule
# typing is safe here. cached_ical_final_schedule is updated inside of refresh_ical_final_schedule
ical_data: str = schedule.cached_ical_final_schedule
return ical_data
def ical_export_from_schedule(schedule: OnCallSchedule) -> bytes:
def ical_export_from_schedule(schedule: "OnCallSchedule") -> bytes:
ical_data = _get_ical_data_final_schedule(schedule)
return ical_data.encode()
def user_ical_export(user: User, schedules: list[OnCallSchedule]) -> bytes:
def user_ical_export(user: "User", schedules: "OnCallScheduleQuerySet") -> bytes:
schedule_name = "On-Call Schedule for {0}".format(user.username)
ical_obj = create_base_icalendar(schedule_name)
for schedule in schedules:
name = schedule.name
ical_data = _get_ical_data_final_schedule(schedule)
calendars = [Calendar.from_ical(ical_data)]
get_user_events_from_calendars(ical_obj, calendars, user, name=name)
get_user_events_from_calendars(ical_obj, Calendar.from_ical(ical_data), user, name=name)
return ical_obj.to_ical()
DatetimeInterval = namedtuple("DatetimeInterval", ["start", "end"])
def list_of_gaps_in_schedule(schedule, start_date, end_date):
def list_of_gaps_in_schedule(
schedule: "OnCallSchedule", start_date: datetime.date, end_date: datetime.date
) -> DatetimeIntervals:
calendars = schedule.get_icalendars()
intervals = []
intervals: DatetimeIntervals = []
start_datetime = datetime.datetime.combine(start_date, datetime.time.min) + datetime.timedelta(milliseconds=1)
start_datetime = start_datetime.astimezone(pytz.UTC)
end_datetime = datetime.datetime.combine(end_date, datetime.time.max).astimezone(pytz.UTC)
for idx, calendar in enumerate(calendars):
for calendar in calendars:
if calendar is not None:
calendar_tz = get_icalendar_tz_or_utc(calendar)
events = ical_events.get_events_from_ical_between(
@@ -708,8 +714,8 @@ def list_of_gaps_in_schedule(schedule, start_date, end_date):
return detect_gaps(intervals, start_datetime, end_datetime)
def detect_gaps(intervals, start, end):
gaps = []
def detect_gaps(intervals: DatetimeIntervals, start: datetime.datetime, end: datetime.datetime) -> DatetimeIntervals:
gaps: DatetimeIntervals = []
intervals = sorted(intervals, key=lambda dt: dt.start)
if len(intervals) > 0:
base_interval = intervals[0]
@@ -725,7 +731,7 @@ def detect_gaps(intervals, start, end):
return gaps
def merge_if_overlaps(a: DatetimeInterval, b: DatetimeInterval):
def merge_if_overlaps(a: DatetimeInterval, b: DatetimeInterval) -> typing.Tuple[bool, DatetimeInterval]:
if a.end >= b.end:
return True, DatetimeInterval(a.start, a.end)
if b.start - a.end < datetime.timedelta(minutes=1):
@@ -734,13 +740,13 @@ def merge_if_overlaps(a: DatetimeInterval, b: DatetimeInterval):
return False, DatetimeInterval(b.start, b.end)
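`detect_gaps` and `merge_if_overlaps` together turn a list of possibly overlapping on-call intervals into the uncovered gaps; the visible lines pin down the merge semantics, including treating intervals closer than one minute as contiguous. A runnable reduction of that logic (simplified from the diff, not a verbatim copy — the loop body of `detect_gaps` is reconstructed from the merge contract):

```python
import datetime
from collections import namedtuple

DatetimeInterval = namedtuple("DatetimeInterval", ["start", "end"])

def merge_if_overlaps(a, b):
    # the three branches shown in the diff: b swallowed, near-contiguous, disjoint
    if a.end >= b.end:
        return True, DatetimeInterval(a.start, a.end)
    if b.start - a.end < datetime.timedelta(minutes=1):
        return True, DatetimeInterval(a.start, b.end)
    return False, DatetimeInterval(b.start, b.end)

def detect_gaps(intervals, start, end):
    """Uncovered sub-intervals of [start, end]."""
    gaps = []
    intervals = sorted(intervals, key=lambda dt: dt.start)
    if not intervals:
        return [DatetimeInterval(start, end)]
    base = intervals[0]
    if base.start > start:
        gaps.append(DatetimeInterval(start, base.start))
    for nxt in intervals[1:]:
        merged, interval = merge_if_overlaps(base, nxt)
        if merged:
            base = interval
        else:
            gaps.append(DatetimeInterval(base.end, nxt.start))
            base = nxt
    if base.end < end:
        gaps.append(DatetimeInterval(base.end, end))
    return gaps

d = datetime.datetime
covered = [DatetimeInterval(d(2023, 1, 1, 0), d(2023, 1, 1, 8)),
           DatetimeInterval(d(2023, 1, 1, 10), d(2023, 1, 1, 12))]
gaps = detect_gaps(covered, d(2023, 1, 1, 0), d(2023, 1, 2, 0))
```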
def start_end_with_respect_to_all_day(event, calendar_tz):
def start_end_with_respect_to_all_day(event: IcalEvent, calendar_tz):
start, _ = ical_date_to_datetime(event[ICAL_DATETIME_START].dt, calendar_tz, start=True)
end, _ = ical_date_to_datetime(event[ICAL_DATETIME_END].dt, calendar_tz, start=False)
return start, end
def event_start_end_all_day_with_respect_to_type(event, calendar_tz):
def event_start_end_all_day_with_respect_to_type(event: IcalEvent, calendar_tz):
all_day = False
if type(event[ICAL_DATETIME_START].dt) == datetime.date:
start, end = start_end_with_respect_to_all_day(event, calendar_tz)
@@ -750,7 +756,7 @@ def event_start_end_all_day_with_respect_to_type(event, calendar_tz):
return start, end, all_day
def convert_windows_timezone_to_iana(tz_name):
def convert_windows_timezone_to_iana(tz_name: str) -> str | None:
"""
Conversion info taken from https://raw.githubusercontent.com/unicode-org/cldr/main/common/supplemental/windowsZones.xml
Also see https://gist.github.com/mrled/8d29fde758cfc7dd0b52f3bbf2b8f06e

View file

@@ -67,10 +67,14 @@ class QualityReportOverloadedUser(typing.TypedDict):
score: int
QualityReportOverloadedUsers = typing.List[QualityReportOverloadedUser]
QualityReportComments = typing.List[QualityReportComment]
class QualityReport(typing.TypedDict):
total_score: int
comments: typing.List[QualityReportComment]
overloaded_users: typing.List[QualityReportOverloadedUser]
comments: QualityReportComments
overloaded_users: QualityReportOverloadedUsers
class ScheduleEventUser(typing.TypedDict):
@@ -89,9 +93,9 @@ class ScheduleEvent(typing.TypedDict):
end: datetime.datetime
users: typing.List[ScheduleEventUser]
missing_users: typing.List[str]
priority_level: typing.Union[int, None]
source: typing.Union[str, None]
calendar_type: typing.Union[int, None]
priority_level: typing.Optional[int]
source: typing.Optional[str]
calendar_type: typing.Optional[int]
is_empty: bool
is_gap: bool
is_override: bool
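The hunk swaps `typing.Union[X, None]` for the equivalent, more idiomatic `typing.Optional[X]`. The two spellings construct the same type object, so the change is purely cosmetic:

```python
import typing

# Optional[X] is defined as Union[X, None]; the resulting objects compare equal
assert typing.Optional[int] == typing.Union[int, None]
assert typing.Optional[str] == typing.Union[str, None]

class ScheduleEventSketch(typing.TypedDict):
    # field names borrowed from the diff; the annotation rewrite is a no-op at runtime
    priority_level: typing.Optional[int]
    source: typing.Optional[str]
```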
@@ -109,6 +113,7 @@ class ScheduleFinalShift(typing.TypedDict):
ScheduleEvents = typing.List[ScheduleEvent]
ScheduleEventIntervals = typing.List[typing.List[datetime.datetime]]
ScheduleFinalShifts = typing.List[ScheduleFinalShift]
DurationMap = typing.Dict[str, datetime.timedelta]
def generate_public_primary_key_for_oncall_schedule_channel():
@@ -217,14 +222,14 @@ class OnCallSchedule(PolymorphicModel):
has_empty_shifts = models.BooleanField(default=False)
empty_shifts_report_sent_at = models.DateField(null=True, default=None)
def get_icalendars(self):
def get_icalendars(self) -> typing.Tuple[typing.Optional[icalendar.Calendar], typing.Optional[icalendar.Calendar]]:
"""Returns list of calendars. Primary calendar should always be the first"""
calendar_primary = None
calendar_overrides = None
calendar_primary: typing.Optional[icalendar.Calendar] = None
calendar_overrides: typing.Optional[icalendar.Calendar] = None
# if self._ical_file_(primary|overrides) is None -> no cache, will trigger a refresh
# if self._ical_file_(primary|overrides) == "" -> cached value for an empty schedule
if self._ical_file_primary:
calendar_primary = icalendar.Calendar.from_ical(self._ical_file_primary)
calendar_primary: icalendar.Calendar = icalendar.Calendar.from_ical(self._ical_file_primary)
if self._ical_file_overrides:
calendar_overrides = icalendar.Calendar.from_ical(self._ical_file_overrides)
return calendar_primary, calendar_overrides
@@ -260,9 +265,11 @@ class OnCallSchedule(PolymorphicModel):
self._refresh_primary_ical_file()
self._refresh_overrides_ical_file()
@property
def _ical_file_primary(self):
raise NotImplementedError
@property
def _ical_file_overrides(self):
raise NotImplementedError
@@ -468,7 +475,7 @@ class OnCallSchedule(PolymorphicModel):
events = self.final_events(user_tz="UTC", starting_date=date, days=days)
# an event is “good” if it's not a gap and not empty
good_events = [event for event in events if not event["is_gap"] and not event["is_empty"]]
good_events: ScheduleEvents = [event for event in events if not event["is_gap"] and not event["is_empty"]]
if not good_events:
return {
"total_score": 0,
@@ -476,7 +483,7 @@ class OnCallSchedule(PolymorphicModel):
"overloaded_users": [],
}
def event_duration(ev: dict) -> datetime.timedelta:
def event_duration(ev: ScheduleEvent) -> datetime.timedelta:
return ev["end"] - ev["start"]
def timedelta_sum(deltas: typing.Iterable[datetime.timedelta]) -> datetime.timedelta:
@@ -485,9 +492,9 @@ class OnCallSchedule(PolymorphicModel):
def score_to_percent(value: float) -> int:
return round(value * 100)
def get_duration_map(evs: list[dict]) -> dict[str, datetime.timedelta]:
def get_duration_map(evs: ScheduleEvents) -> DurationMap:
"""Return a map of user PKs to total duration of events they are in."""
result = defaultdict(datetime.timedelta)
result: DurationMap = defaultdict(datetime.timedelta)
for ev in evs:
for user in ev["users"]:
user_pk = user["pk"]
@@ -495,7 +502,7 @@ class OnCallSchedule(PolymorphicModel):
return result
def get_balance_score_by_duration_map(dur_map: dict[str, datetime.timedelta]) -> float:
def get_balance_score_by_duration_map(dur_map: DurationMap) -> float:
"""
Return a score between 0 and 1, based on how balanced the durations are in the duration map.
The formula is taken from https://github.com/grafana/oncall/issues/118#issuecomment-1161787854.
@@ -503,7 +510,7 @@ class OnCallSchedule(PolymorphicModel):
if len(dur_map) <= 1:
return 1
result = 0
result = 0.0
for key_1, key_2 in itertools.combinations(dur_map, 2):
duration_1 = dur_map[key_1]
duration_2 = dur_map[key_2]
@@ -524,9 +531,10 @@ class OnCallSchedule(PolymorphicModel):
balance_score = score_to_percent(balance_score)
# calculate overloaded users
overloaded_users: QualityReportOverloadedUsers = []
if balance_score >= 95: # tolerate minor imbalance
balance_score = 100
overloaded_users = []
else:
average_duration = timedelta_sum(duration_map.values()) / len(duration_map)
overloaded_user_pks = [
@@ -540,7 +548,6 @@ class OnCallSchedule(PolymorphicModel):
"public_primary_key", "username"
)
}
overloaded_users = []
for user_pk in overloaded_user_pks:
score = score_to_percent(duration_map[user_pk] / average_duration) - 100
username = usernames.get(user_pk) or "unknown" # fallback to "unknown" if user is not found
@@ -550,7 +557,7 @@ class OnCallSchedule(PolymorphicModel):
overloaded_users.sort(key=lambda u: (-u["score"], u["username"]))
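The duration-map and overload arithmetic visible in these hunks is self-contained: total up each user's on-call time, then express how far above the average a user sits as a percentage. A sketch with plain timedeltas — the helper names mirror the diff, while the events and user pks are invented for the example:

```python
import datetime
from collections import defaultdict

def timedelta_sum(deltas):
    return sum(deltas, start=datetime.timedelta())

def get_duration_map(events):
    """Map user pk -> total duration of events they appear in (as in the diff)."""
    result = defaultdict(datetime.timedelta)
    for ev in events:
        for user in ev["users"]:
            result[user["pk"]] += ev["end"] - ev["start"]
    return result

def score_to_percent(value: float) -> int:
    return round(value * 100)

events = [
    {"start": datetime.datetime(2023, 1, 1), "end": datetime.datetime(2023, 1, 4), "users": [{"pk": "alice"}]},
    {"start": datetime.datetime(2023, 1, 4), "end": datetime.datetime(2023, 1, 5), "users": [{"pk": "bob"}]},
]
duration_map = get_duration_map(events)
average = timedelta_sum(duration_map.values()) / len(duration_map)
# as in the diff: an overloaded user's score is their percentage above the average
alice_score = score_to_percent(duration_map["alice"] / average) - 100
```

With alice on call 3 days and bob 1 day, the average is 2 days, so alice sits 50% above it.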
# generate comments regarding gaps
comments = []
comments: QualityReportComments = []
if good_event_score == 100:
comments.append({"type": QualityReportCommentType.INFO, "text": "Schedule has no gaps"})
else:
@@ -628,8 +635,8 @@ class OnCallSchedule(PolymorphicModel):
resolved: ScheduleEvents = []
pending: ScheduleEvents = events
current_interval_idx = 0 # current scheduled interval being checked
current_type = OnCallSchedule.TYPE_ICAL_OVERRIDES # current calendar type
current_priority = None # current priority level being resolved
current_type: typing.Optional[int] = OnCallSchedule.TYPE_ICAL_OVERRIDES # current calendar type
current_priority: typing.Optional[int] = None # current priority level being resolved
while pending:
ev = pending.pop(0)

View file

@@ -1,4 +1,4 @@
from django.utils import timezone
import datetime
SLACK_BOT_ID = "USLACKBOT"
SLACK_INVALID_AUTH_RESPONSE = "no_enough_permissions_to_retrieve"
@@ -6,7 +6,7 @@ PLACEHOLDER = "Placeholder"
SLACK_WRONG_TEAM_NAMES = [SLACK_INVALID_AUTH_RESPONSE, PLACEHOLDER]
SLACK_RATE_LIMIT_TIMEOUT = timezone.timedelta(minutes=5)
SLACK_RATE_LIMIT_TIMEOUT = datetime.timedelta(minutes=5)
SLACK_RATE_LIMIT_DELAY = 10
CACHE_UPDATE_INCIDENT_SLACK_MESSAGE_LIFETIME = 60 * 10

View file

@@ -4,6 +4,7 @@ import logging
from apps.alerts.models import AlertGroup
from apps.api.permissions import user_is_authorized
from apps.slack.models import SlackMessage, SlackTeamIdentity
from apps.user_management.models import User
logger = logging.getLogger(__name__)
@@ -13,6 +14,8 @@ class AlertGroupActionsMixin:
Mixin for alert group actions (ack, resolve, etc.). Intended to be used as a mixin along with ScenarioStep.
"""
user: User | None
REQUIRED_PERMISSIONS = []
def get_alert_group(self, slack_team_identity: SlackTeamIdentity, payload: dict) -> AlertGroup:

View file

@@ -2,10 +2,10 @@ import re
import emoji
from django.apps import apps
from slackviewer.formatter import SlackFormatter
from slackviewer.formatter import SlackFormatter as SlackFormatterBase
class SlackFormatter(SlackFormatter):
class SlackFormatter(SlackFormatterBase):
_LINK_PAT = re.compile(r"<(https|http|mailto):[A-Za-z0-9_\.\-\/\?\,\=\#\:\@\& ]+\|[^>]+>")
def __init__(self, organization):

View file

@@ -100,8 +100,8 @@ class TelegramClient:
message_id: Union[int, str],
text: str,
keyboard: Optional[InlineKeyboardMarkup] = None,
) -> Message:
message = self.api_client.edit_message_text(
) -> Union[Message, bool]:
return self.api_client.edit_message_text(
chat_id=chat_id,
message_id=message_id,
text=text,
@@ -109,7 +109,6 @@ class TelegramClient:
parse_mode=self.PARSE_MODE,
disable_web_page_preview=False,
)
return message
@staticmethod
def _get_message_and_keyboard(

View file

@@ -18,6 +18,12 @@ from common.insight_log import ChatOpsEvent, ChatOpsTypePlug, write_chatops_insi
from common.oncall_gateway import create_oncall_connector, delete_oncall_connector, delete_slack_connector
from common.public_primary_keys import generate_public_primary_key, increase_public_primary_key_length
if typing.TYPE_CHECKING:
from django.db.models.manager import RelatedManager
from apps.schedules.models import OnCallSchedule
from apps.user_management.models import User
logger = logging.getLogger(__name__)
@@ -36,7 +42,6 @@ def generate_public_primary_key_for_organization():
class ProvisionedPlugin(typing.TypedDict):
error: typing.Union[str, None]
stackId: int
orgId: int
onCallToken: str
@@ -64,6 +69,8 @@ class OrganizationManager(models.Manager):
class Organization(MaintainableObject):
users: "RelatedManager['User']"
oncall_schedules: "RelatedManager['OnCallSchedule']"
objects = OrganizationManager()
objects_with_deleted = models.Manager()

View file

@@ -244,7 +244,7 @@ class User(models.Model):
return self.username
@property
def timezone(self):
def timezone(self) -> typing.Optional[str]:
if self._timezone:
return self._timezone
@@ -313,7 +313,7 @@ class User(models.Model):
# TODO: check whether this signal can be moved to save method of the model
@receiver(post_save, sender=User)
def listen_for_user_model_save(sender, instance, created, *args, **kwargs):
def listen_for_user_model_save(sender: User, instance: User, created: bool, *args, **kwargs) -> None:
if created:
instance.notification_policies.create_default_policies_for_user(instance)
instance.notification_policies.create_important_policies_for_user(instance)

View file

@@ -1,3 +1,5 @@
import typing
from django.contrib import admin
from django.core.exceptions import FieldDoesNotExist
from django.db.models import ForeignKey, Model
@@ -7,7 +9,7 @@ class RawForeignKeysMixin:
model: Model
@property
def raw_id_fields(self) -> tuple[str]:
def raw_id_fields(self) -> typing.Tuple[str, ...]:
fields = self.model._meta.fields
fk_field_names = tuple(str(field.name) for field in fields if isinstance(field, ForeignKey))
@@ -18,13 +20,13 @@ class SearchableByIdsMixin:
model: Model
@property
def search_fields(self) -> tuple[str]:
def search_fields(self) -> typing.Tuple[str, ...]:
search_fields = (
"id",
"public_primary_key",
)
existing_fields = []
existing_fields: typing.List[str] = []
for field in search_fields:
try:
@@ -39,10 +41,10 @@ class SearchableByIdsMixin:
class SelectRelatedMixin:
model: Model
list_display: tuple[str]
list_display: typing.Tuple[str, ...]
@property
def list_select_related(self) -> tuple[str]:
def list_select_related(self) -> typing.Tuple[str, ...]:
fk_field_names = []
for field_name in self.list_display:

View file

@@ -1,5 +1,6 @@
import json
import math
import typing
from django.core.exceptions import ObjectDoesNotExist
from django.db.models import Q
@@ -7,6 +8,7 @@ from django.utils.functional import cached_property
from rest_framework import status
from rest_framework.decorators import action
from rest_framework.exceptions import NotFound, Throttled
from rest_framework.request import Request
from rest_framework.response import Response
from apps.alerts.incident_appearance.templaters import (
@@ -377,11 +379,25 @@ class PreviewTemplateMixin:
return destination, attr_name
class GrafanaContext(typing.TypedDict):
IsAnonymous: bool
class InstanceContext(typing.TypedDict):
stack_id: int
org_id: int
grafana_token: str
class GrafanaHeadersMixin:
@cached_property
def grafana_context(self) -> dict:
return json.loads(self.request.headers.get("X-Grafana-Context"))
request: Request
@cached_property
def instance_context(self) -> dict:
return json.loads(self.request.headers["X-Instance-Context"])
def grafana_context(self) -> GrafanaContext:
grafana_context: GrafanaContext = json.loads(self.request.headers["X-Grafana-Context"])
return grafana_context
@cached_property
def instance_context(self) -> InstanceContext:
instance_context: InstanceContext = json.loads(self.request.headers["X-Instance-Context"])
return instance_context
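The mixin change above illustrates a pattern: declare a `TypedDict` per JSON header and annotate the parsed intermediate, so mypy checks `json.loads`'s `Any` result against the declared shape and downstream code gets typed access. A standalone sketch of the same idea — the request object and header values are invented stand-ins for DRF's `Request`:

```python
import json
import typing
from functools import cached_property

class InstanceContext(typing.TypedDict):
    stack_id: int
    org_id: int
    grafana_token: str

class FakeRequest:
    # stand-in for rest_framework.request.Request with one JSON header
    headers = {"X-Instance-Context": json.dumps({"stack_id": 1, "org_id": 2, "grafana_token": "t"})}

class GrafanaHeadersMixin:
    request: FakeRequest

    @cached_property
    def instance_context(self) -> InstanceContext:
        # annotating the intermediate makes mypy validate the TypedDict shape
        instance_context: InstanceContext = json.loads(self.request.headers["X-Instance-Context"])
        return instance_context

class View(GrafanaHeadersMixin):
    def __init__(self):
        self.request = FakeRequest()

ctx = View().instance_context
```

`cached_property` parses the header at most once per view instance, as in the original mixin.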

View file

@@ -19,7 +19,12 @@ class EntityEvent(enum.Enum):
class InsightLoggable(ABC):
@property
@abstractmethod
def public_primary_key(self):
def id(self) -> int:
pass
@property
@abstractmethod
def public_primary_key(self) -> str:
pass
@property
@@ -65,7 +70,7 @@ def write_resource_insight_log(instance: InsightLoggable, author, event: EntityE
author = json.dumps(author.username)
entity_type = instance.insight_logs_type_verbal
try:
entity_id = instance.public_primary_key
entity_id: str | int = instance.public_primary_key
except AttributeError:
# Fallback for entities which have no public_primary_key, E.g. public api token, schedule export token
entity_id = instance.id

View file

@@ -1,52 +0,0 @@
from django.core.management import BaseCommand
from django.db.models.signals import post_save
from django.urls import reverse
from apps.alerts.models import Alert, AlertGroup, AlertReceiveChannel, listen_for_alertreceivechannel_model_save
from apps.alerts.tests.factories import AlertReceiveChannelFactory
from apps.user_management.tests.factories import OrganizationFactory
class Command(BaseCommand):
def add_arguments(self, parser):
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument(
"--bootstrap_integration",
action="store_true",
help="Create random formatted webhook integration",
)
group.add_argument(
"--return_results_for_test_id",
type=str,
help="Count alert groups with specific text in the title and their alerts",
)
def handle(self, *args, **options):
if options["bootstrap_integration"]:
organization = OrganizationFactory()
def _make_alert_receive_channel(organization, **kwargs):
if "integration" not in kwargs:
kwargs["integration"] = "formatted_webhook"
post_save.disconnect(listen_for_alertreceivechannel_model_save, sender=AlertReceiveChannel)
alert_receive_channel = AlertReceiveChannelFactory(organization=organization, **kwargs)
post_save.connect(listen_for_alertreceivechannel_model_save, sender=AlertReceiveChannel)
return alert_receive_channel
integration = _make_alert_receive_channel(
organization, integration=AlertReceiveChannel.INTEGRATION_FORMATTED_WEBHOOK
)
url = reverse(
"integrations:universal",
kwargs={
"integration_type": AlertReceiveChannel.INTEGRATION_FORMATTED_WEBHOOK,
"alert_channel_key": integration.token,
},
)
return url
elif test_id := options["return_results_for_test_id"]:
alert_groups_pks = list(AlertGroup.all_objects.filter(web_title_cache=test_id).values_list("id", flat=True))
alert_groups_count = len(alert_groups_pks)
alerts_count = Alert.objects.filter(group_id__in=alert_groups_pks).count()
return f"{alert_groups_count}, {alerts_count}"

View file

@@ -12,6 +12,7 @@ target-version = ["py39"]
force-exclude = "migrations"
[tool.mypy]
mypy_path = "$MYPY_CONFIG_FILE_DIR/type_stubs"
implicit_reexport = true
plugins = [
"mypy_django_plugin.main",
@@ -39,7 +40,7 @@ module = [
"fcm_django.*",
"firebase_admin.*",
"humanize.*",
"icalendar.*",
"ipware.*",
"markdown2.*",
"mirage.*",
"ordered_model.*",
@@ -50,6 +51,7 @@ module = [
"recurring_ical_events.*",
"rest_polymorphic.*",
"slackclient.*",
"slackviewer.*",
"social_core.*",
"social_django.*",
"twilio.*",

View file

@@ -1,4 +1,4 @@
celery-types==0.17.0
celery-types==0.18.0
django-filter-stubs==0.1.3
django-stubs[compatible-mypy]==4.2.1
djangorestframework-stubs[compatible-mypy]==3.14.1
@@ -10,3 +10,4 @@ pytest_factoryboy==2.5.1
types-beautifulsoup4==4.12.0.5
types-PyMySQL==1.0.19.7
types-python-dateutil==2.8.19.13
types-requests==2.31.0.1

View file

@@ -1,6 +1,7 @@
import base64
import json
import os
import typing
from random import randrange
from celery.schedules import crontab
@@ -88,8 +89,8 @@ DANGEROUS_WEBHOOKS_ENABLED = getenv_boolean("DANGEROUS_WEBHOOKS_ENABLED", defaul
WEBHOOK_RESPONSE_LIMIT = 50000
# Multiregion settings
ONCALL_GATEWAY_URL = os.environ.get("ONCALL_GATEWAY_URL")
ONCALL_GATEWAY_API_TOKEN = os.environ.get("ONCALL_GATEWAY_API_TOKEN")
ONCALL_GATEWAY_URL = os.environ.get("ONCALL_GATEWAY_URL", "")
ONCALL_GATEWAY_API_TOKEN = os.environ.get("ONCALL_GATEWAY_API_TOKEN", "")
ONCALL_BACKEND_REGION = os.environ.get("ONCALL_BACKEND_REGION")
# Prometheus exporter metrics endpoint auth
@@ -125,7 +126,9 @@ assert DATABASE_TYPE in {DatabaseTypes.MYSQL, DatabaseTypes.POSTGRESQL, Database
DATABASE_ENGINE = f"django.db.backends.{DATABASE_TYPE}"
DATABASE_CONFIGS = {
DatabaseConfig = typing.Dict[str, typing.Dict[str, typing.Any]]
DATABASE_CONFIGS: DatabaseConfig = {
DatabaseTypes.SQLITE3: {
"ENGINE": DATABASE_ENGINE,
"NAME": DATABASE_NAME or "/var/lib/oncall/oncall.db",
@@ -152,6 +155,7 @@ DATABASE_CONFIGS = {
},
}
READONLY_DATABASES: DatabaseConfig = {}
DATABASES = {
"default": DATABASE_CONFIGS[DATABASE_TYPE],
}
@@ -570,7 +574,7 @@ SOCIAL_AUTH_PIPELINE = (
"apps.social_auth.pipeline.delete_slack_auth_token",
)
SOCIAL_AUTH_FIELDS_STORED_IN_SESSION = []
SOCIAL_AUTH_FIELDS_STORED_IN_SESSION: typing.List[str] = []
SOCIAL_AUTH_REDIRECT_IS_HTTPS = getenv_boolean("SOCIAL_AUTH_REDIRECT_IS_HTTPS", default=True)
SOCIAL_AUTH_SLUGIFY_USERNAMES = True
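The settings hunks above replace `None`-able env lookups with explicit `""` defaults and annotate the nested database config dict. A minimal standalone sketch of why those two changes satisfy mypy (the values below are placeholders, not the real OnCall settings):

```python
import os
import typing

# With no default, a missing variable yields None (Optional[str]), which
# fails type checks against str-typed settings; "" keeps the type stable.
ONCALL_GATEWAY_URL = os.environ.get("ONCALL_GATEWAY_URL", "")
assert isinstance(ONCALL_GATEWAY_URL, str)  # always a str, never None

# Annotating the nested dict lets an empty READONLY_DATABASES literal
# type-check instead of being inferred as Dict[Any, Any].
DatabaseConfig = typing.Dict[str, typing.Dict[str, typing.Any]]
DATABASE_CONFIGS: DatabaseConfig = {
    "sqlite3": {"ENGINE": "django.db.backends.sqlite3", "NAME": "/tmp/example.db"},
}
READONLY_DATABASES: DatabaseConfig = {}
```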

View file

@@ -70,8 +70,12 @@ INTERNAL_IPS = [
"127.0.0.1",
]
# # the below two lines make it possible to use django-debug-toolbar inside of docker locally
# # https://knasmueller.net/fix-djangos-debug-toolbar-not-showing-inside-docker
# # https://stackoverflow.com/questions/10517765/django-debug-toolbar-not-showing-up
hostname, _, ips = socket.gethostbyname_ex(socket.gethostname())
INTERNAL_IPS += [".".join(ip.split(".")[:-1] + ["1"]) for ip in ips]
try:
# # the below two lines make it possible to use django-debug-toolbar inside of docker locally
# # https://knasmueller.net/fix-djangos-debug-toolbar-not-showing-inside-docker
# # https://stackoverflow.com/questions/10517765/django-debug-toolbar-not-showing-up
hostname, _, ips = socket.gethostbyname_ex(socket.gethostname())
INTERNAL_IPS += [".".join(ip.split(".")[:-1] + ["1"]) for ip in ips]
except OSError:
# usually raised if this is being run outside of a docker container context
INTERNAL_IPS = []
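The hunk above wraps the docker-gateway-IP derivation in a try/except so it no longer crashes outside a container. The same logic, extracted into a self-contained sketch (function name is illustrative, not from the codebase):

```python
import socket

def docker_gateway_ips() -> list:
    """Derive likely docker bridge gateway addresses (x.y.z.1) from this
    host's own IPs; return [] when name resolution fails, as it typically
    does outside a docker container context."""
    try:
        _hostname, _aliases, ips = socket.gethostbyname_ex(socket.gethostname())
        # replace the last octet of each address with "1"
        return [".".join(ip.split(".")[:-1] + ["1"]) for ip in ips]
    except OSError:  # socket.gaierror is a subclass of OSError
        return []

# e.g. a container address 172.18.0.4 maps to the gateway 172.18.0.1
assert ".".join("172.18.0.4".split(".")[:-1] + ["1"]) == "172.18.0.1"
```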

View file

@@ -0,0 +1,34 @@
from icalendar.cal import Alarm as Alarm
from icalendar.cal import Calendar as Calendar
from icalendar.cal import ComponentFactory as ComponentFactory
from icalendar.cal import Event as Event
from icalendar.cal import FreeBusy as FreeBusy
from icalendar.cal import Journal as Journal
from icalendar.cal import Timezone as Timezone
from icalendar.cal import TimezoneDaylight as TimezoneDaylight
from icalendar.cal import TimezoneStandard as TimezoneStandard
from icalendar.cal import Todo as Todo
from icalendar.parser import Parameters as Parameters
from icalendar.parser import q_join as q_join
from icalendar.parser import q_split as q_split
from icalendar.prop import FixedOffset as FixedOffset
from icalendar.prop import LocalTimezone as LocalTimezone
from icalendar.prop import TypesFactory as TypesFactory
from icalendar.prop import vBinary as vBinary
from icalendar.prop import vBoolean as vBoolean
from icalendar.prop import vCalAddress as vCalAddress
from icalendar.prop import vDate as vDate
from icalendar.prop import vDatetime as vDatetime
from icalendar.prop import vDDDTypes as vDDDTypes
from icalendar.prop import vDuration as vDuration
from icalendar.prop import vFloat as vFloat
from icalendar.prop import vFrequency as vFrequency
from icalendar.prop import vGeo as vGeo
from icalendar.prop import vInt as vInt
from icalendar.prop import vPeriod as vPeriod
from icalendar.prop import vRecur as vRecur
from icalendar.prop import vText as vText
from icalendar.prop import vTime as vTime
from icalendar.prop import vUri as vUri
from icalendar.prop import vUTCOffset as vUTCOffset
from icalendar.prop import vWeekday as vWeekday

View file

@@ -0,0 +1,109 @@
from _typeshed import Incomplete
from icalendar.caselessdict import CaselessDict as CaselessDict
from icalendar.compat import unicode_type as unicode_type
from icalendar.parser import Contentline as Contentline
from icalendar.parser import Contentlines as Contentlines
from icalendar.parser import Parameters as Parameters
from icalendar.parser import q_join as q_join
from icalendar.parser import q_split as q_split
from icalendar.parser_tools import DEFAULT_ENCODING as DEFAULT_ENCODING
from icalendar.prop import TypesFactory as TypesFactory
from icalendar.prop import vDDDLists as vDDDLists
from icalendar.prop import vText as vText
class ComponentFactory(CaselessDict):
def __init__(self, *args, **kwargs) -> None: ...
INLINE: Incomplete
class Component(CaselessDict):
name: Incomplete
required: Incomplete
singletons: Incomplete
multiple: Incomplete
exclusive: Incomplete
inclusive: Incomplete
ignore_exceptions: bool
subcomponents: Incomplete
errors: Incomplete
def __init__(self, *args, **kwargs) -> None: ...
def __bool__(self) -> bool: ...
__nonzero__ = __bool__
def is_empty(self): ...
@property
def is_broken(self): ...
def add(self, name, value, parameters: Incomplete | None = ..., encode: int = ...) -> None: ...
def decoded(self, name, default=...): ...
def get_inline(self, name, decode: int = ...): ...
def set_inline(self, name, values, encode: int = ...) -> None: ...
def add_component(self, component) -> None: ...
def walk(self, name: Incomplete | None = ...): ...
def property_items(self, recursive: bool = ..., sorted: bool = ...): ...
@classmethod
def from_ical(cls, st, multiple: bool = ...): ...
def content_line(self, name, value, sorted: bool = ...): ...
def content_lines(self, sorted: bool = ...): ...
def to_ical(self, sorted: bool = ...): ...
class Event(Component):
name: str
canonical_order: Incomplete
required: Incomplete
singletons: Incomplete
exclusive: Incomplete
multiple: Incomplete
ignore_exceptions: bool
class Todo(Component):
name: str
required: Incomplete
singletons: Incomplete
exclusive: Incomplete
multiple: Incomplete
class Journal(Component):
name: str
required: Incomplete
singletons: Incomplete
multiple: Incomplete
class FreeBusy(Component):
name: str
required: Incomplete
singletons: Incomplete
multiple: Incomplete
class Timezone(Component):
name: str
canonical_order: Incomplete
required: Incomplete
singletons: Incomplete
def to_tz(self): ...
class TimezoneStandard(Component):
name: str
required: Incomplete
singletons: Incomplete
multiple: Incomplete
class TimezoneDaylight(Component):
name: str
required: Incomplete
singletons: Incomplete
multiple: Incomplete
class Alarm(Component):
name: str
required: Incomplete
singletons: Incomplete
inclusive: Incomplete
multiple: Incomplete
class Calendar(Component):
name: str
canonical_order: Incomplete
required: Incomplete
singletons: Incomplete
types_factory: Incomplete
component_factory: Incomplete

View file

@@ -0,0 +1,26 @@
from collections import OrderedDict
from _typeshed import Incomplete
from icalendar.compat import iteritems as iteritems
from icalendar.parser_tools import to_unicode as to_unicode
def canonsort_keys(keys, canonical_order: Incomplete | None = ...): ...
def canonsort_items(dict1, canonical_order: Incomplete | None = ...): ...
class CaselessDict(OrderedDict):
def __init__(self, *args, **kwargs) -> None: ...
def __getitem__(self, key): ...
def __setitem__(self, key, value) -> None: ...
def __delitem__(self, key) -> None: ...
def __contains__(self, key) -> bool: ...
def get(self, key, default: Incomplete | None = ...): ...
def setdefault(self, key, value: Incomplete | None = ...): ...
def pop(self, key, default: Incomplete | None = ...): ...
def popitem(self): ...
def has_key(self, key): ...
def update(self, *args, **kwargs) -> None: ...
def copy(self): ...
def __eq__(self, other): ...
canonical_order: Incomplete
def sorted_keys(self): ...
def sorted_items(self): ...
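The stub above only declares `CaselessDict`'s surface; a minimal stdlib-only sketch of the case-insensitive behavior those signatures imply (this is an illustration, not icalendar's actual implementation, which also folds keys in `get`, `update`, etc.):

```python
from collections import OrderedDict

class CaselessDict(OrderedDict):
    """Sketch: keys compare case-insensitively by upper-casing on access,
    mirroring what the icalendar stub above suggests."""

    def __setitem__(self, key, value):
        super().__setitem__(key.upper(), value)

    def __getitem__(self, key):
        return super().__getitem__(key.upper())

    def __contains__(self, key):
        return super().__contains__(key.upper())

d = CaselessDict()
d["Dtstart"] = "20230628"
assert d["DTSTART"] == "20230628" and "dtstart" in d
```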

View file

@@ -0,0 +1,4 @@
from . import Calendar as Calendar
def view(input_handle, output_handle) -> None: ...
def main() -> None: ...

View file

@@ -0,0 +1,5 @@
from _typeshed import Incomplete
unicode_type = str
bytes_type = bytes
iteritems: Incomplete

View file

@@ -0,0 +1,54 @@
from _typeshed import Incomplete
from icalendar import compat as compat
from icalendar.caselessdict import CaselessDict as CaselessDict
from icalendar.parser_tools import DEFAULT_ENCODING as DEFAULT_ENCODING
from icalendar.parser_tools import SEQUENCE_TYPES as SEQUENCE_TYPES
from icalendar.parser_tools import to_unicode as to_unicode
from icalendar.prop import vText as vText
def escape_char(text): ...
def unescape_char(text): ...
def tzid_from_dt(dt): ...
def foldline(line, limit: int = ..., fold_sep: str = ...): ...
def param_value(value): ...
NAME: Incomplete
UNSAFE_CHAR: Incomplete
QUNSAFE_CHAR: Incomplete
FOLD: Incomplete
uFOLD: Incomplete
NEWLINE: Incomplete
def validate_token(name) -> None: ...
def validate_param_value(value, quoted: bool = ...) -> None: ...
QUOTABLE: Incomplete
def dquote(val): ...
def q_split(st, sep: str = ..., maxsplit: int = ...): ...
def q_join(lst, sep: str = ...): ...
class Parameters(CaselessDict):
def params(self): ...
def to_ical(self, sorted: bool = ...): ...
@classmethod
def from_ical(cls, st, strict: bool = ...): ...
def escape_string(val): ...
def unescape_string(val): ...
def unescape_list_or_string(val): ...
class Contentline(compat.unicode_type):
strict: Incomplete
def __new__(cls, value, strict: bool = ..., encoding=...): ...
@classmethod
def from_parts(cls, name, params, values, sorted: bool = ...): ...
def parts(self): ...
@classmethod
def from_ical(cls, ical, strict: bool = ...): ...
def to_ical(self): ...
class Contentlines(list):
def to_ical(self): ...
@classmethod
def from_ical(cls, st): ...

View file

@@ -0,0 +1,8 @@
from _typeshed import Incomplete
from icalendar import compat as compat
SEQUENCE_TYPES: Incomplete
DEFAULT_ENCODING: str
def to_unicode(value, encoding: str = ...): ...
def data_encode(data, encoding=...): ...

View file

@@ -0,0 +1,219 @@
from datetime import tzinfo
from _typeshed import Incomplete
from icalendar import compat as compat
from icalendar.caselessdict import CaselessDict as CaselessDict
from icalendar.parser import Parameters as Parameters
from icalendar.parser import escape_char as escape_char
from icalendar.parser import tzid_from_dt as tzid_from_dt
from icalendar.parser import unescape_char as unescape_char
from icalendar.parser_tools import DEFAULT_ENCODING as DEFAULT_ENCODING
from icalendar.parser_tools import SEQUENCE_TYPES as SEQUENCE_TYPES
from icalendar.parser_tools import to_unicode as to_unicode
from icalendar.windows_to_olson import WINDOWS_TO_OLSON as WINDOWS_TO_OLSON
DATE_PART: str
TIME_PART: str
DATETIME_PART: Incomplete
WEEKS_PART: str
DURATION_REGEX: Incomplete
WEEKDAY_RULE: Incomplete
ZERO: Incomplete
HOUR: Incomplete
STDOFFSET: Incomplete
DSTOFFSET: Incomplete
DSTOFFSET = STDOFFSET
DSTDIFF: Incomplete
class FixedOffset(tzinfo):
def __init__(self, offset, name) -> None: ...
def utcoffset(self, dt): ...
def tzname(self, dt): ...
def dst(self, dt): ...
class LocalTimezone(tzinfo):
def utcoffset(self, dt): ...
def dst(self, dt): ...
def tzname(self, dt): ...
class vBinary:
obj: Incomplete
params: Incomplete
def __init__(self, obj) -> None: ...
def to_ical(self): ...
@staticmethod
def from_ical(ical): ...
class vBoolean(int):
BOOL_MAP: Incomplete
params: Incomplete
def __new__(cls, *args, **kwargs): ...
def to_ical(self): ...
@classmethod
def from_ical(cls, ical): ...
class vCalAddress(compat.unicode_type):
params: Incomplete
def __new__(cls, value, encoding=...): ...
def to_ical(self): ...
@classmethod
def from_ical(cls, ical): ...
class vFloat(float):
params: Incomplete
def __new__(cls, *args, **kwargs): ...
def to_ical(self): ...
@classmethod
def from_ical(cls, ical): ...
class vInt(int):
params: Incomplete
def __new__(cls, *args, **kwargs): ...
def to_ical(self): ...
@classmethod
def from_ical(cls, ical): ...
class vDDDLists:
params: Incomplete
dts: Incomplete
def __init__(self, dt_list) -> None: ...
def to_ical(self): ...
@staticmethod
def from_ical(ical, timezone: Incomplete | None = ...): ...
class vCategory:
cats: Incomplete
def __init__(self, c_list) -> None: ...
def to_ical(self): ...
@staticmethod
def from_ical(ical, timezone: Incomplete | None = ...): ...
class vDDDTypes:
params: Incomplete
dt: Incomplete
def __init__(self, dt) -> None: ...
def to_ical(self): ...
@classmethod
def from_ical(cls, ical, timezone: Incomplete | None = ...): ...
class vDate:
dt: Incomplete
params: Incomplete
def __init__(self, dt) -> None: ...
def to_ical(self): ...
@staticmethod
def from_ical(ical): ...
class vDatetime:
dt: Incomplete
params: Incomplete
def __init__(self, dt) -> None: ...
def to_ical(self): ...
@staticmethod
def from_ical(ical, timezone: Incomplete | None = ...): ...
class vDuration:
td: Incomplete
params: Incomplete
def __init__(self, td) -> None: ...
def to_ical(self): ...
@staticmethod
def from_ical(ical): ...
class vPeriod:
params: Incomplete
start: Incomplete
end: Incomplete
by_duration: Incomplete
duration: Incomplete
def __init__(self, per) -> None: ...
def __cmp__(self, other): ...
def overlaps(self, other): ...
def to_ical(self): ...
@staticmethod
def from_ical(ical): ...
class vWeekday(compat.unicode_type):
week_days: Incomplete
relative: Incomplete
params: Incomplete
def __new__(cls, value, encoding=...): ...
def to_ical(self): ...
@classmethod
def from_ical(cls, ical): ...
class vFrequency(compat.unicode_type):
frequencies: Incomplete
params: Incomplete
def __new__(cls, value, encoding=...): ...
def to_ical(self): ...
@classmethod
def from_ical(cls, ical): ...
class vRecur(CaselessDict):
frequencies: Incomplete
canonical_order: Incomplete
types: Incomplete
params: Incomplete
def __init__(self, *args, **kwargs) -> None: ...
def to_ical(self): ...
@classmethod
def parse_type(cls, key, values): ...
@classmethod
def from_ical(cls, ical): ...
class vText(compat.unicode_type):
encoding: Incomplete
params: Incomplete
def __new__(cls, value, encoding=...): ...
def to_ical(self): ...
@classmethod
def from_ical(cls, ical): ...
class vTime:
dt: Incomplete
params: Incomplete
def __init__(self, *args) -> None: ...
def to_ical(self): ...
@staticmethod
def from_ical(ical): ...
class vUri(compat.unicode_type):
params: Incomplete
def __new__(cls, value, encoding=...): ...
def to_ical(self): ...
@classmethod
def from_ical(cls, ical): ...
class vGeo:
latitude: Incomplete
longitude: Incomplete
params: Incomplete
def __init__(self, geo) -> None: ...
def to_ical(self): ...
@staticmethod
def from_ical(ical): ...
class vUTCOffset:
ignore_exceptions: bool
td: Incomplete
params: Incomplete
def __init__(self, td) -> None: ...
def to_ical(self): ...
@classmethod
def from_ical(cls, ical): ...
class vInline(compat.unicode_type):
params: Incomplete
def __new__(cls, value, encoding=...): ...
def to_ical(self): ...
@classmethod
def from_ical(cls, ical): ...
class TypesFactory(CaselessDict):
all_types: Incomplete
def __init__(self, *args, **kwargs) -> None: ...
types_map: Incomplete
def for_property(self, name): ...
def to_ical(self, name, value): ...
def from_ical(self, name, value): ...
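The `FixedOffset` and `vUTCOffset` entries in the stub above model fixed UTC offsets. The stdlib already provides an equivalent in `datetime.timezone`; a quick sketch (assuming the semantics match, which the stub alone cannot confirm):

```python
from datetime import datetime, timedelta, timezone

# A fixed +05:30 offset, analogous to what FixedOffset(offset, name) models.
tz = timezone(timedelta(hours=5, minutes=30), "+0530")
dt = datetime(2023, 6, 28, 9, 50, tzinfo=tz)

assert dt.utcoffset() == timedelta(hours=5, minutes=30)
assert dt.tzname() == "+0530"
assert dt.dst() is None  # fixed offsets carry no DST information
```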

View file

@@ -0,0 +1,9 @@
from _typeshed import Incomplete
from icalendar.parser_tools import to_unicode as to_unicode
from icalendar.prop import vDatetime as vDatetime
from icalendar.prop import vText as vText
class UIDGenerator:
chars: Incomplete
def rnd_string(self, length: int = ...): ...
def uid(self, host_name: str = ..., unique: str = ...): ...
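The `UIDGenerator` stub above exposes `rnd_string` and `uid(host_name, unique)`. A hedged, stdlib-only sketch of what such a generator plausibly produces — an iCalendar-style `random@host` identifier; the function name and exact format here are assumptions, not icalendar's real output:

```python
import random
import string

def make_uid(host_name: str = "example.com", length: int = 16) -> str:
    """Illustrative only: build an iCalendar-style UID 'random@host',
    roughly the shape the UIDGenerator stub above suggests."""
    alphabet = string.ascii_letters + string.digits
    rnd = "".join(random.choice(alphabet) for _ in range(length))
    return f"{rnd}@{host_name}"

uid = make_uid()
assert uid.endswith("@example.com") and len(uid.split("@")[0]) == 16
```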

View file

@@ -0,0 +1,3 @@
from _typeshed import Incomplete
WINDOWS_TO_OLSON: Incomplete

View file

@@ -33,8 +33,8 @@ export const commonTemplateForEdit: { [id: string]: TemplateForEdit } = {
type: 'html',
},
slack_title_template: {
name: 'slack_title_template',
displayName: TemplateOptions.SlackTitle.key,
displayName: 'Slack title',
name: TemplateOptions.SlackTitle.key,
description: '',
additionalData: {
chatOpsName: 'slack',
@@ -45,13 +45,13 @@ export const commonTemplateForEdit: { [id: string]: TemplateForEdit } = {
},
sms_title_template: {
name: TemplateOptions.SMS.key,
displayName: 'Sms title',
displayName: 'SMS title',
description: '',
type: 'plain',
},
phone_call_title_template: {
name: TemplateOptions.Phone.key,
displayName: 'Phone call title',
displayName: 'Phone Call title',
description: '',
type: 'plain',
},

View file

@@ -112,11 +112,11 @@ export const ScheduleQualityDetails: FC<ScheduleQualityDetailsProps> = ({ qualit
<Text type="primary" className={cx('text')}>
The next 52 weeks (~1 year) are taken into account when generating the quality report. Refer to the{' '}
<a
href={'https://grafana.com/docs/oncall/latest/calendar-schedules/web-schedule/#schedule-quality-report'}
href={'https://grafana.com/docs/oncall/latest/on-call-schedules/web-schedule/#schedule-quality-report'}
target="_blank"
rel="noreferrer"
>
documentation
<Text type="link">documentation</Text>
</a>{' '}
for more details.
</Text>

View file

@@ -19,15 +19,14 @@ export const commonTemplatesToRender: TemplateBlock[] = [
{
name: 'grouping_id_template',
label: 'Grouping',
labelTooltip:
'The Grouping Template is applied to every incoming alert payload after the Routing Template. It can be based on time, or alert content, or both. If the resulting grouping key matches an existing non-resolved alert group, the alert will be grouped accordingly. Otherwise, a new alert group will be created',
labelTooltip: 'Alerts with the same Grouping Id will be grouped together. See docs for more information',
height: MONACO_INPUT_HEIGHT_SMALL,
},
{
name: 'resolve_condition_template',
label: 'Autoresolution',
labelTooltip:
'If Autoresolution Template is True, the alert will resolve its group as "resolved by source". If the group is already resolved, the alert will be added to that group',
'If Autoresolution Template is True, the alert will resolve its group as "resolved by source", See docs for more information',
height: MONACO_INPUT_HEIGHT_SMALL,
},
],
@@ -68,7 +67,7 @@ export const commonTemplatesToRender: TemplateBlock[] = [
],
},
{
name: null,
name: 'Phone',
contents: [
{
name: 'phone_call_title_template',

View file

@@ -283,7 +283,7 @@ describe('PluginConfigPage', () => {
const metaJsonDataOnCallApiUrl = 'onCallApiUrlFromMetaJsonData';
process.env.ONCALL_API_URL = processEnvOnCallApiUrl;
window.location.reload = jest.fn();
PluginState.checkIfPluginIsConnected = jest.fn().mockResolvedValueOnce(null);
mockSyncDataWithOnCall(License.OSS);
@@ -302,8 +302,6 @@
// click the confirm button within the modal, which actually triggers the callback
await userEvent.click(screen.getByText('Remove'));
await screen.findByTestId(successful ? PLUGIN_CONFIGURATION_FORM_DATA_ID : STATUS_MESSAGE_BLOCK_DATA_ID);
// assertions
expect(PluginState.checkIfPluginIsConnected).toHaveBeenCalledTimes(1);
expect(PluginState.checkIfPluginIsConnected).toHaveBeenCalledWith(metaJsonDataOnCallApiUrl);

View file

@@ -1,12 +1,11 @@
import React, { FC, useCallback, useEffect, useState } from 'react';
import { Button, Label, Legend, LoadingPlaceholder } from '@grafana/ui';
import { Button, HorizontalGroup, Label, Legend, LoadingPlaceholder } from '@grafana/ui';
import { useLocation } from 'react-router-dom';
import { OnCallPluginConfigPageProps } from 'types';
import logo from 'img/logo.svg';
import PluginState, { PluginStatusResponseBase } from 'state/plugin';
import { GRAFANA_LICENSE_OSS } from 'utils/consts';
import { FALLBACK_LICENSE, GRAFANA_LICENSE_OSS } from 'utils/consts';
import ConfigurationForm from './parts/ConfigurationForm';
import RemoveCurrentConfigurationButton from './parts/RemoveCurrentConfigurationButton';
@@ -75,13 +74,13 @@ const PluginConfigPage: FC<OnCallPluginConfigPageProps> = ({
const pluginMetaOnCallApiUrl = jsonData?.onCallApiUrl;
const processEnvOnCallApiUrl = process.env.ONCALL_API_URL; // don't destructure this, will break how webpack supplies this
const onCallApiUrl = pluginMetaOnCallApiUrl || processEnvOnCallApiUrl;
const licenseType = pluginIsConnected?.license;
const licenseType = pluginIsConnected?.license || FALLBACK_LICENSE;
const resetQueryParams = useCallback(() => removePluginConfiguredQueryParams(pluginIsEnabled), [pluginIsEnabled]);
const triggerDataSyncWithOnCall = useCallback(async () => {
resetMessages();
setSyncingPlugin(true);
setSyncError(null);
const syncDataResponse = await PluginState.syncDataWithOnCall(onCallApiUrl);
@@ -144,35 +143,25 @@ const PluginConfigPage: FC<OnCallPluginConfigPageProps> = ({
}
}, [pluginMetaOnCallApiUrl, processEnvOnCallApiUrl, onCallApiUrl, pluginConfiguredRedirect]);
const resetState = useCallback(() => {
const resetMessages = useCallback(() => {
setPluginResetError(null);
setPluginConnectionCheckError(null);
setPluginIsConnected(null);
setSyncError(null);
}, []);
const resetState = useCallback(() => {
resetMessages();
resetQueryParams();
}, [resetQueryParams]);
/**
* NOTE: there is a possible edge case when resetting the plugin, that would lead to an error message being shown
* (which could be fixed by just reloading the page)
* This would happen if the user removes the plugin configuration, leaves the page, then comes back to the plugin
* configuration.
*
* This is because the props being passed into this component wouldn't reflect the actual plugin
* provisioning state. The props would still have onCallApiUrl set in the plugin jsonData, so when we make the API
* call to check the plugin state w/ OnCall API the plugin-proxy would return a 502 Bad Gateway because the actual
* provisioned plugin doesn't know about the onCallApiUrl.
*
* This could be fixed by instead of passing in the plugin provisioning information as props always fetching it
* when this component renders (via a useEffect). We probably don't need to worry about this because it should happen
* very rarely, if ever
*/
const triggerPluginReset = useCallback(async () => {
setResettingPlugin(true);
resetState();
try {
await PluginState.resetPlugin();
window.location.reload();
} catch (e) {
// this should rarely, if ever happen, but we should handle the case nevertheless
setPluginResetError('There was an error resetting your plugin, try again.');
@@ -186,6 +175,15 @@ const PluginConfigPage: FC<OnCallPluginConfigPageProps> = ({
[resettingPlugin, triggerPluginReset]
);
const ReconfigurePluginButtons = () => (
<HorizontalGroup>
<Button variant="primary" onClick={triggerDataSyncWithOnCall} size="md">
Retry Sync
</Button>
{licenseType === GRAFANA_LICENSE_OSS ? <RemoveConfigButton /> : null}
</HorizontalGroup>
);
let content: React.ReactNode;
if (checkingIfPluginIsConnected) {
@@ -196,16 +194,14 @@ const PluginConfigPage: FC<OnCallPluginConfigPageProps> = ({
content = (
<>
<StatusMessageBlock text={pluginConnectionCheckError || pluginResetError} />
<RemoveConfigButton />
<ReconfigurePluginButtons />
</>
);
} else if (syncError) {
content = (
<>
<StatusMessageBlock text={syncError} />
<Button variant="primary" onClick={triggerDataSyncWithOnCall} size="md">
Retry Sync
</Button>
<ReconfigurePluginButtons />
</>
);
} else if (!pluginIsConnected) {
@@ -228,8 +224,8 @@ const PluginConfigPage: FC<OnCallPluginConfigPageProps> = ({
{pluginIsConnected ? (
<>
<p>
Plugin is connected! Continue to Grafana OnCall by clicking the{' '}
<img alt="Grafana OnCall Logo" src={logo} width={18} /> icon over there 👈
Plugin is connected! Continue to Grafana OnCall by clicking OnCall under Alerts & IRM in the navigation over
there 👈
</p>
<StatusMessageBlock
text={`Connected to OnCall (${pluginIsConnected.version}, ${pluginIsConnected.license})`}

View file

@@ -19,16 +19,39 @@ exports[`PluginConfigPage If onCallApiUrl is not set in the plugin's meta jsonDa
ohhh nooo an error msg from self hosted install plugin
</span>
</pre>
<button
class="css-1ed0qk5-button"
type="button"
<div
class="css-ve64a7-horizontal-group"
style="width: 100%; height: 100%;"
>
<span
class="css-1mhnkuh"
<div
class="css-cvef6c-layoutChildrenWrapper"
>
Remove current configuration
</span>
</button>
<button
class="css-z53gi5-button"
type="button"
>
<span
class="css-1mhnkuh"
>
Retry Sync
</span>
</button>
</div>
<div
class="css-cvef6c-layoutChildrenWrapper"
>
<button
class="css-1ed0qk5-button"
type="button"
>
<span
class="css-1mhnkuh"
>
Remove current configuration
</span>
</button>
</div>
</div>
</div>
`;
@@ -152,16 +175,39 @@ exports[`PluginConfigPage If onCallApiUrl is set, and checkIfPluginIsConnected r
ohhh nooo a plugin connection error
</span>
</pre>
<button
class="css-1ed0qk5-button"
type="button"
<div
class="css-ve64a7-horizontal-group"
style="width: 100%; height: 100%;"
>
<span
class="css-1mhnkuh"
<div
class="css-cvef6c-layoutChildrenWrapper"
>
Remove current configuration
</span>
</button>
<button
class="css-z53gi5-button"
type="button"
>
<span
class="css-1mhnkuh"
>
Retry Sync
</span>
</button>
</div>
<div
class="css-cvef6c-layoutChildrenWrapper"
>
<button
class="css-1ed0qk5-button"
type="button"
>
<span
class="css-1mhnkuh"
>
Remove current configuration
</span>
</button>
</div>
</div>
</div>
`;
@@ -173,14 +219,7 @@ exports[`PluginConfigPage It doesn't make any network calls if the plugin config
Configure Grafana OnCall
</legend>
<p>
Plugin is connected! Continue to Grafana OnCall by clicking the
<img
alt="Grafana OnCall Logo"
src="[object Object]"
width="18"
/>
icon over there 👈
Plugin is connected! Continue to Grafana OnCall by clicking OnCall under Alerts & IRM in the navigation over there 👈
</p>
<pre
data-testid="status-message-block"
@@ -212,14 +251,7 @@ exports[`PluginConfigPage OnCallApiUrl is set, and syncDataWithOnCall does not r
Configure Grafana OnCall
</legend>
<p>
Plugin is connected! Continue to Grafana OnCall by clicking the
<img
alt="Grafana OnCall Logo"
src="[object Object]"
width="18"
/>
icon over there 👈
Plugin is connected! Continue to Grafana OnCall by clicking OnCall under Alerts & IRM in the navigation over there 👈
</p>
<pre
data-testid="status-message-block"
@@ -251,14 +283,7 @@ exports[`PluginConfigPage OnCallApiUrl is set, and syncDataWithOnCall does not r
Configure Grafana OnCall
</legend>
<p>
Plugin is connected! Continue to Grafana OnCall by clicking the
<img
alt="Grafana OnCall Logo"
src="[object Object]"
width="18"
/>
icon over there 👈
Plugin is connected! Continue to Grafana OnCall by clicking OnCall under Alerts & IRM in the navigation over there 👈
</p>
<pre
data-testid="status-message-block"
@@ -302,16 +327,39 @@ exports[`PluginConfigPage OnCallApiUrl is set, and syncDataWithOnCall returns an
ohhh noooo a sync issue
</span>
</pre>
<button
class="css-z53gi5-button"
type="button"
<div
class="css-ve64a7-horizontal-group"
style="width: 100%; height: 100%;"
>
<span
class="css-1mhnkuh"
<div
class="css-cvef6c-layoutChildrenWrapper"
>
Retry Sync
</span>
</button>
<button
class="css-z53gi5-button"
type="button"
>
<span
class="css-1mhnkuh"
>
Retry Sync
</span>
</button>
</div>
<div
class="css-cvef6c-layoutChildrenWrapper"
>
<button
class="css-1ed0qk5-button"
type="button"
>
<span
class="css-1mhnkuh"
>
Remove current configuration
</span>
</button>
</div>
</div>
</div>
`;
@@ -334,16 +382,39 @@ exports[`PluginConfigPage Plugin reset: successful - false 1`] = `
There was an error resetting your plugin, try again.
</span>
</pre>
<button
class="css-1ed0qk5-button"
type="button"
<div
class="css-ve64a7-horizontal-group"
style="width: 100%; height: 100%;"
>
<span
class="css-1mhnkuh"
<div
class="css-cvef6c-layoutChildrenWrapper"
>
Remove current configuration
</span>
</button>
<button
class="css-z53gi5-button"
type="button"
>
<span
class="css-1mhnkuh"
>
Retry Sync
</span>
</button>
</div>
<div
class="css-cvef6c-layoutChildrenWrapper"
>
<button
class="css-1ed0qk5-button"
type="button"
>
<span
class="css-1mhnkuh"
>
Remove current configuration
</span>
</button>
</div>
</div>
</div>
`;

View file

@@ -1,58 +1,58 @@
// Jest Snapshot v1, https://goo.gl/fbAQLP
exports[`PluginState.generateInvalidOnCallApiURLErrorMsg it returns the proper error message - configured through env var: false 1`] = `
"Could not communicate with your OnCall API at http://hello.com.
Validate that the URL is correct, your OnCall API is running, and that it is accessible from your Grafana instance."
"Could not communicate with OnCall API at http://hello.com.
Validate that the URL is correct, OnCall API is running, and that it is accessible from your Grafana instance."
`;
exports[`PluginState.generateInvalidOnCallApiURLErrorMsg it returns the proper error message - configured through env var: true 1`] = `
"Could not communicate with your OnCall API at http://hello.com (NOTE: your OnCall API URL is currently being taken from process.env of your UI).
Validate that the URL is correct, your OnCall API is running, and that it is accessible from your Grafana instance."
"Could not communicate with OnCall API at http://hello.com (NOTE: OnCall API URL is currently being taken from process.env of your UI).
Validate that the URL is correct, OnCall API is running, and that it is accessible from your Grafana instance."
`;
exports[`PluginState.generateOnCallApiUrlConfiguredThroughEnvVarMsg it returns the proper error message - configured through env var: false 1`] = `""`;
exports[`PluginState.generateOnCallApiUrlConfiguredThroughEnvVarMsg it returns the proper error message - configured through env var: true 1`] = `" (NOTE: your OnCall API URL is currently being taken from process.env of your UI)"`;
exports[`PluginState.generateOnCallApiUrlConfiguredThroughEnvVarMsg it returns the proper error message - configured through env var: true 1`] = `" (NOTE: OnCall API URL is currently being taken from process.env of your UI)"`;
exports[`PluginState.generateUnknownErrorMsg it returns the proper error message - configured through env var: false 1`] = `
"An unknown error occured when trying to install the plugin. Are you sure that your OnCall API URL, http://hello.com, is correct?
"An unknown error occurred when trying to install the plugin. Verify OnCall API URL, http://hello.com, is correct?
Refresh your page and try again, or try removing your plugin configuration and reconfiguring."
`;
exports[`PluginState.generateUnknownErrorMsg it returns the proper error message - configured through env var: false 2`] = `
"An unknown error occured when trying to sync the plugin. Are you sure that your OnCall API URL, http://hello.com, is correct?
"An unknown error occurred when trying to sync the plugin. Verify OnCall API URL, http://hello.com, is correct?
Refresh your page and try again, or try removing your plugin configuration and reconfiguring."
`;
exports[`PluginState.generateUnknownErrorMsg it returns the proper error message - configured through env var: true 1`] = `
"An unknown error occured when trying to install the plugin. Are you sure that your OnCall API URL, http://hello.com, is correct (NOTE: your OnCall API URL is currently being taken from process.env of your UI)?
"An unknown error occurred when trying to install the plugin. Verify OnCall API URL, http://hello.com, is correct (NOTE: OnCall API URL is currently being taken from process.env of your UI)?
Refresh your page and try again, or try removing your plugin configuration and reconfiguring."
`;
exports[`PluginState.generateUnknownErrorMsg it returns the proper error message - configured through env var: true 2`] = `
"An unknown error occured when trying to sync the plugin. Are you sure that your OnCall API URL, http://hello.com, is correct (NOTE: your OnCall API URL is currently being taken from process.env of your UI)?
"An unknown error occurred when trying to sync the plugin. Verify OnCall API URL, http://hello.com, is correct (NOTE: OnCall API URL is currently being taken from process.env of your UI)?
Refresh your page and try again, or try removing your plugin configuration and reconfiguring."
`;
exports[`PluginState.getHumanReadableErrorFromOnCallError it handles a 400 network error properly - has custom error message: false 1`] = `
"An unknown error occured when trying to install the plugin. Are you sure that your OnCall API URL, http://hello.com, is correct (NOTE: your OnCall API URL is currently being taken from process.env of your UI)?
"An unknown error occurred when trying to install the plugin. Verify OnCall API URL, http://hello.com, is correct (NOTE: OnCall API URL is currently being taken from process.env of your UI)?
Refresh your page and try again, or try removing your plugin configuration and reconfiguring."
`;
exports[`PluginState.getHumanReadableErrorFromOnCallError it handles a 400 network error properly - has custom error message: true 1`] = `"ohhhh nooo an error"`;
exports[`PluginState.getHumanReadableErrorFromOnCallError it handles a non-400 network error properly - status code: 409 1`] = `
"An unknown error occured when trying to install the plugin. Are you sure that your OnCall API URL, http://hello.com, is correct (NOTE: your OnCall API URL is currently being taken from process.env of your UI)?
"An unknown error occurred when trying to install the plugin. Verify OnCall API URL, http://hello.com, is correct (NOTE: OnCall API URL is currently being taken from process.env of your UI)?
Refresh your page and try again, or try removing your plugin configuration and reconfiguring."
`;
exports[`PluginState.getHumanReadableErrorFromOnCallError it handles a non-400 network error properly - status code: 502 1`] = `
"Could not communicate with your OnCall API at http://hello.com (NOTE: your OnCall API URL is currently being taken from process.env of your UI).
Validate that the URL is correct, your OnCall API is running, and that it is accessible from your Grafana instance."
"Could not communicate with OnCall API at http://hello.com (NOTE: OnCall API URL is currently being taken from process.env of your UI).
Validate that the URL is correct, OnCall API is running, and that it is accessible from your Grafana instance."
`;
exports[`PluginState.getHumanReadableErrorFromOnCallError it handles an unknown error properly 1`] = `
"An unknown error occured when trying to install the plugin. Are you sure that your OnCall API URL, http://hello.com, is correct (NOTE: your OnCall API URL is currently being taken from process.env of your UI)?
"An unknown error occurred when trying to install the plugin. Verify OnCall API URL, http://hello.com, is correct (NOTE: OnCall API URL is currently being taken from process.env of your UI)?
Refresh your page and try again, or try removing your plugin configuration and reconfiguring."
`;


@@ -48,21 +48,19 @@ class PluginState {
static grafanaBackend = getBackendSrv();
static generateOnCallApiUrlConfiguredThroughEnvVarMsg = (isConfiguredThroughEnvVar: boolean): string =>
isConfiguredThroughEnvVar
? ' (NOTE: your OnCall API URL is currently being taken from process.env of your UI)'
: '';
isConfiguredThroughEnvVar ? ' (NOTE: OnCall API URL is currently being taken from process.env of your UI)' : '';
static generateInvalidOnCallApiURLErrorMsg = (onCallApiUrl: string, isConfiguredThroughEnvVar: boolean): string =>
`Could not communicate with your OnCall API at ${onCallApiUrl}${this.generateOnCallApiUrlConfiguredThroughEnvVarMsg(
`Could not communicate with OnCall API at ${onCallApiUrl}${this.generateOnCallApiUrlConfiguredThroughEnvVarMsg(
isConfiguredThroughEnvVar
)}.\nValidate that the URL is correct, your OnCall API is running, and that it is accessible from your Grafana instance.`;
)}.\nValidate that the URL is correct, OnCall API is running, and that it is accessible from your Grafana instance.`;
static generateUnknownErrorMsg = (
onCallApiUrl: string,
verb: InstallationVerb,
isConfiguredThroughEnvVar: boolean
): string =>
`An unknown error occured when trying to ${verb} the plugin. Are you sure that your OnCall API URL, ${onCallApiUrl}, is correct${this.generateOnCallApiUrlConfiguredThroughEnvVarMsg(
`An unknown error occurred when trying to ${verb} the plugin. Verify OnCall API URL, ${onCallApiUrl}, is correct${this.generateOnCallApiUrlConfiguredThroughEnvVarMsg(
isConfiguredThroughEnvVar
)}?\nRefresh your page and try again, or try removing your plugin configuration and reconfiguring.`;
@@ -78,7 +76,7 @@ class PluginState {
installationVerb,
onCallApiUrlIsConfiguredThroughEnvVar
);
const consoleMsg = `occured while trying to ${installationVerb} the plugin w/ the OnCall backend`;
const consoleMsg = `occurred while trying to ${installationVerb} the plugin w/ the OnCall backend`;
if (isNetworkError(e)) {
const { status: statusCode } = e.response;
@@ -104,7 +102,7 @@ class PluginState {
errorMsg = unknownErrorMsg;
}
} else {
// a non-network related error occured.. this scenario shouldn't occur...
// a non-network related error occurred.. this scenario shouldn't occur...
console.warn(`An unknown error ${consoleMsg}`, e);
errorMsg = unknownErrorMsg;
}
@@ -121,11 +119,11 @@ class PluginState {
if (isNetworkError(e)) {
// The user likely put in a bogus URL for the OnCall API URL
console.warn('An HTTP related error occured while trying to provision the plugin w/ Grafana', e.response);
console.warn('An HTTP related error occurred while trying to provision the plugin w/ Grafana', e.response);
errorMsg = this.generateInvalidOnCallApiURLErrorMsg(onCallApiUrl, onCallApiUrlIsConfiguredThroughEnvVar);
} else {
// a non-network related error occured.. this scenario shouldn't occur...
console.warn('An unknown error occured while trying to provision the plugin w/ Grafana', e);
// a non-network related error occurred.. this scenario shouldn't occur...
console.warn('An unknown error occurred while trying to provision the plugin w/ Grafana', e);
errorMsg = this.generateUnknownErrorMsg(onCallApiUrl, installationVerb, onCallApiUrlIsConfiguredThroughEnvVar);
}
return errorMsg;
@@ -137,16 +135,20 @@ class PluginState {
static updateGrafanaPluginSettings = async (data: UpdateGrafanaPluginSettingsProps, enabled = true) =>
this.grafanaBackend.post(this.GRAFANA_PLUGIN_SETTINGS_URL, { ...data, enabled, pinned: true });
static createGrafanaToken = async () => {
const baseUrl = '/api/auth/keys';
const keys = await this.grafanaBackend.get(baseUrl);
const existingKey = keys.find((key: { id: number; name: string; role: string }) => key.name === 'OnCall');
static readonly KEYS_BASE_URL = '/api/auth/keys';
static getGrafanaToken = async () => {
const keys = await this.grafanaBackend.get(this.KEYS_BASE_URL);
return keys.find((key: { id: number; name: string; role: string }) => key.name === 'OnCall');
};
static createGrafanaToken = async () => {
const existingKey = await this.getGrafanaToken();
if (existingKey) {
await this.grafanaBackend.delete(`${baseUrl}/${existingKey.id}`);
await this.grafanaBackend.delete(`${this.KEYS_BASE_URL}/${existingKey.id}`);
}
return await this.grafanaBackend.post(baseUrl, {
return await this.grafanaBackend.post(this.KEYS_BASE_URL, {
name: 'OnCall',
role: 'Admin',
secondsToLive: null,
@@ -205,9 +207,27 @@ class PluginState {
onCallApiUrlIsConfiguredThroughEnvVar = false
): Promise<PluginSyncStatusResponse | string> => {
try {
/**
* Allows the plugin config page to repair settings like the app initialization screen if a user deletes
* an API key on accident but leaves the plugin settings intact.
*/
const existingKey = await this.getGrafanaToken();
if (!existingKey) {
try {
await this.installPlugin();
} catch (e) {
return this.getHumanReadableErrorFromOnCallError(
e,
onCallApiUrl,
'install',
onCallApiUrlIsConfiguredThroughEnvVar
);
}
}
const startSyncResponse = await makeRequest(`${this.ONCALL_BASE_URL}/sync`, { method: 'POST' });
if (typeof startSyncResponse === 'string') {
// an error occured trying to initiate the sync
// an error occurred trying to initiate the sync
return startSyncResponse;
}
@@ -300,11 +320,22 @@ class PluginState {
return null;
};
static checkIfBackendIsInMaintenanceMode = async (): Promise<string> => {
const response = await makeRequest<PluginIsInMaintenanceModeResponse>('/maintenance-mode-status', {
method: 'GET',
});
return response.currently_undergoing_maintenance_message;
static checkIfBackendIsInMaintenanceMode = async (
onCallApiUrl: string,
onCallApiUrlIsConfiguredThroughEnvVar = false
): Promise<PluginIsInMaintenanceModeResponse | string> => {
try {
return await makeRequest<PluginIsInMaintenanceModeResponse>('/maintenance-mode-status', {
method: 'GET',
});
} catch (e) {
return this.getHumanReadableErrorFromOnCallError(
e,
onCallApiUrl,
'install',
onCallApiUrlIsConfiguredThroughEnvVar
);
}
};
static checkIfPluginIsConnected = async (

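The reworked `checkIfBackendIsInMaintenanceMode` above now resolves to either the status payload or a human-readable error string, and callers narrow on `typeof result === 'string'`. A hypothetical standalone sketch of that pattern (not part of the commit; `checkMaintenance` and `demo` are illustrative names, and the error string stands in for `getHumanReadableErrorFromOnCallError`):

```typescript
// Shape returned by the maintenance-mode endpoint on success.
interface PluginIsInMaintenanceModeResponse {
  currently_undergoing_maintenance_message: string | null;
}

// On failure, return a human-readable string instead of throwing,
// mirroring the union return type introduced in the diff above.
async function checkMaintenance(
  fetchStatus: () => Promise<PluginIsInMaintenanceModeResponse>
): Promise<PluginIsInMaintenanceModeResponse | string> {
  try {
    return await fetchStatus();
  } catch (e) {
    return 'Could not reach the OnCall API';
  }
}

// Caller branches the way RootBaseStore.setupPlugin does: string means error,
// otherwise inspect the maintenance message.
async function demo(): Promise<string> {
  const result = await checkMaintenance(async () => ({
    currently_undergoing_maintenance_message: 'db migration in progress',
  }));
  if (typeof result === 'string') {
    return result; // network/API failure path
  }
  if (result.currently_undergoing_maintenance_message) {
    return `🚧 ${result.currently_undergoing_maintenance_message} 🚧`;
  }
  return 'ok';
}
```

Returning a string rather than throwing keeps every failure mode on one code path, at the cost of a `typeof` check at each call site.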

@@ -383,6 +383,7 @@ describe('PluginState.syncDataWithOnCall', () => {
const errorMsg = 'asdfasdf';
makeRequest.mockResolvedValueOnce(errorMsg);
PluginState.getGrafanaToken = jest.fn().mockReturnValueOnce({ id: 1 });
PluginState.pollOnCallDataSyncStatus = jest.fn();
// test
@@ -403,6 +404,7 @@ describe('PluginState.syncDataWithOnCall', () => {
const mockedPollOnCallDataSyncStatusResponse = 'dfjkdfjdf';
makeRequest.mockResolvedValueOnce(mockedResponse);
PluginState.getGrafanaToken = jest.fn().mockReturnValueOnce({ id: 1 });
PluginState.pollOnCallDataSyncStatus = jest.fn().mockResolvedValueOnce(mockedPollOnCallDataSyncStatusResponse);
// test
@@ -427,6 +429,7 @@ describe('PluginState.syncDataWithOnCall', () => {
const mockedHumanReadableError = 'asdfjkdfjkdfjk';
makeRequest.mockRejectedValueOnce(mockedError);
PluginState.getGrafanaToken = jest.fn().mockReturnValueOnce({ id: 1 });
PluginState.pollOnCallDataSyncStatus = jest.fn();
PluginState.getHumanReadableErrorFromOnCallError = jest.fn().mockReturnValueOnce(mockedHumanReadableError);
@@ -663,13 +666,14 @@ describe('PluginState.checkIfBackendIsInMaintenanceMode', () => {
// mocks
const maintenanceModeMsg = 'asdfljkadsjlfkajsdf';
const mockedResp = { currently_undergoing_maintenance_message: maintenanceModeMsg };
const onCallApiUrl = 'http://hello.com';
makeRequest.mockResolvedValueOnce(mockedResp);
// test
const response = await PluginState.checkIfBackendIsInMaintenanceMode();
const response = await PluginState.checkIfBackendIsInMaintenanceMode(onCallApiUrl);
// assertions
expect(response).toEqual(maintenanceModeMsg);
expect(response).toEqual(mockedResp);
expect(makeRequest).toHaveBeenCalledTimes(1);
expect(makeRequest).toHaveBeenCalledWith('/maintenance-mode-status', { method: 'GET' });
});


@@ -1,3 +1,5 @@
import { OrgRole } from '@grafana/data';
import { contextSrv } from 'grafana/app/core/core';
import { action, observable } from 'mobx';
import moment from 'moment-timezone';
import qs from 'query-string';
@@ -32,8 +34,7 @@ import { UserGroupStore } from 'models/user_group/user_group';
import { makeRequest } from 'network';
import { AppFeature } from 'state/features';
import PluginState from 'state/plugin';
import { isUserActionAllowed, UserActions } from 'utils/authorization';
import { GRAFANA_LICENSE_OSS } from 'utils/consts';
import { APP_VERSION, CLOUD_VERSION_REGEX, GRAFANA_LICENSE_CLOUD, GRAFANA_LICENSE_OSS } from 'utils/consts';
// ------ Dashboard ------ //
@@ -162,13 +163,15 @@ export class RootBaseStore {
return this.setupPluginError('🚫 Plugin has not been initialized');
}
const isInMaintenanceMode = await PluginState.checkIfBackendIsInMaintenanceMode();
if (isInMaintenanceMode !== null) {
const maintenanceMode = await PluginState.checkIfBackendIsInMaintenanceMode(this.onCallApiUrl);
if (typeof maintenanceMode === 'string') {
return this.setupPluginError(maintenanceMode);
} else if (maintenanceMode.currently_undergoing_maintenance_message) {
this.currentlyUndergoingMaintenance = true;
return this.setupPluginError(`🚧 ${isInMaintenanceMode} 🚧`);
return this.setupPluginError(`🚧 ${maintenanceMode.currently_undergoing_maintenance_message} 🚧`);
}
// at this point we know the plugin is provionsed
// at this point we know the plugin is provisioned
const pluginConnectionStatus = await PluginState.checkIfPluginIsConnected(this.onCallApiUrl);
if (typeof pluginConnectionStatus === 'string') {
return this.setupPluginError(pluginConnectionStatus);
@@ -178,28 +181,38 @@ export class RootBaseStore {
if (is_user_anonymous) {
return this.setupPluginError(
'😞 Unfortunately Grafana OnCall is available for authorized users only, please sign in to proceed.'
'😞 Grafana OnCall is available for authorized users only, please sign in to proceed.'
);
} else if (!is_installed || !token_ok) {
if (!allow_signup) {
return this.setupPluginError('🚫 OnCall has temporarily disabled signup of new users. Please try again later.');
}
if (!isUserActionAllowed(UserActions.PluginsInstall)) {
return this.setupPluginError(
'🚫 An Admin in your organization must sign on and setup OnCall before it can be used'
);
}
try {
/**
* this will install AND sync the necessary data
* the sync is done automatically by the /plugin/install OnCall API endpoint
* therefore there is no need to trigger an additional/separate sync, nor poll a status
*/
await PluginState.installPlugin();
} catch (e) {
return this.setupPluginError(PluginState.getHumanReadableErrorFromOnCallError(e, this.onCallApiUrl, 'install'));
const missingPermissions = this.checkMissingSetupPermissions();
if (missingPermissions.length === 0) {
try {
/**
* this will install AND sync the necessary data
* the sync is done automatically by the /plugin/install OnCall API endpoint
* therefore there is no need to trigger an additional/separate sync, nor poll a status
*/
await PluginState.installPlugin();
} catch (e) {
return this.setupPluginError(
PluginState.getHumanReadableErrorFromOnCallError(e, this.onCallApiUrl, 'install')
);
}
} else {
if (contextSrv.accessControlEnabled()) {
return this.setupPluginError(
'🚫 User is missing permission(s) ' +
missingPermissions.join(', ') +
' to setup OnCall before it can be used'
);
} else {
return this.setupPluginError(
'🚫 User with Admin permissions in your organization must sign on and setup OnCall before it can be used'
);
}
}
} else {
const syncDataResponse = await PluginState.syncDataWithOnCall(this.onCallApiUrl);
@@ -223,13 +236,37 @@ export class RootBaseStore {
this.appLoading = false;
}
checkMissingSetupPermissions() {
const fallback = contextSrv.user.orgRole === OrgRole.Admin && !contextSrv.accessControlEnabled();
const setupRequiredPermissions = [
'plugins:write',
'org.users:read',
'teams:read',
'apikeys:create',
'apikeys:delete',
];
return setupRequiredPermissions.filter(function (permission) {
return !contextSrv.hasAccess(permission, fallback);
});
}
hasFeature(feature: string | AppFeature) {
// todo use AppFeature only
return this.features?.[feature];
}
get license() {
if (this.backendLicense) {
return this.backendLicense;
}
if (CLOUD_VERSION_REGEX.test(APP_VERSION)) {
return GRAFANA_LICENSE_CLOUD;
}
return GRAFANA_LICENSE_OSS;
}
isOpenSource(): boolean {
return this.backendLicense === GRAFANA_LICENSE_OSS;
return this.license === GRAFANA_LICENSE_OSS;
}
@observable


@@ -1,17 +1,24 @@
import { OrgRole } from '@grafana/data';
import { contextSrv } from 'grafana/app/core/core';
import { OnCallAppPluginMeta } from 'types';
import PluginState from 'state/plugin';
import { UserActions, isUserActionAllowed as isUserActionAllowedOriginal } from 'utils/authorization';
import { isUserActionAllowed as isUserActionAllowedOriginal } from 'utils/authorization';
import { RootBaseStore } from './';
jest.mock('state/plugin');
jest.mock('utils/authorization');
jest.mock('grafana/app/core/core', () => ({
contextSrv: {
user: {
orgRole: null,
},
},
}));
const isUserActionAllowed = isUserActionAllowedOriginal as jest.Mock<ReturnType<typeof isUserActionAllowedOriginal>>;
const PluginInstallAction = UserActions.PluginsInstall;
const generatePluginData = (
onCallApiUrl: OnCallAppPluginMeta['jsonData']['onCallApiUrl'] = null
): OnCallAppPluginMeta =>
@@ -42,7 +49,9 @@ describe('rootBaseStore', () => {
const onCallApiUrl = 'http://asdfasdf.com';
const rootBaseStore = new RootBaseStore();
PluginState.checkIfBackendIsInMaintenanceMode = jest.fn().mockResolvedValueOnce(null);
PluginState.checkIfBackendIsInMaintenanceMode = jest
.fn()
.mockResolvedValueOnce({ currently_undergoing_maintenance_message: null });
PluginState.checkIfPluginIsConnected = jest.fn().mockResolvedValueOnce(errorMsg);
// test
@@ -62,14 +71,16 @@ describe('rootBaseStore', () => {
const rootBaseStore = new RootBaseStore();
const maintenanceMessage = 'mncvnmvcmnvkjdjkd';
PluginState.checkIfBackendIsInMaintenanceMode = jest.fn().mockResolvedValueOnce(maintenanceMessage);
PluginState.checkIfBackendIsInMaintenanceMode = jest
.fn()
.mockResolvedValueOnce({ currently_undergoing_maintenance_message: maintenanceMessage });
// test
await rootBaseStore.setupPlugin(generatePluginData(onCallApiUrl));
// assertions
expect(PluginState.checkIfBackendIsInMaintenanceMode).toHaveBeenCalledTimes(1);
expect(PluginState.checkIfBackendIsInMaintenanceMode).toHaveBeenCalledWith();
expect(PluginState.checkIfBackendIsInMaintenanceMode).toHaveBeenCalledWith(onCallApiUrl);
expect(rootBaseStore.appLoading).toBe(false);
expect(rootBaseStore.initializationError).toEqual(`🚧 ${maintenanceMessage} 🚧`);
@@ -81,7 +92,9 @@ describe('rootBaseStore', () => {
const onCallApiUrl = 'http://asdfasdf.com';
const rootBaseStore = new RootBaseStore();
PluginState.checkIfBackendIsInMaintenanceMode = jest.fn().mockResolvedValueOnce(null);
PluginState.checkIfBackendIsInMaintenanceMode = jest
.fn()
.mockResolvedValueOnce({ currently_undergoing_maintenance_message: null });
PluginState.checkIfPluginIsConnected = jest.fn().mockResolvedValueOnce({
is_user_anonymous: true,
is_installed: true,
@@ -100,7 +113,7 @@ describe('rootBaseStore', () => {
expect(rootBaseStore.appLoading).toBe(false);
expect(rootBaseStore.initializationError).toEqual(
'😞 Unfortunately Grafana OnCall is available for authorized users only, please sign in to proceed.'
'😞 Grafana OnCall is available for authorized users only, please sign in to proceed.'
);
});
@@ -109,7 +122,9 @@ describe('rootBaseStore', () => {
const onCallApiUrl = 'http://asdfasdf.com';
const rootBaseStore = new RootBaseStore();
PluginState.checkIfBackendIsInMaintenanceMode = jest.fn().mockResolvedValueOnce(null);
PluginState.checkIfBackendIsInMaintenanceMode = jest
.fn()
.mockResolvedValueOnce({ currently_undergoing_maintenance_message: null });
PluginState.checkIfPluginIsConnected = jest.fn().mockResolvedValueOnce({
is_user_anonymous: false,
is_installed: false,
@@ -140,7 +155,13 @@ describe('rootBaseStore', () => {
const onCallApiUrl = 'http://asdfasdf.com';
const rootBaseStore = new RootBaseStore();
PluginState.checkIfBackendIsInMaintenanceMode = jest.fn().mockResolvedValueOnce(null);
contextSrv.user.orgRole = OrgRole.Viewer;
contextSrv.accessControlEnabled = jest.fn().mockReturnValue(false);
contextSrv.hasAccess = jest.fn().mockReturnValue(false);
PluginState.checkIfBackendIsInMaintenanceMode = jest
.fn()
.mockResolvedValueOnce({ currently_undergoing_maintenance_message: null });
PluginState.checkIfPluginIsConnected = jest.fn().mockResolvedValueOnce({
is_user_anonymous: false,
is_installed: false,
@@ -159,14 +180,11 @@ describe('rootBaseStore', () => {
expect(PluginState.checkIfPluginIsConnected).toHaveBeenCalledTimes(1);
expect(PluginState.checkIfPluginIsConnected).toHaveBeenCalledWith(onCallApiUrl);
expect(isUserActionAllowed).toHaveBeenCalledTimes(1);
expect(isUserActionAllowed).toHaveBeenCalledWith(PluginInstallAction);
expect(PluginState.installPlugin).toHaveBeenCalledTimes(0);
expect(rootBaseStore.appLoading).toBe(false);
expect(rootBaseStore.initializationError).toEqual(
'🚫 An Admin in your organization must sign on and setup OnCall before it can be used'
'🚫 User with Admin permissions in your organization must sign on and setup OnCall before it can be used'
);
});
@@ -179,7 +197,13 @@ describe('rootBaseStore', () => {
const rootBaseStore = new RootBaseStore();
const mockedLoadCurrentUser = jest.fn();
PluginState.checkIfBackendIsInMaintenanceMode = jest.fn().mockResolvedValueOnce(null);
contextSrv.user.orgRole = OrgRole.Admin;
contextSrv.accessControlEnabled = jest.fn().mockResolvedValueOnce(false);
contextSrv.hasAccess = jest.fn().mockReturnValue(true);
PluginState.checkIfBackendIsInMaintenanceMode = jest
.fn()
.mockResolvedValueOnce({ currently_undergoing_maintenance_message: null });
PluginState.checkIfPluginIsConnected = jest.fn().mockResolvedValueOnce({
...scenario,
is_user_anonymous: false,
@@ -198,9 +222,6 @@ describe('rootBaseStore', () => {
expect(PluginState.checkIfPluginIsConnected).toHaveBeenCalledTimes(1);
expect(PluginState.checkIfPluginIsConnected).toHaveBeenCalledWith(onCallApiUrl);
expect(isUserActionAllowed).toHaveBeenCalledTimes(1);
expect(isUserActionAllowed).toHaveBeenCalledWith(PluginInstallAction);
expect(PluginState.installPlugin).toHaveBeenCalledTimes(1);
expect(PluginState.installPlugin).toHaveBeenCalledWith();
@@ -211,6 +232,71 @@ describe('rootBaseStore', () => {
expect(rootBaseStore.initializationError).toBeNull();
});
test.each([
{ role: OrgRole.Admin, missing_permissions: [], expected_result: true },
{ role: OrgRole.Viewer, missing_permissions: [], expected_result: true },
{
role: OrgRole.Admin,
missing_permissions: ['plugins:write', 'org.users:read', 'teams:read', 'apikeys:create', 'apikeys:delete'],
expected_result: false,
},
{
role: OrgRole.Viewer,
missing_permissions: ['plugins:write', 'org.users:read', 'teams:read', 'apikeys:create', 'apikeys:delete'],
expected_result: false,
},
])('signup is allowed, accessControlEnabled, various roles and permissions', async (scenario) => {
// mocks/setup
const onCallApiUrl = 'http://asdfasdf.com';
const rootBaseStore = new RootBaseStore();
const mockedLoadCurrentUser = jest.fn();
contextSrv.user.orgRole = scenario.role;
contextSrv.accessControlEnabled = jest.fn().mockReturnValue(true);
rootBaseStore.checkMissingSetupPermissions = jest.fn().mockImplementation(() => scenario.missing_permissions);
PluginState.checkIfBackendIsInMaintenanceMode = jest
.fn()
.mockResolvedValueOnce({ currently_undergoing_maintenance_message: null });
PluginState.checkIfPluginIsConnected = jest.fn().mockResolvedValueOnce({
...scenario,
is_user_anonymous: false,
allow_signup: true,
version: 'asdfasdf',
license: 'asdfasdf',
});
isUserActionAllowed.mockReturnValueOnce(true);
PluginState.installPlugin = jest.fn().mockResolvedValueOnce(null);
rootBaseStore.userStore.loadCurrentUser = mockedLoadCurrentUser;
// test
await rootBaseStore.setupPlugin(generatePluginData(onCallApiUrl));
// assertions
expect(PluginState.checkIfPluginIsConnected).toHaveBeenCalledTimes(1);
expect(PluginState.checkIfPluginIsConnected).toHaveBeenCalledWith(onCallApiUrl);
expect(rootBaseStore.appLoading).toBe(false);
if (scenario.expected_result) {
expect(PluginState.installPlugin).toHaveBeenCalledTimes(1);
expect(PluginState.installPlugin).toHaveBeenCalledWith();
expect(mockedLoadCurrentUser).toHaveBeenCalledTimes(1);
expect(mockedLoadCurrentUser).toHaveBeenCalledWith();
expect(rootBaseStore.initializationError).toBeNull();
} else {
expect(PluginState.installPlugin).toHaveBeenCalledTimes(0);
expect(rootBaseStore.initializationError).toEqual(
'🚫 User is missing permission(s) ' +
scenario.missing_permissions.join(', ') +
' to setup OnCall before it can be used'
);
}
});
test('plugin is not installed, signup is allowed, the user is an admin, and plugin installation throws an error', async () => {
// mocks/setup
const onCallApiUrl = 'http://asdfasdf.com';
@@ -218,7 +304,13 @@ describe('rootBaseStore', () => {
const installPluginError = new Error('asdasdfasdfasf');
const humanReadableErrorMsg = 'asdfasldkfjaksdjflk';
PluginState.checkIfBackendIsInMaintenanceMode = jest.fn().mockResolvedValueOnce(null);
contextSrv.user.orgRole = OrgRole.Admin;
contextSrv.accessControlEnabled = jest.fn().mockReturnValue(false);
contextSrv.hasAccess = jest.fn().mockReturnValue(true);
PluginState.checkIfBackendIsInMaintenanceMode = jest
.fn()
.mockResolvedValueOnce({ currently_undergoing_maintenance_message: null });
PluginState.checkIfPluginIsConnected = jest.fn().mockResolvedValueOnce({
is_user_anonymous: false,
is_installed: false,
@@ -238,9 +330,6 @@ describe('rootBaseStore', () => {
expect(PluginState.checkIfPluginIsConnected).toHaveBeenCalledTimes(1);
expect(PluginState.checkIfPluginIsConnected).toHaveBeenCalledWith(onCallApiUrl);
expect(isUserActionAllowed).toHaveBeenCalledTimes(1);
expect(isUserActionAllowed).toHaveBeenCalledWith(PluginInstallAction);
expect(PluginState.installPlugin).toHaveBeenCalledTimes(1);
expect(PluginState.installPlugin).toHaveBeenCalledWith();
@@ -263,7 +352,9 @@ describe('rootBaseStore', () => {
const version = 'asdfalkjslkjdf';
const license = 'lkjdkjfdkjfdjkfd';
PluginState.checkIfBackendIsInMaintenanceMode = jest.fn().mockResolvedValueOnce(null);
PluginState.checkIfBackendIsInMaintenanceMode = jest
.fn()
.mockResolvedValueOnce({ currently_undergoing_maintenance_message: null });
PluginState.checkIfPluginIsConnected = jest.fn().mockResolvedValueOnce({
is_user_anonymous: false,
is_installed: true,
@@ -299,7 +390,9 @@ describe('rootBaseStore', () => {
const mockedLoadCurrentUser = jest.fn();
const syncDataWithOnCallError = 'asdasdfasdfasf';
PluginState.checkIfBackendIsInMaintenanceMode = jest.fn().mockResolvedValueOnce(null);
PluginState.checkIfBackendIsInMaintenanceMode = jest
.fn()
.mockResolvedValueOnce({ currently_undergoing_maintenance_message: null });
PluginState.checkIfPluginIsConnected = jest.fn().mockResolvedValueOnce({
is_user_anonymous: false,
is_installed: true,


@@ -25,7 +25,6 @@ export enum Resource {
OTHER_SETTINGS = 'other-settings',
TEAMS = 'teams',
PLUGINS = 'plugins',
}
export enum Action {
@@ -35,7 +34,6 @@ export enum Action {
TEST = 'test',
EXPORT = 'export',
UPDATE_SETTINGS = 'update-settings',
INSTALL = 'install',
}
type Actions =
@@ -66,8 +64,7 @@ type Actions =
| 'UserSettingsAdmin'
| 'OtherSettingsRead'
| 'OtherSettingsWrite'
| 'TeamsWrite'
| 'PluginsInstall';
| 'TeamsWrite';
const roleMapping: Record<OrgRole, number> = {
[OrgRole.Admin]: 0,
@@ -164,5 +161,4 @@ export const UserActions: { [action in Actions]: UserAction } = {
// These are not oncall specific
TeamsWrite: constructAction(Resource.TEAMS, Action.WRITE, OrgRole.Admin, false),
PluginsInstall: constructAction(Resource.PLUGINS, Action.INSTALL, OrgRole.Admin, false),
};


@@ -4,9 +4,17 @@ import plugin from '../../package.json'; // eslint-disable-line
export const APP_TITLE = 'Grafana OnCall';
export const APP_SUBTITLE = `Developer-friendly incident response (${plugin?.version})`;
export const APP_VERSION = `${plugin?.version}`;
export const CLOUD_VERSION_REGEX = new RegExp('r[\\d]+-v[\\d]+.[\\d]+.[\\d]+');
// License
export const GRAFANA_LICENSE_OSS = 'OpenSource';
export const GRAFANA_LICENSE_CLOUD = 'Cloud';
export const FALLBACK_LICENSE = CLOUD_VERSION_REGEX.test(APP_VERSION) ? GRAFANA_LICENSE_CLOUD : GRAFANA_LICENSE_OSS;
// height of new Grafana sticky header with breadcrumbs
export const GRAFANA_HEADER_HEIGTH = 80;
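The new `CLOUD_VERSION_REGEX` and license constants above drive the license fallback added to `RootBaseStore`. A hypothetical standalone sketch of that logic (the `fallbackLicense` helper and the example version strings are illustrative, not part of the commit):

```typescript
// Same pattern as the diff: Grafana Cloud build tags look like "r<build>-v<semver>",
// e.g. "r170-v1.2.3", while OSS builds report a plain semver like "1.2.3".
const CLOUD_VERSION_REGEX = new RegExp('r[\\d]+-v[\\d]+.[\\d]+.[\\d]+');
const GRAFANA_LICENSE_OSS = 'OpenSource';
const GRAFANA_LICENSE_CLOUD = 'Cloud';

// Mirrors RootBaseStore's `license` getter when no backend license is set:
// a cloud-style version string implies the Cloud license, anything else falls
// back to OpenSource.
function fallbackLicense(appVersion: string): string {
  return CLOUD_VERSION_REGEX.test(appVersion) ? GRAFANA_LICENSE_CLOUD : GRAFANA_LICENSE_OSS;
}
```

Note the unescaped dots in the pattern match any character, which is harmless here since version separators occupy those positions anyway.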