Joey Orlando 2023-07-19 09:33:04 +02:00 committed by GitHub
commit 6f1838d2b7
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
113 changed files with 1739 additions and 614 deletions


@ -7,6 +7,20 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## Unreleased
## v1.3.15 (2023-07-19)
### Changed
- Deprecate `AlertGroup.is_archived` column. Column will be removed in a subsequent release. By @joeyorlando ([#2524](https://github.com/grafana/oncall/pull/2524)).
- Update Slack "invite" feature to use direct paging by @vadimkerr ([#2562](https://github.com/grafana/oncall/pull/2562))
- Change "Current responders" to "Additional Responders" in web UI by @vadimkerr ([#2567](https://github.com/grafana/oncall/pull/2567))
### Fixed
- Fix duplicate orders on routes and escalation policies by @vadimkerr ([#2568](https://github.com/grafana/oncall/pull/2568))
- Fixed Slack channels sync by @Ferril ([#2571](https://github.com/grafana/oncall/pull/2571))
- Fixed rendering of Slack connection errors ([#2526](https://github.com/grafana/oncall/pull/2526))
## v1.3.14 (2023-07-17)
### Changed


@ -1,5 +1,5 @@
# The source of this file is https://raw.githubusercontent.com/grafana/writers-toolkit/main/docs/docs.mk.
# 4.0.0 (2023-06-06)
# A changelog is included in the head of the `make-docs` script.
include variables.mk
-include variables.mk.local


@ -1,6 +1,116 @@
#!/bin/sh
# The source of this file is https://raw.githubusercontent.com/grafana/writers-toolkit/main/docs/make-docs.
# 4.1.0 (2023-06-16)
# # `make-docs` procedure changelog
#
# Updates should conform to the guidelines in https://keepachangelog.com/en/1.1.0/.
# [Semantic versioning](https://semver.org/) is used to help the reader identify the significance of changes.
# Changes are relevant to this script and the supporting docs.mk GNU Make interface.
# ## Unreleased
# ## 4.1.0 (2023-06-16)
# ### Added
# - Mounts of `layouts` and `config` directories for the `website` project.
# Ensures that local changes to mounts or shortcodes are reflected in the development server.
# ### Fixed
# - Version inference for versioned docs pages.
# Pages in versioned projects now have the `versioned: true` front matter set to ensure that "version" in $.Page.Scratch is set on builds.
# ## 4.0.0 (2023-06-06)
# ### Removed
# - `doc-validator/%` target.
# The behavior of the target was not as described.
# Instead, to limit `doc-validator` to only specific files, refer to https://grafana.com/docs/writers-toolkit/writing-guide/tooling-and-workflows/validate-technical-documentation/#run-on-specific-files.
# ## 3.0.0 (2023-05-18)
# ### Fixed
# - Compatibility with the updated Make targets in the `website` repository.
# `docs` now runs this script itself, `server-docs` builds the site with the `docs` Hugo environment.
# ## 2.0.0 (2023-05-18)
# ### Added
# - Support for the grafana-cloud/frontend-observability/faro-web-sdk project.
# - Use of `doc-validator` v2.0.x which includes breaking changes to command line options.
# ### Fixed
# - Source grafana-cloud project from website repository.
# ## 1.2.1 (2023-05-05)
# ### Fixed
# - Use `latest` tag of `grafana/vale` image by default instead of hardcoded older version.
# - Fix mounting multiple projects broken by the changes in 1.0.1
# ## 1.2.0 (2023-05-05)
# ### Added
# - Support for running the Vale linter with `make vale`.
# ### Fixed
# ## 1.1.0 (2023-05-05)
# ### Added
# - Rewrite error output so it can be followed by text editors.
# ### Fixed
# - Fix `docs-debug` container process port.
# ## 1.0.1 (2023-05-04)
# ### Fixed
# - Ensure complete section hierarchy so that all projects have a visible menu.
# ## 1.0.0 (2023-05-04)
# ### Added
# - Build multiple projects simultaneously if all projects are checked out locally.
# - Run [`doc-validator`](https://github.com/grafana/technical-documentation/tree/main/tools/cmd/doc-validator) over projects.
# - Redirect project root to mounted version.
# For example redirect `/docs/grafana/` to `/docs/grafana/latest/`.
# - Support for Podman or Docker containers with `PODMAN` environment variable.
# - Support for projects:
# - agent
# - enterprise-logs
# - enterprise-metrics
# - enterprise-traces
# - grafana
# - grafana-cloud
# - grafana-cloud/machine-learning
# - helm-charts/mimir-distributed
# - helm-charts/tempo-distributed
# - incident
# - loki
# - mimir
# - oncall
# - opentelemetry
# - phlare
# - plugins
# - slo
# - tempo
# - writers-toolkit
set -ef
@ -63,10 +173,11 @@ SOURCES_as_code='as-code-docs'
SOURCES_enterprise_metrics='backend-enterprise'
SOURCES_enterprise_metrics_='backend-enterprise'
SOURCES_grafana_cloud='website'
SOURCES_grafana_cloud_alerting_and_irm_machine_learning='machine-learning'
SOURCES_grafana_cloud_alerting_and_irm_slo='slo'
SOURCES_grafana_cloud_k6='k6-docs'
SOURCES_grafana_cloud_data_configuration_integrations='cloud-onboarding'
SOURCES_grafana_cloud_frontend_observability_faro_web_sdk='faro-web-sdk'
SOURCES_grafana_cloud_machine_learning='machine-learning'
SOURCES_helm_charts_mimir_distributed='mimir'
SOURCES_helm_charts_tempo_distributed='tempo'
SOURCES_opentelemetry='opentelemetry-docs'
@ -74,10 +185,11 @@ SOURCES_plugins_grafana_splunk_datasource='splunk-datasource'
VERSIONS_as_code='UNVERSIONED'
VERSIONS_grafana_cloud='UNVERSIONED'
VERSIONS_grafana_cloud_alerting_and_irm_machine_learning='UNVERSIONED'
VERSIONS_grafana_cloud_alerting_and_irm_slo='UNVERSIONED'
VERSIONS_grafana_cloud_k6='UNVERSIONED'
VERSIONS_grafana_cloud_data_configuration_integrations='UNVERSIONED'
VERSIONS_grafana_cloud_frontend_observability_faro_web_sdk='UNVERSIONED'
VERSIONS_grafana_cloud_machine_learning='UNVERSIONED'
VERSIONS_opentelemetry='UNVERSIONED'
VERSIONS_technical_documentation='UNVERSIONED'
VERSIONS_website='UNVERSIONED'


@ -39,6 +39,7 @@ When creating a webhook in Jira, select the following events to be sent to Grafa
1. Issue - created
2. Issue - updated
3. Issue - deleted
After setting up the connection, you can test it by creating a new issue in Jira. You should see a new alert group in Grafana OnCall.
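The suggested test can also be driven from a script. A minimal sketch, assuming a placeholder integration URL and an illustrative payload shape — the real URL comes from the Jira integration's settings page in Grafana OnCall, and actual Jira webhook payloads carry many more fields:

```python
import json
from urllib.request import Request, urlopen

# Placeholder; copy the real integration URL from Grafana OnCall.
ONCALL_JIRA_WEBHOOK_URL = "https://oncall.example.com/integrations/v1/jira/XXXX/"


def build_test_payload(issue_key: str, summary: str) -> dict:
    """Minimal, illustrative shape of a Jira 'issue created' event."""
    return {
        "webhookEvent": "jira:issue_created",
        "issue": {
            "key": issue_key,
            "fields": {"summary": summary, "status": {"name": "Open"}},
        },
    }


def send_test_event() -> None:
    payload = build_test_payload("TEST-1", "Test alert from Jira")
    req = Request(
        ONCALL_JIRA_WEBHOOK_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urlopen(req)  # a 200 response should produce a new alert group
```

Creating a real issue in Jira remains the authoritative test; the script only helps verify the integration URL is reachable and accepting JSON.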
## Grouping, auto-acknowledge and auto-resolve
@ -53,7 +54,7 @@ To customize this behaviour, consider modifying alert templates in integration s
## Configuring Grafana OnCall to send data to Jira
Grafana OnCall can automatically create and resolve issues in Jira via [outgoing webhooks]({{< relref "_index.md" >}}).
Grafana OnCall can automatically create and resolve issues in Jira via [outgoing webhooks][outgoing-webhooks].
This guide provides example webhook configurations for common use cases, as well as information on how to set up a user in Jira to be used by Grafana OnCall.
### Prerequisites
@ -167,11 +168,14 @@ to get the list of available transitions.
The examples above describe how to create outgoing webhooks in Grafana OnCall that allow you to automatically create and resolve issues in Jira.
Consider modifying example templates to fit your use case (e.g. to include more information on alert groups).
Refer to [outgoing webhooks documentation]({{< relref "_index.md" >}}) for more information on available template variables and webhook configuration.
Refer to [outgoing webhooks documentation][outgoing-webhooks] for more information on available template variables and webhook configuration.
For more information on Jira REST API, refer to [Jira REST API documentation](https://developer.atlassian.com/cloud/jira/platform/rest/v2/api-group-issues).
{{% docs/reference %}}
[user-and-team-management]: "/docs/oncall/ -> /docs/oncall/<ONCALL VERSION>/user-and-team-management"
[user-and-team-management]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/oncall/user-and-team-management"
[outgoing-webhooks]: "/docs/oncall/ -> /docs/oncall/<ONCALL VERSION>/outgoing-webhooks"
[outgoing-webhooks]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/oncall/outgoing-webhooks"
{{% /docs/reference %}}


@ -16,7 +16,7 @@ weight: 500
# Integrate Grafana OnCall with ServiceNow
Grafana OnCall can automatically create, assign and resolve incidents in ServiceNow via [outgoing webhooks]({{< relref "_index.md" >}}).
Grafana OnCall can automatically create, assign and resolve incidents in ServiceNow via [outgoing webhooks][outgoing-webhooks].
This guide provides example webhook configurations for common use cases, as well as information on how to set up a user in ServiceNow to be used by Grafana OnCall.
## Prerequisites
@ -135,6 +135,11 @@ Use the following JSON template as webhook data:
The examples above describe how to create outgoing webhooks in Grafana OnCall that allow you to automatically create, assign and resolve incidents in ServiceNow.
Consider modifying example templates to fit your use case (e.g. to include more information on alert groups).
Refer to [outgoing webhooks documentation]({{< relref "_index.md" >}}) for more information on available template variables and webhook configuration.
Refer to [outgoing webhooks documentation][outgoing-webhooks] for more information on available template variables and webhook configuration.
For more information on ServiceNow REST API, refer to [ServiceNow REST API documentation](https://developer.servicenow.com/dev.do#!/reference/api/sandiego/rest).
{{% docs/reference %}}
[outgoing-webhooks]: "/docs/oncall/ -> /docs/oncall/<ONCALL VERSION>/outgoing-webhooks"
[outgoing-webhooks]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/oncall/outgoing-webhooks"
{{% /docs/reference %}}


@ -32,9 +32,8 @@ The integration provides grouping, auto-acknowledge and auto-resolve logic via c
Create a new "Trigger or automation" webhook connection in Zendesk to send events to Grafana OnCall using the integration URL above.
Refer to [Zendesk documentation]
(<https://support.zendesk.com/hc/en-us/articles/4408839108378-Creating-webhooks-to-interact-with-third-party-systems>
) for more information on how to create and manage webhooks.
Refer to [Zendesk documentation](<https://support.zendesk.com/hc/en-us/articles/4408839108378-Creating-webhooks-to-interact-with-third-party-systems>)
for more information on how to create and manage webhooks.
After setting up a webhook in Zendesk, create a new trigger with the following condition:
`Meet ANY of the following conditions: "Ticket Is Created", "Ticket status Changed"`
@ -68,7 +67,7 @@ To customize this behaviour, consider modifying alert templates in integration s
## Configuring Grafana OnCall to send data to Zendesk
Grafana OnCall can automatically create and resolve tickets in Zendesk via [outgoing webhooks]({{< relref "_index.md" >}}).
Grafana OnCall can automatically create and resolve tickets in Zendesk via [outgoing webhooks][outgoing-webhooks].
This guide provides example webhook configurations for common use cases, as well as information on how to set up a user in Zendesk to be used by Grafana OnCall.
### Prerequisites
@ -159,11 +158,14 @@ Use the following JSON template as webhook data:
The examples above describe how to create outgoing webhooks in Grafana OnCall that allow you to automatically create and resolve tickets in Zendesk.
Consider modifying example templates to fit your use case (e.g. to include more information on alert groups).
Refer to [outgoing webhooks documentation]({{< relref "_index.md" >}}) for more information on available template variables and webhook configuration.
Refer to [outgoing webhooks documentation][outgoing-webhooks] for more information on available template variables and webhook configuration.
For more information on Zendesk API, refer to [Zendesk API documentation](https://developer.zendesk.com/api-reference/ticketing/tickets/tickets/).
{{% docs/reference %}}
[user-and-team-management]: "/docs/oncall/ -> /docs/oncall/<ONCALL VERSION>/user-and-team-management"
[user-and-team-management]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/oncall/user-and-team-management"
[outgoing-webhooks]: "/docs/oncall/ -> /docs/oncall/<ONCALL VERSION>/outgoing-webhooks"
[outgoing-webhooks]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/oncall/outgoing-webhooks"
{{% /docs/reference %}}


@ -1,7 +1,4 @@
---
aliases:
- ../integrations/configure-outgoing-webhooks/
- /docs/oncall/latest/outgoing-webhooks/
canonical: https://grafana.com/docs/oncall/latest/outgoing-webhooks/
keywords:
- Grafana Cloud
@ -10,11 +7,11 @@ keywords:
- on-call
- amixr
- webhooks
title: Configure outgoing webhooks for Grafana OnCall
title: Outgoing Webhooks
weight: 500
---
# Configure outgoing webhooks for Grafana OnCall
# Outgoing Webhooks
Outgoing webhooks are used by Grafana OnCall to send data to a URL in a flexible way. These webhooks can be
triggered from a variety of event types and make use of Jinja2 to transform data into the format required at
@ -30,7 +27,7 @@ click **Create Webhook**
### Outgoing webhook fields
The outgoing webhook is defined by the following fields. For more information about template usage
see [Outgoing webhook templates)](#outgoing-webhook-templates) section.
see [Outgoing webhook templates](#outgoing-webhook-templates) section.
#### ID


@ -27,7 +27,7 @@ class AlertGroupAdmin(CustomModelAdmin):
list_filter = ("started_at",)
def get_queryset(self, request):
return AlertGroup.all_objects
return AlertGroup.objects
@admin.register(AlertGroupLogRecord)


@ -256,7 +256,7 @@ class EscalationSnapshotMixin:
)
task_id = celery_uuid()
AlertGroup.all_objects.filter(pk=self.pk,).update(
AlertGroup.objects.filter(pk=self.pk,).update(
active_escalation_id=task_id,
is_escalation_finished=False,
raw_escalation_snapshot=raw_escalation_snapshot,


@ -88,9 +88,7 @@ class EscalationSnapshot:
"""
if self.last_active_escalation_policy_order is None:
return []
elif self.last_active_escalation_policy_order == 0:
return [self.escalation_policies_snapshots[0]]
return self.escalation_policies_snapshots[: self.last_active_escalation_policy_order]
return self.escalation_policies_snapshots[: self.last_active_escalation_policy_order + 1]
def next_step_eta_is_valid(self) -> typing.Optional[bool]:
"""
@ -147,7 +145,8 @@ class EscalationSnapshot:
self.stop_escalation = execution_result.stop_escalation # result of STEP_FINAL_RESOLVE
self.pause_escalation = execution_result.pause_escalation # result of STEP_NOTIFY_IF_NUM_ALERTS_IN_WINDOW
last_active_escalation_policy_order = escalation_policy_snapshot.order
# use the index of last escalation policy snapshot, since orders are not guaranteed to be sequential
last_active_escalation_policy_order = self.escalation_policies_snapshots.index(escalation_policy_snapshot)
if execution_result.start_from_beginning: # result of STEP_REPEAT_ESCALATION_N_TIMES
last_active_escalation_policy_order = None
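The fix above switches the bookkeeping from stored `order` values to list indices, because orders are no longer guaranteed to be sequential once duplicates are repaired. A standalone sketch of the index-based slicing, with plain strings standing in for policy snapshots:

```python
from typing import Optional


def executed_policies(snapshots: list, last_active_index: Optional[int]) -> list:
    """Index-based slicing: include every snapshot up to and including
    the last one that actually executed."""
    if last_active_index is None:
        return []
    return snapshots[: last_active_index + 1]


# Hypothetical step names; the index tracks position in the list,
# not the (possibly non-sequential) `order` column.
policies = ["notify_user", "wait", "notify_group"]
print(executed_policies(policies, None))  # -> []
print(executed_policies(policies, 0))     # -> ['notify_user']
print(executed_policies(policies, 1))     # -> ['notify_user', 'wait']
```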


@ -213,12 +213,6 @@ class AlertGroupSlackRenderer(AlertGroupBaseRenderer):
},
)
if self.alert_group.invitations.filter(is_active=True).count() < 5:
action_id = ScenarioStep.get_step("distribute_alerts", "InviteOtherPersonToIncident").routing_uid()
text = "Invite..."
invitation_element = self._get_select_user_element(action_id, text=text)
buttons.append(invitation_element)
if not self.alert_group.silenced:
silence_options = [
{
@ -245,6 +239,15 @@ class AlertGroupSlackRenderer(AlertGroupBaseRenderer):
},
)
buttons.append(
{
"text": {"type": "plain_text", "text": "Responders", "emoji": True},
"type": "button",
"value": self._alert_group_action_value(),
"action_id": ScenarioStep.get_step("manage_responders", "StartManageResponders").routing_uid(),
},
)
attach_button = {
"text": {"type": "plain_text", "text": "Attach to ...", "emoji": True},
"type": "button",


@ -0,0 +1,18 @@
# Generated by Django 3.2.20 on 2023-07-18 06:41
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('alerts', '0021_alter_alertgroup_started_at'),
]
operations = [
migrations.AlterField(
model_name='alertgroup',
name='manual_severity',
field=models.IntegerField(choices=[(0, 'high'), (1, 'low'), (2, 'none')], default=2, null=True),
),
]
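This migration is the direct consequence of wrapping `manual_severity` in `deprecate_field`: the column becomes nullable so the application can stop writing to it before the column is dropped in a later release. A toy descriptor — not the library's actual implementation — illustrates the runtime half of the pattern, where reads of a deprecated field return `None` and log a warning:

```python
import logging

logger = logging.getLogger(__name__)


class DeprecatedField:
    """Toy stand-in for the deprecation pattern: reads return None
    (with a warning) so callers migrate off before the column is dropped."""

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        logger.warning("accessing deprecated field %s.%s", owner.__name__, self.name)
        return None


class AlertGroupStub:
    is_archived = DeprecatedField()


print(AlertGroupStub().is_archived)  # -> None
```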


@ -0,0 +1,31 @@
# Generated by Django 3.2.20 on 2023-07-18 09:52
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('alerts', '0022_alter_alertgroup_manual_severity'),
]
operations = [
migrations.AlterModelManagers(
name='alertgroup',
managers=[
],
),
migrations.RemoveIndex(
model_name='alertgroup',
name='alerts_aler_channel_ee84a7_idx',
),
migrations.AlterField(
model_name='alertgroup',
name='is_archived',
field=models.BooleanField(default=False, null=True),
),
migrations.AddIndex(
model_name='alertgroup',
index=models.Index(fields=['channel_id', 'resolved', 'acknowledged', 'silenced', 'root_alert_group_id'], name='alerts_aler_channel_81aeec_idx'),
),
]


@ -0,0 +1,54 @@
# Generated by Django 3.2.20 on 2023-07-18 09:53
from django.db import migrations, models
import django_migration_linter as linter
from django.db.models import Count
from common.database import get_random_readonly_database_key_if_present_otherwise_default
def fix_duplicate_orders(apps, schema_editor):
EscalationPolicy = apps.get_model('alerts', 'EscalationPolicy')
# it should be safe to use a readonly database because duplicates are pretty infrequent
db = get_random_readonly_database_key_if_present_otherwise_default()
# find all (escalation_chain_id, order) tuples that have more than one entry (meaning duplicates)
items_with_duplicate_orders = EscalationPolicy.objects.using(db).values(
"escalation_chain_id", "order"
).annotate(count=Count("order")).order_by().filter(count__gt=1) # use order_by() to reset any existing ordering
# make sure we don't fix the same escalation chain more than once
escalation_chain_ids = set(item["escalation_chain_id"] for item in items_with_duplicate_orders)
for escalation_chain_id in escalation_chain_ids:
policies = EscalationPolicy.objects.filter(escalation_chain_id=escalation_chain_id).order_by("order", "id")
# assign correct sequential order for each policy starting from 0
for idx, policy in enumerate(policies):
policy.order = idx
EscalationPolicy.objects.bulk_update(policies, fields=["order"])
class Migration(migrations.Migration):
dependencies = [
('alerts', '0023_auto_20230718_0952'),
]
operations = [
linter.IgnoreMigration(), # adding a unique constraint after fixing duplicates should be fine
migrations.AlterModelOptions(
name='escalationpolicy',
options={'ordering': ['order']},
),
migrations.AlterField(
model_name='escalationpolicy',
name='order',
field=models.PositiveIntegerField(db_index=True, editable=False, null=True),
),
migrations.RunPython(fix_duplicate_orders, migrations.RunPython.noop),
migrations.AddConstraint(
model_name='escalationpolicy',
constraint=models.UniqueConstraint(fields=('escalation_chain_id', 'order'), name='unique_escalation_policy_order'),
),
]
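Stripped of the Django machinery, the repair logic in `fix_duplicate_orders` is just re-enumeration: sort the affected chain's policies by `(order, id)` and assign fresh sequential orders starting from 0. A self-contained sketch with dicts in place of model instances:

```python
def renumber(policies: list[dict]) -> list[dict]:
    """Assign sequential orders starting from 0, breaking ties by id,
    mirroring the migration's `.order_by("order", "id")` + enumerate."""
    for idx, policy in enumerate(sorted(policies, key=lambda p: (p["order"], p["id"]))):
        policy["order"] = idx
    return policies


# Two policies sharing order=1 (a duplicate created by a race).
chain = [
    {"id": 10, "order": 0},
    {"id": 11, "order": 1},
    {"id": 12, "order": 1},
]
renumber(chain)
print(sorted((p["id"], p["order"]) for p in chain))
# -> [(10, 0), (11, 1), (12, 2)]
```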


@ -0,0 +1,56 @@
# Generated by Django 3.2.20 on 2023-07-18 10:42
from django.db import migrations, models
import django_migration_linter as linter
from django.db.models import Count
from common.database import get_random_readonly_database_key_if_present_otherwise_default
def fix_duplicate_orders(apps, schema_editor):
ChannelFilter = apps.get_model('alerts', 'ChannelFilter')
# it should be safe to use a readonly database because duplicates are pretty infrequent
db = get_random_readonly_database_key_if_present_otherwise_default()
# find all (alert_receive_channel_id, is_default, order) tuples that have more than one entry (meaning duplicates)
items_with_duplicate_orders = ChannelFilter.objects.using(db).values(
"alert_receive_channel_id", "is_default", "order"
).annotate(count=Count("order")).order_by().filter(count__gt=1) # use order_by() to reset any existing ordering
# make sure we don't fix the same (alert_receive_channel_id, is_default) pair more than once
values_to_fix = set((item["alert_receive_channel_id"], item["is_default"]) for item in items_with_duplicate_orders)
for alert_receive_channel_id, is_default in values_to_fix:
channel_filters = ChannelFilter.objects.filter(
alert_receive_channel_id=alert_receive_channel_id, is_default=is_default
).order_by("order", "id")
# assign correct sequential order for each route starting from 0
for idx, channel_filter in enumerate(channel_filters):
channel_filter.order = idx
ChannelFilter.objects.bulk_update(channel_filters, fields=["order"])
class Migration(migrations.Migration):
dependencies = [
('alerts', '0024_auto_20230718_0953'),
]
operations = [
linter.IgnoreMigration(), # adding a unique constraint after fixing duplicates should be fine
migrations.AlterModelOptions(
name='channelfilter',
options={'ordering': ['alert_receive_channel_id', 'is_default', 'order']},
),
migrations.AlterField(
model_name='channelfilter',
name='order',
field=models.PositiveIntegerField(db_index=True, editable=False, null=True),
),
migrations.RunPython(fix_duplicate_orders, migrations.RunPython.noop),
migrations.AddConstraint(
model_name='channelfilter',
constraint=models.UniqueConstraint(fields=('alert_receive_channel_id', 'is_default', 'order'), name='unique_channel_filter_order'),
),
]


@ -91,7 +91,7 @@ class Alert(models.Model):
if channel_filter is None:
channel_filter = ChannelFilter.select_filter(alert_receive_channel, raw_request_data, force_route_id)
group, group_created = AlertGroup.all_objects.get_or_create_grouping(
group, group_created = AlertGroup.objects.get_or_create_grouping(
channel=alert_receive_channel,
channel_filter=channel_filter,
group_data=group_data,
@ -134,7 +134,7 @@ class Alert(models.Model):
if maintenance_uuid is not None:
try:
maintenance_incident = AlertGroup.all_objects.get(maintenance_uuid=maintenance_uuid)
maintenance_incident = AlertGroup.objects.get(maintenance_uuid=maintenance_uuid)
group.root_alert_group = maintenance_incident
group.save(update_fields=["root_alert_group"])
log_record_for_root_incident = maintenance_incident.log_records.create(


@ -16,6 +16,7 @@ from django.db.models.signals import post_save
from django.dispatch import receiver
from django.utils import timezone
from django.utils.functional import cached_property
from django_deprecate_fields import deprecate_field
from apps.alerts.constants import AlertGroupState
from apps.alerts.escalation_snapshot import EscalationSnapshotMixin
@ -46,7 +47,7 @@ def generate_public_primary_key_for_alert_group():
new_public_primary_key = generate_public_primary_key(prefix)
failure_counter = 0
while AlertGroup.all_objects.filter(public_primary_key=new_public_primary_key).exists():
while AlertGroup.objects.filter(public_primary_key=new_public_primary_key).exists():
new_public_primary_key = increase_public_primary_key_length(
failure_counter=failure_counter, prefix=prefix, model_name="AlertGroup"
)
@ -110,11 +111,6 @@ class AlertGroupQuerySet(models.QuerySet):
raise
class UnarchivedAlertGroupQuerySet(models.QuerySet):
def filter(self, *args, **kwargs):
return super().filter(*args, **kwargs, is_archived=False)
class AlertGroupSlackRenderingMixin:
"""
Ideally this mixin should not exist. Instead of this instance of AlertGroupSlackRenderer should be created and used
@ -139,8 +135,7 @@ class AlertGroupSlackRenderingMixin:
class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.Model):
log_records: "RelatedManager['AlertGroupLogRecord']"
all_objects = AlertGroupQuerySet.as_manager()
unarchived_objects = UnarchivedAlertGroupQuerySet.as_manager()
objects = AlertGroupQuerySet.as_manager()
(
NEW,
@ -314,9 +309,9 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
(SEVERITY_LOW, "low"),
(SEVERITY_NONE, "none"),
)
manual_severity = models.IntegerField(choices=SEVERITY_CHOICES, default=SEVERITY_NONE)
manual_severity = deprecate_field(models.IntegerField(choices=SEVERITY_CHOICES, default=SEVERITY_NONE))
resolution_note_ts = models.CharField(max_length=100, null=True, default=None)
resolution_note_ts = deprecate_field(models.CharField(max_length=100, null=True, default=None))
root_alert_group = models.ForeignKey(
"alerts.AlertGroup",
@ -329,7 +324,7 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
# NOTE: we should probably migrate this field to models.UUIDField as it's ONLY ever being
# set to the result of uuid.uuid1
last_unique_unacknowledge_process_id: UUID | None = models.CharField(max_length=100, null=True, default=None)
is_archived = models.BooleanField(default=False)
is_archived = deprecate_field(models.BooleanField(default=False))
wiped_at = models.DateTimeField(null=True, default=None)
wiped_by = models.ForeignKey(
@ -413,9 +408,7 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
"is_open_for_grouping",
]
indexes = [
models.Index(
fields=["channel_id", "resolved", "acknowledged", "silenced", "root_alert_group_id", "is_archived"]
),
models.Index(fields=["channel_id", "resolved", "acknowledged", "silenced", "root_alert_group_id"]),
]
def __str__(self):
@ -498,6 +491,23 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
def happened_while_maintenance(self):
return self.root_alert_group is not None and self.root_alert_group.maintenance_uuid is not None
def get_paged_users(self) -> QuerySet[User]:
from apps.alerts.models import AlertGroupLogRecord
users_ids = set()
for log_record in self.log_records.filter(
type__in=(AlertGroupLogRecord.TYPE_DIRECT_PAGING, AlertGroupLogRecord.TYPE_UNPAGE_USER)
):
# filter paging events, track still active escalations
info = log_record.get_step_specific_info()
user_id = info.get("user") if info else None
if user_id is not None:
users_ids.add(
user_id
) if log_record.type == AlertGroupLogRecord.TYPE_DIRECT_PAGING else users_ids.discard(user_id)
return User.objects.filter(public_primary_key__in=users_ids)
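`get_paged_users` effectively replays the paging log in order: a user counts as paged only if their most recent event is a page rather than an unpage. The same replay, reduced to plain tuples:

```python
def active_paged_users(log_records: list[tuple[str, str]]) -> set[str]:
    """Replay paging events in order: add on a direct page,
    discard on an unpage, so only still-active pages remain."""
    user_ids: set[str] = set()
    for event_type, user_id in log_records:
        if event_type == "direct_paging":
            user_ids.add(user_id)
        else:  # "unpage_user"
            user_ids.discard(user_id)
    return user_ids


events = [
    ("direct_paging", "U1"),
    ("direct_paging", "U2"),
    ("unpage_user", "U1"),
]
print(sorted(active_paged_users(events)))  # -> ['U2']
```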
def _get_response_time(self):
"""Return response_time based on current alert group status."""
response_time = None
@ -1167,7 +1177,7 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
"is_escalation_finished",
"response_time",
]
AlertGroup.all_objects.bulk_update(alert_groups_to_acknowledge_list, fields=fields_to_update, batch_size=100)
AlertGroup.objects.bulk_update(alert_groups_to_acknowledge_list, fields=fields_to_update, batch_size=100)
for alert_group in alert_groups_to_unresolve_before_acknowledge_list:
alert_group.log_records.create(
@ -1208,9 +1218,7 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
# Find all dependent alert_groups to update them in one query
# convert qs to list to prevent changes by update
root_alert_group_pks = list(root_alert_groups_to_acknowledge.values_list("pk", flat=True))
dependent_alert_groups_to_acknowledge = AlertGroup.unarchived_objects.filter(
root_alert_group__pk__in=root_alert_group_pks
)
dependent_alert_groups_to_acknowledge = AlertGroup.objects.filter(root_alert_group__pk__in=root_alert_group_pks)
with transaction.atomic():
AlertGroup._bulk_acknowledge(user, root_alert_groups_to_acknowledge)
AlertGroup._bulk_acknowledge(user, dependent_alert_groups_to_acknowledge)
@ -1255,7 +1263,7 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
"is_escalation_finished",
"response_time",
]
AlertGroup.all_objects.bulk_update(alert_groups_to_resolve_list, fields=fields_to_update, batch_size=100)
AlertGroup.objects.bulk_update(alert_groups_to_resolve_list, fields=fields_to_update, batch_size=100)
for alert_group in alert_groups_to_unsilence_before_resolve_list:
alert_group.log_records.create(
@ -1297,7 +1305,7 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
)
# convert qs to list to prevent changes by update
root_alert_group_pks = list(root_alert_groups_to_resolve.values_list("pk", flat=True))
dependent_alert_groups_to_resolve = AlertGroup.all_objects.filter(root_alert_group__pk__in=root_alert_group_pks)
dependent_alert_groups_to_resolve = AlertGroup.objects.filter(root_alert_group__pk__in=root_alert_group_pks)
with transaction.atomic():
AlertGroup._bulk_resolve(user, root_alert_groups_to_resolve)
AlertGroup._bulk_resolve(user, dependent_alert_groups_to_resolve)
@ -1437,7 +1445,7 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
)
# convert qs to list to prevent changes by update
root_alert_group_pks = list(root_alert_groups_unack.values_list("pk", flat=True))
dependent_alert_groups_unack = AlertGroup.all_objects.filter(root_alert_group__pk__in=root_alert_group_pks)
dependent_alert_groups_unack = AlertGroup.objects.filter(root_alert_group__pk__in=root_alert_group_pks)
with transaction.atomic():
AlertGroup._bulk_restart_unack(user, root_alert_groups_unack)
AlertGroup._bulk_restart_unack(user, dependent_alert_groups_unack)
@ -1445,7 +1453,7 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
root_alert_groups_unresolve = alert_groups.filter(resolved=True, root_alert_group__isnull=True)
# convert qs to list to prevent changes by update
root_alert_group_pks = list(root_alert_groups_unresolve.values_list("pk", flat=True))
dependent_alert_groups_unresolve = AlertGroup.all_objects.filter(root_alert_group__pk__in=root_alert_group_pks)
dependent_alert_groups_unresolve = AlertGroup.objects.filter(root_alert_group__pk__in=root_alert_group_pks)
with transaction.atomic():
AlertGroup._bulk_restart_unresolve(user, root_alert_groups_unresolve)
AlertGroup._bulk_restart_unresolve(user, dependent_alert_groups_unresolve)
@ -1520,7 +1528,7 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
"is_escalation_finished",
"response_time",
]
AlertGroup.all_objects.bulk_update(alert_groups_to_silence_list, fields=fields_to_update, batch_size=100)
AlertGroup.objects.bulk_update(alert_groups_to_silence_list, fields=fields_to_update, batch_size=100)
# create log records
for alert_group in alert_groups_to_unresolve_before_silence_list:
@ -1707,12 +1715,6 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
]
)
def archive(self):
if self.root_alert_group:
self.root_alert_group = None
self.is_archived = True
self.save(update_fields=["is_archived", "root_alert_group"])
@property
def long_verbose_name(self):
title = str_or_backup(self.slack_templated_first_alert.title, DEFAULT_BACKUP_TITLE)
@ -1729,8 +1731,6 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
def get_resolve_text(self, mention_user=False):
if self.resolved_by == AlertGroup.SOURCE:
return "Resolved by alert source"
elif self.resolved_by == AlertGroup.ARCHIVED:
return "Resolved because alert has been archived"
elif self.resolved_by == AlertGroup.LAST_STEP:
return "Resolved automatically"
elif self.resolved_by == AlertGroup.WIPED:


@ -6,10 +6,10 @@ from django.apps import apps
from django.conf import settings
from django.core.validators import MinLengthValidator
from django.db import models
from ordered_model.models import OrderedModel
from common.jinja_templater import apply_jinja_template
from common.jinja_templater.apply_jinja_template import JinjaTemplateError, JinjaTemplateWarning
from common.ordered_model.ordered_model import OrderedModel
from common.public_primary_keys import generate_public_primary_key, increase_public_primary_key_length
logger = logging.getLogger(__name__)
@ -34,7 +34,7 @@ class ChannelFilter(OrderedModel):
Actually it's a Router based on terms now. Not a Filter.
"""
order_with_respect_to = ("alert_receive_channel", "is_default")
order_with_respect_to = ["alert_receive_channel_id", "is_default"]
public_primary_key = models.CharField(
max_length=20,
@ -82,11 +82,12 @@ class ChannelFilter(OrderedModel):
is_default = models.BooleanField(default=False)
class Meta:
ordering = (
"alert_receive_channel",
"is_default",
"order",
)
ordering = ["alert_receive_channel_id", "is_default", "order"]
constraints = [
models.UniqueConstraint(
fields=["alert_receive_channel_id", "is_default", "order"], name="unique_channel_filter_order"
)
]
def __str__(self):
return f"{self.pk}: {self.filtering_term or 'default'}"
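The `UniqueConstraint` over `(alert_receive_channel_id, is_default, order)` added above makes the database itself reject two routes in the same position, which is the fix for the duplicate-orders bug noted in the changelog. A minimal sketch of that idea using the stdlib `sqlite3` module (the table and columns here are illustrative, not OnCall's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE channel_filter (
        id INTEGER PRIMARY KEY,
        alert_receive_channel_id INTEGER NOT NULL,
        is_default INTEGER NOT NULL,
        "order" INTEGER NOT NULL,
        UNIQUE (alert_receive_channel_id, is_default, "order")
    )
    """
)

# Two routes on the same integration occupy distinct positions.
conn.execute("INSERT INTO channel_filter VALUES (1, 10, 0, 0)")
conn.execute("INSERT INTO channel_filter VALUES (2, 10, 0, 1)")

try:
    # Re-using order 0 for the same (channel, is_default) pair is rejected.
    conn.execute("INSERT INTO channel_filter VALUES (3, 10, 0, 0)")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
```

With the constraint in place, application-level races can at worst surface as an integrity error instead of silently corrupting route ordering.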


@@ -3,8 +3,8 @@ import datetime
from django.conf import settings
from django.core.validators import MinLengthValidator
from django.db import models
from ordered_model.models import OrderedModel
from common.ordered_model.ordered_model import OrderedModel
from common.public_primary_keys import generate_public_primary_key, increase_public_primary_key_length
@@ -23,7 +23,7 @@ def generate_public_primary_key_for_escalation_policy():
class EscalationPolicy(OrderedModel):
order_with_respect_to = "escalation_chain"
order_with_respect_to = ["escalation_chain_id"]
MAX_TIMES_REPEAT = 5
@@ -312,6 +312,12 @@ class EscalationPolicy(OrderedModel):
num_alerts_in_window = models.PositiveIntegerField(null=True, default=None)
num_minutes_in_window = models.PositiveIntegerField(null=True, default=None)
class Meta:
ordering = ["order"]
constraints = [
models.UniqueConstraint(fields=["escalation_chain_id", "order"], name="unique_escalation_policy_order")
]
def __str__(self):
return f"{self.pk}: {self.step_type_verbal}"


@@ -118,7 +118,7 @@ class MaintainableObject(models.Model):
self.maintenance_started_at = _self.maintenance_started_at
self.maintenance_author = _self.maintenance_author
if mode == AlertReceiveChannel.MAINTENANCE:
group = AlertGroup.all_objects.create(
group = AlertGroup.objects.create(
distinction=uuid4(),
web_title_cache=f"Maintenance of {verbal} for {maintenance_duration}",
maintenance_uuid=maintenance_uuid,


@@ -28,7 +28,7 @@ ScheduleNotifications = list[tuple[OnCallSchedule, bool]]
def _trigger_alert(
organization: Organization,
team: Team,
team: Team | None,
title: str,
message: str,
from_user: User,
@@ -133,7 +133,7 @@ def check_user_availability(user: User) -> list[dict[str, Any]]:
def direct_paging(
organization: Organization,
team: Team,
team: Team | None,
from_user: User,
title: str = None,
message: str = None,


@@ -21,9 +21,7 @@ def acknowledge_reminder_task(alert_group_pk, unacknowledge_process_id):
task_logger.info(f"Starting a reminder task for acknowledgement timeout with process id {unacknowledge_process_id}")
with transaction.atomic():
try:
alert_group = AlertGroup.unarchived_objects.filter(pk=alert_group_pk).select_for_update()[
0
] # Lock alert_group:
alert_group = AlertGroup.objects.filter(pk=alert_group_pk).select_for_update()[0] # Lock alert_group:
except IndexError:
return f"acknowledge_reminder_task: Alert group with pk {alert_group_pk} doesn't exist"
@@ -89,17 +87,12 @@ def unacknowledge_timeout_task(alert_group_pk, unacknowledge_process_id):
)
with transaction.atomic():
try:
alert_group = AlertGroup.all_objects.filter(pk=alert_group_pk).select_for_update()[0] # Lock alert_group:
alert_group = AlertGroup.objects.filter(pk=alert_group_pk).select_for_update()[0] # Lock alert_group:
except IndexError:
return f"unacknowledge_timeout_task: Alert group with pk {alert_group_pk} doesn't exist"
if unacknowledge_process_id == alert_group.last_unique_unacknowledge_process_id:
if (
not alert_group.resolved
and not alert_group.is_archived
and alert_group.acknowledged
and alert_group.is_root_alert_group
):
if not alert_group.resolved and alert_group.acknowledged and alert_group.is_root_alert_group:
if not alert_group.acknowledged_by_confirmed:
log_record = AlertGroupLogRecord(
type=AlertGroupLogRecord.TYPE_AUTO_UN_ACK,


@@ -27,7 +27,7 @@ def update_web_title_cache_for_alert_receive_channel(alert_receive_channel_pk):
countdown = 0
cursor = 0
queryset = AlertGroup.all_objects.filter(channel_id=alert_receive_channel_pk)
queryset = AlertGroup.objects.filter(channel_id=alert_receive_channel_pk)
ids = batch_ids(queryset, cursor)
while ids:
@@ -57,7 +57,7 @@ def update_web_title_cache(alert_receive_channel_pk, alert_group_pks):
task_logger.warning(f"AlertReceiveChannel {alert_receive_channel_pk} doesn't exist")
return
alert_groups = AlertGroup.all_objects.filter(pk__in=alert_group_pks).only("pk")
alert_groups = AlertGroup.objects.filter(pk__in=alert_group_pks).only("pk")
# get first alerts in 2 SQL queries
alerts_info = (
@@ -84,4 +84,4 @@ def update_web_title_cache(alert_receive_channel_pk, alert_group_pks):
alert_group.web_title_cache = web_title_cache
AlertGroup.all_objects.bulk_update(alert_groups, ["web_title_cache"])
AlertGroup.objects.bulk_update(alert_groups, ["web_title_cache"])


@@ -9,7 +9,7 @@ from common.custom_celery_tasks import shared_dedicated_queue_retry_task
def call_ack_url(ack_url, alert_group_pk, channel, http_method="GET"):
AlertGroup = apps.get_model("alerts", "AlertGroup")
SlackMessage = apps.get_model("slack", "SlackMessage")
alert_group = AlertGroup.all_objects.filter(pk=alert_group_pk)[0]
alert_group = AlertGroup.objects.filter(pk=alert_group_pk)[0]
is_successful, result_message = request_outgoing_webhook(ack_url, http_method)
if is_successful:


@@ -114,7 +114,7 @@ def check_escalation_finished_task() -> None:
now = timezone.now()
two_days_ago = now - datetime.timedelta(days=2)
alert_groups = AlertGroup.all_objects.using(get_random_readonly_database_key_if_present_otherwise_default()).filter(
alert_groups = AlertGroup.objects.using(get_random_readonly_database_key_if_present_otherwise_default()).filter(
~Q(silenced=True, silenced_until__isnull=True), # filter silenced forever alert_groups
# here we should query maintenance_uuid rather than joining on channel__integration
# and checking for something like ~Q(channel__integration=AlertReceiveChannel.INTEGRATION_MAINTENANCE)


@@ -34,7 +34,7 @@ def custom_button_result(custom_button_pk, alert_group_pk, user_pk=None, escalat
task_logger.info(f"Custom_button {custom_button_pk} for alert_group {alert_group_pk} does not exist")
return
alert_group = AlertGroup.all_objects.filter(pk=alert_group_pk)[0]
alert_group = AlertGroup.objects.filter(pk=alert_group_pk)[0]
escalation_policy = EscalationPolicy.objects.filter(pk=escalation_policy_pk).first()
task_logger.debug(
f"Start getting data for request in custom_button_result task for alert_group {alert_group_pk}, "


@@ -13,7 +13,7 @@ logger = get_task_logger(__name__)
def delete_alert_group(alert_group_pk, user_pk):
AlertGroup = apps.get_model("alerts", "AlertGroup")
User = apps.get_model("user_management", "User")
alert_group = AlertGroup.all_objects.filter(pk=alert_group_pk).first()
alert_group = AlertGroup.objects.filter(pk=alert_group_pk).first()
if not alert_group:
logger.debug("Alert group not found, skipping delete_alert_group")
return


@@ -24,7 +24,7 @@ def distribute_alert(alert_id):
send_alert_create_signal.apply_async((alert_id,))
# If it's the first alert, let's launch the escalation!
if alert.is_the_first_alert_in_group:
alert_group = AlertGroup.all_objects.filter(pk=alert.group_id).get()
alert_group = AlertGroup.objects.filter(pk=alert.group_id).get()
alert_group.start_escalation_if_needed(countdown=TASK_DELAY_SECONDS)
alert_group_escalation_snapshot_built.send(sender=distribute_alert, alert_group=alert_group)


@@ -24,7 +24,7 @@ def escalate_alert_group(alert_group_pk):
with transaction.atomic():
try:
alert_group = AlertGroup.all_objects.filter(pk=alert_group_pk).select_for_update()[0] # Lock alert_group:
alert_group = AlertGroup.objects.filter(pk=alert_group_pk).select_for_update()[0] # Lock alert_group:
except IndexError:
return f"Alert group with pk {alert_group_pk} doesn't exist"
@@ -49,12 +49,6 @@ def escalate_alert_group(alert_group_pk):
# TODO: consistent_is_escalation_finished remove this check for is_escalation_finished
return "Alert is dependent on another. No need to activate escalation."
if alert_group.is_archived:
# TODO: consistent_is_escalation_finished remove this check for is_escalation_finished
return "Escalation stopped. Reason: incident is archived. Escalation id: {}".format(
alert_group.active_escalation_id
)
if alert_group.wiped_at is not None:
# TODO: consistent_is_escalation_finished remove this check for is_escalation_finished
return "Alert is wiped. No need to activate escalation."


@@ -22,7 +22,7 @@ def invite_user_to_join_incident(invitation_pk):
except IndexError:
return f"invite_user_to_join_incident: Invitation with pk {invitation_pk} doesn't exist"
if not invitation.is_active or invitation.alert_group.is_archived:
if not invitation.is_active:
return None
if invitation.attempts_left <= 0 or invitation.alert_group.resolved:
invitation.is_active = False


@@ -47,7 +47,7 @@ def disable_maintenance(*args, **kwargs):
write_maintenance_insight_log(object_under_maintenance, user, MaintenanceEvent.FINISHED)
if object_under_maintenance.maintenance_mode == object_under_maintenance.MAINTENANCE:
mode_verbal = "Maintenance"
maintenance_incident = AlertGroup.all_objects.get(
maintenance_incident = AlertGroup.objects.get(
maintenance_uuid=object_under_maintenance.maintenance_uuid
)
transaction.on_commit(maintenance_incident.resolve_by_disable_maintenance)


@@ -16,7 +16,7 @@ def notify_all_task(alert_group_pk, escalation_policy_snapshot_order=None):
EscalationPolicy = apps.get_model("alerts", "EscalationPolicy")
AlertGroup = apps.get_model("alerts", "AlertGroup")
alert_group = AlertGroup.all_objects.get(pk=alert_group_pk)
alert_group = AlertGroup.objects.get(pk=alert_group_pk)
# check alert group state before notifying all users in the channel
if alert_group.resolved or alert_group.acknowledged or alert_group.silenced:


@@ -19,7 +19,7 @@ def notify_group_task(alert_group_pk, escalation_policy_snapshot_order=None):
AlertGroup = apps.get_model("alerts", "AlertGroup")
EscalationDeliveryStep = scenario_step.ScenarioStep.get_step("escalation_delivery", "EscalationDeliveryStep")
alert_group = AlertGroup.all_objects.get(pk=alert_group_pk)
alert_group = AlertGroup.objects.get(pk=alert_group_pk)
# check alert group state before notifying all users in the group
if alert_group.resolved or alert_group.acknowledged or alert_group.silenced:
task_logger.info(f"alert_group {alert_group.pk} was resolved, acked or silenced. No need to notify group")


@@ -36,7 +36,7 @@ def notify_user_task(
UserHasNotification = apps.get_model("alerts", "UserHasNotification")
try:
alert_group = AlertGroup.all_objects.get(pk=alert_group_pk)
alert_group = AlertGroup.objects.get(pk=alert_group_pk)
except AlertGroup.DoesNotExist:
return f"notify_user_task: alert_group {alert_group_pk} doesn't exist"
@@ -127,11 +127,10 @@ def notify_user_task(
if (
(alert_group.acknowledged and not notify_even_acknowledged)
or alert_group.resolved
or alert_group.is_archived
or alert_group.wiped_at
or alert_group.root_alert_group
):
return "Acknowledged, resolved, archived, attached or wiped."
return "Acknowledged, resolved, attached or wiped."
if alert_group.silenced and not notify_anyway:
task_logger.info(


@@ -15,7 +15,7 @@ def resolve_alert_group_by_source_if_needed(alert_group_pk):
AlertGroupForAlertManager = apps.get_model("alerts", "AlertGroupForAlertManager")
AlertForAlertManager = apps.get_model("alerts", "AlertForAlertManager")
alert_group = AlertGroupForAlertManager.all_objects.get(pk=alert_group_pk)
alert_group = AlertGroupForAlertManager.objects.get(pk=alert_group_pk)
if not resolve_alert_group_by_source_if_needed.request.id == alert_group.active_resolve_calculation_id:
return "Resolve calculation celery ID mismatch. Duplication or non-active. Active: {}".format(


@@ -9,5 +9,5 @@ from common.custom_celery_tasks import shared_dedicated_queue_retry_task
)
def resolve_by_last_step_task(alert_group_pk):
AlertGroup = apps.get_model("alerts", "AlertGroup")
alert_group = AlertGroup.all_objects.get(pk=alert_group_pk)
alert_group = AlertGroup.objects.get(pk=alert_group_pk)
alert_group.resolve_by_last_step()


@@ -14,7 +14,7 @@ def send_update_log_report_signal(log_record_pk=None, alert_group_pk=None):
AlertGroup = apps.get_model("alerts", "AlertGroup")
AlertReceiveChannel = apps.get_model("alerts", "AlertReceiveChannel")
alert_group = AlertGroup.all_objects.get(id=alert_group_pk)
alert_group = AlertGroup.objects.get(id=alert_group_pk)
if alert_group.is_maintenance_incident:
task_logger.debug(
f'send_update_log_report_signal: alert_group={alert_group_pk} msg="skip alert_group_update_log_report_signal, alert group is maintenance incident "'


@@ -1,9 +1,13 @@
import logging
from django.apps import apps
from django.conf import settings
from apps.alerts.signals import alert_group_update_resolution_note_signal
from common.custom_celery_tasks import shared_dedicated_queue_retry_task
logger = logging.getLogger(__name__)
@shared_dedicated_queue_retry_task(
autoretry_for=(Exception,), retry_backoff=True, max_retries=1 if settings.DEBUG else None
@ -13,9 +17,9 @@ def send_update_resolution_note_signal(alert_group_pk, resolution_note_pk):
AlertGroup = apps.get_model("alerts", "AlertGroup")
ResolutionNote = apps.get_model("alerts", "ResolutionNote")
alert_group = AlertGroup.unarchived_objects.filter(pk=alert_group_pk).first()
alert_group = AlertGroup.objects.filter(pk=alert_group_pk).first()
if alert_group is None:
print("Sent signal to update resolution note, but alert group is archived or does not exist")
logger.info("Sent signal to update resolution note, but alert group does not exist")
return
resolution_note = ResolutionNote.objects_with_deleted.get(pk=resolution_note_pk)


@@ -18,9 +18,7 @@ def unsilence_task(alert_group_pk):
task_logger.info(f"Start unsilence_task for alert_group {alert_group_pk}")
with transaction.atomic():
try:
alert_group = AlertGroup.unarchived_objects.filter(pk=alert_group_pk).select_for_update()[
0
] # Lock alert_group:
alert_group = AlertGroup.objects.filter(pk=alert_group_pk).select_for_update()[0] # Lock alert_group:
except IndexError:
task_logger.info(f"unsilence_task. alert_group {alert_group_pk} doesn't exist")
return


@@ -10,6 +10,6 @@ from common.custom_celery_tasks import shared_dedicated_queue_retry_task
def wipe(alert_group_pk, user_pk):
AlertGroup = apps.get_model("alerts", "AlertGroup")
User = apps.get_model("user_management", "User")
alert_group = AlertGroup.all_objects.filter(pk=alert_group_pk).first()
alert_group = AlertGroup.objects.filter(pk=alert_group_pk).first()
user = User.objects.filter(pk=user_pk).first()
alert_group.wipe_by_user(user)


@@ -120,7 +120,6 @@ def test_get_acknowledge_text(
"source,expected_text",
[
(AlertGroup.SOURCE, "Resolved by alert source"),
(AlertGroup.ARCHIVED, "Resolved because alert has been archived"),
(AlertGroup.LAST_STEP, "Resolved automatically"),
(AlertGroup.WIPED, "Resolved by wipe"),
(AlertGroup.DISABLE_MAINTENANCE, "Resolved by stop maintenance"),


@@ -213,5 +213,56 @@ def test_executed_escalation_policy_snapshots(escalation_snapshot_test_setup):
escalation_snapshot.escalation_policies_snapshots[0]
]
escalation_snapshot.last_active_escalation_policy_order = len(escalation_snapshot.escalation_policies_snapshots)
escalation_snapshot.last_active_escalation_policy_order = len(escalation_snapshot.escalation_policies_snapshots) - 1
assert escalation_snapshot.executed_escalation_policy_snapshots == escalation_snapshot.escalation_policies_snapshots
@pytest.mark.django_db
def test_escalation_snapshot_non_sequential_orders(
make_organization,
make_alert_receive_channel,
make_escalation_chain,
make_channel_filter,
make_escalation_policy,
make_alert_group,
):
organization = make_organization()
alert_receive_channel = make_alert_receive_channel(organization)
escalation_chain = make_escalation_chain(organization)
channel_filter = make_channel_filter(
alert_receive_channel,
escalation_chain=escalation_chain,
notification_backends={"BACKEND": {"channel_id": "abc123"}},
)
step_1 = make_escalation_policy(
escalation_chain=channel_filter.escalation_chain,
escalation_policy_step=EscalationPolicy.STEP_WAIT,
order=12,
)
step_2 = make_escalation_policy(
escalation_chain=channel_filter.escalation_chain,
escalation_policy_step=EscalationPolicy.STEP_WAIT,
order=42,
)
alert_group = make_alert_group(alert_receive_channel, channel_filter=channel_filter)
alert_group.raw_escalation_snapshot = alert_group.build_raw_escalation_snapshot()
alert_group.save()
escalation_snapshot = alert_group.escalation_snapshot
assert escalation_snapshot.last_active_escalation_policy_order is None
assert escalation_snapshot.next_active_escalation_policy_snapshot.id == step_1.id
escalation_snapshot.execute_actual_escalation_step()
assert escalation_snapshot.last_active_escalation_policy_order == 0
assert escalation_snapshot.next_active_escalation_policy_snapshot.id == step_2.id
escalation_snapshot.execute_actual_escalation_step()
assert escalation_snapshot.last_active_escalation_policy_order == 1
assert escalation_snapshot.next_active_escalation_policy_snapshot is None
policy_ids = [p.id for p in escalation_snapshot.executed_escalation_policy_snapshots]
assert policy_ids == [step_1.id, step_2.id]
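The test above pins down the new semantics: with policy `order` values of 12 and 42, `last_active_escalation_policy_order` advances 0 → 1, so it tracks the *position* of the last executed policy, not its raw `order` value, and gaps in the numbering no longer matter. A rough plain-Python sketch of that traversal (the names here are illustrative, not the real `EscalationSnapshot` API):

```python
# Policies sorted by their (possibly non-sequential) "order" values.
policies = [{"id": "step_1", "order": 12}, {"id": "step_2", "order": 42}]

def next_policy(policies, last_index):
    """Return the policy after position last_index, ignoring gaps in 'order'."""
    i = 0 if last_index is None else last_index + 1
    return policies[i] if i < len(policies) else None

last_index = None                      # nothing executed yet
first = next_policy(policies, last_index)
last_index = 0                         # executed step_1 -> positional index 0
second = next_policy(policies, last_index)
last_index = 1                         # executed step_2 -> positional index 1
done = next_policy(policies, last_index)
executed = policies[: last_index + 1]  # everything up to and including last_index
```

Indexing by position is what lets the unique-constraint fix keep arbitrary `order` values without breaking escalation traversal.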


@@ -102,7 +102,7 @@ def test_alert_attached_to_maintenance_incident_integration(
duration = AlertReceiveChannel.DURATION_ONE_HOUR.seconds
alert_receive_channel.start_maintenance(mode, duration, user)
maintenance_incident = AlertGroup.all_objects.get(maintenance_uuid=alert_receive_channel.maintenance_uuid)
maintenance_incident = AlertGroup.objects.get(maintenance_uuid=alert_receive_channel.maintenance_uuid)
alert = make_alert_with_custom_create_method(
title="test_title",
@@ -132,7 +132,7 @@ def test_stop_maintenance(
duration = AlertReceiveChannel.DURATION_ONE_HOUR.seconds
alert_receive_channel.start_maintenance(mode, duration, user)
maintenance_incident = AlertGroup.all_objects.get(maintenance_uuid=alert_receive_channel.maintenance_uuid)
maintenance_incident = AlertGroup.objects.get(maintenance_uuid=alert_receive_channel.maintenance_uuid)
alert = make_alert_with_custom_create_method(
title="test_title",
message="test_message",


@@ -139,7 +139,7 @@ def test_direct_paging_user(make_organization, make_user_for_organization):
)
# alert group created
alert_groups = AlertGroup.all_objects.all()
alert_groups = AlertGroup.objects.all()
assert alert_groups.count() == 1
ag = alert_groups.get()
alert = ag.alerts.get()
@@ -172,7 +172,7 @@ def test_direct_paging_schedule(
direct_paging(organization, None, from_user, schedules=[(schedule, False), (other_schedule, True)])
# alert group created
alert_groups = AlertGroup.all_objects.all()
alert_groups = AlertGroup.objects.all()
assert alert_groups.count() == 1
ag = alert_groups.get()
alert = ag.alerts.get()
@@ -203,7 +203,7 @@ def test_direct_paging_reusing_alert_group(
direct_paging(organization, None, from_user, users=[(user, False)], alert_group=alert_group)
# no new alert group is created
alert_groups = AlertGroup.all_objects.all()
alert_groups = AlertGroup.objects.all()
assert alert_groups.count() == 1
assert_log_record(alert_group, f"{from_user.username} paged user {user.username}")
# notifications sent
@@ -236,7 +236,7 @@ def test_direct_paging_custom_chain(
direct_paging(organization, None, from_user, escalation_chain=custom_chain)
# alert group created
alert_groups = AlertGroup.all_objects.all()
alert_groups = AlertGroup.objects.all()
assert alert_groups.count() == 1
ag = alert_groups.get()
channel_filter = ag.channel_filter_with_respect_to_escalation_snapshot
@@ -256,7 +256,7 @@ def test_direct_paging_returns_alert_group(make_organization, make_user_for_orga
alert_group = direct_paging(organization, None, from_user, title="Help!", message="Fire", users=[(user, False)])
# check alert group returned by direct paging is the same as the one created
assert alert_group == AlertGroup.all_objects.get()
assert alert_group == AlertGroup.objects.get()
@pytest.mark.django_db
@@ -305,7 +305,7 @@ def test_direct_paging_always_create_group(make_organization, make_user_for_orga
direct_paging(organization, None, from_user, title="Help!", users=[(user, False)])
# alert group created
alert_groups = AlertGroup.all_objects.all()
alert_groups = AlertGroup.objects.all()
assert alert_groups.count() == 2
# notifications sent
assert notify_task.apply_async.called_with((user.pk, alert_groups[0].pk), {"important": False})


@@ -6,8 +6,7 @@ from rest_framework import serializers
from apps.alerts.incident_appearance.renderers.classic_markdown_renderer import AlertGroupClassicMarkdownRenderer
from apps.alerts.incident_appearance.renderers.web_renderer import AlertGroupWebRenderer
from apps.alerts.models import AlertGroup, AlertGroupLogRecord
from apps.user_management.models import User
from apps.alerts.models import AlertGroup
from common.api_helpers.custom_fields import TeamPrimaryKeyRelatedField
from common.api_helpers.mixins import EagerLoadingMixin
@@ -216,17 +215,4 @@ class AlertGroupSerializer(AlertGroupListSerializer):
return AlertSerializer(alerts, many=True).data
def get_paged_users(self, obj):
users_ids = set()
for log_record in obj.log_records.filter(
type__in=(AlertGroupLogRecord.TYPE_DIRECT_PAGING, AlertGroupLogRecord.TYPE_UNPAGE_USER)
):
# filter paging events, track still active escalations
info = log_record.get_step_specific_info()
user_id = info.get("user") if info else None
if user_id is not None:
users_ids.add(
user_id
) if log_record.type == AlertGroupLogRecord.TYPE_DIRECT_PAGING else users_ids.discard(user_id)
users = [u.short() for u in User.objects.filter(public_primary_key__in=users_ids)]
return users
return [u.short() for u in obj.get_paged_users()]
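This hunk moves the inline log-record scan into a model method, `get_paged_users()`. The bookkeeping being relocated — add a user on a direct-paging event, drop them again on an unpage event — can be sketched in plain Python (the event names and tuple shape below are illustrative stand-ins for the log-record types):

```python
PAGE, UNPAGE = "direct_paging", "unpage_user"

def paged_user_ids(log_records):
    """Replay page/unpage events in order; only still-paged users remain."""
    users = set()
    for record_type, user_id in log_records:
        if user_id is None:
            continue  # some records carry no user in their step-specific info
        if record_type == PAGE:
            users.add(user_id)
        elif record_type == UNPAGE:
            users.discard(user_id)
    return users

events = [(PAGE, "u1"), (PAGE, "u2"), (UNPAGE, "u1")]
result = paged_user_ids(events)  # only "u2" is still paged
```

Keeping this replay logic on the model rather than in the serializer lets other call sites (e.g. the Slack integration) reuse the same definition of "currently paged".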


@@ -7,12 +7,12 @@ from apps.base.messaging import get_messaging_backend_from_id
from apps.telegram.models import TelegramToOrganizationConnector
from common.api_helpers.custom_fields import OrganizationFilteredPrimaryKeyRelatedField
from common.api_helpers.exceptions import BadRequest
from common.api_helpers.mixins import EagerLoadingMixin, OrderedModelSerializerMixin
from common.api_helpers.mixins import EagerLoadingMixin
from common.jinja_templater.apply_jinja_template import JinjaTemplateError
from common.utils import is_regex_valid
class ChannelFilterSerializer(OrderedModelSerializerMixin, EagerLoadingMixin, serializers.ModelSerializer):
class ChannelFilterSerializer(EagerLoadingMixin, serializers.ModelSerializer):
id = serializers.CharField(read_only=True, source="public_primary_key")
alert_receive_channel = OrganizationFilteredPrimaryKeyRelatedField(queryset=AlertReceiveChannel.objects)
escalation_chain = OrganizationFilteredPrimaryKeyRelatedField(
@@ -27,7 +27,6 @@ class ChannelFilterSerializer(OrderedModelSerializerMixin, EagerLoadingMixin, se
queryset=TelegramToOrganizationConnector.objects, filter_field="organization", allow_null=True, required=False
)
telegram_channel_details = serializers.SerializerMethodField()
order = serializers.IntegerField(required=False)
filtering_term_as_jinja2 = serializers.SerializerMethodField()
filtering_term = serializers.CharField(required=False, allow_null=True, allow_blank=True)
@@ -37,7 +36,6 @@ class ChannelFilterSerializer(OrderedModelSerializerMixin, EagerLoadingMixin, se
model = ChannelFilter
fields = [
"id",
"order",
"alert_receive_channel",
"escalation_chain",
"slack_channel",
@@ -148,7 +146,6 @@ class ChannelFilterCreateSerializer(ChannelFilterSerializer):
model = ChannelFilter
fields = [
"id",
"order",
"alert_receive_channel",
"escalation_chain",
"slack_channel",
@@ -181,14 +178,8 @@ class ChannelFilterCreateSerializer(ChannelFilterSerializer):
return result
def create(self, validated_data):
order = validated_data.pop("order", None)
if order is not None:
alert_receive_channel_id = validated_data.get("alert_receive_channel")
self._validate_order(order, {"alert_receive_channel_id": alert_receive_channel_id, "is_default": False})
instance = super().create(validated_data)
self._change_position(order, instance)
else:
instance = super().create(validated_data)
instance = super().create(validated_data)
instance.to_index(0) # the new route should be the first one
return instance
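The simplified `create` above defers all positioning to `instance.to_index(0)`, part of the new `common.ordered_model` helper, which moves the fresh route to the top and shifts the rest down. A plain-Python sketch of that insert-at-position-and-renumber semantics (illustrative only, not the actual `OrderedModel` implementation):

```python
def to_index(items, item, index):
    """Move item to position index and renumber 'order' sequentially."""
    items = [i for i in items if i is not item]
    items.insert(index, item)
    for order, i in enumerate(items):
        i["order"] = order
    return items

default = {"name": "default", "order": 0}
route_a = {"name": "a", "order": 1}
new_route = {"name": "b", "order": 2}   # appended at the end on create

# Move the newly created route to the front, as create() now does.
routes = to_index([default, route_a, new_route], new_route, 0)
```

Renumbering the whole list on every move keeps the `order` values dense and unique, which is what the new `UniqueConstraint` on routes expects.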
@@ -200,18 +191,8 @@ class ChannelFilterUpdateSerializer(ChannelFilterCreateSerializer):
extra_kwargs = {"filtering_term": {"required": False}}
def update(self, instance, validated_data):
order = validated_data.get("order")
filtering_term = validated_data.get("filtering_term")
if instance.is_default and order is not None and instance.order != order:
raise BadRequest(detail="The order of default channel filter cannot be changed")
if instance.is_default and filtering_term is not None:
raise BadRequest(detail="Filtering term of default channel filter cannot be changed")
if order is not None:
self._validate_order(
order, {"alert_receive_channel_id": instance.alert_receive_channel_id, "is_default": False}
)
self._change_position(order, instance)
return super().update(instance, validated_data)


@@ -85,7 +85,6 @@ class EscalationPolicySerializer(EagerLoadingMixin, serializers.ModelSerializer)
model = EscalationPolicy
fields = [
"id",
"order",
"step",
"wait_delay",
"escalation_chain",
@@ -101,7 +100,6 @@ class EscalationPolicySerializer(EagerLoadingMixin, serializers.ModelSerializer)
"notify_to_group",
"important",
]
read_only_fields = ["order"]
SELECT_RELATED = [
"escalation_chain",
@@ -199,7 +197,6 @@ class EscalationPolicySerializer(EagerLoadingMixin, serializers.ModelSerializer)
class EscalationPolicyCreateSerializer(EscalationPolicySerializer):
class Meta(EscalationPolicySerializer.Meta):
read_only_fields = ["order"]
extra_kwargs = {"escalation_chain": {"required": True, "allow_null": False}}
def create(self, validated_data):
@@ -212,7 +209,7 @@ class EscalationPolicyUpdateSerializer(EscalationPolicySerializer):
escalation_chain = serializers.CharField(read_only=True, source="escalation_chain.public_primary_key")
class Meta(EscalationPolicySerializer.Meta):
read_only_fields = ["order", "escalation_chain"]
read_only_fields = ["escalation_chain"]
def update(self, instance, validated_data):
step = validated_data.get("step", instance.step)


@@ -70,7 +70,7 @@ class DirectPagingSerializer(serializers.Serializer):
if alert_group_id:
try:
attrs["alert_group"] = AlertGroup.unarchived_objects.get(
attrs["alert_group"] = AlertGroup.objects.get(
public_primary_key=alert_group_id, channel__organization=organization
)
except ObjectDoesNotExist:


@@ -11,7 +11,7 @@ class ResolutionNoteSerializer(EagerLoadingMixin, serializers.ModelSerializer):
id = serializers.CharField(read_only=True, source="public_primary_key")
alert_group = OrganizationFilteredPrimaryKeyRelatedField(
filter_field="channel__organization",
queryset=AlertGroup.unarchived_objects,
queryset=AlertGroup.objects,
)
text = serializers.CharField(allow_null=False, source="message_text")
author = FastUserSerializer(read_only=True)


@@ -33,7 +33,11 @@ class UserNotificationPolicyBaseSerializer(EagerLoadingMixin, serializers.ModelS
class Meta:
model = UserNotificationPolicy
fields = ["id", "step", "order", "notify_by", "wait_delay", "important", "user"]
fields = ["id", "step", "notify_by", "wait_delay", "important", "user"]
# Field "order" is not consumed by the plugin frontend, but is used by the mobile app
# TODO: remove this field when the mobile app is updated
fields += ["order"]
read_only_fields = ["order"]
def to_internal_value(self, data):
@@ -100,7 +104,7 @@ class UserNotificationPolicyUpdateSerializer(UserNotificationPolicyBaseSerialize
)
class Meta(UserNotificationPolicyBaseSerializer.Meta):
read_only_fields = ["order", "user", "important"]
read_only_fields = UserNotificationPolicyBaseSerializer.Meta.read_only_fields + ["user", "important"]
def update(self, instance, validated_data):
self_or_admin = instance.user.self_or_admin(


@@ -0,0 +1,70 @@
from unittest.mock import patch
import pytest
from django.contrib.auth import REDIRECT_FIELD_NAME
from django.http import HttpResponse
from django.urls import reverse
from rest_framework import status
from rest_framework.test import APIClient
from apps.auth_token.constants import SLACK_AUTH_TOKEN_NAME
@pytest.mark.django_db
@pytest.mark.parametrize(
"backend_name,expected_url",
(
("slack-login", "/a/grafana-oncall-app/users/me"),
("slack-install-free", "/a/grafana-oncall-app/chat-ops"),
),
)
def test_complete_slack_auth_redirect_ok(
make_organization,
make_user_for_organization,
make_slack_token_for_user,
backend_name,
expected_url,
):
organization = make_organization()
admin = make_user_for_organization(organization)
_, slack_token = make_slack_token_for_user(admin)
client = APIClient()
url = (
reverse("api-internal:complete-slack-auth", kwargs={"backend": backend_name})
+ f"?{SLACK_AUTH_TOKEN_NAME}={slack_token}"
)
with patch("apps.api.views.auth.do_complete") as mock_do_complete:
mock_do_complete.return_value = None
response = client.get(url)
assert response.status_code == status.HTTP_302_FOUND
assert response.url == expected_url
@pytest.mark.django_db
def test_complete_slack_auth_redirect_error(
make_organization,
make_user_for_organization,
make_slack_token_for_user,
):
organization = make_organization()
admin = make_user_for_organization(organization)
_, slack_token = make_slack_token_for_user(admin)
client = APIClient()
url = (
reverse("api-internal:complete-slack-auth", kwargs={"backend": "slack-login"})
+ f"?{SLACK_AUTH_TOKEN_NAME}={slack_token}"
)
def _custom_do_complete(backend, *args, **kwargs):
backend.strategy.session[REDIRECT_FIELD_NAME] = "some-url"
return HttpResponse(status=status.HTTP_400_BAD_REQUEST)
with patch("apps.api.views.auth.do_complete", side_effect=_custom_do_complete):
response = client.get(url)
assert response.status_code == status.HTTP_302_FOUND
assert response.url == "some-url"


@@ -6,6 +6,7 @@ from rest_framework import status
from rest_framework.response import Response
from rest_framework.test import APIClient
from apps.alerts.models import ChannelFilter
from apps.api.permissions import LegacyAccessControlRole
@@ -220,7 +221,7 @@
@pytest.mark.django_db
def test_channel_filter_create_with_order(
def test_channel_filter_create_order(
make_organization_and_user_with_plugin_token,
make_alert_receive_channel,
make_escalation_chain,
@@ -230,7 +231,6 @@ def test_channel_filter_create_with_order(
organization, user, token = make_organization_and_user_with_plugin_token()
alert_receive_channel = make_alert_receive_channel(organization)
make_escalation_chain(organization)
# create default channel filter
make_channel_filter(alert_receive_channel, is_default=True)
channel_filter = make_channel_filter(alert_receive_channel, filtering_term="a", is_default=False)
client = APIClient()
@@ -239,46 +239,18 @@
data_for_creation = {
"alert_receive_channel": alert_receive_channel.public_primary_key,
"filtering_term": "b",
"order": 0,
}
response = client.post(url, data=data_for_creation, format="json", **make_user_auth_headers(user, token))
channel_filter.refresh_from_db()
assert response.status_code == status.HTTP_201_CREATED
assert response.json()["order"] == 0
# check that orders are correct
assert ChannelFilter.objects.get(public_primary_key=response.json()["id"]).order == 0
assert channel_filter.order == 1
@pytest.mark.django_db
def test_channel_filter_create_without_order(
make_organization_and_user_with_plugin_token,
make_alert_receive_channel,
make_escalation_chain,
make_channel_filter,
make_user_auth_headers,
):
organization, user, token = make_organization_and_user_with_plugin_token()
alert_receive_channel = make_alert_receive_channel(organization)
make_escalation_chain(organization)
make_channel_filter(alert_receive_channel, is_default=True)
channel_filter = make_channel_filter(alert_receive_channel, filtering_term="a", is_default=False)
client = APIClient()
url = reverse("api-internal:channel_filter-list")
data_for_creation = {
"alert_receive_channel": alert_receive_channel.public_primary_key,
"filtering_term": "b",
}
response = client.post(url, data=data_for_creation, format="json", **make_user_auth_headers(user, token))
channel_filter.refresh_from_db()
assert response.status_code == status.HTTP_201_CREATED
assert response.json()["order"] == 1
assert channel_filter.order == 0
@pytest.mark.django_db
def test_move_to_position(
make_organization_and_user_with_plugin_token,
@@ -297,7 +269,7 @@ def test_move_to_position(
url = reverse(
"api-internal:channel_filter-move-to-position", kwargs={"pk": first_channel_filter.public_primary_key}
)
url += f"?position=2"
url += f"?position=1"
response = client.put(url, **make_user_auth_headers(user, token))
assert response.status_code == status.HTTP_200_OK
@@ -307,6 +279,30 @@ def test_move_to_position(
assert second_channel_filter.order == 1
@pytest.mark.django_db
def test_move_to_position_invalid_index(
make_organization_and_user_with_plugin_token,
make_alert_receive_channel,
make_channel_filter,
make_user_auth_headers,
):
organization, user, token = make_organization_and_user_with_plugin_token()
alert_receive_channel = make_alert_receive_channel(organization)
# create default channel filter
make_channel_filter(alert_receive_channel, is_default=True, order=0)
first_channel_filter = make_channel_filter(alert_receive_channel, filtering_term="a", is_default=False, order=1)
make_channel_filter(alert_receive_channel, filtering_term="b", is_default=False, order=2)
client = APIClient()
url = reverse(
"api-internal:channel_filter-move-to-position", kwargs={"pk": first_channel_filter.public_primary_key}
)
url += f"?position=2"
response = client.put(url, **make_user_auth_headers(user, token))
assert response.status_code == status.HTTP_400_BAD_REQUEST
@pytest.mark.django_db
def test_move_to_position_cant_move_default(
make_organization_and_user_with_plugin_token,
@@ -331,42 +327,7 @@ def test_move_to_position_cant_move_default(
@pytest.mark.django_db
def test_channel_filter_update_with_order(
make_organization_and_user_with_plugin_token,
make_alert_receive_channel,
make_channel_filter,
make_user_auth_headers,
):
organization, user, token = make_organization_and_user_with_plugin_token()
alert_receive_channel = make_alert_receive_channel(organization)
# create default channel filter
make_channel_filter(alert_receive_channel, is_default=True)
first_channel_filter = make_channel_filter(alert_receive_channel, filtering_term="a", is_default=False)
second_channel_filter = make_channel_filter(alert_receive_channel, filtering_term="b", is_default=False)
client = APIClient()
url = reverse("api-internal:channel_filter-detail", kwargs={"pk": first_channel_filter.public_primary_key})
data_for_update = {
"id": first_channel_filter.public_primary_key,
"alert_receive_channel": alert_receive_channel.public_primary_key,
"order": 1,
"filtering_term": first_channel_filter.filtering_term,
}
response = client.put(url, data=data_for_update, format="json", **make_user_auth_headers(user, token))
first_channel_filter.refresh_from_db()
second_channel_filter.refresh_from_db()
assert response.status_code == status.HTTP_200_OK
assert response.json()["order"] == 1
assert first_channel_filter.order == 1
assert second_channel_filter.order == 0
@pytest.mark.django_db
def test_channel_filter_update_without_order(
def test_channel_filter_update(
make_organization_and_user_with_plugin_token,
make_alert_receive_channel,
make_channel_filter,
@@ -394,7 +355,6 @@ def test_channel_filter_update_without_order(
second_channel_filter.refresh_from_db()
assert response.status_code == status.HTTP_200_OK
assert response.json()["order"] == 0
assert first_channel_filter.order == 0
assert second_channel_filter.order == 1
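The move-to-position tests above (including the new invalid-index cases returning HTTP 400) boil down to list reordering with bounds checking. A toy sketch of that semantics with a plain Python list, assuming zero-based positions — this is a mental model, not the real `OrderedModel` code:

```python
def move_to_position(items, pk, position):
    """Toy sketch of 'move to position' semantics (assumed, not the real implementation).

    items: ids sorted by their current 'order'; position is zero-based.
    A position outside the current range raises IndexError, mirroring the
    HTTP 400 the move_to_position endpoints return for an invalid index.
    """
    if position < 0 or position >= len(items):
        raise IndexError("invalid position")
    items = list(items)
    items.remove(pk)           # take the item out of its current slot
    items.insert(position, pk) # re-insert it, shifting neighbors
    return items
```

For example, moving filter `"a"` to position 2 in `["default", "a", "b"]` yields `["default", "b", "a"]`, while asking for position 2 in a two-element list raises.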


@@ -53,7 +53,7 @@ def test_create_escalation_policy(escalation_policy_internal_api_setup, make_use
response = client.post(url, data, format="json", **make_user_auth_headers(user, token))
assert response.status_code == status.HTTP_201_CREATED
assert response.data["order"] == max_order + 1
assert EscalationPolicy.objects.get(public_primary_key=response.data["id"]).order == max_order + 1
@pytest.mark.django_db
@@ -77,9 +77,9 @@ def test_create_escalation_policy_webhook(
response = client.post(url, data, format="json", **make_user_auth_headers(user, token))
assert response.status_code == status.HTTP_201_CREATED
assert response.data["order"] == max_order + 1
assert response.data["custom_webhook"] == webhook.public_primary_key
escalation_policy = EscalationPolicy.objects.get(public_primary_key=response.data["id"])
assert escalation_policy.order == max_order + 1
assert escalation_policy.custom_webhook == webhook
@@ -107,7 +107,7 @@ def test_move_to_position(escalation_policy_internal_api_setup, make_user_auth_h
token, _, escalation_policy, user, _ = escalation_policy_internal_api_setup
client = APIClient()
position_to_move = 1
position_to_move = 0
url = reverse(
"api-internal:escalation_policy-move-to-position", kwargs={"pk": escalation_policy.public_primary_key}
)
@@ -119,6 +119,22 @@ def test_move_to_position(escalation_policy_internal_api_setup, make_user_auth_h
assert escalation_policy.order == position_to_move
@pytest.mark.django_db
def test_move_to_position_invalid_index(escalation_policy_internal_api_setup, make_user_auth_headers):
token, _, escalation_policy, user, _ = escalation_policy_internal_api_setup
client = APIClient()
position_to_move = 1
url = reverse(
"api-internal:escalation_policy-move-to-position", kwargs={"pk": escalation_policy.public_primary_key}
)
response = client.put(
f"{url}?position={position_to_move}", content_type="application/json", **make_user_auth_headers(user, token)
)
escalation_policy.refresh_from_db()
assert response.status_code == status.HTTP_400_BAD_REQUEST
@pytest.mark.django_db
@pytest.mark.parametrize(
"role,expected_status",
@@ -736,7 +752,6 @@ def test_escalation_policy_switch_importance(
data_for_update = {
"id": escalation_policy.public_primary_key,
"step": escalation_policy.step,
"order": escalation_policy.order,
"escalation_chain": escalation_chain.public_primary_key,
"notify_to_users_queue": [],
"from_time": None,
@@ -792,7 +807,6 @@ def test_escalation_policy_filter_by_user(
expected_payload = [
{
"id": escalation_policy_with_one_user.public_primary_key,
"order": 0,
"step": 13,
"wait_delay": None,
"escalation_chain": escalation_chain.public_primary_key,
@@ -810,7 +824,6 @@ def test_escalation_policy_filter_by_user(
},
{
"id": escalation_policy_with_two_users.public_primary_key,
"order": 1,
"step": 13,
"wait_delay": None,
"escalation_chain": escalation_chain.public_primary_key,
@@ -873,7 +886,6 @@ def test_escalation_policy_filter_by_slack_channel(
expected_payload = [
{
"id": escalation_policy_from_alert_receive_channel_with_slack_channel.public_primary_key,
"order": 0,
"step": 0,
"wait_delay": None,
"escalation_chain": escalation_chain.public_primary_key,


@@ -242,7 +242,7 @@ class AlertGroupTeamFilteringMixin(TeamFilteringMixin):
organization_id=self.request.auth.organization.id,
).values_list("id", flat=True)
)
queryset = AlertGroup.unarchived_objects.filter(
queryset = AlertGroup.objects.filter(
channel__in=alert_receive_channels_ids,
).only("public_primary_key")
@@ -331,7 +331,7 @@ class AlertGroupView(
alert_receive_channels_ids = list(alert_receive_channels_qs.values_list("id", flat=True))
queryset = AlertGroup.unarchived_objects.filter(
queryset = AlertGroup.objects.filter(
channel__in=alert_receive_channels_ids,
)
@@ -362,7 +362,7 @@ class AlertGroupView(
# enrich alert groups with select_related and prefetch_related
alert_group_pks = [alert_group.pk for alert_group in alert_groups]
queryset = AlertGroup.all_objects.filter(pk__in=alert_group_pks).order_by("-pk")
queryset = AlertGroup.objects.filter(pk__in=alert_group_pks).order_by("-pk")
queryset = self.get_serializer_class().setup_eager_loading(queryset)
alert_groups = list(queryset)
@@ -688,7 +688,7 @@ class AlertGroupView(
raise BadRequest(detail="Please specify a delay for silence")
kwargs["silence_delay"] = delay
alert_groups = AlertGroup.unarchived_objects.filter(
alert_groups = AlertGroup.objects.filter(
channel__organization=self.request.auth.organization, public_primary_key__in=alert_group_public_pks
)


@@ -3,7 +3,7 @@ from urllib.parse import urljoin
from django.conf import settings
from django.contrib.auth import REDIRECT_FIELD_NAME
from django.http import HttpResponseRedirect
from django.http import HttpResponse, HttpResponseRedirect
from django.views.decorators.cache import never_cache
from django.views.decorators.csrf import csrf_exempt
from rest_framework.decorators import api_view, authentication_classes
@@ -49,7 +49,7 @@ def overridden_complete_slack_auth(request, backend, *args, **kwargs):
# if this was a user login/linking account, redirect to profile
redirect_to = "/a/grafana-oncall-app/users/me"
do_complete(
result = do_complete(
request.backend,
_do_login,
user=request.user,
@@ -59,6 +59,13 @@ def overridden_complete_slack_auth(request, backend, *args, **kwargs):
**kwargs,
)
# We build the frontend url using org url since multiple stacks could be connected to one backend.
return_to = urljoin(request.user.organization.grafana_url, redirect_to)
# handle potential errors in the strategy pipeline
return_to = None
if isinstance(result, HttpResponse):
# check if there was a redirect set in the session
return_to = request.backend.strategy.session.get(REDIRECT_FIELD_NAME)
if return_to is None:
# We build the frontend url using org url since multiple stacks could be connected to one backend.
return_to = urljoin(request.user.organization.grafana_url, redirect_to)
return HttpResponseRedirect(return_to)
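The redirect resolution added above can be summarized as a small pure function. This is a sketch under the assumption that a pipeline `HttpResponse` result and the session's `REDIRECT_FIELD_NAME` lookup behave as in the view; `pipeline_redirected` and `session_redirect` are hypothetical parameters standing in for those two checks:

```python
from urllib.parse import urljoin

def resolve_redirect(pipeline_redirected, session_redirect, grafana_url,
                     default_path="/a/grafana-oncall-app/users/me"):
    """Sketch of the redirect logic in overridden_complete_slack_auth.

    If the social-auth pipeline returned an HttpResponse (modeled here as
    pipeline_redirected=True) and a redirect was stored in the session,
    prefer it; otherwise build the org-scoped profile URL, since multiple
    stacks may be connected to one backend.
    """
    if pipeline_redirected and session_redirect is not None:
        return session_redirect
    return urljoin(grafana_url, default_path)
```

For example, with no session redirect set, `resolve_redirect(False, None, "https://org.grafana.net")` falls back to the org-scoped profile URL.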


@@ -3,7 +3,6 @@ from rest_framework import status
from rest_framework.decorators import action
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from rest_framework.viewsets import ModelViewSet
from apps.alerts.models import ChannelFilter
from apps.api.permissions import RBACPermission
@@ -21,12 +20,16 @@ from common.api_helpers.mixins import (
TeamFilteringMixin,
UpdateSerializerMixin,
)
from common.api_helpers.serializers import get_move_to_position_param
from common.insight_log import EntityEvent, write_resource_insight_log
from common.ordered_model.viewset import OrderedModelViewSet
class ChannelFilterView(
TeamFilteringMixin, PublicPrimaryKeyMixin, CreateSerializerMixin, UpdateSerializerMixin, ModelViewSet
TeamFilteringMixin,
PublicPrimaryKeyMixin,
CreateSerializerMixin,
UpdateSerializerMixin,
OrderedModelViewSet,
):
authentication_classes = (PluginAuthentication,)
permission_classes = (IsAuthenticated, RBACPermission)
@@ -109,24 +112,10 @@ class ChannelFilterView(
@action(detail=True, methods=["put"])
def move_to_position(self, request, pk):
instance = self.get_object()
position = get_move_to_position_param(request)
if instance.is_default:
raise BadRequest(detail="Unable to change position for default filter")
prev_state = instance.insight_logs_serialized
instance.to(position)
new_state = instance.insight_logs_serialized
write_resource_insight_log(
instance=instance,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
return Response(status=status.HTTP_200_OK)
return super().move_to_position(request, pk)
@action(detail=True, methods=["post"])
def convert_from_regex_to_jinja2(self, request, pk):


@@ -1,10 +1,8 @@
from django.conf import settings
from django.db.models import Q
from rest_framework import status
from rest_framework.decorators import action
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from rest_framework.viewsets import ModelViewSet
from apps.alerts.models import EscalationPolicy
from apps.api.permissions import RBACPermission
@@ -20,12 +18,16 @@ from common.api_helpers.mixins import (
TeamFilteringMixin,
UpdateSerializerMixin,
)
from common.api_helpers.serializers import get_move_to_position_param
from common.insight_log import EntityEvent, write_resource_insight_log
from common.ordered_model.viewset import OrderedModelViewSet
class EscalationPolicyView(
TeamFilteringMixin, PublicPrimaryKeyMixin, CreateSerializerMixin, UpdateSerializerMixin, ModelViewSet
TeamFilteringMixin,
PublicPrimaryKeyMixin,
CreateSerializerMixin,
UpdateSerializerMixin,
OrderedModelViewSet,
):
authentication_classes = (PluginAuthentication,)
permission_classes = (IsAuthenticated, RBACPermission)
@@ -108,25 +110,6 @@ class EscalationPolicyView(
)
instance.delete()
@action(detail=True, methods=["put"])
def move_to_position(self, request, pk):
instance = self.get_object()
position = get_move_to_position_param(request)
prev_state = instance.insight_logs_serialized
instance.to(position)
new_state = instance.insight_logs_serialized
write_resource_insight_log(
instance=instance,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
return Response(status=status.HTTP_200_OK)
@action(detail=False, methods=["get"])
def escalation_options(self, request):
choices = []


@@ -35,7 +35,7 @@ class RouteRegexDebuggerView(APIView):
MAX_INCIDENTS_TO_SHOW = 5
INCIDENTS_TO_LOOKUP = 100
for ag in (
AlertGroup.unarchived_objects.prefetch_related(Prefetch("alerts", queryset=Alert.objects.order_by("pk")))
AlertGroup.objects.prefetch_related(Prefetch("alerts", queryset=Alert.objects.order_by("pk")))
.filter(channel__organization=organization, channel__team=team)
.order_by("-started_at")[:INCIDENTS_TO_LOOKUP]
):


@@ -1,10 +1,8 @@
from django.conf import settings
from django.http import Http404
from rest_framework import status
from rest_framework.decorators import action
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from rest_framework.viewsets import ModelViewSet
from apps.api.permissions import IsOwnerOrHasRBACPermissions, RBACPermission
from apps.api.serializers.user_notification_policy import (
@@ -19,12 +17,12 @@ from apps.mobile_app.auth import MobileAppAuthTokenAuthentication
from apps.user_management.models import User
from common.api_helpers.exceptions import BadRequest
from common.api_helpers.mixins import UpdateSerializerMixin
from common.api_helpers.serializers import get_move_to_position_param
from common.exceptions import UserNotificationPolicyCouldNotBeDeleted
from common.insight_log import EntityEvent, write_resource_insight_log
from common.ordered_model.viewset import OrderedModelViewSet
class UserNotificationPolicyView(UpdateSerializerMixin, ModelViewSet):
class UserNotificationPolicyView(UpdateSerializerMixin, OrderedModelViewSet):
authentication_classes = (
MobileAppAuthTokenAuthentication,
PluginAuthentication,
@@ -78,9 +76,7 @@ class UserNotificationPolicyView(UpdateSerializerMixin, ModelViewSet):
queryset = self.model.objects.filter(user=target_user, important=important)
queryset = self.serializer_class.setup_eager_loading(queryset)
return queryset.order_by("order")
return self.serializer_class.setup_eager_loading(queryset)
def get_object(self):
# we need an overridden get_object, because the original one calls get_queryset first and raises 404 trying to access
@@ -138,18 +134,6 @@ class UserNotificationPolicyView(UpdateSerializerMixin, ModelViewSet):
new_state=new_state,
)
@action(detail=True, methods=["put"])
def move_to_position(self, request, pk):
instance = self.get_object()
position = get_move_to_position_param(request)
try:
instance.to_index(position)
except IndexError:
raise BadRequest(detail="Invalid position")
return Response(status=status.HTTP_200_OK)
@action(detail=False, methods=["get"])
def delay_options(self, request):
choices = []


@@ -14,6 +14,6 @@ class RetrieveViewSet(mixins.RetrieveModelMixin, viewsets.GenericViewSet):
class AlertGroupsView(RetrieveViewSet):
authentication_classes = (GrafanaIncidentStaticKeyAuth,)
queryset = AlertGroup.unarchived_objects.all()
queryset = AlertGroup.objects.all()
serializer_class = AlertGroupSerializer
lookup_field = "public_primary_key"


@@ -9,9 +9,9 @@ from django.db import IntegrityError, models
from django.db.models import Q
from apps.base.messaging import get_messaging_backends
from apps.base.models.ordered_model import OrderedModel
from apps.user_management.models import User
from common.exceptions import UserNotificationPolicyCouldNotBeDeleted
from common.ordered_model.ordered_model import OrderedModel
from common.public_primary_keys import generate_public_primary_key, increase_public_primary_key_length


@@ -38,7 +38,7 @@ def notify_user_async(user_pk, alert_group_pk, notification_policy_pk):
return
try:
alert_group = AlertGroup.all_objects.get(pk=alert_group_pk)
alert_group = AlertGroup.objects.get(pk=alert_group_pk)
except AlertGroup.DoesNotExist:
logger.warning(f"Alert group {alert_group_pk} does not exist")
return


@@ -322,7 +322,7 @@ def notify_user_async(user_pk, alert_group_pk, notification_policy_pk, critical)
return
try:
alert_group = AlertGroup.all_objects.get(pk=alert_group_pk)
alert_group = AlertGroup.objects.get(pk=alert_group_pk)
except AlertGroup.DoesNotExist:
logger.warning(f"Alert group {alert_group_pk} does not exist")
return


@@ -14,7 +14,8 @@ from common.api_helpers.custom_fields import (
UsersFilteredByOrganizationField,
)
from common.api_helpers.exceptions import BadRequest
from common.api_helpers.mixins import EagerLoadingMixin, OrderedModelSerializerMixin
from common.api_helpers.mixins import EagerLoadingMixin
from common.ordered_model.serializer import OrderedModelSerializer
class EscalationPolicyTypeField(fields.CharField):
@@ -35,12 +36,11 @@ class EscalationPolicyTypeField(fields.CharField):
return step_type
class EscalationPolicySerializer(EagerLoadingMixin, OrderedModelSerializerMixin, serializers.ModelSerializer):
class EscalationPolicySerializer(EagerLoadingMixin, OrderedModelSerializer):
id = serializers.CharField(read_only=True, source="public_primary_key")
escalation_chain_id = OrganizationFilteredPrimaryKeyRelatedField(
queryset=EscalationChain.objects, source="escalation_chain"
)
position = serializers.IntegerField(required=False, source="order")
type = EscalationPolicyTypeField(source="step", allow_null=True)
duration = serializers.ChoiceField(required=False, source="wait_delay", choices=EscalationPolicy.DURATION_CHOICES)
persons_to_notify = UsersFilteredByOrganizationField(
@@ -67,17 +67,15 @@ class EscalationPolicySerializer(EagerLoadingMixin, OrderedModelSerializerMixin,
required=False,
source="custom_button_trigger",
)
manual_order = serializers.BooleanField(default=False, write_only=True)
important = serializers.BooleanField(required=False)
notify_if_time_from = CustomTimeField(required=False, source="from_time")
notify_if_time_to = CustomTimeField(required=False, source="to_time")
class Meta:
model = EscalationPolicy
fields = [
fields = OrderedModelSerializer.Meta.fields + [
"id",
"escalation_chain_id",
"position",
"type",
"duration",
"important",
@@ -86,7 +84,6 @@ class EscalationPolicySerializer(EagerLoadingMixin, OrderedModelSerializerMixin,
"notify_on_call_from_schedule",
"group_to_notify",
"action_to_trigger",
"manual_order",
"notify_if_time_from",
"notify_if_time_to",
"num_alerts_in_window",
@@ -126,21 +123,7 @@ class EscalationPolicySerializer(EagerLoadingMixin, OrderedModelSerializerMixin,
def create(self, validated_data):
validated_data = self._correct_validated_data(validated_data)
manual_order = validated_data.pop("manual_order")
if not manual_order:
order = validated_data.pop("order", None)
escalation_chain_id = validated_data.get("escalation_chain")
# validate 'order' value before creation
self._validate_order(order, {"escalation_chain_id": escalation_chain_id})
instance = super().create(validated_data)
self._change_position(order, instance)
else:
# validate will raise if there is a duplicated order
self._validate_manual_order(None, validated_data)
instance = super().create(validated_data)
return instance
return super().create(validated_data)
def to_representation(self, instance):
step = instance.step
@@ -211,18 +194,6 @@ class EscalationPolicySerializer(EagerLoadingMixin, OrderedModelSerializerMixin,
result.pop(field, None)
return result
def _validate_manual_order(self, instance, validated_data):
order = validated_data.get("order")
if order is None:
return
policies_with_order = self.escalation_chain.escalation_policies.filter(order=order)
if instance and instance.id:
policies_with_order = policies_with_order.exclude(id=instance.id)
if policies_with_order.exists():
raise BadRequest(detail="Steps cannot have duplicated positions")
def _correct_validated_data(self, validated_data):
validated_data_fields_to_remove = [
"notify_to_users_queue",
@@ -310,14 +281,4 @@ class EscalationPolicyUpdateSerializer(EscalationPolicySerializer):
instance.num_alerts_in_window = None
instance.num_minutes_in_window = None
manual_order = validated_data.pop("manual_order")
if not manual_order:
order = validated_data.pop("order", None)
self._validate_order(order, {"escalation_chain_id": instance.escalation_chain_id})
self._change_position(order, instance)
else:
# validate will raise if there is a duplicated order
self._validate_manual_order(instance, validated_data)
return super().update(instance, validated_data)


@@ -8,20 +8,18 @@ from apps.base.models.user_notification_policy import NotificationChannelPublicA
from common.api_helpers.custom_fields import UserIdField
from common.api_helpers.exceptions import BadRequest
from common.api_helpers.mixins import EagerLoadingMixin
from common.ordered_model.serializer import OrderedModelSerializer
class PersonalNotificationRuleSerializer(EagerLoadingMixin, serializers.ModelSerializer):
class PersonalNotificationRuleSerializer(EagerLoadingMixin, OrderedModelSerializer):
id = serializers.CharField(read_only=True, source="public_primary_key")
user_id = UserIdField(required=True, source="user")
position = serializers.IntegerField(required=False, source="order")
type = serializers.CharField(
required=False,
)
duration = serializers.ChoiceField(
required=False, source="wait_delay", choices=UserNotificationPolicy.DURATION_CHOICES
)
manual_order = serializers.BooleanField(default=False, write_only=True)
SELECT_RELATED = ["user"]
@@ -31,7 +29,7 @@ class PersonalNotificationRuleSerializer(EagerLoadingMixin, serializers.ModelSer
class Meta:
model = UserNotificationPolicy
fields = ["id", "user_id", "position", "type", "duration", "manual_order", "important"]
fields = OrderedModelSerializer.Meta.fields + ["id", "user_id", "type", "duration", "important"]
def create(self, validated_data):
if "type" not in validated_data:
@@ -44,17 +42,7 @@ class PersonalNotificationRuleSerializer(EagerLoadingMixin, serializers.ModelSer
if "wait_delay" in validated_data and validated_data["step"] != UserNotificationPolicy.Step.WAIT:
raise exceptions.ValidationError({"duration": "Duration can't be set"})
# Remove "manual_order" and "order" fields from validated_data, so they are not passed to create method.
# Policies are always created at the end of the list, and then moved to the desired position by _adjust_order.
manual_order = validated_data.pop("manual_order")
order = validated_data.pop("order", None)
instance = UserNotificationPolicy.objects.create(**validated_data)
if order is not None:
self._adjust_order(instance, manual_order, order, created=True)
return instance
return super().create(validated_data)
def to_internal_value(self, data):
if "duration" in data:
@@ -119,33 +107,6 @@ class PersonalNotificationRuleSerializer(EagerLoadingMixin, serializers.ModelSer
raise exceptions.ValidationError({"type": "Invalid type"})
@staticmethod
def _adjust_order(instance, manual_order, order, created):
# Passing order=-1 means that the policy should be moved to the end of the list.
if order == -1:
if created:
# The policy was just created, so it is already at the end of the list.
return
order = instance.max_order()
# max_order() can't be None here because at least one instance exists: the one we are moving.
assert order is not None
# Negative order is not allowed.
if order < 0:
raise BadRequest(detail="Invalid value for position field")
# manual_order=True is intended for use by Terraform provider only, and is not a documented feature.
# Orders are swapped instead of moved when using Terraform, because Terraform may issue concurrent requests
# to create / update / delete multiple policies. "Move to" operation is not deterministic in this case, and
# final order of policies may be different depending on the order in which requests are processed. On the other
# hand, the result of concurrent "swap" operations is deterministic and does not depend on the order in
# which requests are processed.
if manual_order:
instance.swap(order)
else:
instance.to(order)
class PersonalNotificationRuleUpdateSerializer(PersonalNotificationRuleSerializer):
user_id = UserIdField(read_only=True, source="user")
@@ -165,10 +126,4 @@ class PersonalNotificationRuleUpdateSerializer(PersonalNotificationRuleSerialize
if "wait_delay" in validated_data and instance.step != UserNotificationPolicy.Step.WAIT:
raise exceptions.ValidationError({"duration": "Duration can't be set"})
# Remove "manual_order" and "order" fields from validated_data, so they are not passed to update method.
manual_order = validated_data.pop("manual_order")
order = validated_data.pop("order", None)
if order is not None:
self._adjust_order(instance, manual_order, order, created=False)
return super().update(instance, validated_data)
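The comment block removed above explains why `manual_order=True` swaps positions instead of moving: concurrent swaps commute, concurrent moves do not. The two behaviors can be sketched with plain lists (a toy model, not the real `OrderedModel` code):

```python
def move(items, pk, position):
    """Move pk to position, shifting everything in between (default behavior)."""
    items = list(items)
    items.remove(pk)
    items.insert(position, pk)
    return items

def swap(items, pk, position):
    """Swap pk with whatever currently sits at position (manual_order=True).

    Swaps of disjoint pairs commute, so the final order of concurrent
    requests (e.g. from the Terraform provider) does not depend on the
    order in which they are processed.
    """
    items = list(items)
    i = items.index(pk)
    items[i], items[position] = items[position], items[i]
    return items
```

For example, sending `"c"` to position 0 in `["a", "b", "c"]` gives `["c", "a", "b"]` with `move` but `["c", "b", "a"]` with `swap`.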


@@ -9,7 +9,7 @@ from common.api_helpers.mixins import EagerLoadingMixin
class ResolutionNoteSerializer(EagerLoadingMixin, serializers.ModelSerializer):
id = serializers.CharField(read_only=True, source="public_primary_key")
alert_group_id = OrganizationFilteredPrimaryKeyRelatedField(
queryset=AlertGroup.unarchived_objects,
queryset=AlertGroup.objects,
source="alert_group",
filter_field="channel__organization",
)


@@ -6,12 +6,12 @@ from apps.api.serializers.alert_receive_channel import valid_jinja_template_for_
from apps.base.messaging import get_messaging_backend_from_id, get_messaging_backends
from common.api_helpers.custom_fields import OrganizationFilteredPrimaryKeyRelatedField
from common.api_helpers.exceptions import BadRequest
from common.api_helpers.mixins import OrderedModelSerializerMixin
from common.jinja_templater.apply_jinja_template import JinjaTemplateError
from common.ordered_model.serializer import OrderedModelSerializer
from common.utils import is_regex_valid
class BaseChannelFilterSerializer(OrderedModelSerializerMixin, serializers.ModelSerializer):
class BaseChannelFilterSerializer(OrderedModelSerializer):
"""Base Channel Filter serializer with validation methods"""
def __init__(self, *args, **kwargs):
@@ -148,7 +148,6 @@ class ChannelFilterSerializer(BaseChannelFilterSerializer):
telegram = serializers.DictField(required=False)
routing_type = RoutingTypeField(allow_null=False, required=False, source="filtering_term_type")
routing_regex = serializers.CharField(allow_null=False, required=True, source="filtering_term")
position = serializers.IntegerField(required=False, source="order")
integration_id = OrganizationFilteredPrimaryKeyRelatedField(
queryset=AlertReceiveChannel.objects, source="alert_receive_channel"
)
@@ -159,39 +158,24 @@ class ChannelFilterSerializer(BaseChannelFilterSerializer):
)
is_the_last_route = serializers.BooleanField(read_only=True, source="is_default")
manual_order = serializers.BooleanField(default=False, write_only=True)
class Meta:
model = ChannelFilter
fields = [
fields = OrderedModelSerializer.Meta.fields + [
"id",
"integration_id",
"escalation_chain_id",
"routing_type",
"routing_regex",
"position",
"is_the_last_route",
"slack",
"telegram",
"manual_order",
]
read_only_fields = ["is_the_last_route"]
def create(self, validated_data):
validated_data = self._correct_validated_data(validated_data)
manual_order = validated_data.pop("manual_order")
if manual_order:
self._validate_manual_order(validated_data.get("order", None))
instance = super().create(validated_data)
else:
order = validated_data.pop("order", None)
alert_receive_channel_id = validated_data.get("alert_receive_channel")
# validate 'order' value before creation
self._validate_order(order, {"alert_receive_channel_id": alert_receive_channel_id, "is_default": False})
instance = super().create(validated_data)
self._change_position(order, instance)
return instance
return super().create(validated_data)
def validate(self, data):
filtering_term = data.get("routing_regex")
@@ -224,18 +208,6 @@ class ChannelFilterUpdateSerializer(ChannelFilterSerializer):
def update(self, instance, validated_data):
validated_data = self._correct_validated_data(validated_data)
manual_order = validated_data.pop("manual_order")
if manual_order:
self._validate_manual_order(validated_data.get("order", None))
else:
order = validated_data.pop("order", None)
self._validate_order(
order,
{"alert_receive_channel_id": instance.alert_receive_channel_id, "is_default": instance.is_default},
)
self._change_position(order, instance)
if validated_data.get("notification_backends"):
validated_data["notification_backends"] = self._update_notification_backends(
validated_data["notification_backends"]


@@ -154,7 +154,7 @@ def test_create_escalation_policy_manual_order_duplicated_position(
escalation_policies_setup,
):
organization, user, token = make_organization_and_user_with_token()
escalation_chain, _, _ = escalation_policies_setup(organization, user)
escalation_chain, escalation_policies, _ = escalation_policies_setup(organization, user)
data_for_create = {
"escalation_chain_id": escalation_chain.public_primary_key,
@@ -168,7 +168,43 @@ def test_create_escalation_policy_manual_order_duplicated_position(
url = reverse("api-public:escalation_policies-list")
response = client.post(url, data=data_for_create, format="json", HTTP_AUTHORIZATION=token)
assert response.status_code == status.HTTP_400_BAD_REQUEST
assert response.status_code == status.HTTP_201_CREATED
assert response.data["position"] == 0
for escalation_policy in escalation_policies:
escalation_policy.refresh_from_db()
orders = [escalation_policy.order for escalation_policy in escalation_policies]
assert orders == [3, 1, 2] # Check orders are swapped when manual_order is True
@pytest.mark.django_db
def test_create_escalation_policy_no_manual_order_duplicated_position(
make_organization_and_user_with_token,
escalation_policies_setup,
):
organization, user, token = make_organization_and_user_with_token()
escalation_chain, escalation_policies, _ = escalation_policies_setup(organization, user)
data_for_create = {
"escalation_chain_id": escalation_chain.public_primary_key,
"type": "notify_person_next_each_time",
"position": 0,
"persons_to_notify_next_each_time": [user.public_primary_key],
}
client = APIClient()
url = reverse("api-public:escalation_policies-list")
response = client.post(url, data=data_for_create, format="json", HTTP_AUTHORIZATION=token)
assert response.status_code == status.HTTP_201_CREATED
assert response.data["position"] == 0
for escalation_policy in escalation_policies:
escalation_policy.refresh_from_db()
orders = [escalation_policy.order for escalation_policy in escalation_policies]
assert orders == [1, 2, 3]  # Check policies are moved down when manual_order is False
@pytest.mark.django_db
@@ -265,4 +301,10 @@ def test_update_escalation_policy_manual_order_duplicated_position(
data_to_change = {"position": 0, "manual_order": True}
response = client.put(url, data=data_to_change, format="json", HTTP_AUTHORIZATION=token)
assert response.status_code == status.HTTP_400_BAD_REQUEST
assert response.status_code == status.HTTP_200_OK
for escalation_policy in escalation_policies:
escalation_policy.refresh_from_db()
orders = [escalation_policy.order for escalation_policy in escalation_policies]
assert orders == [1, 0, 2] # Check orders are swapped when manual_order is True


@@ -93,7 +93,7 @@ def incident_public_api_setup(
@pytest.mark.django_db
def test_get_incidents(incident_public_api_setup):
token, _, _, _ = incident_public_api_setup
incidents = AlertGroup.unarchived_objects.all().order_by("-started_at")
incidents = AlertGroup.objects.all().order_by("-started_at")
client = APIClient()
expected_response = construct_expected_response_from_incidents(incidents)
@ -110,7 +110,7 @@ def test_get_incidents_filter_by_integration(
):
token, incidents, integrations, _ = incident_public_api_setup
formatted_webhook = integrations[1]
incidents = AlertGroup.unarchived_objects.filter(channel=formatted_webhook).order_by("-started_at")
incidents = AlertGroup.objects.filter(channel=formatted_webhook).order_by("-started_at")
expected_response = construct_expected_response_from_incidents(incidents)
client = APIClient()
@ -128,7 +128,7 @@ def test_get_incidents_filter_by_state_new(
incident_public_api_setup,
):
token, _, _, _ = incident_public_api_setup
incidents = AlertGroup.unarchived_objects.filter(AlertGroup.get_new_state_filter()).order_by("-started_at")
incidents = AlertGroup.objects.filter(AlertGroup.get_new_state_filter()).order_by("-started_at")
expected_response = construct_expected_response_from_incidents(incidents)
client = APIClient()
@ -144,7 +144,7 @@ def test_get_incidents_filter_by_state_acknowledged(
incident_public_api_setup,
):
token, _, _, _ = incident_public_api_setup
incidents = AlertGroup.unarchived_objects.filter(AlertGroup.get_acknowledged_state_filter()).order_by("-started_at")
incidents = AlertGroup.objects.filter(AlertGroup.get_acknowledged_state_filter()).order_by("-started_at")
expected_response = construct_expected_response_from_incidents(incidents)
client = APIClient()
@ -160,7 +160,7 @@ def test_get_incidents_filter_by_state_silenced(
incident_public_api_setup,
):
token, _, _, _ = incident_public_api_setup
incidents = AlertGroup.unarchived_objects.filter(AlertGroup.get_silenced_state_filter()).order_by("-started_at")
incidents = AlertGroup.objects.filter(AlertGroup.get_silenced_state_filter()).order_by("-started_at")
expected_response = construct_expected_response_from_incidents(incidents)
client = APIClient()
@ -176,7 +176,7 @@ def test_get_incidents_filter_by_state_resolved(
incident_public_api_setup,
):
token, _, _, _ = incident_public_api_setup
incidents = AlertGroup.unarchived_objects.filter(AlertGroup.get_resolved_state_filter()).order_by("-started_at")
incidents = AlertGroup.objects.filter(AlertGroup.get_resolved_state_filter()).order_by("-started_at")
expected_response = construct_expected_response_from_incidents(incidents)
client = APIClient()
@ -220,7 +220,7 @@ def test_get_incidents_filter_by_route(
):
token, incidents, integrations, routes = incident_public_api_setup
grafana_non_default_route = routes[1]
incidents = AlertGroup.unarchived_objects.filter(channel_filter=grafana_non_default_route).order_by("-started_at")
incidents = AlertGroup.objects.filter(channel_filter=grafana_non_default_route).order_by("-started_at")
expected_response = construct_expected_response_from_incidents(incidents)
client = APIClient()
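These test updates consistently replace the custom `unarchived_objects` manager with the default `objects` manager, in line with deprecating `AlertGroup.is_archived`. A hedged sketch of why the two converge once nothing is archived (plain Python stand-ins, not the actual OnCall models or Django managers):

```python
class Manager:
    """Toy stand-in for a Django manager: optionally filters its rows."""

    def __init__(self, rows, predicate=None):
        self._rows = rows
        self._predicate = predicate

    def all(self):
        if self._predicate is None:
            return list(self._rows)
        return [r for r in self._rows if self._predicate(r)]

rows = [{"id": 1, "is_archived": False}, {"id": 2, "is_archived": True}]
objects = Manager(rows)                                              # default manager
unarchived_objects = Manager(rows, lambda r: not r["is_archived"])   # legacy filtered manager

# While archived rows exist, the managers disagree; once the column is
# deprecated and no rows are archived, they return the same data, so
# callers can safely switch to `objects`.
```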

View file

@ -775,3 +775,45 @@ def test_get_list_integrations_link_and_inbound_email(
else:
assert integration_link == f"https://test.com/integrations/v1/{integration_type}/test123/"
assert integration_inbound_email is None
@pytest.mark.django_db
def test_create_integration_default_route(
make_organization_and_user_with_token,
make_escalation_chain,
):
organization, _, token = make_organization_and_user_with_token()
escalation_chain = make_escalation_chain(organization)
client = APIClient()
data_for_create = {
"type": "grafana",
"name": "grafana_created",
"team_id": None,
"default_route": {"escalation_chain_id": escalation_chain.public_primary_key},
}
url = reverse("api-public:integrations-list")
response = client.post(url, data=data_for_create, format="json", HTTP_AUTHORIZATION=f"{token}")
assert response.status_code == status.HTTP_201_CREATED
assert response.data["default_route"]["escalation_chain_id"] == escalation_chain.public_primary_key
@pytest.mark.django_db
def test_update_integration_default_route(
make_organization_and_user_with_token, make_escalation_chain, make_alert_receive_channel, make_channel_filter
):
organization, _, token = make_organization_and_user_with_token()
integration = make_alert_receive_channel(organization)
make_channel_filter(integration, is_default=True)
escalation_chain = make_escalation_chain(organization)
client = APIClient()
data_for_update = {
"default_route": {"escalation_chain_id": escalation_chain.public_primary_key},
}
url = reverse("api-public:integrations-detail", args=[integration.public_primary_key])
response = client.put(url, data=data_for_update, format="json", HTTP_AUTHORIZATION=f"{token}")
assert response.status_code == status.HTTP_200_OK
assert response.data["default_route"]["escalation_chain_id"] == escalation_chain.public_primary_key

View file

@ -444,7 +444,7 @@ def test_update_route_with_manual_ordering(
url = reverse("api-public:routes-detail", kwargs={"pk": channel_filter.public_primary_key})
# Test negative value. Note, that for "manual_order"=False, -1 is valud option (It will move route to the bottom)
# Test negative value. Note, that for "manual_order"=False, -1 is valid option (It will move route to the bottom)
data_to_update = {"position": -1, "manual_order": True}
response = client.put(url, format="json", HTTP_AUTHORIZATION=token, data=data_to_update)

View file

@ -50,7 +50,7 @@ class IncidentView(RateLimitHeadersMixin, mixins.ListModelMixin, mixins.DestroyM
integration_id = self.request.query_params.get("integration_id", None)
state = self.request.query_params.get("state", None)
queryset = AlertGroup.unarchived_objects.filter(
queryset = AlertGroup.objects.filter(
channel__organization=self.request.auth.organization,
).order_by("-started_at")
@ -84,7 +84,7 @@ class IncidentView(RateLimitHeadersMixin, mixins.ListModelMixin, mixins.DestroyM
public_primary_key = self.kwargs["pk"]
try:
return AlertGroup.unarchived_objects.filter(
return AlertGroup.objects.filter(
channel__organization=self.request.auth.organization,
).get(public_primary_key=public_primary_key)
except AlertGroup.DoesNotExist:

View file

@ -85,7 +85,7 @@ def on_alert_group_action_triggered_async(log_record_id):
)
def on_alert_group_update_log_report_async(alert_group_id):
AlertGroup = apps.get_model("alerts", "AlertGroup")
alert_group = AlertGroup.all_objects.get(pk=alert_group_id)
alert_group = AlertGroup.objects.get(pk=alert_group_id)
logger.debug(f"Start on_alert_group_update_log_report for alert_group {alert_group_id}")
organization = alert_group.channel.organization
if alert_group.slack_message and organization.slack_team_identity:
@ -164,7 +164,7 @@ class AlertGroupSlackRepresentative(AlertGroupAbstractRepresentative):
alert_group_id = alert_group.pk
else:
alert_group_id = alert_group
alert_group = AlertGroup.all_objects.get(pk=alert_group_id)
alert_group = AlertGroup.objects.get(pk=alert_group_id)
logger.debug(
f"Received alert_group_update_log_report signal in SLACK representative for alert_group {alert_group_id}"

View file

@ -64,7 +64,7 @@ class UpdateAppearanceStep(scenario_step.ScenarioStep):
private_metadata = json.loads(payload["view"]["private_metadata"])
alert_group_pk = private_metadata["alert_group_pk"]
alert_group = AlertGroup.all_objects.get(pk=alert_group_pk)
alert_group = AlertGroup.objects.get(pk=alert_group_pk)
attachments = alert_group.render_slack_attachments()
blocks = alert_group.render_slack_blocks()

View file

@ -48,13 +48,13 @@ class AlertShootingStep(scenario_step.ScenarioStep):
# do not try to post alert group message to slack if its channel is rate limited
if alert.group.channel.is_rate_limited_in_slack:
logger.info("Skip posting or updating alert_group in Slack due to rate limit")
AlertGroup.all_objects.filter(
AlertGroup.objects.filter(
pk=alert.group.pk,
slack_message_sent=False,
).update(slack_message_sent=True, reason_to_skip_escalation=AlertGroup.RATE_LIMITED)
return
num_updated_rows = AlertGroup.all_objects.filter(pk=alert.group.pk, slack_message_sent=False).update(
num_updated_rows = AlertGroup.objects.filter(pk=alert.group.pk, slack_message_sent=False).update(
slack_message_sent=True
)
@ -63,7 +63,7 @@ class AlertShootingStep(scenario_step.ScenarioStep):
channel_id = alert.group.channel_filter.slack_channel_id_or_general_log_id
self._send_first_alert(alert, channel_id)
except SlackAPIException as e:
AlertGroup.all_objects.filter(pk=alert.group.pk).update(slack_message_sent=False)
AlertGroup.objects.filter(pk=alert.group.pk).update(slack_message_sent=False)
raise e
if alert.group.channel.maintenance_mode == AlertReceiveChannel.DEBUG_MAINTENANCE:
@ -212,6 +212,11 @@ class AlertShootingStep(scenario_step.ScenarioStep):
class InviteOtherPersonToIncident(AlertGroupActionsMixin, scenario_step.ScenarioStep):
"""
THIS SCENARIO STEP IS DEPRECATED AND WILL BE REMOVED IN THE FUTURE.
Check out apps/slack/scenarios/manage_responders.py for the new version that uses direct paging.
"""
REQUIRED_PERMISSIONS = [RBACPermission.Permissions.CHATOPS_WRITE]
def process_scenario(self, slack_user_identity, slack_team_identity, payload):
@ -368,7 +373,7 @@ class SelectAttachGroupStep(AlertGroupActionsMixin, scenario_step.ScenarioStep):
).values_list("id", flat=True)
alert_groups_queryset = (
AlertGroup.unarchived_objects.prefetch_related(
AlertGroup.objects.prefetch_related(
"alerts",
"channel__organization",
)
@ -455,11 +460,11 @@ class AttachGroupStep(AlertGroupActionsMixin, scenario_step.ScenarioStep):
# submit selection in modal window
if payload["type"] == scenario_step.PAYLOAD_TYPE_VIEW_SUBMISSION:
alert_group_pk = json.loads(payload["view"]["private_metadata"])["alert_group_pk"]
alert_group = AlertGroup.all_objects.get(pk=alert_group_pk)
alert_group = AlertGroup.objects.get(pk=alert_group_pk)
root_alert_group_pk = payload["view"]["state"]["values"][SelectAttachGroupStep.routing_uid()][
AttachGroupStep.routing_uid()
]["selected_option"]["value"]
root_alert_group = AlertGroup.all_objects.get(pk=root_alert_group_pk)
root_alert_group = AlertGroup.objects.get(pk=root_alert_group_pk)
# old version of attach selection by dropdown
else:
try:
@ -467,7 +472,7 @@ class AttachGroupStep(AlertGroupActionsMixin, scenario_step.ScenarioStep):
except KeyError:
root_alert_group_pk = int(payload["actions"][0]["selected_option"]["value"])
root_alert_group = AlertGroup.all_objects.get(pk=root_alert_group_pk)
root_alert_group = AlertGroup.objects.get(pk=root_alert_group_pk)
alert_group = self.get_alert_group(slack_team_identity, payload)
alert_group.attach_by_user(self.user, root_alert_group, action_source=ActionSource.SLACK)
@ -490,6 +495,11 @@ class UnAttachGroupStep(AlertGroupActionsMixin, scenario_step.ScenarioStep):
class StopInvitationProcess(AlertGroupActionsMixin, scenario_step.ScenarioStep):
"""
THIS SCENARIO STEP IS DEPRECATED AND WILL BE REMOVED IN THE FUTURE.
Check out apps/slack/scenarios/manage_responders.py for the new version that uses direct paging.
"""
REQUIRED_PERMISSIONS = [RBACPermission.Permissions.CHATOPS_WRITE]
def process_scenario(self, slack_user_identity, slack_team_identity, payload):
@ -703,7 +713,7 @@ class AcknowledgeConfirmationStep(AcknowledgeGroupStep):
def process_scenario(self, slack_user_identity, slack_team_identity, payload):
AlertGroup = apps.get_model("alerts", "AlertGroup")
alert_group_id = payload["actions"][0]["value"].split("_")[1]
alert_group = AlertGroup.all_objects.get(pk=alert_group_id)
alert_group = AlertGroup.objects.get(pk=alert_group_id)
channel = payload["channel"]["id"]
message_ts = payload["message_ts"]

View file

@ -0,0 +1,265 @@
import json
from django.apps import apps
from apps.alerts.paging import check_user_availability, direct_paging, unpage_user
from apps.slack.scenarios import scenario_step
from apps.slack.scenarios.paging import (
DIRECT_PAGING_SCHEDULE_SELECT_ID,
DIRECT_PAGING_USER_SELECT_ID,
DIVIDER_BLOCK,
_generate_input_id_prefix,
_get_availability_warnings_view,
_get_schedules_select,
_get_select_field_value,
_get_users_select,
)
from apps.slack.scenarios.step_mixins import AlertGroupActionsMixin
MANAGE_RESPONDERS_USER_SELECT_ID = "responders_user_select"
MANAGE_RESPONDERS_SCHEDULE_SELECT_ID = "responders_schedule_select"
USER_DATA_KEY = "user"
ALERT_GROUP_DATA_KEY = "alert_group_pk"
# Slack scenario steps
class StartManageResponders(AlertGroupActionsMixin, scenario_step.ScenarioStep):
"""Handle "Responders" button click."""
def process_scenario(self, slack_user_identity, slack_team_identity, payload):
alert_group = self.get_alert_group(slack_team_identity, payload)
if not self.is_authorized(alert_group):
self.open_unauthorized_warning(payload)
return
view = render_dialog(alert_group)
self._slack_client.api_call(
"views.open",
trigger_id=payload["trigger_id"],
view=view,
)
class ManageRespondersUserChange(scenario_step.ScenarioStep):
"""Handle user selection in responders modal."""
def process_scenario(self, slack_user_identity, slack_team_identity, payload):
alert_group = _get_alert_group_from_payload(payload)
selected_user = _get_selected_user_from_payload(payload)
organization = alert_group.channel.organization
# check availability
availability_warnings = check_user_availability(selected_user)
if availability_warnings:
# display warnings and require additional confirmation
view = _get_availability_warnings_view(
availability_warnings,
organization,
selected_user,
ManageRespondersConfirmUserChange.routing_uid(),
json.dumps({USER_DATA_KEY: selected_user.id, ALERT_GROUP_DATA_KEY: alert_group.pk}),
)
self._slack_client.api_call(
"views.push",
trigger_id=payload["trigger_id"],
view=view,
)
else:
# no warnings, proceed with paging
direct_paging(
organization=organization,
team=alert_group.channel.team,
from_user=slack_user_identity.get_user(organization),
users=[(selected_user, False)],
alert_group=alert_group,
)
view = render_dialog(alert_group)
self._slack_client.api_call(
"views.update",
trigger_id=payload["trigger_id"],
view=view,
view_id=payload["view"]["id"],
)
class ManageRespondersConfirmUserChange(scenario_step.ScenarioStep):
"""Handle user confirmation on availability warnings modal."""
def process_scenario(self, slack_user_identity, slack_team_identity, payload):
alert_group = _get_alert_group_from_payload(payload)
selected_user = _get_selected_user_from_payload(payload)
organization = alert_group.channel.organization
direct_paging(
organization=organization,
team=alert_group.channel.team,
from_user=slack_user_identity.get_user(organization),
users=[(selected_user, False)],
alert_group=alert_group,
)
view = render_dialog(alert_group)
self._slack_client.api_call(
"views.update",
trigger_id=payload["trigger_id"],
view=view,
view_id=payload["view"]["previous_view_id"],
)
class ManageRespondersScheduleChange(scenario_step.ScenarioStep):
"""Handle schedule selection in responders modal."""
def process_scenario(self, slack_user_identity, slack_team_identity, payload, action=None):
alert_group = _get_alert_group_from_payload(payload)
selected_schedule = _get_selected_schedule_from_payload(payload)
organization = alert_group.channel.organization
direct_paging(
organization=organization,
team=alert_group.channel.team,
from_user=slack_user_identity.get_user(organization),
schedules=[(selected_schedule, False)],
alert_group=alert_group,
)
self._slack_client.api_call(
"views.update",
trigger_id=payload["trigger_id"],
view=render_dialog(alert_group),
view_id=payload["view"]["id"],
)
class ManageRespondersRemoveUser(scenario_step.ScenarioStep):
"""Handle user removal in responders modal."""
def process_scenario(self, slack_user_identity, slack_team_identity, payload, action=None):
alert_group = _get_alert_group_from_payload(payload)
selected_user = _get_selected_user_from_payload(payload)
from_user = slack_user_identity.get_user(alert_group.channel.organization)
unpage_user(alert_group, selected_user, from_user)
view = render_dialog(alert_group)
self._slack_client.api_call(
"views.update",
trigger_id=payload["trigger_id"],
view=view,
view_id=payload["view"]["id"],
)
# slack view/blocks rendering helpers
def render_dialog(alert_group):
blocks = []
# Show list of users that are currently paged
paged_users = alert_group.get_paged_users()
for user in paged_users:
blocks += [
{
"type": "section",
"text": {"type": "mrkdwn", "text": f":bust_in_silhouette: *{user.name or user.username}*"},
"accessory": {
"type": "button",
"text": {"type": "plain_text", "text": "Remove", "emoji": True},
"action_id": ManageRespondersRemoveUser.routing_uid(),
"value": str(user.pk),
},
}
]
if paged_users:
blocks += [DIVIDER_BLOCK]
# Show user and schedule dropdowns
input_id_prefix = _generate_input_id_prefix()
blocks += [
_get_users_select(alert_group.channel.organization, input_id_prefix, ManageRespondersUserChange.routing_uid())
]
blocks += [
_get_schedules_select(
alert_group.channel.organization, input_id_prefix, ManageRespondersScheduleChange.routing_uid()
)
]
view = {
"type": "modal",
"title": {
"type": "plain_text",
"text": "Additional responders",
},
"blocks": blocks,
"private_metadata": json.dumps({ALERT_GROUP_DATA_KEY: alert_group.pk, "input_id_prefix": input_id_prefix}),
}
return view
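`render_dialog` builds a Slack modal whose state travels inside `private_metadata` as a JSON string, so each later interaction handler can recover the alert group without any server-side session state. A minimal sketch of that round-trip using plain dicts (not the Slack SDK; the key names mirror the code above):

```python
import json

def build_modal(alert_group_pk, input_id_prefix):
    # Everything a later handler needs is serialized into the view itself.
    return {
        "type": "modal",
        "title": {"type": "plain_text", "text": "Additional responders"},
        "blocks": [],
        "private_metadata": json.dumps(
            {"alert_group_pk": alert_group_pk, "input_id_prefix": input_id_prefix}
        ),
    }

def alert_group_pk_from_payload(payload):
    # Handlers parse the metadata back out of the interaction payload.
    return json.loads(payload["view"]["private_metadata"])["alert_group_pk"]

view = build_modal(42, "prefix1/")
payload = {"view": view}  # rough shape of the payload Slack sends back
```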
def _get_selected_user_from_payload(payload):
User = apps.get_model("user_management", "User")
try:
selected_user_id = payload["actions"][0]["value"] # "remove" button
except KeyError:
try:
# "confirm" button on availability warnings modal
selected_user_id = json.loads(payload["view"]["private_metadata"])[USER_DATA_KEY]
except KeyError:
# user select dropdown
input_id_prefix = json.loads(payload["view"]["private_metadata"])["input_id_prefix"]
selected_user_id = _get_select_field_value(
payload, input_id_prefix, ManageRespondersUserChange.routing_uid(), DIRECT_PAGING_USER_SELECT_ID
)
return User.objects.get(pk=selected_user_id)
def _get_selected_schedule_from_payload(payload):
OnCallSchedule = apps.get_model("schedules", "OnCallSchedule")
input_id_prefix = json.loads(payload["view"]["private_metadata"])["input_id_prefix"]
selected_schedule_id = _get_select_field_value(
payload, input_id_prefix, ManageRespondersScheduleChange.routing_uid(), DIRECT_PAGING_SCHEDULE_SELECT_ID
)
return OnCallSchedule.objects.get(pk=selected_schedule_id)
def _get_alert_group_from_payload(payload):
AlertGroup = apps.get_model("alerts", "AlertGroup")
alert_group_pk = json.loads(payload["view"]["private_metadata"])[ALERT_GROUP_DATA_KEY]
return AlertGroup.objects.get(pk=alert_group_pk)
STEPS_ROUTING = [
{
"payload_type": scenario_step.PAYLOAD_TYPE_BLOCK_ACTIONS,
"block_action_type": scenario_step.BLOCK_ACTION_TYPE_STATIC_SELECT,
"block_action_id": ManageRespondersUserChange.routing_uid(),
"step": ManageRespondersUserChange,
},
{
"payload_type": scenario_step.PAYLOAD_TYPE_VIEW_SUBMISSION,
"view_callback_id": ManageRespondersConfirmUserChange.routing_uid(),
"step": ManageRespondersConfirmUserChange,
},
{
"payload_type": scenario_step.PAYLOAD_TYPE_BLOCK_ACTIONS,
"block_action_type": scenario_step.BLOCK_ACTION_TYPE_STATIC_SELECT,
"block_action_id": ManageRespondersScheduleChange.routing_uid(),
"step": ManageRespondersScheduleChange,
},
{
"payload_type": scenario_step.PAYLOAD_TYPE_BLOCK_ACTIONS,
"block_action_type": scenario_step.BLOCK_ACTION_TYPE_BUTTON,
"block_action_id": ManageRespondersRemoveUser.routing_uid(),
"step": ManageRespondersRemoveUser,
},
{
"payload_type": scenario_step.PAYLOAD_TYPE_BLOCK_ACTIONS,
"block_action_type": scenario_step.BLOCK_ACTION_TYPE_BUTTON,
"block_action_id": StartManageResponders.routing_uid(),
"step": StartManageResponders,
},
]
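`STEPS_ROUTING` maps incoming Slack interaction payloads to scenario steps by payload type plus an action or callback id. A simplified dispatcher over such a table might look like the following (illustrative only; the payload-type constants and step classes here are placeholders for the real ones in OnCall's Slack app):

```python
def route(payload, routes):
    """Return the step for the first route matching this payload, else None."""
    for r in routes:
        if r["payload_type"] != payload["type"]:
            continue
        if payload["type"] == "block_actions":
            # Block actions are matched on the action_id of the first action.
            if r.get("block_action_id") == payload["actions"][0]["action_id"]:
                return r["step"]
        elif payload["type"] == "view_submission":
            # View submissions are matched on the modal's callback_id.
            if r.get("view_callback_id") == payload["view"]["callback_id"]:
                return r["step"]
    return None

routes = [
    {"payload_type": "block_actions", "block_action_id": "remove_user",
     "step": "ManageRespondersRemoveUser"},
    {"payload_type": "view_submission", "view_callback_id": "confirm",
     "step": "ManageRespondersConfirmUserChange"},
]
payload = {"type": "block_actions", "actions": [{"action_id": "remove_user"}]}
# route(payload, routes) -> "ManageRespondersRemoveUser"
```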

View file

@ -622,8 +622,8 @@ def _get_additional_responders_blocks(
]
if is_additional_responders_checked:
users_select = _get_users_select(organization, input_id_prefix)
schedules_select = _get_schedules_select(organization, input_id_prefix)
users_select = _get_users_select(organization, input_id_prefix, OnPagingUserChange.routing_uid())
schedules_select = _get_schedules_select(organization, input_id_prefix, OnPagingScheduleChange.routing_uid())
blocks += [users_select, schedules_select]
# selected items
@ -639,7 +639,7 @@ def _get_additional_responders_blocks(
return blocks
def _get_users_select(organization, input_id_prefix):
def _get_users_select(organization, input_id_prefix, action_id):
users = organization.users.all()
user_options = [
@ -659,12 +659,12 @@ def _get_users_select(organization, input_id_prefix):
user_select = {
"type": "section",
"text": {"type": "mrkdwn", "text": "Add users"},
"text": {"type": "mrkdwn", "text": "Notify user"},
"block_id": input_id_prefix + DIRECT_PAGING_USER_SELECT_ID,
"accessory": {
"type": "static_select",
"placeholder": {"type": "plain_text", "text": "Select a user", "emoji": True},
"action_id": OnPagingUserChange.routing_uid(),
"placeholder": {"type": "plain_text", "text": "Select user", "emoji": True},
"action_id": action_id,
},
}
MAX_STATIC_SELECT_OPTIONS = 100
@ -687,7 +687,7 @@ def _get_users_select(organization, input_id_prefix):
return user_select
def _get_schedules_select(organization, input_id_prefix):
def _get_schedules_select(organization, input_id_prefix, action_id):
schedules = organization.oncall_schedules.all()
schedule_options = [
@ -706,13 +706,13 @@ def _get_schedules_select(organization, input_id_prefix):
else:
schedule_select = {
"type": "section",
"text": {"type": "mrkdwn", "text": "Add schedules"},
"text": {"type": "mrkdwn", "text": "Notify schedule"},
"block_id": input_id_prefix + DIRECT_PAGING_SCHEDULE_SELECT_ID,
"accessory": {
"type": "static_select",
"placeholder": {"type": "plain_text", "text": "Select a schedule", "emoji": True},
"placeholder": {"type": "plain_text", "text": "Select schedule", "emoji": True},
"options": schedule_options,
"action_id": OnPagingScheduleChange.routing_uid(),
"action_id": action_id,
},
}
return schedule_select
@ -753,7 +753,25 @@ def _get_selected_entries_list(input_id_prefix, key, entries):
def _display_availability_warnings(payload, warnings, organization, user):
metadata = json.loads(payload["view"]["private_metadata"])
return _get_availability_warnings_view(
warnings,
organization,
user,
OnPagingConfirmUserChange.routing_uid(),
json.dumps(
{
"state": payload["view"]["state"],
"input_id_prefix": metadata["input_id_prefix"],
"channel_id": metadata["channel_id"],
"submit_routing_uid": metadata["submit_routing_uid"],
USERS_DATA_KEY: metadata[USERS_DATA_KEY],
SCHEDULES_DATA_KEY: metadata[SCHEDULES_DATA_KEY],
}
),
)
def _get_availability_warnings_view(warnings, organization, user, callback_id, private_metadata):
messages = []
for w in warnings:
if w["error"] == USER_IS_NOT_ON_CALL:
@ -772,7 +790,7 @@ def _display_availability_warnings(payload, warnings, organization, user):
return {
"type": "modal",
"callback_id": OnPagingConfirmUserChange.routing_uid(),
"callback_id": callback_id,
"title": {"type": "plain_text", "text": "Are you sure?"},
"submit": {"type": "plain_text", "text": "Confirm"},
"blocks": [
@ -785,16 +803,7 @@ def _display_availability_warnings(payload, warnings, organization, user):
}
for message in messages
],
"private_metadata": json.dumps(
{
"state": payload["view"]["state"],
"input_id_prefix": metadata["input_id_prefix"],
"channel_id": metadata["channel_id"],
"submit_routing_uid": metadata["submit_routing_uid"],
USERS_DATA_KEY: metadata[USERS_DATA_KEY],
SCHEDULES_DATA_KEY: metadata[SCHEDULES_DATA_KEY],
}
),
"private_metadata": private_metadata,
}

View file

@ -379,7 +379,7 @@ class ResolutionNoteModalStep(AlertGroupActionsMixin, scenario_step.ScenarioStep
if data:
# Argument "data" is used when step is called from other step, e.g. AddRemoveThreadMessageStep
AlertGroup = apps.get_model("alerts", "AlertGroup")
alert_group = AlertGroup.all_objects.get(pk=data["alert_group_pk"])
alert_group = AlertGroup.objects.get(pk=data["alert_group_pk"])
else:
# Handle "Add Resolution notes" button click
alert_group = self.get_alert_group(slack_team_identity, payload)
@ -686,7 +686,7 @@ class AddRemoveThreadMessageStep(UpdateResolutionNoteStep, scenario_step.Scenari
slack_thread_message = None
resolution_note = None
alert_group = AlertGroup.all_objects.get(pk=alert_group_pk)
alert_group = AlertGroup.objects.get(pk=alert_group_pk)
if slack_message_pk is not None:
slack_thread_message = ResolutionNoteSlackMessage.objects.get(pk=slack_message_pk)

View file

@ -31,9 +31,7 @@ class AlertGroupLogSlackRenderer:
result = ""
# check if escalation or invitation active
if not (
alert_group.resolved or alert_group.is_archived or alert_group.wiped_at or alert_group.root_alert_group
):
if not (alert_group.resolved or alert_group.wiped_at or alert_group.root_alert_group):
escalation_policies_plan = log_builder.get_incident_escalation_plan(for_slack=True)
if escalation_policies_plan:
result += "\n:arrow_down: :arrow_down: :arrow_down: Plan:\n\n"

View file

@ -104,7 +104,7 @@ class AlertGroupActionsMixin:
except (KeyError, TypeError):
return None
return AlertGroup.all_objects.get(pk=alert_group_pk)
return AlertGroup.objects.get(pk=alert_group_pk)
def _get_alert_group_from_message(self, payload: dict) -> AlertGroup | None:
"""
@ -134,7 +134,7 @@ class AlertGroupActionsMixin:
except (KeyError, TypeError):
continue
return AlertGroup.all_objects.get(pk=alert_group_pk)
return AlertGroup.objects.get(pk=alert_group_pk)
return None
def _get_alert_group_from_slack_message_in_db(

View file

@ -1,4 +1,5 @@
import logging
from typing import Optional, Tuple
from django.apps import apps
from django.utils import timezone
@ -54,6 +55,39 @@ class SlackClientWithErrorHandling(SlackClient):
return cumulative_response
def paginated_api_call_with_ratelimit(self, *args, **kwargs) -> Tuple[dict, Optional[str], bool]:
"""
This method does a paginated API call and handles the Slack rate limit error so that the data collected so far
can be returned and paginated requests can continue later. The last successful cursor is returned instead of
the next cursor to avoid data loss during the delay.
"""
# The key in the response that holds the paginated items, e.g. "users" or "channels"
listed_key = kwargs["paginated_key"]
cumulative_response = {}
cursor = kwargs.get("cursor")
rate_limited = False
try:
response = self.api_call(*args, **kwargs)
cumulative_response = response
cursor = response["response_metadata"]["next_cursor"]
while (
"response_metadata" in response
and "next_cursor" in response["response_metadata"]
and response["response_metadata"]["next_cursor"] != ""
):
next_cursor = response["response_metadata"]["next_cursor"]
kwargs["cursor"] = next_cursor
response = self.api_call(*args, **kwargs)
cumulative_response[listed_key] += response[listed_key]
cursor = next_cursor
except SlackAPIRateLimitException:
rate_limited = True
return cumulative_response, cursor, rate_limited
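The method above accumulates pages and, on a rate limit, hands back the last cursor whose request succeeded so the caller can resume (possibly re-fetching one page, which the caller deduplicates). A self-contained sketch of the same resume-from-cursor pattern, where the fake `fetch_page` and `RateLimited` stand in for the Slack API call and its rate-limit exception:

```python
class RateLimited(Exception):
    pass

def paginate_with_resume(fetch_page, key, cursor=None):
    """Collect paginated results; on rate limit, return what was gathered
    plus the last successful cursor so a retry can continue from there."""
    collected = []
    rate_limited = False
    try:
        page = fetch_page(cursor)
        collected += page[key]
        while page["response_metadata"]["next_cursor"]:
            next_cursor = page["response_metadata"]["next_cursor"]
            page = fetch_page(next_cursor)  # may raise RateLimited
            collected += page[key]
            cursor = next_cursor            # only advanced after success
    except RateLimited:
        rate_limited = True
    return collected, cursor, rate_limited

# Two pages succeed, the third is rate limited.
pages = {
    None: {"items": [1, 2], "response_metadata": {"next_cursor": "c1"}},
    "c1": {"items": [3], "response_metadata": {"next_cursor": "c2"}},
}

def fetch_page(cursor):
    if cursor == "c2":
        raise RateLimited()
    return pages[cursor]

collected, cursor, limited = paginate_with_resume(fetch_page, "items")
# collected == [1, 2, 3], cursor == "c1", limited is True
```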
def api_call(self, *args, **kwargs):
DynamicSetting = apps.get_model("base", "DynamicSetting")

View file

@ -1,6 +1,8 @@
import logging
import random
from typing import Optional
from celery import uuid as celery_uuid
from celery.utils.log import get_task_logger
from django.apps import apps
from django.conf import settings
@ -13,7 +15,11 @@ from apps.slack.constants import CACHE_UPDATE_INCIDENT_SLACK_MESSAGE_LIFETIME, S
from apps.slack.scenarios.scenario_step import ScenarioStep
from apps.slack.slack_client import SlackClientWithErrorHandling
from apps.slack.slack_client.exceptions import SlackAPIException, SlackAPITokenException
from apps.slack.utils import get_cache_key_update_incident_slack_message, post_message_to_channel
from apps.slack.utils import (
get_cache_key_update_incident_slack_message,
get_populate_slack_channel_task_id_key,
post_message_to_channel,
)
from common.custom_celery_tasks import shared_dedicated_queue_retry_task
from common.utils import batch_queryset
@ -48,7 +54,7 @@ def update_incident_slack_message(slack_team_identity_pk, alert_group_pk):
AlertGroup = apps.get_model("alerts", "AlertGroup")
slack_team_identity = SlackTeamIdentity.objects.get(pk=slack_team_identity_pk)
alert_group = AlertGroup.all_objects.get(pk=alert_group_pk)
alert_group = AlertGroup.objects.get(pk=alert_group_pk)
if alert_group.skip_escalation_in_slack or alert_group.channel.is_rate_limited_in_slack:
return "Skip message update in Slack due to rate limit"
@ -73,7 +79,7 @@ def check_slack_message_exists_before_post_message_to_thread(
AlertGroupLogRecord = apps.get_model("alerts", "AlertGroupLogRecord")
EscalationPolicy = apps.get_model("alerts", "EscalationPolicy")
alert_group = AlertGroup.all_objects.get(pk=alert_group_pk)
alert_group = AlertGroup.objects.get(pk=alert_group_pk)
slack_team_identity = alert_group.channel.organization.slack_team_identity
# get escalation policy object if it exists to save it in log record
escalation_policy = EscalationPolicy.objects.filter(pk=escalation_policy_pk).first()
@ -144,7 +150,7 @@ def send_message_to_thread_if_bot_not_in_channel(alert_group_pk, slack_team_iden
SlackTeamIdentity = apps.get_model("slack", "SlackTeamIdentity")
slack_team_identity = SlackTeamIdentity.objects.get(pk=slack_team_identity_pk)
alert_group = AlertGroup.all_objects.get(pk=alert_group_pk)
alert_group = AlertGroup.objects.get(pk=alert_group_pk)
sc = SlackClientWithErrorHandling(slack_team_identity.bot_access_token)
@ -268,7 +274,7 @@ def post_or_update_log_report_message_task(alert_group_pk, slack_team_identity_p
UpdateLogReportMessageStep = ScenarioStep.get_step("distribute_alerts", "UpdateLogReportMessageStep")
slack_team_identity = SlackTeamIdentity.objects.get(pk=slack_team_identity_pk)
alert_group = AlertGroup.all_objects.get(pk=alert_group_pk)
alert_group = AlertGroup.objects.get(pk=alert_group_pk)
step = UpdateLogReportMessageStep(slack_team_identity, alert_group.channel.organization)
if alert_group.skip_escalation_in_slack or alert_group.channel.is_rate_limited_in_slack:
@ -501,22 +507,60 @@ def populate_slack_channels():
# increase delay to prevent slack ratelimit
if counter % 8 == 0:
delay += 60
populate_slack_channels_for_team.apply_async((slack_team_identity.pk,), countdown=delay)
start_populate_slack_channels_for_team(slack_team_identity.pk, delay)
def start_populate_slack_channels_for_team(
slack_team_identity_id: int, delay: int, cursor: Optional[str] = None
) -> None:
# save active task id in cache to make only one populate task active per team
task_id = celery_uuid()
cache_key = get_populate_slack_channel_task_id_key(slack_team_identity_id)
cache.set(cache_key, task_id)
populate_slack_channels_for_team.apply_async((slack_team_identity_id, cursor), countdown=delay, task_id=task_id)
@shared_dedicated_queue_retry_task(
autoretry_for=(Exception,), retry_backoff=True, max_retries=1 if settings.DEBUG else None
)
def populate_slack_channels_for_team(slack_team_identity_id):
def populate_slack_channels_for_team(slack_team_identity_id: int, cursor: Optional[str] = None) -> None:
"""
Make paginated requests to get Slack channels. On rate limit: update info for the channels fetched so far, save
the collected channel ids in the cache, and restart the task from the last successful pagination cursor so no
data is lost during the delay.
"""
SlackTeamIdentity = apps.get_model("slack", "SlackTeamIdentity")
SlackChannel = apps.get_model("slack", "SlackChannel")
slack_team_identity = SlackTeamIdentity.objects.get(pk=slack_team_identity_id)
sc = SlackClientWithErrorHandling(slack_team_identity.bot_access_token)
active_task_id_key = get_populate_slack_channel_task_id_key(slack_team_identity_id)
active_task_id = cache.get(active_task_id_key)
current_task_id = populate_slack_channels_for_team.request.id
if active_task_id and active_task_id != current_task_id:
logger.info(
f"Stop populate_slack_channels_for_team for SlackTeamIdentity pk: {slack_team_identity_id} due to "
f"incorrect active task id"
)
return
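The guard above ensures only the most recently scheduled sync task per team does any work: the scheduler stores the fresh task id in the cache, and a stale task that wakes up sees a different id and exits early. A self-contained sketch of the same single-active-task pattern, with a dict standing in for the Django cache and string results standing in for the task body:

```python
import uuid

cache = {}  # stands in for django.core.cache

def schedule_sync(team_id):
    """Record the newest task id so older queued tasks become no-ops."""
    task_id = str(uuid.uuid4())
    cache[f"populate_slack_channels_{team_id}"] = task_id
    return task_id  # would be passed to apply_async(task_id=...)

def run_sync(team_id, my_task_id):
    # A task that is no longer the recorded active task bails out.
    active = cache.get(f"populate_slack_channels_{team_id}")
    if active and active != my_task_id:
        return "skipped: superseded by a newer task"
    return "synced"

stale = schedule_sync("team1")
fresh = schedule_sync("team1")  # supersedes the first
# run_sync("team1", stale) -> "skipped: superseded by a newer task"
# run_sync("team1", fresh) -> "synced"
```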
collected_channels_key = f"SLACK_CHANNELS_TEAM_{slack_team_identity_id}"
collected_channels = cache.get(collected_channels_key, set())
if cursor and not collected_channels:
# the task was restarted after a rate limit exception but the collected channels were lost from the cache
logger.warning(
f"Restart slack channel sync for SlackTeamIdentity pk: {slack_team_identity_id} due to empty "
f"'collected_channels' after rate limit"
)
delay = 60
return start_populate_slack_channels_for_team(slack_team_identity_id, delay)
try:
response, cursor, rate_limited = sc.paginated_api_call_with_ratelimit(
"conversations.list",
types="public_channel,private_channel",
paginated_key="channels",
limit=1000,
cursor=cursor,
)
except SlackAPITokenException as e:
logger.info(f"token revoked\n{e}")
@@ -525,20 +569,11 @@ def populate_slack_channels_for_team(slack_team_identity_id):
logger.warning(
f"invalid_auth while populating slack channels, SlackTeamIdentity pk: {slack_team_identity.pk}"
)
elif e.response["error"] == "missing_scope":
logger.warning(
f"'conversations.list' slack api error: missing_scope. "
f"SlackTeamIdentity pk: {slack_team_identity.pk}.\n{e}"
)
return
else:
logger.error(f"'conversations.list' slack api error. SlackTeamIdentity pk: {slack_team_identity.pk}\n{e}")
raise e
@@ -546,6 +581,8 @@ def populate_slack_channels_for_team(slack_team_identity_id):
today = timezone.now().date()
slack_channels = {channel["id"]: channel for channel in response["channels"]}
collected_channels.update(slack_channels.keys())
existing_channels = slack_team_identity.cached_channels.all()
existing_channel_ids = set(existing_channels.values_list("slack_id", flat=True))
@@ -564,12 +601,8 @@ def populate_slack_channels_for_team(slack_team_identity_id):
)
SlackChannel.objects.bulk_create(channels_to_create, batch_size=5000)
# update existing channels
channels_to_update = existing_channels.filter(slack_id__in=slack_channels.keys()).exclude(last_populated=today)
for channel in channels_to_update:
slack_channel = slack_channels[channel.slack_id]
channel.name = slack_channel["name"]
@@ -580,6 +613,21 @@ def populate_slack_channels_for_team(slack_team_identity_id):
SlackChannel.objects.bulk_update(
channels_to_update, fields=("name", "is_archived", "is_shared", "last_populated"), batch_size=5000
)
if rate_limited:
# save collected channels ids to cache and restart the task with the current pagination cursor
cache.set(collected_channels_key, collected_channels, timeout=3600)
delay = random.randint(1, 3) * 60
logger.warning(
f"'conversations.list' slack api error: rate_limited. SlackTeamIdentity pk: {slack_team_identity_id}. "
f"Delay populate_slack_channels_for_team task for {delay//60} min."
)
start_populate_slack_channels_for_team(slack_team_identity_id, delay, cursor)
else:
# delete excess channels
assert collected_channels
channel_ids_to_delete = existing_channel_ids - collected_channels
slack_team_identity.cached_channels.filter(slack_id__in=channel_ids_to_delete).delete()
cache.delete(collected_channels_key)
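The new call site relies on `paginated_api_call_with_ratelimit`, whose implementation is not part of this diff. A minimal sketch of such a wrapper under assumed semantics (the `fetch_page` callable and `RateLimitError` below are hypothetical stand-ins for the Slack client internals): it accumulates pages until the API signals a rate limit, then returns the partial result plus the cursor to resume from.

```python
# Hypothetical sketch of a paginated_api_call_with_ratelimit-style helper.
class RateLimitError(Exception):
    """Stand-in for a Slack 'ratelimited' API error."""


def paginated_call_with_ratelimit(fetch_page, paginated_key, cursor=None):
    """fetch_page(cursor) -> (items, next_cursor); may raise RateLimitError."""
    collected = {paginated_key: []}
    rate_limited = False
    while True:
        try:
            items, next_cursor = fetch_page(cursor)
        except RateLimitError:
            rate_limited = True  # return what we have; caller restarts from `cursor`
            break
        collected[paginated_key].extend(items)
        cursor = next_cursor
        if not next_cursor:
            break
    return collected, cursor, rate_limited


# Toy pager: two successful pages, rate limited on the third call.
pages = {None: (["C1"], "p2"), "p2": (["C2"], "p3")}


def fetch(cursor):
    if cursor == "p3":
        raise RateLimitError()
    return pages[cursor]


response, cursor, rate_limited = paginated_call_with_ratelimit(fetch, "channels")
# response["channels"] == ["C1", "C2"], cursor == "p3", rate_limited == True
```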
@shared_dedicated_queue_retry_task(autoretry_for=(Exception,), retry_backoff=True, max_retries=0)


@@ -6,6 +6,7 @@ from django.conf import settings
from rest_framework import status
from rest_framework.test import APIClient
from apps.slack.scenarios.manage_responders import ManageRespondersUserChange
from apps.slack.scenarios.paging import OnPagingTeamChange
from apps.slack.scenarios.scenario_step import PAYLOAD_TYPE_BLOCK_ACTIONS
@@ -164,3 +165,35 @@ def test_organization_not_found_scenario_doesnt_break_direct_paging(
assert response.status_code == status.HTTP_200_OK
mock_on_paging_team_change.assert_called_once()
@patch("apps.slack.views.SlackEventApiEndpointView.verify_signature", return_value=True)
@patch.object(ManageRespondersUserChange, "process_scenario")
@pytest.mark.django_db
def test_organization_not_found_scenario_doesnt_break_manage_responders(
mock_process_scenario,
_,
make_organization,
make_slack_user_identity,
make_user,
slack_team_identity,
):
"""
Check ManageRespondersUserChange.process_scenario is called when a user is selected in the manage responders dialog.
"""
organization = make_organization(slack_team_identity=slack_team_identity)
slack_user_identity = make_slack_user_identity(slack_team_identity=slack_team_identity, slack_id=SLACK_USER_ID)
make_user(organization=organization, slack_user_identity=slack_user_identity)
response = _make_request(
{
"team_id": SLACK_TEAM_ID,
"user_id": SLACK_USER_ID,
"type": "block_actions",
"actions": [{"action_id": ManageRespondersUserChange.routing_uid(), "type": "static_select"}],
"view": {"type": "modal"},
}
)
assert response.status_code == status.HTTP_200_OK
mock_process_scenario.assert_called_once()


@@ -23,14 +23,21 @@ def test_populate_slack_channels_for_team(make_organization_with_slack_team_iden
)
)
response, cursor, rate_limited = (
{
"channels": (
{"id": "C111111111", "name": "test1", "is_archived": False, "is_shared": False},
{"id": "C222222222", "name": "test_changed_name", "is_archived": False, "is_shared": True},
{"id": "C333333333", "name": "test3", "is_archived": False, "is_shared": True},
)
},
None,
False,
)
with patch.object(
SlackClientWithErrorHandling, "paginated_api_call_with_ratelimit", return_value=(response, cursor, rate_limited)
):
populate_slack_channels_for_team(slack_team_identity.pk)
channels = slack_team_identity.cached_channels.all()
@@ -45,3 +52,123 @@ def test_populate_slack_channels_for_team(make_organization_with_slack_team_iden
assert second_channel.name == "test_changed_name"
assert not channels.filter(last_populated__lte=yesterday).exists()
@patch("apps.slack.tasks.start_populate_slack_channels_for_team")
@pytest.mark.django_db
def test_populate_slack_channels_for_team_ratelimit(
mocked_start_populate_slack_channels_for_team,
make_organization_with_slack_team_identity,
make_slack_channel,
):
organization, slack_team_identity = make_organization_with_slack_team_identity()
yesterday = (timezone.now() - timezone.timedelta(days=1)).date()
today = timezone.now().date()
_ = tuple(
make_slack_channel(
slack_team_identity=slack_team_identity, slack_id=slack_id, name=name, last_populated=yesterday
)
for slack_id, name in (
("C111111111", "test1"),
("C222222222", "test2"),
("C444444444", "test4"),
)
)
# first response with rate limit error
response_1, cursor_1, rate_limited_1 = (
{"channels": ({"id": "C111111111", "name": "test1", "is_archived": False, "is_shared": False},)},
"TESTCURSOR1",
True,
)
# second response with rate limit error
response_2, cursor_2, rate_limited_2 = (
{
"channels": (
{"id": "C111111111", "name": "test1", "is_archived": False, "is_shared": False},
{"id": "C222222222", "name": "test_changed_name", "is_archived": False, "is_shared": True},
)
},
"TESTCURSOR2",
True,
)
# third response without rate limit error
response_3, cursor_3, rate_limited_3 = (
{
"channels": (
{"id": "C222222222", "name": "test_changed_name", "is_archived": False, "is_shared": True},
{"id": "C333333333", "name": "test3", "is_archived": False, "is_shared": True},
)
},
"",
False,
)
# these channels should exist after finishing populate_slack_channels_for_team
expected_channel_ids = {"C111111111", "C222222222", "C333333333"}
with patch.object(
SlackClientWithErrorHandling,
"paginated_api_call_with_ratelimit",
return_value=(response_1, cursor_1, rate_limited_1),
):
populate_slack_channels_for_team(slack_team_identity.pk)
# expect only one channel to update and none to delete
# start_populate_slack_channels_for_team should be called
channels = slack_team_identity.cached_channels.all()
channel_1 = channels.get(slack_id="C111111111")
assert channel_1.last_populated == today
channel_2 = channels.get(slack_id="C222222222")
assert channel_2.last_populated == yesterday
assert channels.filter(slack_id="C444444444").exists()
assert mocked_start_populate_slack_channels_for_team.called
assert mocked_start_populate_slack_channels_for_team.call_count == 1
with patch.object(
SlackClientWithErrorHandling,
"paginated_api_call_with_ratelimit",
return_value=(response_2, cursor_2, rate_limited_2),
):
populate_slack_channels_for_team(slack_team_identity.pk, cursor_1)
# expect one more channel to update and none to delete
# start_populate_slack_channels_for_team should be called
channels = slack_team_identity.cached_channels.all()
channel_2 = channels.get(slack_id="C222222222")
assert channel_2.last_populated == today
assert channel_2.name == "test_changed_name"
assert not channels.filter(slack_id="C333333333").exists()
assert channels.filter(slack_id="C444444444").exists()
assert mocked_start_populate_slack_channels_for_team.called
assert mocked_start_populate_slack_channels_for_team.call_count == 2
with patch.object(
SlackClientWithErrorHandling,
"paginated_api_call_with_ratelimit",
return_value=(response_3, cursor_3, rate_limited_3),
):
populate_slack_channels_for_team(slack_team_identity.pk, cursor_1)
# expect one new channel and one deleted channel; channel ids in the DB should match those collected from responses
# start_populate_slack_channels_for_team should NOT be called
channels = slack_team_identity.cached_channels.all()
actual_channel_ids = set(channels.values_list("slack_id", flat=True))
assert actual_channel_ids == expected_channel_ids
assert not channels.filter(slack_id="C444444444").exists()
channel_2 = channels.get(slack_id="C222222222")
assert channel_2.name == "test_changed_name"
assert not channels.filter(last_populated__lte=yesterday).exists()
assert mocked_start_populate_slack_channels_for_team.call_count == 2


@@ -0,0 +1,207 @@
import json
from unittest.mock import patch
import pytest
from django.utils import timezone
from apps.base.models import UserNotificationPolicy
from apps.schedules.models import CustomOnCallShift, OnCallScheduleWeb
from apps.slack.scenarios.manage_responders import (
ALERT_GROUP_DATA_KEY,
DIRECT_PAGING_SCHEDULE_SELECT_ID,
DIRECT_PAGING_USER_SELECT_ID,
USER_DATA_KEY,
ManageRespondersRemoveUser,
ManageRespondersScheduleChange,
ManageRespondersUserChange,
StartManageResponders,
)
ORGANIZATION_ID = 12
ALERT_GROUP_ID = 42
TRIGGER_ID = "111"
CHANNEL_ID = "123"
MESSAGE_TS = "67"
def make_slack_payload(
user=None,
schedule=None,
actions=None,
):
payload = {
"trigger_id": TRIGGER_ID,
"view": {
"id": "view-id",
"private_metadata": json.dumps({"input_id_prefix": "", ALERT_GROUP_DATA_KEY: ALERT_GROUP_ID}),
"state": {
"values": {
DIRECT_PAGING_USER_SELECT_ID: {
ManageRespondersUserChange.routing_uid(): {
"selected_option": {"value": user.pk} if user else None
}
},
DIRECT_PAGING_SCHEDULE_SELECT_ID: {
ManageRespondersScheduleChange.routing_uid(): {
"selected_option": {"value": schedule.pk} if schedule else None
}
},
}
},
},
}
if actions is not None:
payload["actions"] = actions
return payload
@pytest.fixture
def manage_responders_setup(
make_organization_and_user_with_slack_identities,
make_alert_receive_channel,
make_alert_group,
make_alert,
make_slack_channel,
make_slack_message,
):
organization, user, slack_team_identity, slack_user_identity = make_organization_and_user_with_slack_identities()
alert_receive_channel = make_alert_receive_channel(organization)
alert_group = make_alert_group(alert_receive_channel, pk=ALERT_GROUP_ID)
make_alert(alert_group, raw_request_data={})
slack_channel = make_slack_channel(slack_team_identity, slack_id=CHANNEL_ID)
slack_message = make_slack_message(alert_group=alert_group, channel_id=slack_channel.slack_id, slack_id=MESSAGE_TS)
slack_message.get_alert_group() # fix FKs
return organization, user, slack_team_identity, slack_user_identity
@pytest.mark.django_db
def test_initial_state(manage_responders_setup):
payload = {
"trigger_id": TRIGGER_ID,
"actions": [
{
"type": "button",
"value": json.dumps({"organization_id": ORGANIZATION_ID, "alert_group_pk": ALERT_GROUP_ID}),
}
],
}
organization, user, slack_team_identity, slack_user_identity = manage_responders_setup
step = StartManageResponders(slack_team_identity, organization, user)
with patch.object(step._slack_client, "api_call") as mock_slack_api_call:
step.process_scenario(slack_user_identity, slack_team_identity, payload)
assert mock_slack_api_call.call_args.args == ("views.open",)
metadata = json.loads(mock_slack_api_call.call_args.kwargs["view"]["private_metadata"])
assert metadata[ALERT_GROUP_DATA_KEY] == ALERT_GROUP_ID
@pytest.mark.django_db
def test_add_user_no_warning(manage_responders_setup, make_schedule, make_on_call_shift, make_user_notification_policy):
organization, user, slack_team_identity, slack_user_identity = manage_responders_setup
# set up schedule: user is on call
schedule = make_schedule(
organization,
schedule_class=OnCallScheduleWeb,
team=None,
)
now = timezone.now().replace(hour=0, minute=0, second=0, microsecond=0)
start_date = now - timezone.timedelta(days=7)
data = {
"start": start_date,
"rotation_start": start_date,
"duration": timezone.timedelta(hours=23, minutes=59, seconds=59),
"priority_level": 1,
"frequency": CustomOnCallShift.FREQUENCY_DAILY,
"schedule": schedule,
}
on_call_shift = make_on_call_shift(
organization=organization, shift_type=CustomOnCallShift.TYPE_ROLLING_USERS_EVENT, **data
)
on_call_shift.add_rolling_users([[user]])
schedule.refresh_ical_file()
# setup notification policy
make_user_notification_policy(
user=user,
step=UserNotificationPolicy.Step.NOTIFY,
notify_by=UserNotificationPolicy.NotificationChannel.SMS,
)
payload = make_slack_payload(user=user)
step = ManageRespondersUserChange(slack_team_identity, organization, user)
with patch.object(step._slack_client, "api_call") as mock_slack_api_call:
step.process_scenario(slack_user_identity, slack_team_identity, payload)
assert mock_slack_api_call.call_args.args == ("views.update",)
# check there's a delete button for the user
assert mock_slack_api_call.call_args.kwargs["view"]["blocks"][0]["accessory"]["value"] == str(user.pk)
@pytest.mark.django_db
def test_add_user_raise_warning(manage_responders_setup):
organization, user, slack_team_identity, slack_user_identity = manage_responders_setup
# user is not on call
payload = make_slack_payload(user=user)
step = ManageRespondersUserChange(slack_team_identity, organization, user)
with patch.object(step._slack_client, "api_call") as mock_slack_api_call:
step.process_scenario(slack_user_identity, slack_team_identity, payload)
assert mock_slack_api_call.call_args.args == ("views.push",)
assert mock_slack_api_call.call_args.kwargs["view"]["callback_id"] == "ManageRespondersConfirmUserChange"
text_from_blocks = "".join(
b["text"]["text"] for b in mock_slack_api_call.call_args.kwargs["view"]["blocks"] if b["type"] == "section"
)
assert f"*{user.username}* is not on-call" in text_from_blocks
metadata = json.loads(mock_slack_api_call.call_args.kwargs["view"]["private_metadata"])
assert metadata[USER_DATA_KEY] == user.pk
@pytest.mark.django_db
def test_add_schedule(manage_responders_setup, make_schedule, make_on_call_shift):
organization, user, slack_team_identity, slack_user_identity = manage_responders_setup
schedule = make_schedule(organization, schedule_class=OnCallScheduleWeb, team=None)
now = timezone.now().replace(hour=0, minute=0, second=0, microsecond=0)
start_date = now - timezone.timedelta(days=7)
data = {
"start": start_date,
"rotation_start": start_date,
"duration": timezone.timedelta(hours=23, minutes=59, seconds=59),
"priority_level": 1,
"frequency": CustomOnCallShift.FREQUENCY_DAILY,
"schedule": schedule,
}
on_call_shift = make_on_call_shift(
organization=organization, shift_type=CustomOnCallShift.TYPE_ROLLING_USERS_EVENT, **data
)
on_call_shift.add_rolling_users([[user]])
schedule.refresh_ical_file()
payload = make_slack_payload(schedule=schedule)
step = ManageRespondersScheduleChange(slack_team_identity, organization, user)
with patch.object(step._slack_client, "api_call") as mock_slack_api_call:
step.process_scenario(slack_user_identity, slack_team_identity, payload)
assert mock_slack_api_call.call_args.args == ("views.update",)
assert mock_slack_api_call.call_args.kwargs["view"]["blocks"][0]["accessory"]["value"] == str(user.pk)
@pytest.mark.django_db
def test_remove_user(manage_responders_setup):
organization, user, slack_team_identity, slack_user_identity = manage_responders_setup
payload = make_slack_payload(actions=[{"value": user.pk}])
step = ManageRespondersRemoveUser(slack_team_identity, organization, user)
with patch.object(step._slack_client, "api_call") as mock_slack_api_call:
step.process_scenario(slack_user_identity, slack_team_identity, payload)
assert mock_slack_api_call.call_args.args == ("views.update",)
# check there's no list of users in the view
assert mock_slack_api_call.call_args.kwargs["view"]["blocks"][0]["accessory"]["type"] != "button"


@@ -65,22 +65,18 @@ def test_slack_renderer_unresolve_button(make_organization, make_alert_receive_c
@pytest.mark.django_db
def test_slack_renderer_responders_button(
make_organization, make_user, make_alert_receive_channel, make_alert_group, make_alert
):
organization = make_organization()
user = make_user(organization=organization)
alert_receive_channel = make_alert_receive_channel(organization)
alert_group = make_alert_group(alert_receive_channel)
make_alert(alert_group=alert_group, raw_request_data={})
elements = AlertGroupSlackRenderer(alert_group).render_alert_group_attachments()[0]["blocks"][0]["elements"]
button = elements[3]
assert button["text"]["text"] == "Responders"
@pytest.mark.django_db
@@ -113,7 +109,7 @@ def test_slack_renderer_silence_button(make_organization, make_alert_receive_cha
elements = AlertGroupSlackRenderer(alert_group).render_alert_group_attachments()[0]["blocks"][0]["elements"]
button = elements[2]
assert button["placeholder"]["text"] == "Silence"
values = [json.loads(option["value"]) for option in button["options"]]
@@ -131,7 +127,7 @@ def test_slack_renderer_unsilence_button(make_organization, make_alert_receive_c
make_alert(alert_group=alert_group, raw_request_data={})
elements = AlertGroupSlackRenderer(alert_group).render_alert_group_attachments()[0]["blocks"][0]["elements"]
button = elements[2]
assert button["text"]["text"] == "Unsilence"
assert json.loads(button["value"]) == {


@@ -65,3 +65,7 @@ def format_datetime_to_slack(timestamp, format="date_short"):
def get_cache_key_update_incident_slack_message(alert_group_pk):
CACHE_KEY_PREFIX = "update_incident_slack_message"
return f"{CACHE_KEY_PREFIX}_{alert_group_pk}"
def get_populate_slack_channel_task_id_key(slack_team_identity_id):
return f"SLACK_CHANNELS_TASK_ID_TEAM_{slack_team_identity_id}"


@@ -22,6 +22,7 @@ from apps.slack.scenarios.alertgroup_appearance import STEPS_ROUTING as ALERTGRO
from apps.slack.scenarios.declare_incident import STEPS_ROUTING as DECLARE_INCIDENT_ROUTING
from apps.slack.scenarios.distribute_alerts import STEPS_ROUTING as DISTRIBUTION_STEPS_ROUTING
from apps.slack.scenarios.invited_to_channel import STEPS_ROUTING as INVITED_TO_CHANNEL_ROUTING
from apps.slack.scenarios.manage_responders import STEPS_ROUTING as MANAGE_RESPONDERS_ROUTING
from apps.slack.scenarios.manual_incident import STEPS_ROUTING as MANUAL_INCIDENT_ROUTING
from apps.slack.scenarios.notified_user_not_in_channel import STEPS_ROUTING as NOTIFIED_USER_NOT_IN_CHANNEL_ROUTING
from apps.slack.scenarios.onboarding import STEPS_ROUTING as ONBOARDING_STEPS_ROUTING
@@ -75,6 +76,7 @@ SCENARIOS_ROUTES.extend(CHANNEL_ROUTING)
SCENARIOS_ROUTES.extend(PROFILE_UPDATE_ROUTING)
SCENARIOS_ROUTES.extend(MANUAL_INCIDENT_ROUTING)
SCENARIOS_ROUTES.extend(DIRECT_PAGE_ROUTING)
SCENARIOS_ROUTES.extend(MANAGE_RESPONDERS_ROUTING)
SCENARIOS_ROUTES.extend(DECLARE_INCIDENT_ROUTING)
SCENARIOS_ROUTES.extend(NOTIFIED_USER_NOT_IN_CHANNEL_ROUTING)


@@ -3,17 +3,14 @@ from urllib.parse import urljoin
from django.apps import apps
from django.conf import settings
from django.contrib.auth import REDIRECT_FIELD_NAME
from django.http import HttpResponse
from rest_framework import status
from social_core.exceptions import AuthForbidden
from apps.slack.tasks import populate_slack_channels_for_team, populate_slack_usergroups_for_team
from apps.social_auth.exceptions import InstallMultiRegionSlackException
from common.constants.slack_auth import SLACK_AUTH_SLACK_USER_ALREADY_CONNECTED_ERROR, SLACK_AUTH_WRONG_WORKSPACE_ERROR
from common.insight_log import ChatOpsEvent, ChatOpsTypePlug, write_chatops_insight_log
from common.oncall_gateway import check_slack_installation_possible, create_slack_connector
@@ -41,26 +38,24 @@ def connect_user_to_slack(response, backend, strategy, user, organization, *args
slack_team_identity = organization.slack_team_identity
slack_user_id = response["authed_user"]["id"]
redirect_to = "/a/grafana-oncall-app/users/me/"
base_url_to_redirect = urljoin(organization.grafana_url, redirect_to)
if slack_team_identity is None:
# means that organization doesn't have slack integration, so user cannot connect their account to slack
return HttpResponse(status=status.HTTP_400_BAD_REQUEST)
if slack_team_identity.slack_id != response["team"]["id"]:
# means that user authed in another slack workspace that is not connected to their organization
# change redirect url to show user error message and save it in session param
url = base_url_to_redirect + f"?slack_error={SLACK_AUTH_WRONG_WORKSPACE_ERROR}"
strategy.session[REDIRECT_FIELD_NAME] = url
return HttpResponse(status=status.HTTP_400_BAD_REQUEST)
if organization.users.filter(slack_user_identity__slack_id=slack_user_id).exists():
# means that slack user has already been connected to another user in current organization
url = base_url_to_redirect + f"?slack_error={SLACK_AUTH_SLACK_USER_ALREADY_CONNECTED_ERROR}"
strategy.session[REDIRECT_FIELD_NAME] = url
return HttpResponse(status=status.HTTP_400_BAD_REQUEST)
slack_user_identity, _ = SlackUserIdentity.objects.get_or_create(


@@ -64,7 +64,7 @@ class AlertGroupTelegramRepresentative(AlertGroupAbstractRepresentative):
logger.info("AlertGroupTelegramRepresentative UPDATE LOG REPORT SIGNAL")
alert_group = kwargs["alert_group"]
if not isinstance(alert_group, AlertGroup):
alert_group = AlertGroup.objects.get(pk=alert_group)
messages_to_edit = alert_group.telegram_messages.filter(
message_type__in=(


@@ -87,7 +87,7 @@ def send_link_to_channel_message_or_fallback_to_full_alert_group(
try:
user_connector = TelegramToUserConnector.objects.get(pk=user_connector_pk)
alert_group = AlertGroup.objects.get(pk=alert_group_pk)
notification_policy = UserNotificationPolicy.objects.get(pk=notification_policy_pk)
# probably telegram message just didn't appear in Telegram channel yet


@@ -65,7 +65,7 @@ class ButtonPressHandler(UpdateHandler):
args = CallbackQueryFactory.decode_data(data)
alert_group_pk = args[0]
alert_group = AlertGroup.objects.get(pk=alert_group_pk)
action_value = args[1]
try:


@@ -0,0 +1,18 @@
# Generated by Django 3.2.20 on 2023-07-18 06:36
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('user_management', '0012_auto_20230711_1554'),
]
operations = [
migrations.AlterField(
model_name='organization',
name='acknowledge_remind_timeout',
field=models.IntegerField(choices=[(0, 'Never remind'), (1, 'Remind every 1 hour'), (2, 'Remind every 3 hours'), (3, 'Remind every 5 hours'), (4, 'Remind every 10 hours')], default=0),
),
]


@@ -30,7 +30,7 @@ ACTION_TO_TRIGGER_TYPE = {
)
def alert_group_created(self, alert_group_id):
try:
alert_group = AlertGroup.objects.get(pk=alert_group_id)
except AlertGroup.DoesNotExist:
return
@@ -50,7 +50,7 @@ def alert_group_created(self, alert_group_id):
)
def alert_group_status_change(self, action_type, alert_group_id, user_id):
try:
alert_group = AlertGroup.objects.get(pk=alert_group_id)
except AlertGroup.DoesNotExist:
return


@@ -177,7 +177,7 @@ def execute_webhook(webhook_pk, alert_group_id, user_id, escalation_policy_id):
type=UserNotificationPolicyLogRecord.TYPE_PERSONAL_NOTIFICATION_SUCCESS,
).select_related("author")
alert_group = (
AlertGroup.objects.prefetch_related(
Prefetch("personal_log_records", queryset=personal_log_records, to_attr="sent_notifications")
)
.select_related("channel")


@@ -49,7 +49,7 @@ def test_alert_group_created_for_team(
@pytest.mark.django_db
def test_alert_group_created_does_not_exist(make_organization, make_custom_webhook):
assert AlertGroup.objects.filter(pk=53).first() is None
organization = make_organization()
# make sure there is a webhook setup
make_custom_webhook(organization=organization, trigger_type=Webhook.TRIGGER_ALERT_GROUP_CREATED)
@@ -98,7 +98,7 @@ def test_alert_group_status_change(
@pytest.mark.django_db
def test_alert_group_status_change_does_not_exist(make_organization, make_custom_webhook):
assert AlertGroup.objects.filter(pk=53).first() is None
organization = make_organization()
# make sure there is a webhook setup
make_custom_webhook(organization=organization, trigger_type=Webhook.TRIGGER_ACKNOWLEDGE)


@@ -141,38 +141,6 @@ class RateLimitHeadersMixin:
return super().handle_exception(exc)
class OrderedModelSerializerMixin:
def _change_position(self, order, instance):
if order is not None:
if order >= 0:
instance.to(order)
elif order == -1:
instance.bottom()
else:
raise BadRequest(detail="Invalid value for position field")
def _validate_order(self, order, filter_kwargs):
if order is not None and (self.instance is None or self.instance.order != order):
last_instance = self.Meta.model.objects.filter(**filter_kwargs).order_by("order").last()
max_order = last_instance.order if last_instance else -1
if self.instance is None:
max_order += 1
if order > max_order:
raise BadRequest(detail="Invalid value for position field")
def _validate_manual_order(self, order):
"""
For manual ordering, validate only that order is a valid PositiveInteger.
The user of manual ordering is responsible for correct ordering.
Manual ordering is not intended for general use; it exists only for the Terraform provider.
"""
# https://docs.djangoproject.com/en/4.1/ref/models/fields/#positiveintegerfield
MAX_POSITIVE_INTEGER = 2147483647
if order is not None and (order < 0 or order > MAX_POSITIVE_INTEGER):
raise BadRequest(detail="Invalid value for position field")
class PublicPrimaryKeyMixin:
def get_object(self):
pk = self.kwargs["pk"]



@@ -0,0 +1,69 @@
from rest_framework import serializers
from common.api_helpers.exceptions import BadRequest
class OrderedModelSerializer(serializers.ModelSerializer):
"""Ordered model serializer to be used in public API."""
position = serializers.IntegerField(required=False, source="order")
# manual_order=True is intended for use by Terraform provider only, and is not a documented feature.
manual_order = serializers.BooleanField(default=False, write_only=True)
class Meta:
fields = ["position", "manual_order"]
def create(self, validated_data):
# Remove "manual_order" and "order" fields from validated_data, so they are not passed to create method.
manual_order = validated_data.pop("manual_order", False)
order = validated_data.pop("order", None)
# Create the instance.
# Instances are always created at the end of the list, and then moved to the desired position by _adjust_order.
instance = super().create(validated_data)
# Adjust order of the instance if necessary.
if order is not None:
self._adjust_order(instance, manual_order, order, created=True)
return instance
def update(self, instance, validated_data):
# Remove "manual_order" and "order" fields from validated_data, so they are not passed to update method.
manual_order = validated_data.pop("manual_order", False)
order = validated_data.pop("order", None)
# Adjust order of the instance if necessary.
if order is not None:
self._adjust_order(instance, manual_order, order, created=False)
# Proceed with the update.
return super().update(instance, validated_data)
@staticmethod
def _adjust_order(instance, manual_order, order, created):
# Passing order=-1 means that the policy should be moved to the end of the list.
# Works only for public API but not for Terraform provider.
if order == -1 and not manual_order:
if created:
# The policy was just created, so it is already at the end of the list.
return
order = instance.max_order()
# max_order() can't be None here, because at least one instance exists: the one we are moving.
assert order is not None
# Check the order is in the valid range.
# https://docs.djangoproject.com/en/4.1/ref/models/fields/#positiveintegerfield
if order < 0 or order > 2147483647:
raise BadRequest(detail="Invalid value for position field")
# Orders are swapped instead of moved when using Terraform, because Terraform may issue concurrent requests
# to create / update / delete multiple policies. "Move to" operation is not deterministic in this case, and
# final order of policies may be different depending on the order in which requests are processed. On the other
# hand, the result of concurrent "swap" operations is deterministic and does not depend on the order in
# which requests are processed.
if manual_order:
instance.swap(order)
else:
instance.to(order)
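The determinism argument in the comments above can be checked on a plain Python list (the toy `swap`/`move_to` helpers below are illustrations, not the django-ordered-model API): two independent swaps commute, while two "move to position" operations generally do not.

```python
# Toy demonstration of why concurrent reorderings use swap() rather than to().
def swap(items, a, b):
    out = list(items)
    out[a], out[b] = out[b], out[a]
    return out


def move_to(items, src, dst):
    out = list(items)
    out.insert(dst, out.pop(src))
    return out


items = ["p0", "p1", "p2", "p3"]

# Two independent swaps: the result is the same regardless of which lands first.
assert swap(swap(items, 0, 3), 1, 2) == swap(swap(items, 1, 2), 0, 3)

# Two "move to position" operations: the result depends on request order.
assert move_to(move_to(items, 0, 2), 3, 1) != move_to(move_to(items, 3, 1), 0, 2)
```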

Some files were not shown because too many files have changed in this diff Show more