Merge branch 'dev' into 191-connectivity-warning

This commit is contained in:
Yulia Shanyrova 2022-09-09 13:09:27 +02:00
commit 351d7d2b94
102 changed files with 7399 additions and 9001 deletions

@@ -1,5 +1,17 @@
# Change Log
## v1.0.35 (2022-09-07)
- Bug fixes
## v1.0.34 (2022-09-06)
- Fix schedule notification spam
## v1.0.33 (2022-09-06)
- Add raw alert view
- Add GitHub star button for OSS installations
- Restore alert group search functionality
- Bug fixes
## v1.0.32 (2022-09-01)
- Bug fixes

@@ -16,16 +16,16 @@ weight: 100
# Slack integration for Grafana OnCall
The Slack integration for Grafana OnCall incorporates your Slack workspace directly into your incident response workflow to help your team focus on alert resolution with less friction.
Integrating your Slack workspace with Grafana OnCall allows users and teams to be notified of alerts directly in Slack with automated alert escalation steps and user notification preferences. There are a number of alert actions that users can take directly from Slack, including acknowledge, resolve, add resolution notes, and more.
## Before you begin
To install the Slack integration, you must have Admin permissions in your Grafana instance as well as the Slack workspace that you'd like to integrate with.
-For Open Source Grafana OnCall Slack installation guidance, refer to [Open Source Grafana OnCall]({{< relref "../open-source.md" >}}).
+For Open Source Grafana OnCall Slack installation guidance, refer to [Open Source Grafana OnCall]({{< relref "../open-source" >}}).
## Install Slack integration for Grafana OnCall
@@ -41,23 +41,23 @@ For Open Source Grafana OnCall Slack installation guidance, refer to [Open Sourc
Configure the following additional settings to ensure Grafana OnCall alerts are routed to the intended Slack channels and users:
1. From your **Slack integration** settings, select a default slack channel in the first dropdown menu. This is where alerts will be sent unless otherwise specified in escalation chains.
2. In **Additional Settings**, configure alert reminders for alerts to retrigger after being acknowledged for some amount of time.
3. Ensure all users verify their slack account in their Grafana OnCall **users info**.
### Configure Escalation Chains with Slack notifications
Once your Slack integration is configured you can configure Escalation Chains to notify via Slack messages for alerts in Grafana OnCall.
There are two Slack notification options that you can configure into escalation chains, notify whole Slack channel and notify Slack user group:
1. In Grafana OnCall, navigate to the **Escalation Chains** tab then select an existing escalation chain or click **+ New escalation chain**.
2. Click the dropdown for **Add escalation step**.
3. Configure your escalation chain with automated Slack notifications.
### Configure user notifications with Slack mentions
To be notified of alerts in Grafana OnCall via Slack mentions:
1. Navigate to the **Users** tab in Grafana OnCall, click **Edit** next to a user.
2. In the **User Info** tab, edit or configure notification steps by clicking + Add Notification step
3. select **Notify by** in the first dropdown and select **Slack mentions** in the second dropdown to receive alert notifications via Slack mentions.
### Configure on-call notifications in Slack

@@ -24,16 +24,16 @@ The following diagram details an example alert workflow with Grafana OnCall:
These procedures introduce you to initial Grafana OnCall configuration steps, including monitoring system integration, how to set up escalation chains, and how to use your calendar service for on-call scheduling.
## Before you begin
-Grafana OnCall is available for Grafana Cloud as well as Grafana open source users. You must have a Grafana Cloud account or [Open Source Grafana OnCall]({{< relref "open-source.md" >}})
+Grafana OnCall is available for Grafana Cloud as well as Grafana open source users. You must have a Grafana Cloud account or [Open Source Grafana OnCall]({{< relref "../open-source" >}})
For more information, see [Grafana Pricing](https://grafana.com/pricing/) for details.
## Install Open Source Grafana OnCall
-For Open Source Grafana OnCall installation guidance, refer to [Open Source Grafana OnCall]({{< relref "open-source.md" >}})
+For Open Source Grafana OnCall installation guidance, refer to [Open Source Grafana OnCall]({{< relref "../open-source" >}})
>**Note:** If you are using Grafana OnCall with your Grafana Cloud instance there are no install steps. Access Grafana OnCall from your Grafana Cloud account and skip ahead to “Get alerts into Grafana OnCall”
@@ -53,13 +53,13 @@ Regardless of where your alerts originate, you can send them to Grafana OnCall v
4. Complete any necessary configurations in your monitoring system to send alerts to Grafana OnCall.
#### Send a demo alert
1. In the integration tab, click **Send demo alert** then navigate to the **Alert Groups** tab to see your test alert firing.
2. Explore the alert by clicking on the title of the alert.
3. Acknowledge and resolve the test alert.
-For more information on Grafana OnCall integrations and further configuration guidance, refer to [Connect to Grafana OnCall]({{< relref "integrations/" >}})
+For more information on Grafana OnCall integrations and further configuration guidance, refer to [Connect to Grafana OnCall]({{< relref "../integrations" >}})
### Configure Escalation Chains
@@ -72,18 +72,18 @@ To configure Escalation Chains:
1. Navigate to the **Escalation Chains** tab and click **+ New Escalation Chain**
2. Give your Escalation Chain a useful name and click **Create**
3. Add a series of escalation steps from the available dropdown options.
4. To link your Escalation Chain to your integration, navigate back to the **Integrations tab**, select your newly created Escalation Chain from the "**Escalate to**" dropdown.
Alerts from this integration will now follow the escalation steps configured in your Escalation Chain.
-For more information on Escalation Chains and more ways to customize them, refer to [Configure and manage Escalation Chains]({{< relref "escalation-policies/configure-escalation-chains/" >}})
+For more information on Escalation Chains and more ways to customize them, refer to [Configure and manage Escalation Chains]({{< relref "../escalation-policies/configure-escalation-chains" >}})
## Get notified of an alert
In order for Grafana OnCall to notify you of an alert, you must configure how you want to be notified. Personal notification policies, chatops integrations, and on-call schedules allow you to automate how users are notified of alerts.
### Configure personal notification policies
-Personal notification policies determine how a user is notified for a certain type of alert. Get notified by SMS, phone call, or Slack mentions. Administrators can configure how users receive notification for certain types of alerts. For more information on personal notification policies, refer to [Manage users and teams for Grafana OnCall]({{< relref "configure-user-settings/" >}})
+Personal notification policies determine how a user is notified for a certain type of alert. Get notified by SMS, phone call, or Slack mentions. Administrators can configure how users receive notification for certain types of alerts. For more information on personal notification policies, refer to [Manage users and teams for Grafana OnCall]({{< relref "../configure-user-settings" >}})
To configure users personal notification policies:
@@ -94,7 +94,7 @@ To configure users personal notification policies:
### Configure Slack for Grafana OnCall
Grafana OnCall integrates closely with your Slack workspace to deliver alert notifications to individuals, user groups, and channels. Slack notifications can be triggered by steps in an escalation chain or as a step in users personal notification policies.
To configure Slack for Grafana OnCall:
@@ -105,20 +105,20 @@ To configure Slack for Grafana OnCall:
5. Click Allow to allow Grafana OnCall to access Slack.
6. Ensure users verify their Slack accounts in their user profile in Grafana OnCall.
-For further instruction on connecting to your Slack workspace, refer to [Connect Slack to Grafana OnCall]({{< relref "chat-options/configure-slack/" >}})
+For further instruction on connecting to your Slack workspace, refer to [Connect Slack to Grafana OnCall]({{< relref "../chat-options/configure-slack" >}})
### Add your on-call schedule
Grafana OnCall allows you to manage your on-call schedule in your preferred calendar app such as Google Calendar or Microsoft Outlook.
To integrate your on-call calendar with Grafana OnCall:
1. In the **Schedules** tab of Grafana OnCall, click **+ Add team schedule for on-call rotation**.
2. Provide a schedule name.
3. Copy the iCal URL associated with your on-call calendar from your calendar integration settings.
4. Configure the rest of the schedule settings and click Create Schedule
-For more information on on-call schedules, refer to [Configure and manage on-call schedules]({{< relref "calendar-schedules/" >}})
+For more information on on-call schedules, refer to [Configure and manage on-call schedules]({{< relref "../calendar-schedules" >}})

@@ -23,7 +23,7 @@ class AlertAdmin(CustomModelAdmin):
@admin.register(AlertGroup)
class AlertGroupAdmin(CustomModelAdmin):
-    list_display = ("id", "public_primary_key", "verbose_name", "channel", "channel_filter", "state", "started_at")
+    list_display = ("id", "public_primary_key", "web_title_cache", "channel", "channel_filter", "state", "started_at")
list_filter = ("started_at",)
def get_queryset(self, request):

@@ -7,6 +7,7 @@ from dateutil.parser import parse
from django.apps import apps
from django.utils import timezone
from django.utils.functional import cached_property
+from rest_framework.exceptions import ValidationError
from apps.alerts.constants import NEXT_ESCALATION_DELAY
from apps.alerts.escalation_snapshot.snapshot_classes import (
@@ -189,7 +190,10 @@ class EscalationSnapshotMixin:
escalation_snapshot_object = None
raw_escalation_snapshot = self.raw_escalation_snapshot
if raw_escalation_snapshot is not None:
-        escalation_snapshot_object = self._deserialize_escalation_snapshot(raw_escalation_snapshot)
+        try:
+            escalation_snapshot_object = self._deserialize_escalation_snapshot(raw_escalation_snapshot)
+        except ValidationError as e:
+            logger.error(f"Error trying to deserialize raw escalation snapshot: {e}")
return escalation_snapshot_object
def _deserialize_escalation_snapshot(self, raw_escalation_snapshot) -> EscalationSnapshot:
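The try/except added in this hunk swallows a `ValidationError` and returns `None`, so a corrupted snapshot degrades gracefully instead of breaking escalation. A standalone sketch of that pattern in plain Python, where `deserialize` and `load_snapshot` are hypothetical stand-ins for the mixin's methods:

```python
import logging

logger = logging.getLogger(__name__)


class ValidationError(Exception):
    """Stand-in for rest_framework.exceptions.ValidationError."""


def deserialize(raw):
    # Hypothetical deserializer: requires a dict with a "steps" key.
    if not isinstance(raw, dict) or "steps" not in raw:
        raise ValidationError("invalid snapshot payload")
    return raw["steps"]


def load_snapshot(raw):
    # Mirrors the diff: log and return None on bad data instead of raising.
    snapshot = None
    if raw is not None:
        try:
            snapshot = deserialize(raw)
        except ValidationError as e:
            logger.error(f"Error trying to deserialize raw escalation snapshot: {e}")
    return snapshot
```

Callers must then handle the `None` case, which the surrounding code already does via the `escalation_snapshot_object = None` default.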

@@ -92,7 +92,7 @@ class GrafanaAlertingSyncManager:
datasources, response_info = self.client.get_datasources()
if datasources is None:
logger.warning(
-                f"Failed to get datasource list for organization {self.alert_receive_channel.organization.org_title}, "
+                f"Failed to get datasource list for organization {self.alert_receive_channel.organization.stack_slug}, "
f"{response_info}"
)
return

@@ -24,7 +24,7 @@ class AlertGroupSmsRenderer(AlertGroupBaseRenderer):
incident_link = self.alert_group.web_link
return (
f"You are invited to check an incident #{self.alert_group.inside_organization_number} with title "
-            f'"{title}" in Grafana OnCall organization: "{self.alert_group.channel.organization.org_title}", '
+            f'"{title}" in Grafana OnCall organization: "{self.alert_group.channel.organization.stack_slug}", '
f"alert channel: {self.alert_group.channel.short_name}, "
f"alerts registered: {self.alert_group.alerts.count()}, "
f"{incident_link}\n"

@@ -69,7 +69,6 @@ class IntegrationOptionsMixin:
"grouping_id",
"resolve_condition",
"acknowledge_condition",
-        "group_verbose_name",
"source_link",
]

@@ -0,0 +1,28 @@
# Generated by Django 3.2.15 on 2022-09-01 16:54
from django.db import migrations

from apps.alerts.models import AlertReceiveChannel
from apps.alerts.tasks import update_web_title_cache_for_alert_receive_channel


def populate_web_title_cache(apps, _):
    pks = AlertReceiveChannel.objects_with_deleted.values_list("pk", flat=True)
    for pk in pks:
        update_web_title_cache_for_alert_receive_channel.delay(pk)


class Migration(migrations.Migration):

    dependencies = [
        ('alerts', '0006_alertgroup_alerts_aler_channel_ee84a7_idx'),
    ]

    operations = [
        migrations.RenameField(
            model_name='alertgroup',
            old_name='verbose_name',
            new_name='web_title_cache',
        ),
        migrations.RunPython(populate_web_title_cache, migrations.RunPython.noop),
    ]

@@ -179,19 +179,19 @@ class Alert(models.Model):
is_resolve_signal = False
is_acknowledge_signal = False
group_distinction = None
-        group_verbose_name = "Incident"
acknowledge_condition_template = template_manager.get_attr_template(
"acknowledge_condition", alert_receive_channel
)
resolve_condition_template = template_manager.get_attr_template("resolve_condition", alert_receive_channel)
grouping_id_template = template_manager.get_attr_template("grouping_id", alert_receive_channel)
-        # use get_default_attr_template because there is no ability to customize group_verbose_name, only default value
-        group_verbose_name_template = template_manager.get_default_attr_template(
-            "group_verbose_name", alert_receive_channel
-        )
-        if group_verbose_name_template is not None:
-            group_verbose_name, _ = apply_jinja_template(group_verbose_name_template, raw_request_data)
+        # set web_title_cache to web title to allow alert group searching based on web_title_cache
+        web_title_template = template_manager.get_attr_template("title", alert_receive_channel, render_for="web")
+        if web_title_template:
+            web_title_cache = apply_jinja_template(web_title_template, raw_request_data)[0] or None
+        else:
+            web_title_cache = None
if grouping_id_template is not None:
group_distinction, _ = apply_jinja_template(grouping_id_template, raw_request_data)
@@ -220,7 +220,7 @@
is_resolve_signal=is_resolve_signal,
is_acknowledge_signal=is_acknowledge_signal,
group_distinction=group_distinction,
-            group_verbose_name=group_verbose_name,
+            web_title_cache=web_title_cache,
)
@staticmethod
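The replacement logic in this hunk renders the web title template and falls back to `None` when the template is missing or renders empty (the `[0] or None` chain). A minimal sketch of that fallback chain, using the stdlib `string.Template` as a stand-in for OnCall's `apply_jinja_template` (the helper name here is illustrative):

```python
from string import Template


def render_web_title(template_str, payload):
    # Mirrors the diff's fallback chain: no template -> None,
    # empty render -> None (the `or None` in the diff).
    if not template_str:
        return None
    # safe_substitute leaves unknown placeholders in place rather than raising,
    # loosely analogous to lenient Jinja rendering of alert payloads.
    return Template(template_str).safe_substitute(payload) or None
```

The point of caching the rendered title on the alert group is that searching can then filter on a plain text column instead of re-rendering templates per row.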

@@ -82,7 +82,7 @@ class AlertGroupQuerySet(models.QuerySet):
# Create a new group if we couldn't group it to any existing ones
try:
return (
-                self.create(**search_params, is_open_for_grouping=True, verbose_name=group_data.group_verbose_name),
+                self.create(**search_params, is_open_for_grouping=True, web_title_cache=group_data.web_title_cache),
True,
)
except IntegrityError:
@@ -134,7 +134,7 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
STATUS_CHOICES = ((NEW, "New"), (ACKNOWLEDGED, "Acknowledged"), (RESOLVED, "Resolved"), (SILENCED, "Silenced"))
GroupData = namedtuple(
-        "GroupData", ["is_resolve_signal", "group_distinction", "group_verbose_name", "is_acknowledge_signal"]
+        "GroupData", ["is_resolve_signal", "group_distinction", "web_title_cache", "is_acknowledge_signal"]
)
SOURCE, USER, NOT_YET, LAST_STEP, ARCHIVED, WIPED, DISABLE_MAINTENANCE = range(7)
@@ -177,7 +177,7 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
# For example different types of alerts from the same channel should go to different groups.
# Distinction is what describes their difference.
distinction = models.CharField(max_length=100, null=True, default=None, db_index=True)
-    verbose_name = models.TextField(null=True, default=None)
+    web_title_cache = models.TextField(null=True, default=None)
inside_organization_number = models.IntegerField(default=0)
@@ -357,7 +357,7 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
]
def __str__(self):
-        return f"{self.pk}: {self.verbose_name}"
+        return f"{self.pk}: {self.web_title_cache}"
@property
def is_maintenance_incident(self):
@@ -899,13 +899,13 @@ class AlertGroup(AlertGroupSlackRenderingMixin, EscalationSnapshotMixin, models.
self.resolve(resolved_by=AlertGroup.WIPED)
self.stop_escalation()
self.distinction = ""
-        self.verbose_name = "Wiped incident"
+        self.web_title_cache = None
self.wiped_at = timezone.now()
self.wiped_by = user
for alert in self.alerts.all():
alert.wipe(wiped_by=self.wiped_by, wiped_at=self.wiped_at)
-        self.save(update_fields=["distinction", "verbose_name", "wiped_at", "wiped_by"])
+        self.save(update_fields=["distinction", "web_title_cache", "wiped_at", "wiped_by"])
log_record = self.log_records.create(
type=AlertGroupLogRecord.TYPE_WIPED,

@@ -131,7 +131,7 @@ class MaintainableObject(models.Model):
if mode == AlertReceiveChannel.MAINTENANCE:
group = AlertGroup.all_objects.create(
distinction=uuid4(),
-                verbose_name=f"Maintenance of {verbal} for {maintenance_duration}",
+                web_title_cache=f"Maintenance of {verbal} for {maintenance_duration}",
maintenance_uuid=maintenance_uuid,
channel_filter_id=maintenance_integration.default_channel_filter.pk,
channel=maintenance_integration,

@@ -1,4 +1,8 @@
from .acknowledge_reminder import acknowledge_reminder_task # noqa: F401
+from .alert_group_web_title_cache import (  # noqa:F401
+    update_web_title_cache,
+    update_web_title_cache_for_alert_receive_channel,
+)
from .calculcate_escalation_finish_time import calculate_escalation_finish_time # noqa
from .call_ack_url import call_ack_url # noqa: F401
from .check_escalation_finished import check_escalation_finished_task # noqa: F401

@@ -0,0 +1,87 @@
from django.db.models import Min

from apps.alerts.incident_appearance.templaters import TemplateLoader
from apps.alerts.tasks.task_logger import task_logger
from common.custom_celery_tasks import shared_dedicated_queue_retry_task
from common.jinja_templater import apply_jinja_template

# BATCH_SIZE is how many alert groups will be processed per second (for every individual alert receive channel)
BATCH_SIZE = 1000


def batch_ids(queryset, cursor):
    return list(queryset.filter(id__gt=cursor).order_by("id").values_list("id", flat=True)[:BATCH_SIZE])


@shared_dedicated_queue_retry_task
def update_web_title_cache_for_alert_receive_channel(alert_receive_channel_pk):
    """
    Update the web_title_cache field for all alert groups of alert receive channel with pk = alert_receive_channel_pk.
    Note that it's not invoked on web title template change due to performance considerations.
    """
    task_logger.debug(
        f"Starting update_web_title_cache_for_alert_receive_channel, alert_receive_channel_pk: {alert_receive_channel_pk}"
    )

    from apps.alerts.models import AlertGroup

    countdown = 0
    cursor = 0
    queryset = AlertGroup.all_objects.filter(channel_id=alert_receive_channel_pk)

    ids = batch_ids(queryset, cursor)
    while ids:
        update_web_title_cache.apply_async((alert_receive_channel_pk, ids), countdown=countdown)
        cursor = ids[-1]
        ids = batch_ids(queryset, cursor)
        countdown += 1


@shared_dedicated_queue_retry_task
def update_web_title_cache(alert_receive_channel_pk, alert_group_pks):
    """
    Update the web_title_cache field for alert groups with pk in alert_group_pks,
    for alert receive channel with pk = alert_receive_channel_pk.
    """
    task_logger.debug(
        f"Starting update_web_title_cache, alert_receive_channel_pk: {alert_receive_channel_pk}, "
        f"first alert_group_pk: {alert_group_pks[0]}, last alert_group_pk: {alert_group_pks[-1]}"
    )

    from apps.alerts.models import Alert, AlertGroup, AlertReceiveChannel

    try:
        alert_receive_channel = AlertReceiveChannel.objects_with_deleted.get(pk=alert_receive_channel_pk)
    except AlertReceiveChannel.DoesNotExist:
        task_logger.warning(f"AlertReceiveChannel {alert_receive_channel_pk} doesn't exist")
        return

    alert_groups = AlertGroup.all_objects.filter(pk__in=alert_group_pks).only("pk")

    # get first alerts in 2 SQL queries
    alerts_info = (
        Alert.objects.values("group_id").filter(group_id__in=alert_group_pks).annotate(first_alert_id=Min("id"))
    )
    alerts_info_map = {info["group_id"]: info for info in alerts_info}

    first_alert_ids = [info["first_alert_id"] for info in alerts_info_map.values()]
    first_alerts = Alert.objects.filter(pk__in=first_alert_ids).values("group_id", "raw_request_data")
    first_alert_map = {alert["group_id"]: alert for alert in first_alerts}

    template_manager = TemplateLoader()
    web_title_template = template_manager.get_attr_template("title", alert_receive_channel, render_for="web")

    for alert_group in alert_groups:
        if web_title_template:
            if alert_group.pk in first_alert_map:
                raw_request_data = first_alert_map[alert_group.pk]["raw_request_data"]
                web_title_cache = apply_jinja_template(web_title_template, raw_request_data)[0] or None
            else:
                web_title_cache = None
        else:
            web_title_cache = None
        alert_group.web_title_cache = web_title_cache

    AlertGroup.all_objects.bulk_update(alert_groups, ["web_title_cache"])
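The dispatcher task above pages through alert group ids with an `id__gt` cursor and staggers batches one second apart via Celery's `countdown`. The same control flow can be sketched with plain lists; `schedule_batches` is a hypothetical stand-in for the `apply_async` scheduling, with a toy batch size:

```python
BATCH_SIZE = 3  # toy value for illustration; the real task uses 1000


def batch_ids(ids, cursor, batch_size=BATCH_SIZE):
    # Emulates queryset.filter(id__gt=cursor).order_by("id")[:batch_size]
    return sorted(i for i in ids if i > cursor)[:batch_size]


def schedule_batches(ids):
    # Walk the id space with a cursor, spacing batches one "countdown" apart.
    scheduled = []
    countdown = 0
    cursor = 0
    batch = batch_ids(ids, cursor)
    while batch:
        scheduled.append((countdown, batch))
        cursor = batch[-1]  # advance the cursor past the last id we handled
        batch = batch_ids(ids, cursor)
        countdown += 1
    return scheduled
```

Keyset pagination like this avoids `OFFSET` scans and stays correct even if rows are deleted between batches, since the cursor only ever moves forward through the id space.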

@@ -9,9 +9,9 @@ from django.utils import timezone
from apps.schedules.ical_events import ical_events
from apps.schedules.ical_utils import (
calculate_shift_diff,
+    event_start_end_all_day_with_respect_to_type,
get_icalendar_tz_or_utc,
get_usernames_from_ical_event,
-    ical_date_to_datetime,
is_icals_equal,
memoized_users_in_ical,
)
@@ -35,12 +35,7 @@ def get_current_shifts_from_ical(calendar, schedule, min_priority=0):
usernames, priority = get_usernames_from_ical_event(event)
users = memoized_users_in_ical(tuple(usernames), schedule.organization)
if len(users) > 0:
-            event_start, start_all_day = ical_date_to_datetime(
-                event["DTSTART"].dt,
-                calendar_tz,
-                start=True,
-            )
-            event_end, end_all_day = ical_date_to_datetime(event["DTEND"].dt, calendar_tz, start=False)
+            event_start, event_end, all_day_event = event_start_end_all_day_with_respect_to_type(event, calendar_tz)
if event["UID"] in shifts:
existing_event = shifts[event["UID"]]
@@ -50,7 +45,7 @@ def get_current_shifts_from_ical(calendar, schedule, min_priority=0):
"users": [u.pk for u in users],
"start": event_start,
"end": event_end,
-                "all_day": start_all_day,
+                "all_day": all_day_event,
"priority": priority + min_priority, # increase priority for overrides
"priority_increased_by": min_priority,
}
@@ -70,19 +65,14 @@ def get_next_shifts_from_ical(calendar, schedule, min_priority=0, days_to_lookup
usernames, priority = get_usernames_from_ical_event(event)
users = memoized_users_in_ical(tuple(usernames), schedule.organization)
if len(users) > 0:
-            event_start, start_all_day = ical_date_to_datetime(
-                event["DTSTART"].dt,
-                calendar_tz,
-                start=True,
-            )
-            event_end, end_all_day = ical_date_to_datetime(event["DTEND"].dt, calendar_tz, start=False)
+            event_start, event_end, all_day_event = event_start_end_all_day_with_respect_to_type(event, calendar_tz)
# next_shifts are not stored in db so we can use User objects directly
shifts[f"{event_start.timestamp()}_{event['UID']}"] = {
"users": users,
"start": event_start,
"end": event_end,
-                "all_day": start_all_day,
+                "all_day": all_day_event,
"priority": priority + min_priority, # increase priority for overrides
"priority_increased_by": min_priority,
}
@@ -265,7 +255,7 @@ def notify_ical_schedule_shift(schedule_pk):
for prev_ical_file, current_ical_file in prev_and_current_ical_files:
if prev_ical_file is not None and (
-            current_ical_file is None or not is_icals_equal(current_ical_file, prev_ical_file)
+            current_ical_file is None or not is_icals_equal(current_ical_file, prev_ical_file, schedule)
):
# If icals are not equal then compare current_events from them
is_prev_ical_diff = True

@@ -92,7 +92,6 @@ def test_render_group_data_templates(
assert group_data.group_distinction == template_module.tests.get("group_distinction")
assert group_data.is_resolve_signal == template_module.tests.get("is_resolve_signal")
assert group_data.is_acknowledge_signal == template_module.tests.get("is_acknowledge_signal")
-    assert group_data.group_verbose_name == template_module.tests.get("group_verbose_name")
def test_default_templates_are_valid():

@@ -61,7 +61,6 @@ class AlertGroupListSerializer(EagerLoadingMixin, serializers.ModelSerializer):
"pk",
"alerts_count",
"inside_organization_number",
-            "verbose_name",
"alert_receive_channel",
"resolved",
"resolved_by",

@@ -96,7 +96,7 @@ class ChannelFilterSerializer(OrderedModelSerializerMixin, EagerLoadingMixin, se
organization = self.context["request"].auth.organization
if not isinstance(notification_backends, dict):
raise serializers.ValidationError(["Invalid messaging backend data"])
-        current = self.instance.notification_backends or {}
+        updated = self.instance.notification_backends or {}
for backend_id in notification_backends:
backend = get_messaging_backend_from_id(backend_id)
if backend is None:
@@ -106,7 +106,8 @@ class ChannelFilterSerializer(OrderedModelSerializerMixin, EagerLoadingMixin, se
notification_backends[backend_id],
)
# update existing backend data
-            notification_backends[backend_id] = current.get(backend_id, {}) | updated_data
+            updated[backend_id] = updated.get(backend_id, {}) | updated_data
+        notification_backends = updated
return notification_backends
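The fix in this hunk starts the merge from the stored `notification_backends` and overlays each incoming backend's data with the dict union operator (`|`, Python 3.9+), so previously saved backends and keys survive a partial update. A self-contained sketch of that merge semantics; `merge_backend_data` is an illustrative helper, not the serializer's actual API:

```python
def merge_backend_data(existing, incoming):
    # Start from what is stored, then overlay each incoming backend's data.
    # `a | b` keeps keys of `a` that `b` does not mention; `b` wins on conflict.
    updated = dict(existing)  # copy so the stored mapping is not mutated
    for backend_id, data in incoming.items():
        updated[backend_id] = updated.get(backend_id, {}) | data
    return updated
```

Building the result from the stored side (rather than mutating the incoming payload, as the pre-fix code effectively did) is what prevents backends absent from the request from being dropped.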

@@ -18,6 +18,7 @@ class ScheduleBaseSerializer(EagerLoadingMixin, serializers.ModelSerializer):
user_group = UserGroupSerializer()
warnings = serializers.SerializerMethodField()
on_call_now = serializers.SerializerMethodField()
+    number_of_escalation_chains = serializers.SerializerMethodField()
class Meta:
fields = [
@@ -33,6 +34,7 @@
"notify_empty_oncall",
"mention_oncall_start",
"mention_oncall_next",
+            "number_of_escalation_chains",
]
SELECT_RELATED = ["organization"]
@@ -71,6 +73,11 @@ class ScheduleBaseSerializer(EagerLoadingMixin, serializers.ModelSerializer):
else:
return []
+    def get_number_of_escalation_chains(self, obj):
+        # num_escalation_chains param added in queryset via annotate. Check ScheduleView.get_queryset
+        # return 0 for just created schedules
+        return getattr(obj, "num_escalation_chains", 0)
def validate(self, attrs):
if "slack_channel_id" in attrs:
slack_channel_id = attrs.pop("slack_channel_id", None)

@@ -437,7 +437,10 @@ def test_channel_filter_update_notification_backends_updates_existing_data(
):
organization, user, token = make_organization_and_user_with_plugin_token()
alert_receive_channel = make_alert_receive_channel(organization)
-    existing_notification_backends = {"TESTONLY": {"enabled": True, "channel": "ABCDEF"}}
+    existing_notification_backends = {
+        "TESTONLY": {"enabled": True, "channel": "ABCDEF"},
+        "ANOTHERONE": {"enabled": False, "channel": "123456"},
+    }
channel_filter = make_channel_filter(alert_receive_channel, notification_backends=existing_notification_backends)
client = APIClient()
@@ -448,7 +451,13 @@
"notification_backends": notification_backends_update,
}
-    response = client.put(url, data=data_for_update, format="json", **make_user_auth_headers(user, token))
+    class FakeBackend:
+        def validate_channel_filter_data(self, organization, data):
+            return data
+
+    with patch("apps.api.serializers.channel_filter.get_messaging_backend_from_id") as mock_get_backend:
+        mock_get_backend.return_value = FakeBackend()
+        response = client.put(url, data=data_for_update, format="json", **make_user_auth_headers(user, token))
channel_filter.refresh_from_db()
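The updated test stubs out the backend registry with `unittest.mock.patch` so the serializer sees a fake backend during the request. A minimal standalone demonstration of the same patching pattern, using `json.loads` as an arbitrary patch target in place of the real `get_messaging_backend_from_id`:

```python
import json  # any importable attribute works as a demonstration target
from unittest.mock import patch


class FakeBackend:
    def validate_channel_filter_data(self, organization, data):
        # Accept the data unchanged, like the test's stub.
        return data


# patch() swaps the named attribute for the duration of the with-block only.
with patch("json.loads") as mock_loads:
    mock_loads.return_value = FakeBackend()
    backend = json.loads("ignored input")  # returns the FakeBackend instance
    validated = backend.validate_channel_filter_data(None, {"channel": "ABCDEF"})

# Outside the block the original function is restored automatically.
restored = json.loads('{"a": 1}')
```

Patching at the lookup site (the module that *uses* the name, as the test does with `apps.api.serializers.channel_filter.get_messaging_backend_from_id`) is the standard way to make the code under test receive the stub.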

@@ -9,6 +9,7 @@ from apps.api.views.features import (
FEATURE_LIVE_SETTINGS,
FEATURE_SLACK,
FEATURE_TELEGRAM,
+    FEATURE_WEB_SCHEDULES,
)
@@ -42,6 +43,7 @@
settings.FEATURE_LIVE_SETTINGS_ENABLED = True
settings.FEATURE_GRAFANA_CLOUD_CONNECTION = True
settings.FEATURE_GRAFANA_CLOUD_NOTIFICATIONS = True
+    settings.FEATURE_WEB_SCHEDULES_ENABLED = True
client = APIClient()
url = reverse("api-internal:features")
response = client.get(url, format="json", **make_user_auth_headers(user, token))
@@ -53,6 +55,7 @@
FEATURE_GRAFANA_CLOUD_CONNECTION,
FEATURE_LIVE_SETTINGS,
FEATURE_GRAFANA_CLOUD_NOTIFICATIONS,
+        FEATURE_WEB_SCHEDULES,
]
@@ -69,6 +72,7 @@
settings.FEATURE_LIVE_SETTINGS_ENABLED = False
settings.FEATURE_GRAFANA_CLOUD_CONNECTION = False
settings.FEATURE_GRAFANA_CLOUD_NOTIFICATIONS = FEATURE_GRAFANA_CLOUD_NOTIFICATIONS
+    settings.FEATURE_WEB_SCHEDULES_ENABLED = False
client = APIClient()
url = reverse("api-internal:features")
response = client.get(url, format="json", **make_user_auth_headers(user, token))

@@ -79,7 +79,14 @@ def test_create_on_call_shift_override(on_call_shift_internal_api_setup, make_us
}
response = client.post(url, data, format="json", **make_user_auth_headers(user1, token))
-    expected_payload = data | {"id": response.data["id"], "updated_shift": None}
+    returned_rolling_users = response.data["rolling_users"]
+    assert len(returned_rolling_users) == 1
+    assert sorted(returned_rolling_users[0]) == sorted(data["rolling_users"][0])
+
+    expected_payload = data | {
+        "id": response.data["id"],
+        "updated_shift": None,
+        "rolling_users": returned_rolling_users,
+    }
assert response.status_code == status.HTTP_201_CREATED
assert response.json() == expected_payload
@@ -1311,3 +1318,394 @@
if not e["is_override"] and not e["is_gap"]
]
assert returned_events == expected_events
@pytest.mark.django_db
def test_on_call_shift_preview_without_users(
make_organization_and_user_with_plugin_token,
make_user_for_organization,
make_user_auth_headers,
make_schedule,
):
organization, user, token = make_organization_and_user_with_plugin_token()
client = APIClient()
schedule = make_schedule(
organization,
schedule_class=OnCallScheduleWeb,
name="test_web_schedule",
)
now = timezone.now().replace(hour=0, minute=0, second=0, microsecond=0)
start_date = now - timezone.timedelta(days=7)
request_date = start_date
user = make_user_for_organization(organization)
url = "{}?date={}&days={}".format(
reverse("api-internal:oncall_shifts-preview"), request_date.strftime("%Y-%m-%d"), 1
)
shift_start = (start_date + timezone.timedelta(hours=12)).strftime("%Y-%m-%dT%H:%M:%SZ")
shift_end = (start_date + timezone.timedelta(hours=13)).strftime("%Y-%m-%dT%H:%M:%SZ")
shift_data = {
"schedule": schedule.public_primary_key,
"type": CustomOnCallShift.TYPE_ROLLING_USERS_EVENT,
"rotation_start": shift_start,
"shift_start": shift_start,
"shift_end": shift_end,
# passing empty users
"rolling_users": [],
"priority_level": 2,
"frequency": CustomOnCallShift.FREQUENCY_DAILY,
}
response = client.post(url, shift_data, format="json", **make_user_auth_headers(user, token))
assert response.status_code == status.HTTP_200_OK
# check rotation events
rotation_events = response.json()["rotation"]
expected_rotation_events = [
{
"calendar_type": OnCallSchedule.TYPE_ICAL_PRIMARY,
"start": shift_start,
"end": shift_end,
"all_day": False,
"is_override": False,
"is_empty": True,
"is_gap": False,
"priority_level": None,
"missing_users": [],
"users": [],
"source": "web",
}
]
# there is no saved shift yet, so the temporary pk is unknown (and irrelevant here)
_ = [r.pop("shift") for r in rotation_events]
assert rotation_events == expected_rotation_events
# check final schedule events
final_events = response.json()["final"]
expected_events = []
returned_events = [
{
"end": e["end"],
"start": e["start"],
"user": e["users"][0]["display_name"] if e["users"] else None,
"is_empty": e["is_empty"],
}
for e in final_events
if not e["is_override"] and not e["is_gap"]
]
assert returned_events == expected_events
@pytest.mark.django_db
def test_on_call_shift_preview_merge_events(
make_organization_and_user_with_plugin_token,
make_user_for_organization,
make_user_auth_headers,
make_schedule,
):
organization, user, token = make_organization_and_user_with_plugin_token()
client = APIClient()
schedule = make_schedule(
organization,
schedule_class=OnCallScheduleWeb,
name="test_web_schedule",
)
now = timezone.now().replace(hour=0, minute=0, second=0, microsecond=0)
start_date = now - timezone.timedelta(days=7)
request_date = start_date
user = make_user_for_organization(organization)
other_user = make_user_for_organization(organization)
url = "{}?date={}&days={}".format(
reverse("api-internal:oncall_shifts-preview"), request_date.strftime("%Y-%m-%d"), 1
)
shift_start = (start_date + timezone.timedelta(hours=12)).strftime("%Y-%m-%dT%H:%M:%SZ")
shift_end = (start_date + timezone.timedelta(hours=13)).strftime("%Y-%m-%dT%H:%M:%SZ")
shift_data = {
"schedule": schedule.public_primary_key,
"type": CustomOnCallShift.TYPE_ROLLING_USERS_EVENT,
"rotation_start": shift_start,
"shift_start": shift_start,
"shift_end": shift_end,
"rolling_users": [[user.public_primary_key, other_user.public_primary_key]],
"priority_level": 2,
"frequency": CustomOnCallShift.FREQUENCY_DAILY,
}
response = client.post(url, shift_data, format="json", **make_user_auth_headers(user, token))
assert response.status_code == status.HTTP_200_OK
# check rotation events
rotation_events = response.json()["rotation"]
expected_rotation_events = [
{
"calendar_type": OnCallSchedule.TYPE_ICAL_PRIMARY,
"start": shift_start,
"end": shift_end,
"all_day": False,
"is_override": False,
"is_empty": False,
"is_gap": False,
"priority_level": 2,
"missing_users": [],
"source": "web",
}
]
expected_users = sorted([user.username, other_user.username])
returned_event = rotation_events[0]
# there is no saved shift yet, so the temporary pk is unknown (and irrelevant here)
returned_event.pop("shift")
returned_users = sorted(u["display_name"] for u in returned_event.pop("users"))
assert sorted(returned_users) == expected_users
assert rotation_events == expected_rotation_events
# check final schedule events
final_events = response.json()["final"]
expected = (
# start (h), duration (H), users, priority
(12, 1, expected_users, 2),  # 12-13 both users
)
expected_events = [
{
"end": (start_date + timezone.timedelta(hours=start + duration)).strftime("%Y-%m-%dT%H:%M:%SZ"),
"priority_level": priority,
"start": (start_date + timezone.timedelta(hours=start, milliseconds=1 if start == 0 else 0)).strftime(
"%Y-%m-%dT%H:%M:%SZ"
),
"users": users,
}
for start, duration, users, priority in expected
]
returned_events = [
{
"end": e["end"],
"priority_level": e["priority_level"],
"start": e["start"],
"users": sorted(u["display_name"] for u in e["users"]),
}
for e in final_events
if not e["is_override"] and not e["is_gap"]
]
assert returned_events == expected_events
@pytest.mark.django_db
def test_on_call_shift_preview_update(
make_organization_and_user_with_plugin_token,
make_user_for_organization,
make_user_auth_headers,
make_schedule,
make_on_call_shift,
):
organization, user, token = make_organization_and_user_with_plugin_token()
client = APIClient()
schedule = make_schedule(
organization,
schedule_class=OnCallScheduleWeb,
name="test_web_schedule",
)
now = timezone.now().replace(hour=0, minute=0, second=0, microsecond=0)
start_date = now - timezone.timedelta(days=7)
request_date = start_date
user = make_user_for_organization(organization)
other_user = make_user_for_organization(organization)
data = {
"start": start_date + timezone.timedelta(hours=8),
"rotation_start": start_date + timezone.timedelta(hours=8),
"duration": timezone.timedelta(hours=1),
"priority_level": 1,
"interval": 4,
"frequency": CustomOnCallShift.FREQUENCY_HOURLY,
"schedule": schedule,
}
on_call_shift = make_on_call_shift(
organization=organization, shift_type=CustomOnCallShift.TYPE_ROLLING_USERS_EVENT, **data
)
on_call_shift.add_rolling_users([[user]])
url = "{}?date={}&days={}".format(
reverse("api-internal:oncall_shifts-preview"), request_date.strftime("%Y-%m-%d"), 1
)
shift_start = (start_date + timezone.timedelta(hours=10)).strftime("%Y-%m-%dT%H:%M:%SZ")
shift_end = (start_date + timezone.timedelta(hours=18)).strftime("%Y-%m-%dT%H:%M:%SZ")
shift_data = {
"schedule": schedule.public_primary_key,
"shift_pk": on_call_shift.public_primary_key,
"type": CustomOnCallShift.TYPE_ROLLING_USERS_EVENT,
"rotation_start": shift_start,
"shift_start": shift_start,
"shift_end": shift_end,
"rolling_users": [[other_user.public_primary_key]],
"priority_level": 1,
"frequency": CustomOnCallShift.FREQUENCY_DAILY,
}
response = client.post(url, shift_data, format="json", **make_user_auth_headers(user, token))
assert response.status_code == status.HTTP_200_OK
# check rotation events
rotation_events = response.json()["rotation"]
# previewing an update does not reuse shift PK if rotation already started
shift_pk = rotation_events[0]["shift"]["pk"]
assert shift_pk != on_call_shift.public_primary_key
expected_rotation_events = [
{
"calendar_type": OnCallSchedule.TYPE_ICAL_PRIMARY,
"shift": {"pk": shift_pk},
"start": shift_start,
"end": shift_end,
"all_day": False,
"is_override": False,
"is_empty": False,
"is_gap": False,
"priority_level": 1,
"missing_users": [],
"users": [{"display_name": other_user.username, "pk": other_user.public_primary_key}],
"source": "web",
},
]
assert rotation_events == expected_rotation_events
# check final schedule events
final_events = response.json()["final"]
expected = (
# start (h), duration (H), user, priority
(8, 1, user.username, 1), # 8-9 user
(10, 8, other_user.username, 1), # 10-18 other_user
)
expected_events = [
{
"end": (start_date + timezone.timedelta(hours=start + duration)).strftime("%Y-%m-%dT%H:%M:%SZ"),
"priority_level": priority,
"start": (start_date + timezone.timedelta(hours=start, milliseconds=1 if start == 0 else 0)).strftime(
"%Y-%m-%dT%H:%M:%SZ"
),
"user": user,
}
for start, duration, user, priority in expected
]
returned_events = [
{
"end": e["end"],
"priority_level": e["priority_level"],
"start": e["start"],
"user": e["users"][0]["display_name"] if e["users"] else None,
}
for e in final_events
if not e["is_override"] and not e["is_gap"]
]
assert returned_events == expected_events
@pytest.mark.django_db
def test_on_call_shift_preview_update_not_started_reuse_pk(
make_organization_and_user_with_plugin_token,
make_user_for_organization,
make_user_auth_headers,
make_schedule,
make_on_call_shift,
):
organization, user, token = make_organization_and_user_with_plugin_token()
client = APIClient()
schedule = make_schedule(
organization,
schedule_class=OnCallScheduleWeb,
name="test_web_schedule",
)
now = timezone.now().replace(hour=0, minute=0, second=0, microsecond=0)
start_date = now + timezone.timedelta(days=7)
request_date = start_date
user = make_user_for_organization(organization)
other_user = make_user_for_organization(organization)
data = {
"start": start_date + timezone.timedelta(hours=8),
"rotation_start": start_date + timezone.timedelta(hours=8),
"duration": timezone.timedelta(hours=1),
"priority_level": 1,
"interval": 4,
"frequency": CustomOnCallShift.FREQUENCY_HOURLY,
"schedule": schedule,
}
on_call_shift = make_on_call_shift(
organization=organization, shift_type=CustomOnCallShift.TYPE_ROLLING_USERS_EVENT, **data
)
on_call_shift.add_rolling_users([[user]])
url = "{}?date={}&days={}".format(
reverse("api-internal:oncall_shifts-preview"), request_date.strftime("%Y-%m-%d"), 1
)
shift_start = (start_date + timezone.timedelta(hours=6)).strftime("%Y-%m-%dT%H:%M:%SZ")
shift_end = (start_date + timezone.timedelta(hours=18)).strftime("%Y-%m-%dT%H:%M:%SZ")
shift_data = {
"schedule": schedule.public_primary_key,
"shift_pk": on_call_shift.public_primary_key,
"type": CustomOnCallShift.TYPE_ROLLING_USERS_EVENT,
"rotation_start": shift_start,
"shift_start": shift_start,
"shift_end": shift_end,
"rolling_users": [[other_user.public_primary_key]],
"priority_level": 1,
"frequency": CustomOnCallShift.FREQUENCY_DAILY,
}
response = client.post(url, shift_data, format="json", **make_user_auth_headers(user, token))
assert response.status_code == status.HTTP_200_OK
# check rotation events
rotation_events = response.json()["rotation"]
# previewing an update reuses shift PK when rotation is not started
expected_rotation_events = [
{
"calendar_type": OnCallSchedule.TYPE_ICAL_PRIMARY,
"shift": {"pk": on_call_shift.public_primary_key},
"start": shift_start,
"end": shift_end,
"all_day": False,
"is_override": False,
"is_empty": False,
"is_gap": False,
"priority_level": 1,
"missing_users": [],
"users": [{"display_name": other_user.username, "pk": other_user.public_primary_key}],
"source": "web",
},
]
assert rotation_events == expected_rotation_events
# check final schedule events
final_events = response.json()["final"]
expected = (
# start (h), duration (H), user, priority
(6, 12, other_user.username, 1), # 6-18 other_user
)
expected_events = [
{
"end": (start_date + timezone.timedelta(hours=start + duration)).strftime("%Y-%m-%dT%H:%M:%SZ"),
"priority_level": priority,
"start": (start_date + timezone.timedelta(hours=start, milliseconds=1 if start == 0 else 0)).strftime(
"%Y-%m-%dT%H:%M:%SZ"
),
"user": user,
}
for start, duration, user, priority in expected
]
returned_events = [
{
"end": e["end"],
"priority_level": e["priority_level"],
"start": e["start"],
"user": e["users"][0]["display_name"] if e["users"] else None,
}
for e in final_events
if not e["is_override"] and not e["is_gap"]
]
assert returned_events == expected_events

View file

@ -9,6 +9,7 @@ from rest_framework.response import Response
from rest_framework.serializers import ValidationError
from rest_framework.test import APIClient
from apps.alerts.models import EscalationPolicy
from apps.schedules.models import (
CustomOnCallShift,
OnCallSchedule,
@ -57,11 +58,21 @@ def schedule_internal_api_setup(
@pytest.mark.django_db
def test_get_list_schedules(
schedule_internal_api_setup, make_escalation_chain, make_escalation_policy, make_user_auth_headers
):
user, token, calendar_schedule, ical_schedule, web_schedule, slack_channel = schedule_internal_api_setup
client = APIClient()
url = reverse("api-internal:schedule-list")
# setup escalation chain linked to web schedule
escalation_chain = make_escalation_chain(user.organization)
make_escalation_policy(
escalation_chain=escalation_chain,
escalation_policy_step=EscalationPolicy.STEP_NOTIFY_SCHEDULE,
notify_schedule=web_schedule,
)
expected_payload = [
{
"id": calendar_schedule.public_primary_key,
@ -79,6 +90,7 @@ def test_get_list_schedules(schedule_internal_api_setup, make_user_auth_headers)
"mention_oncall_start": True,
"notify_empty_oncall": 0,
"notify_oncall_shift_freq": 1,
"number_of_escalation_chains": 0,
},
{
"id": ical_schedule.public_primary_key,
@ -96,6 +108,7 @@ def test_get_list_schedules(schedule_internal_api_setup, make_user_auth_headers)
"mention_oncall_start": True,
"notify_empty_oncall": 0,
"notify_oncall_shift_freq": 1,
"number_of_escalation_chains": 0,
},
{
"id": web_schedule.public_primary_key,
@ -112,6 +125,7 @@ def test_get_list_schedules(schedule_internal_api_setup, make_user_auth_headers)
"mention_oncall_start": True,
"notify_empty_oncall": 0,
"notify_oncall_shift_freq": 1,
"number_of_escalation_chains": 1,
},
]
response = client.get(url, format="json", **make_user_auth_headers(user, token))
@ -141,6 +155,7 @@ def test_get_detail_calendar_schedule(schedule_internal_api_setup, make_user_aut
"mention_oncall_start": True,
"notify_empty_oncall": 0,
"notify_oncall_shift_freq": 1,
"number_of_escalation_chains": 0,
}
response = client.get(url, format="json", **make_user_auth_headers(user, token))
@ -170,6 +185,7 @@ def test_get_detail_ical_schedule(schedule_internal_api_setup, make_user_auth_he
"mention_oncall_start": True,
"notify_empty_oncall": 0,
"notify_oncall_shift_freq": 1,
"number_of_escalation_chains": 0,
}
response = client.get(url, format="json", **make_user_auth_headers(user, token))
@ -178,8 +194,18 @@ def test_get_detail_ical_schedule(schedule_internal_api_setup, make_user_auth_he
@pytest.mark.django_db
def test_get_detail_web_schedule(
schedule_internal_api_setup, make_escalation_chain, make_escalation_policy, make_user_auth_headers
):
user, token, _, _, web_schedule, _ = schedule_internal_api_setup
# setup escalation chain linked to web schedule
escalation_chain = make_escalation_chain(user.organization)
make_escalation_policy(
escalation_chain=escalation_chain,
escalation_policy_step=EscalationPolicy.STEP_NOTIFY_SCHEDULE,
notify_schedule=web_schedule,
)
client = APIClient()
url = reverse("api-internal:schedule-detail", kwargs={"pk": web_schedule.public_primary_key})
@ -198,6 +224,7 @@ def test_get_detail_web_schedule(schedule_internal_api_setup, make_user_auth_hea
"mention_oncall_start": True,
"notify_empty_oncall": 0,
"notify_oncall_shift_freq": 1,
"number_of_escalation_chains": 1,
}
response = client.get(url, format="json", **make_user_auth_headers(user, token))
@ -230,6 +257,7 @@ def test_create_calendar_schedule(schedule_internal_api_setup, make_user_auth_he
# modify initial data by adding id and None for optional fields
schedule = OnCallSchedule.objects.get(public_primary_key=response.data["id"])
data["id"] = schedule.public_primary_key
data["number_of_escalation_chains"] = 0
assert response.status_code == status.HTTP_201_CREATED
assert response.data == data
@ -262,6 +290,7 @@ def test_create_ical_schedule(schedule_internal_api_setup, make_user_auth_header
# modify initial data by adding id and None for optional fields
schedule = OnCallSchedule.objects.get(public_primary_key=response.data["id"])
data["id"] = schedule.public_primary_key
data["number_of_escalation_chains"] = 0
assert response.status_code == status.HTTP_201_CREATED
assert response.data == data
@ -290,6 +319,7 @@ def test_create_web_schedule(schedule_internal_api_setup, make_user_auth_headers
# modify initial data by adding id and None for optional fields
schedule = OnCallSchedule.objects.get(public_primary_key=response.data["id"])
data["id"] = schedule.public_primary_key
data["number_of_escalation_chains"] = 0
assert response.status_code == status.HTTP_201_CREATED
assert response.data == data
@ -817,7 +847,7 @@ def test_next_shifts_per_user(
)
tomorrow = timezone.now().replace(hour=0, minute=0, second=0, microsecond=0) + timezone.timedelta(days=1)
user_a, user_b, user_c, user_d = (make_user_for_organization(organization, username=i) for i in "ABCD")
shifts = (
# user, priority, start time (h), duration (hs)
@ -841,6 +871,19 @@ def test_next_shifts_per_user(
)
on_call_shift.users.add(user)
# override in the past: 17-18 / D
# won't be listed, but user D will still be included in the response
override_data = {
"start": tomorrow - timezone.timedelta(days=3),
"rotation_start": tomorrow - timezone.timedelta(days=3),
"duration": timezone.timedelta(hours=1),
"schedule": schedule,
}
override = make_on_call_shift(
organization=organization, shift_type=CustomOnCallShift.TYPE_OVERRIDE, **override_data
)
override.add_rolling_users([[user_d]])
# override: 17-18 / C
override_data = {
"start": tomorrow + timezone.timedelta(hours=17),
@ -853,7 +896,7 @@ def test_next_shifts_per_user(
)
override.add_rolling_users([[user_c]])
# final schedule: 7-12: B, 15-16: A, 16-17: B, 17-18: C (override), 18-20: C
url = reverse("api-internal:schedule-next-shifts-per-user", kwargs={"pk": schedule.public_primary_key})
response = client.get(url, format="json", **make_user_auth_headers(user, token))
@ -863,11 +906,57 @@ def test_next_shifts_per_user(
user_a.public_primary_key: (tomorrow + timezone.timedelta(hours=15), tomorrow + timezone.timedelta(hours=16)),
user_b.public_primary_key: (tomorrow + timezone.timedelta(hours=7), tomorrow + timezone.timedelta(hours=12)),
user_c.public_primary_key: (tomorrow + timezone.timedelta(hours=17), tomorrow + timezone.timedelta(hours=18)),
user_d.public_primary_key: None,
}
returned_data = {
u: (ev["start"], ev["end"]) if ev is not None else None for u, ev in response.data["users"].items()
}
assert returned_data == expected
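The seeded-dict pattern this test exercises (every related user starts mapped to `None`, then the first not-yet-ended event per user fills the slot) can be sketched standalone; the names and event shapes below are simplified stand-ins, not the real serializer output:

```python
from datetime import datetime, timedelta, timezone

def next_shift_per_user(related_users, events, now):
    """Seed all related users with None, then assign each user the first
    event (in iteration order) that has not ended yet."""
    users = {u: None for u in related_users}
    for e in events:
        user = e["users"][0] if e["users"] else None
        if user is not None and users.get(user) is None and e["end"] > now:
            users[user] = e
    return users

now = datetime(2022, 9, 9, tzinfo=timezone.utc)
events = [
    # already finished: skipped even though user A appears in it
    {"users": ["A"], "start": now - timedelta(hours=2), "end": now - timedelta(hours=1)},
    {"users": ["A"], "start": now + timedelta(hours=1), "end": now + timedelta(hours=2)},
    # gap event with no users: ignored
    {"users": [], "start": now, "end": now + timedelta(hours=3)},
]
result = next_shift_per_user(["A", "D"], events, now)
```

User D keeps the seeded `None`, which is exactly why the test above expects `user_d.public_primary_key: None` in the response.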
@pytest.mark.django_db
def test_related_escalation_chains(
make_organization_and_user_with_plugin_token,
make_user_auth_headers,
make_schedule,
make_escalation_chain,
make_escalation_policy,
):
organization, user, token = make_organization_and_user_with_plugin_token()
client = APIClient()
schedule = make_schedule(
organization,
schedule_class=OnCallScheduleWeb,
name="test_web_schedule",
)
# setup escalation chains linked to web schedule
escalation_chains = []
for i in range(3):
chain = make_escalation_chain(user.organization)
make_escalation_policy(
escalation_chain=chain,
escalation_policy_step=EscalationPolicy.STEP_NOTIFY_SCHEDULE,
notify_schedule=schedule,
)
escalation_chains.append(chain)
# setup other unrelated schedule
other_schedule = make_schedule(organization, schedule_class=OnCallScheduleWeb)
other_chain = make_escalation_chain(user.organization)
make_escalation_policy(
escalation_chain=other_chain,
escalation_policy_step=EscalationPolicy.STEP_NOTIFY_SCHEDULE,
notify_schedule=other_schedule,
)
url = reverse("api-internal:schedule-related-escalation-chains", kwargs={"pk": schedule.public_primary_key})
response = client.get(url, format="json", **make_user_auth_headers(user, token))
assert response.status_code == status.HTTP_200_OK
expected = [{"name": chain.name, "pk": chain.public_primary_key} for chain in escalation_chains]
assert sorted(response.data, key=lambda e: e["name"]) == sorted(expected, key=lambda e: e["name"])
@pytest.mark.django_db
def test_merging_same_shift_events(
make_organization_and_user_with_plugin_token,

View file

@ -191,8 +191,7 @@ class AlertGroupView(
pagination_class = TwentyFiveCursorPaginator
filter_backends = [SearchFilter, filters.DjangoFilterBackend]
# todo: add ability to search by templated title
search_fields = ["public_primary_key", "inside_organization_number", "web_title_cache"]
filterset_class = AlertGroupFilter

View file

@ -12,6 +12,7 @@ FEATURE_LIVE_SETTINGS = "live_settings"
MOBILE_APP_PUSH_NOTIFICATIONS = "mobile_app"
FEATURE_GRAFANA_CLOUD_NOTIFICATIONS = "grafana_cloud_notifications"
FEATURE_GRAFANA_CLOUD_CONNECTION = "grafana_cloud_connection"
FEATURE_WEB_SCHEDULES = "web_schedules"
class FeaturesAPIView(APIView):
@ -56,4 +57,7 @@ class FeaturesAPIView(APIView):
if live_settings.GRAFANA_CLOUD_NOTIFICATIONS_ENABLED:
enabled_features.append(FEATURE_GRAFANA_CLOUD_NOTIFICATIONS)
if settings.FEATURE_WEB_SCHEDULES_ENABLED:
enabled_features.append(FEATURE_WEB_SCHEDULES)
return enabled_features

View file

@ -89,9 +89,13 @@ class OnCallShiftView(PublicPrimaryKeyMixin, UpdateSerializerMixin, ModelViewSet
validated_data = serializer._correct_validated_data(
serializer.validated_data["type"], serializer.validated_data
)
updated_shift_pk = self.request.data.get("shift_pk")
shift = CustomOnCallShift(**validated_data)
schedule = shift.schedule
shift_events, final_events = schedule.preview_shift(
shift, user_tz, starting_date, days, updated_shift_pk=updated_shift_pk
)
data = {
"rotation": shift_events,
"final": final_events,

View file

@ -1,6 +1,6 @@
import pytz
from django.core.exceptions import ObjectDoesNotExist
from django.db.models import OuterRef, Subquery
from django.db.models import Count, OuterRef, Subquery
from django.db.utils import IntegrityError
from django.urls import reverse
from django.utils import dateparse, timezone
@ -12,6 +12,7 @@ from rest_framework.permissions import IsAuthenticated
from rest_framework.views import Response
from rest_framework.viewsets import ModelViewSet
from apps.alerts.models import EscalationChain
from apps.api.permissions import MODIFY_ACTIONS, READ_ACTIONS, ActionPermission, AnyRole, IsAdmin, IsAdminOrEditor
from apps.api.serializers.schedule_base import ScheduleFastSerializer
from apps.api.serializers.schedule_polymorphic import (
@ -59,6 +60,7 @@ class ScheduleView(
"notify_empty_oncall_options",
"notify_oncall_shift_freq_options",
"mention_options",
"related_escalation_chains",
),
}
@ -90,6 +92,23 @@ class ScheduleView(
context.update({"can_update_user_groups": self.can_update_user_groups})
return context
def _annotate_queryset(self, queryset):
"""Annotate queryset with additional schedule metadata."""
organization = self.request.auth.organization
slack_channels = SlackChannel.objects.filter(
slack_team_identity=organization.slack_team_identity,
slack_id=OuterRef("channel"),
)
queryset = queryset.annotate(
slack_channel_name=Subquery(slack_channels.values("name")[:1]),
slack_channel_pk=Subquery(slack_channels.values("public_primary_key")[:1]),
num_escalation_chains=Count(
"escalation_policies__escalation_chain",
distinct=True,
),
)
return queryset
def get_queryset(self):
is_short_request = self.request.query_params.get("short", "false") == "true"
organization = self.request.auth.organization
@ -98,14 +117,7 @@ class ScheduleView(
team=self.request.user.current_team,
)
if not is_short_request:
queryset = self._annotate_queryset(queryset)
queryset = self.serializer_class.setup_eager_loading(queryset)
return queryset
@ -113,14 +125,10 @@ class ScheduleView(
# Override this method because we want to get object from organization instead of concrete team.
pk = self.kwargs["pk"]
organization = self.request.auth.organization
queryset = organization.oncall_schedules.filter(
public_primary_key=pk,
)
queryset = self._annotate_queryset(queryset)
try:
obj = queryset.get()
@ -234,9 +242,6 @@ class ScheduleView(
else: # return final schedule
events = schedule.final_events(user_tz, starting_date, days)
# combine multiple-users same-shift events into one
events = self._merge_events(events)
result = {
"id": schedule.public_primary_key,
"name": schedule.name,
@ -245,25 +250,6 @@ class ScheduleView(
}
return Response(result, status=status.HTTP_200_OK)
def _merge_events(self, events):
"""Merge user groups same-shift events."""
if events:
merged = [events[0]]
current = merged[0]
for next_event in events[1:]:
if (
current["start"] == next_event["start"]
and current["shift"]["pk"] is not None
and current["shift"]["pk"] == next_event["shift"]["pk"]
):
current["users"] += next_event["users"]
current["missing_users"] += next_event["missing_users"]
else:
merged.append(next_event)
current = next_event
events = merged
return events
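The view-level `_merge_events` helper removed above (its logic moves onto the schedule model in this commit) combines consecutive events that share a start and a shift pk into one multi-user event. A toy sketch of the same core logic, with `missing_users` handling omitted for brevity:

```python
def merge_events(events):
    """Merge consecutive events with the same start and non-null shift pk,
    concatenating their user lists (simplified stand-in for the helper)."""
    if not events:
        return events
    merged = [events[0]]
    current = merged[0]
    for next_event in events[1:]:
        same_shift = (
            current["start"] == next_event["start"]
            and current["shift"]["pk"] is not None
            and current["shift"]["pk"] == next_event["shift"]["pk"]
        )
        if same_shift:
            current["users"] += next_event["users"]
        else:
            merged.append(next_event)
            current = next_event
    return merged

events = [
    {"start": 1, "shift": {"pk": "S1"}, "users": ["A"]},
    {"start": 1, "shift": {"pk": "S1"}, "users": ["B"]},
    {"start": 2, "shift": {"pk": "S2"}, "users": ["C"]},
]
merged = merge_events(events)
```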
@action(detail=True, methods=["get"])
def next_shifts_per_user(self, request, pk):
"""Return next shift for users in schedule."""
@ -273,15 +259,24 @@ class ScheduleView(
schedule = self.original_get_object()
events = schedule.final_events(user_tz, starting_date, days=30)
users = {u: None for u in schedule.related_users()}
for e in events:
user = e["users"][0]["pk"] if e["users"] else None
if user is not None and users.get(user) is None and e["end"] > now:
users[user] = e
result = {"users": users}
return Response(result, status=status.HTTP_200_OK)
@action(detail=True, methods=["get"])
def related_escalation_chains(self, request, pk):
"""Return escalation chains associated to schedule."""
schedule = self.original_get_object()
escalation_chains = EscalationChain.objects.filter(escalation_policies__notify_schedule=schedule).distinct()
result = [{"name": e.name, "pk": e.public_primary_key} for e in escalation_chains]
return Response(result, status=status.HTTP_200_OK)
@action(detail=False, methods=["get"])
def type_options(self, request):
# TODO: check if it needed

View file

@ -69,7 +69,8 @@ class UserNotificationPolicyLogRecord(models.Model):
ERROR_NOTIFICATION_IN_SLACK_RATELIMIT,
ERROR_NOTIFICATION_MESSAGING_BACKEND_ERROR,
ERROR_NOTIFICATION_NOT_ALLOWED_USER_ROLE,
ERROR_NOTIFICATION_TELEGRAM_USER_IS_DEACTIVATED,
) = range(27)
# for these errors we want to send a message to the general log channel
ERRORS_TO_SEND_IN_SLACK_CHANNEL = [
@ -272,6 +273,11 @@ class UserNotificationPolicyLogRecord(models.Model):
self.notification_error_code == UserNotificationPolicyLogRecord.ERROR_NOTIFICATION_NOT_ALLOWED_USER_ROLE
):
result += f"failed to notify {user_verbal}, not allowed role"
elif (
self.notification_error_code
== UserNotificationPolicyLogRecord.ERROR_NOTIFICATION_TELEGRAM_USER_IS_DEACTIVATED
):
result += f"failed to send telegram message to {user_verbal} because user has been deactivated"
else:
# TODO: handle specific backend errors
try:

View file

@ -259,15 +259,10 @@ def list_of_empty_shifts_in_schedule(schedule, start_date, end_date):
checked_events.add(event_hash)
start, end, all_day = event_start_end_all_day_with_respect_to_type(event, calendar_tz)
if not all_day:
start = start.astimezone(pytz.UTC)
end = end.astimezone(pytz.UTC)
empty_shifts_per_calendar.append(
EmptyShift(
@ -367,6 +362,9 @@ def parse_event_uid(string):
match = RE_EVENT_UID_V1.match(string)
if match:
_, _, _, source = match.groups()
else:
# fallback to use the UID string as the rotation ID
pk = string
if source is not None:
source = int(source)
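The fallback added above makes `parse_event_uid` tolerate externally generated UIDs by using the raw string as the rotation ID. A self-contained sketch of that shape; the regex here is a hypothetical stand-in, since the real `RE_EVENT_UID_V1` pattern is not shown in this diff:

```python
import re

# Hypothetical UID layout "amixr-<pk>-U<n>-E<n>-S<source>"; the actual
# RE_EVENT_UID_V1 pattern in the codebase may differ.
RE_EVENT_UID = re.compile(r"^amixr-(\w+)-U(\d+)-E(\d+)-S(\d+)$")

def parse_event_uid(string):
    pk = source = None
    match = RE_EVENT_UID.match(string)
    if match:
        pk, _, _, source = match.groups()
    else:
        # fallback: use the raw UID string as the rotation ID
        pk = string
    if source is not None:
        source = int(source)
    return pk, source
```

An external calendar UID that does not match the pattern simply becomes the pk, instead of leaving pk unset as the pre-fix code did.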
@ -408,7 +406,7 @@ def get_users_from_ical_event(event, organization):
return users
def is_icals_equal_line_by_line(first, second):
first = first.split("\n")
second = second.split("\n")
if len(first) != len(second):
@ -425,6 +423,30 @@ def is_icals_equal(first, second):
return True
def is_icals_equal(first, second, schedule):
from apps.schedules.models import OnCallScheduleICal # noqa
if isinstance(schedule, OnCallScheduleICal):
first_cal = Calendar.from_ical(first)
second_cal = Calendar.from_ical(second)
first_subcomponents = first_cal.subcomponents
second_subcomponents = second_cal.subcomponents
if len(first_subcomponents) != len(second_subcomponents):
return False
for idx, first_cmp in enumerate(first_cal.subcomponents):
second_cmp = second_subcomponents[idx]
if first_cmp.name == second_cmp.name == "VEVENT":
first_uid, first_seq = first_cmp.get("UID", None), first_cmp.get("SEQUENCE", None)
second_uid, second_seq = second_cmp.get("UID", None), second_cmp.get("SEQUENCE", None)
if first_uid != second_uid:
return False
elif first_seq != second_seq:
return False
return True
else:
return is_icals_equal_line_by_line(first, second)
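For ical-backed schedules, the comparison above walks `Calendar.subcomponents` from the third-party `icalendar` package and declares the files equal when every VEVENT keeps the same (UID, SEQUENCE) pair. A stdlib-only sketch of that idea, parsing the raw ICS text directly instead of using `icalendar`:

```python
def vevent_uid_sequence_pairs(ics_text):
    """Collect (UID, SEQUENCE) per VEVENT from raw ICS text — a simplified
    stand-in for iterating icalendar Calendar.subcomponents."""
    pairs, uid, seq, in_event = [], None, None, False
    for line in ics_text.splitlines():
        line = line.strip()
        if line == "BEGIN:VEVENT":
            in_event, uid, seq = True, None, None
        elif line == "END:VEVENT":
            pairs.append((uid, seq))
            in_event = False
        elif in_event and line.startswith("UID:"):
            uid = line[len("UID:"):]
        elif in_event and line.startswith("SEQUENCE:"):
            seq = line[len("SEQUENCE:"):]
    return pairs

def icals_equal_by_uid_sequence(first, second):
    # equal iff the ordered (UID, SEQUENCE) pairs match event-for-event
    return vevent_uid_sequence_pairs(first) == vevent_uid_sequence_pairs(second)

a = "BEGIN:VCALENDAR\nBEGIN:VEVENT\nUID:e1\nSEQUENCE:1\nEND:VEVENT\nEND:VCALENDAR"
b = "BEGIN:VCALENDAR\nBEGIN:VEVENT\nUID:e1\nSEQUENCE:2\nEND:VEVENT\nEND:VCALENDAR"
```

Bumping SEQUENCE on an otherwise identical event is enough to mark the calendars as changed, which is why the UID/SEQUENCE check can replace a line-by-line diff for these schedules.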
def ical_date_to_datetime(date, tz, start):
datetime_to_combine = datetime.time.min
all_day = False
@ -551,7 +573,7 @@ def list_of_gaps_in_schedule(schedule, start_date, end_date):
end_datetime,
)
for event in events:
start, end = start_end_with_respect_to_all_day(event, calendar_tz)
start, end, _ = event_start_end_all_day_with_respect_to_type(event, calendar_tz)
intervals.append(DatetimeInterval(start, end))
return detect_gaps(intervals, start_datetime, end_datetime)
@ -588,6 +610,16 @@ def start_end_with_respect_to_all_day(event, calendar_tz):
return start, end
def event_start_end_all_day_with_respect_to_type(event, calendar_tz):
all_day = False
if type(event[ICAL_DATETIME_START].dt) == datetime.date:
start, end = start_end_with_respect_to_all_day(event, calendar_tz)
all_day = True
else:
start, end = ical_events.get_start_and_end_with_respect_to_event_type(event)
return start, end, all_day
def convert_windows_timezone_to_iana(tz_name):
"""
Conversion info taken from https://raw.githubusercontent.com/unicode-org/cldr/main/common/supplemental/windowsZones.xml

View file

@ -273,7 +273,7 @@ class CustomOnCallShift(models.Model):
return is_finished
def convert_to_ical(self, time_zone="UTC", allow_empty_users=False):
result = ""
# use shift time_zone if it exists, otherwise use schedule or default time_zone
time_zone = self.time_zone if self.time_zone is not None else time_zone
@ -285,8 +285,10 @@ class CustomOnCallShift(models.Model):
all_rotation_checked = False
users_queue = self.get_rolling_users()
if not users_queue and not allow_empty_users:
return result
if not users_queue and allow_empty_users:
users_queue = [[None]]
if self.frequency is None:
users_queue = users_queue[:1]
@ -354,7 +356,10 @@ class CustomOnCallShift(models.Model):
current_event = Event.from_ical(event_ical)
# take shift interval, not event interval. For rolling_users shift it is not the same.
interval = self.interval or 1
if "rrule" in current_event:
# when triggering shift previews, there could be no rrule information yet
# (e.g. initial empty weekly rotation has no rrule set)
current_event["rrule"]["INTERVAL"] = interval
current_event_start = current_event["DTSTART"].dt
next_event_start = current_event_start
# Calculate the minimum start date for the next event based on rotation frequency. We don't need to do this
@ -482,7 +487,8 @@ class CustomOnCallShift(models.Model):
rolling_users = self.rolling_users
for users_dict in rolling_users:
users_list = list(users.filter(pk__in=users_dict.keys()))
if users_list:
users_queue.append(users_list)
return users_queue
def add_rolling_users(self, rolling_users_list):

View file

@ -1,4 +1,5 @@
import datetime
import functools
import itertools
import icalendar
@ -196,6 +197,10 @@ class OnCallSchedule(PolymorphicModel):
self.cached_ical_file_overrides = None
self.save(update_fields=["cached_ical_file_overrides", "prev_ical_file_overrides"])
def related_users(self):
"""Return public primary keys for all users referenced in the schedule."""
return set()
def filter_events(self, user_timezone, starting_date, days, with_empty=False, with_gap=False, filter_by=None):
"""Return filtered events from schedule."""
shifts = (
@ -233,6 +238,9 @@ class OnCallSchedule(PolymorphicModel):
}
events.append(shift_json)
# combine multiple-users same-shift events into one
events = self._merge_events(events)
return events
def final_events(self, user_tz, starting_date, days):
@ -246,14 +254,23 @@ class OnCallSchedule(PolymorphicModel):
if not events:
return []
# sort schedule events by (type desc, priority desc, start timestamp asc)
events.sort(
key=lambda e: (
def event_cmp_key(e):
"""Sorting key criteria for events."""
return (
-e["calendar_type"] if e["calendar_type"] else 0, # overrides: 1, shifts: 0, gaps: None
-e["priority_level"] if e["priority_level"] else 0,
e["start"],
)
)
def insort_event(eventlist, e):
"""Insert event into an already sorted event list, preserving the ordering criteria."""
idx = 0
for i in eventlist:
if event_cmp_key(e) > event_cmp_key(i):
idx += 1
else:
break
eventlist.insert(idx, e)
def _merge_intervals(evs):
"""Keep track of scheduled intervals."""
@ -269,24 +286,30 @@ class OnCallSchedule(PolymorphicModel):
result.append(interval)
return result
# sort schedule events by (type desc, priority desc, start timestamp asc)
events.sort(key=event_cmp_key)
# iterate over events, reserving schedule slots based on their priority
# if the expected slot was already scheduled for a higher priority event,
# split the event, or fix start/end timestamps accordingly
# include overrides from start
resolved = [e for e in events if e["calendar_type"] == OnCallSchedule.TYPE_ICAL_OVERRIDES]
intervals = _merge_intervals(resolved)
pending = events[len(resolved) :]
if not pending:
return resolved
current_event_idx = 0 # current event to resolve
intervals = []
resolved = []
pending = events
current_interval_idx = 0 # current scheduled interval being checked
current_priority = pending[0]["priority_level"] # current priority level being resolved
current_priority = None # current priority level being resolved
while current_event_idx < len(pending):
ev = pending[current_event_idx]
while pending:
ev = pending.pop(0)
if ev["is_empty"]:
# exclude events without active users
continue
if ev["calendar_type"] == OnCallSchedule.TYPE_ICAL_OVERRIDES:
# include overrides from start
resolved.append(ev)
continue
if ev["priority_level"] != current_priority:
# update scheduled intervals on priority change
@ -299,11 +322,11 @@ class OnCallSchedule(PolymorphicModel):
if current_interval_idx >= len(intervals):
# event outside scheduled intervals, add to resolved
resolved.append(ev)
current_event_idx += 1
elif ev["start"] < intervals[current_interval_idx][0] and ev["end"] <= intervals[current_interval_idx][0]:
# event starts and ends outside an already scheduled interval, add to resolved
resolved.append(ev)
current_event_idx += 1
elif ev["start"] < intervals[current_interval_idx][0] and ev["end"] > intervals[current_interval_idx][0]:
# event starts outside interval but overlaps with an already scheduled interval
# 1. add a split event copy to schedule the time before the already scheduled interval
@ -315,12 +338,16 @@ class OnCallSchedule(PolymorphicModel):
# event ends after current interval, update event start timestamp to match the interval end
# and process the updated event as any other event
ev["start"] = intervals[current_interval_idx][1]
else:
# done, go to next event
current_event_idx += 1
# reorder pending events after updating current event start date
# (i.e. insert the event where it belongs to keep the ordering criteria)
# TODO: switch to bisect insert on python 3.10 (or consider heapq)
insort_event(pending, ev)
# done, go to next event
elif ev["start"] >= intervals[current_interval_idx][0] and ev["end"] <= intervals[current_interval_idx][1]:
# event inside an already scheduled interval, ignore (go to next)
current_event_idx += 1
continue
elif (
ev["start"] >= intervals[current_interval_idx][0]
and ev["start"] < intervals[current_interval_idx][1]
@ -329,15 +356,39 @@ class OnCallSchedule(PolymorphicModel):
# event starts inside a scheduled interval but ends out of it
# update the event start timestamp to match the interval end
ev["start"] = intervals[current_interval_idx][1]
# move to next interval and process the updated event as any other event
current_interval_idx += 1
# unresolved, re-add to pending
# TODO: switch to bisect insert on python 3.10 (or consider heapq)
insort_event(pending, ev)
elif ev["start"] >= intervals[current_interval_idx][1]:
# event starts after the current interval, move to next interval and go through it
current_interval_idx += 1
# unresolved, re-add to pending
# TODO: switch to bisect insert on python 3.10 (or consider heapq)
insort_event(pending, ev)
resolved.sort(key=lambda e: (e["start"], e["shift"]["pk"]))
return resolved
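The loop above reserves schedule slots per priority level using `_merge_intervals`, whose body is truncated in this hunk; the usual shape of that interval merge, reconstructed here as a standalone sketch (not necessarily the exact upstream body):

```python
def merge_intervals(events):
    """Collapse event (start, end) pairs into disjoint, sorted intervals."""
    intervals = sorted((e["start"], e["end"]) for e in events)
    merged = []
    for start, end in intervals:
        if merged and start <= merged[-1][1]:
            # overlaps (or touches) the previous interval: extend it
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

resolved = [{"start": 0, "end": 5}, {"start": 3, "end": 8}, {"start": 10, "end": 12}]
```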
def _merge_events(self, events):
"""Merge user groups same-shift events."""
if events:
merged = [events[0]]
current = merged[0]
for next_event in events[1:]:
if (
current["start"] == next_event["start"]
and current["shift"]["pk"] is not None
and current["shift"]["pk"] == next_event["shift"]["pk"]
):
current["users"] += next_event["users"]
current["missing_users"] += next_event["missing_users"]
else:
merged.append(next_event)
current = next_event
events = merged
return events
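`_merge_events` above collapses consecutive events that share a start time and shift into a single multi-user event; a small self-contained illustration of the same merge (simplified event dicts, assumed shape):

```python
def merge_events(events):
    # collapse consecutive events from the same shift starting at the same time
    if not events:
        return events
    merged = [events[0]]
    current = merged[0]
    for next_event in events[1:]:
        if (
            current["start"] == next_event["start"]
            and current["shift"]["pk"] is not None
            and current["shift"]["pk"] == next_event["shift"]["pk"]
        ):
            current["users"] += next_event["users"]
        else:
            merged.append(next_event)
            current = next_event
    return merged

events = [
    {"start": 0, "shift": {"pk": "S1"}, "users": ["alice"]},
    {"start": 0, "shift": {"pk": "S1"}, "users": ["bob"]},
    {"start": 1, "shift": {"pk": "S2"}, "users": ["carol"]},
]
merged = merge_events(events)
```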
# Insight logs
@property
def insight_logs_verbal(self):
@ -529,7 +580,7 @@ class OnCallScheduleCalendar(OnCallSchedule):
class OnCallScheduleWeb(OnCallSchedule):
time_zone = models.CharField(max_length=100, default="UTC")
def _generate_ical_file_from_shifts(self, qs, extra_shifts=None):
def _generate_ical_file_from_shifts(self, qs, extra_shifts=None, allow_empty_users=False):
"""Generate iCal events file from custom on-call shifts."""
ical = None
if qs.exists() or extra_shifts is not None:
@ -544,7 +595,7 @@ class OnCallScheduleWeb(OnCallSchedule):
ical = ical_file.replace(end_line, "").strip()
ical = f"{ical}\r\n"
for event in itertools.chain(qs.all(), extra_shifts):
ical += event.convert_to_ical(self.time_zone)
ical += event.convert_to_ical(self.time_zone, allow_empty_users=allow_empty_users)
ical += f"{end_line}\r\n"
return ical
@ -582,7 +633,22 @@ class OnCallScheduleWeb(OnCallSchedule):
self.cached_ical_file_overrides = self._generate_ical_file_overrides()
self.save(update_fields=["cached_ical_file_overrides", "prev_ical_file_overrides"])
def preview_shift(self, custom_shift, user_tz, starting_date, days):
def related_users(self):
"""Return public primary keys for all users referenced in the schedule."""
rolling_users = self.custom_shifts.values_list("rolling_users", flat=True)
users = functools.reduce(
set.union,
(
set(g.values())
for rolling_groups in rolling_users
if rolling_groups is not None
for g in rolling_groups
if g is not None
),
)
return users
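The `functools.reduce` above unions user ids across all rolling groups; a standalone sketch of the same pattern (note that passing an initial `set()`, as below, guards against the `TypeError` reduce raises on an empty iterable, an edge worth keeping in mind for schedules with no shifts):

```python
import functools

rolling_users = [
    [{"1": "UABC"}, {"2": "UDEF"}],  # one shift with two rotation groups
    None,                            # a shift with no rolling users yet
    [{"3": "UGHI"}],
]

users = functools.reduce(
    set.union,
    (
        set(group.values())
        for groups in rolling_users
        if groups is not None
        for group in groups
        if group is not None
    ),
    set(),  # initial value: reduce over an empty generator would raise otherwise
)
```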
def preview_shift(self, custom_shift, user_tz, starting_date, days, updated_shift_pk=None):
"""Return unsaved rotation and final schedule preview events."""
if custom_shift.type == CustomOnCallShift.TYPE_OVERRIDE:
qs = self.custom_shifts.filter(type=CustomOnCallShift.TYPE_OVERRIDE)
@ -602,7 +668,22 @@ class OnCallScheduleWeb(OnCallSchedule):
except AttributeError:
pass
ical_file = self._generate_ical_file_from_shifts(qs, extra_shifts=[custom_shift])
extra_shifts = [custom_shift]
if updated_shift_pk is not None:
try:
update_shift = qs.get(public_primary_key=updated_shift_pk)
except CustomOnCallShift.DoesNotExist:
pass
else:
if update_shift.event_is_started:
update_shift.until = custom_shift.rotation_start
extra_shifts.append(update_shift)
else:
# only reuse the PK for the preview when updating a rotation that hasn't started yet
custom_shift.public_primary_key = updated_shift_pk
qs = qs.exclude(public_primary_key=updated_shift_pk)
ical_file = self._generate_ical_file_from_shifts(qs, extra_shifts=extra_shifts, allow_empty_users=True)
original_value = getattr(self, ical_attr)
_invalidate_cache(self, ical_property)


@ -46,7 +46,9 @@ def refresh_ical_file(schedule_pk):
run_task_primary = True
task_logger.info(f"run_task_primary {schedule_pk} {run_task_primary} prev_ical_file_primary is None")
else:
run_task_primary = not is_icals_equal(schedule.cached_ical_file_primary, schedule.prev_ical_file_primary)
run_task_primary = not is_icals_equal(
schedule.cached_ical_file_primary, schedule.prev_ical_file_primary, schedule
)
task_logger.info(f"run_task_primary {schedule_pk} {run_task_primary} icals not equal")
run_task_overrides = False
if schedule.cached_ical_file_overrides is not None:
@ -55,7 +57,7 @@ def refresh_ical_file(schedule_pk):
task_logger.info(f"run_task_overrides {schedule_pk} {run_task_primary} prev_ical_file_overrides is None")
else:
run_task_overrides = not is_icals_equal(
schedule.cached_ical_file_overrides, schedule.prev_ical_file_overrides
schedule.cached_ical_file_overrides, schedule.prev_ical_file_overrides, schedule
)
task_logger.info(f"run_task_overrides {schedule_pk} {run_task_primary} icals not equal")
run_task = run_task_primary or run_task_overrides


@ -78,3 +78,11 @@ def test_parse_event_uid_v2():
pk, source = parse_event_uid(event_uid)
assert pk == pk_value
assert source == "slack"
def test_parse_event_uid_fallback():
# use ical existing UID for imported events
event_uid = "someid@google.com"
pk, source = parse_event_uid(event_uid)
assert pk == event_uid
assert source is None


@ -261,9 +261,9 @@ def test_final_schedule_events(make_organization, make_user_for_organization, ma
"schedule": schedule,
}
on_call_shift = make_on_call_shift(
organization=organization, shift_type=CustomOnCallShift.TYPE_RECURRENT_EVENT, **data
organization=organization, shift_type=CustomOnCallShift.TYPE_ROLLING_USERS_EVENT, **data
)
on_call_shift.users.add(user)
on_call_shift.add_rolling_users([[user]])
# override: 22-23 / E
override_data = {
@ -322,6 +322,141 @@ def test_final_schedule_events(make_organization, make_user_for_organization, ma
assert returned_events == expected_events
@pytest.mark.django_db
def test_final_schedule_splitting_events(
make_organization, make_user_for_organization, make_on_call_shift, make_schedule
):
organization = make_organization()
schedule = make_schedule(
organization,
schedule_class=OnCallScheduleWeb,
name="test_web_schedule",
)
now = timezone.now().replace(hour=0, minute=0, second=0, microsecond=0)
start_date = now - timezone.timedelta(days=7)
user_a, user_b, user_c = (make_user_for_organization(organization, username=i) for i in "ABC")
shifts = (
# user, priority, start time (h), duration (hs)
(user_a, 1, 10, 10), # r1-1: 10-20 / A
(user_b, 1, 12, 4), # r1-2: 12-16 / B
(user_c, 2, 15, 3), # r2-1: 15-18 / C
)
for user, priority, start_h, duration in shifts:
data = {
"start": start_date + timezone.timedelta(hours=start_h),
"rotation_start": start_date + timezone.timedelta(hours=start_h),
"duration": timezone.timedelta(hours=duration),
"priority_level": priority,
"frequency": CustomOnCallShift.FREQUENCY_DAILY,
"schedule": schedule,
}
on_call_shift = make_on_call_shift(
organization=organization, shift_type=CustomOnCallShift.TYPE_ROLLING_USERS_EVENT, **data
)
on_call_shift.add_rolling_users([[user]])
returned_events = schedule.final_events("UTC", start_date, days=1)
expected = (
# start (h), duration (H), user, priority
(10, 5, "A", 1), # 10-15 A
(12, 3, "B", 1), # 12-15 B
(15, 3, "C", 2), # 15-18 C
(18, 2, "A", 1), # 18-20 A
)
expected_events = [
{
"end": start_date + timezone.timedelta(hours=start + duration),
"priority_level": priority,
"start": start_date + timezone.timedelta(hours=start),
"user": user,
}
for start, duration, user, priority in expected
]
returned_events = [
{
"end": e["end"],
"priority_level": e["priority_level"],
"start": e["start"],
"user": e["users"][0]["display_name"] if e["users"] else None,
}
for e in returned_events
if not e["is_gap"]
]
assert returned_events == expected_events
@pytest.mark.django_db
def test_final_schedule_splitting_same_time_events(
make_organization, make_user_for_organization, make_on_call_shift, make_schedule
):
organization = make_organization()
schedule = make_schedule(
organization,
schedule_class=OnCallScheduleWeb,
name="test_web_schedule",
)
now = timezone.now().replace(hour=0, minute=0, second=0, microsecond=0)
start_date = now - timezone.timedelta(days=7)
user_a, user_b, user_c = (make_user_for_organization(organization, username=i) for i in "ABC")
shifts = (
# user, priority, start time (h), duration (hs)
(user_a, 1, 10, 10), # r1-1: 10-20 / A
(user_b, 1, 10, 10), # r1-2: 10-20 / B
(user_c, 2, 10, 3), # r2-1: 10-13 / C
)
for user, priority, start_h, duration in shifts:
data = {
"start": start_date + timezone.timedelta(hours=start_h),
"rotation_start": start_date + timezone.timedelta(hours=start_h),
"duration": timezone.timedelta(hours=duration),
"priority_level": priority,
"frequency": CustomOnCallShift.FREQUENCY_DAILY,
"schedule": schedule,
}
on_call_shift = make_on_call_shift(
organization=organization, shift_type=CustomOnCallShift.TYPE_ROLLING_USERS_EVENT, **data
)
on_call_shift.add_rolling_users([[user]])
returned_events = schedule.final_events("UTC", start_date, days=1)
expected = (
# start (h), duration (H), user, priority
(10, 3, "C", 2), # 10-13 C
(13, 7, "A", 1), # 13-20 A
(13, 7, "B", 1), # 13-20 B
)
expected_events = [
{
"end": start_date + timezone.timedelta(hours=start + duration),
"priority_level": priority,
"start": start_date + timezone.timedelta(hours=start),
"user": user,
}
for start, duration, user, priority in expected
]
returned_events = [
{
"end": e["end"],
"priority_level": e["priority_level"],
"start": e["start"],
"user": e["users"][0]["display_name"] if e["users"] else None,
}
for e in sorted(
returned_events, key=lambda e: (e["start"], e["users"][0]["display_name"] if e["users"] else None)
)
if not e["is_gap"]
]
assert returned_events == expected_events
@pytest.mark.django_db
def test_preview_shift(make_organization, make_user_for_organization, make_schedule, make_on_call_shift):
organization = make_organization()
@ -417,6 +552,71 @@ def test_preview_shift(make_organization, make_user_for_organization, make_sched
assert schedule._ical_file_primary == schedule_primary_ical
@pytest.mark.django_db
def test_preview_shift_no_user(make_organization, make_user_for_organization, make_schedule, make_on_call_shift):
organization = make_organization()
schedule = make_schedule(
organization,
schedule_class=OnCallScheduleWeb,
name="test_web_schedule",
)
now = timezone.now().replace(hour=0, minute=0, second=0, microsecond=0)
start_date = now - timezone.timedelta(days=7)
schedule_primary_ical = schedule._ical_file_primary
# proposed shift
new_shift = CustomOnCallShift(
type=CustomOnCallShift.TYPE_ROLLING_USERS_EVENT,
organization=organization,
schedule=schedule,
name="testing",
start=start_date + timezone.timedelta(hours=12),
rotation_start=start_date + timezone.timedelta(hours=12),
duration=timezone.timedelta(seconds=3600),
frequency=CustomOnCallShift.FREQUENCY_DAILY,
priority_level=2,
rolling_users=[],
)
rotation_events, final_events = schedule.preview_shift(new_shift, "UTC", start_date, days=1)
# check rotation events
expected_rotation_events = [
{
"calendar_type": OnCallSchedule.TYPE_ICAL_PRIMARY,
"start": new_shift.start,
"end": new_shift.start + new_shift.duration,
"all_day": False,
"is_override": False,
"is_empty": True,
"is_gap": False,
"priority_level": None,
"missing_users": [],
"users": [],
"shift": {"pk": new_shift.public_primary_key},
"source": "api",
}
]
assert rotation_events == expected_rotation_events
expected_events = []
returned_events = [
{
"end": e["end"],
"start": e["start"],
"user": e["users"][0]["display_name"] if e["users"] else None,
"is_empty": e["is_empty"],
}
for e in final_events
if not e["is_override"] and not e["is_gap"]
]
assert returned_events == expected_events
# final ical schedule didn't change
assert schedule._ical_file_primary == schedule_primary_ical
@pytest.mark.django_db
def test_preview_override_shift(make_organization, make_user_for_organization, make_schedule, make_on_call_shift):
organization = make_organization()
@ -510,3 +710,53 @@ def test_preview_override_shift(make_organization, make_user_for_organization, m
# final ical schedule didn't change
assert schedule._ical_file_overrides == schedule_overrides_ical
@pytest.mark.django_db
def test_schedule_related_users(make_organization, make_user_for_organization, make_on_call_shift, make_schedule):
organization = make_organization()
schedule = make_schedule(
organization,
schedule_class=OnCallScheduleWeb,
name="test_web_schedule",
)
now = timezone.now().replace(hour=0, minute=0, second=0, microsecond=0)
start_date = now - timezone.timedelta(days=7)
user_a, _, _, user_d, user_e = (make_user_for_organization(organization, username=i) for i in "ABCDE")
shifts = (
# user, priority, start time (h), duration (hs)
(user_a, 1, 10, 5), # r1-1: 10-15 / A
(user_d, 2, 20, 3), # r2-4: 20-23 / D
)
for user, priority, start_h, duration in shifts:
data = {
"start": start_date + timezone.timedelta(hours=start_h),
"rotation_start": start_date + timezone.timedelta(hours=start_h),
"duration": timezone.timedelta(hours=duration),
"priority_level": priority,
"frequency": CustomOnCallShift.FREQUENCY_DAILY,
"schedule": schedule,
}
on_call_shift = make_on_call_shift(
organization=organization, shift_type=CustomOnCallShift.TYPE_RECURRENT_EVENT, **data
)
on_call_shift.add_rolling_users([[user]])
# override: 22-23 / E
override_data = {
"start": start_date - timezone.timedelta(hours=22),
"rotation_start": start_date - timezone.timedelta(hours=22),
"duration": timezone.timedelta(hours=1),
"schedule": schedule,
}
override = make_on_call_shift(
organization=organization, shift_type=CustomOnCallShift.TYPE_OVERRIDE, **override_data
)
override.add_rolling_users([[user_e]])
schedule.refresh_from_db()
users = schedule.related_users()
assert users == set(u.public_primary_key for u in [user_a, user_d, user_e])


@ -436,7 +436,7 @@ def _get_organization_select(slack_team_identity, slack_user_identity, value, in
{
"text": {
"type": "plain_text",
"text": f"{org.org_title}",
"text": f"{org.stack_slug}",
"emoji": True,
},
"value": f"{org.pk}",


@ -87,12 +87,3 @@ class NotificationDeliveryStep(scenario_step.ScenarioStep):
print(e)
else:
raise e
def get_color_id(self, color):
if color == "red":
color_id = "#FF0000"
elif color == "yellow":
color_id = "#c6c000"
else:
color_id = color
return color_id


@ -111,6 +111,13 @@ class TelegramToUserConnector(models.Model):
notification_policy,
UserNotificationPolicyLogRecord.ERROR_NOTIFICATION_TELEGRAM_TOKEN_ERROR,
)
elif e.message == "Forbidden: user is deactivated":
TelegramToUserConnector.create_telegram_notification_error(
alert_group,
self.user,
notification_policy,
UserNotificationPolicyLogRecord.ERROR_NOTIFICATION_TELEGRAM_USER_IS_DEACTIVATED,
)
else:
raise e
else:


@ -119,7 +119,17 @@ def send_link_to_channel_message_or_fallback_to_full_incident(
@ignore_bot_deleted
def send_log_and_actions_message(self, channel_chat_id, group_chat_id, channel_message_id, reply_to_message_id):
with OkToRetry(task=self, exc=TelegramMessage.DoesNotExist, num_retries=5):
channel_message = TelegramMessage.objects.get(chat_id=channel_chat_id, message_id=channel_message_id)
try:
channel_message = TelegramMessage.objects.get(chat_id=channel_chat_id, message_id=channel_message_id)
except TelegramMessage.DoesNotExist:
if self.request.retries <= 5:
raise
else:
logger.warning(
f"Could not send log and actions message, Telegram message does not exist "
f"chat_id={channel_chat_id} message_id={channel_message_id}"
)
return
if channel_message.discussion_group_message_id is None:
channel_message.discussion_group_message_id = reply_to_message_id


@ -64,7 +64,7 @@ class PhoneManager:
def notify_about_changed_verified_phone_number(self, phone_number, connected=False):
text = (
f"This phone number has been {'connected to' if connected else 'disconnected from'} Grafana OnCall team "
f'"{self.user.organization.org_title}"\nYour Grafana OnCall <3'
f'"{self.user.organization.stack_slug}"\nYour Grafana OnCall <3'
)
try:
twilio_client.send_message(text, phone_number)


@ -116,8 +116,6 @@ resolve_condition = """\
acknowledge_condition = None
group_verbose_name = "Incident"
tests = {
"payload": {
"endsAt": "0001-01-01T00:00:00Z",


@ -61,6 +61,4 @@ resolve_condition = """\
acknowledge_condition = None
group_verbose_name = "Incident"
example_payload = {"message": "This alert was sent by user for the demonstration purposes"}


@ -50,8 +50,6 @@ resolve_condition = '{{ payload.get("state", "").upper() == "OK" }}'
acknowledge_condition = None
group_verbose_name = web_title
example_payload = {
"alert_uid": "08d6891a-835c-e661-39fa-96b6a9e26552",
"title": "TestAlert: The whole system is down",


@ -143,10 +143,6 @@ resolve_condition = """\
acknowledge_condition = None
group_verbose_name = """\
{{ payload.get("ruleName", "Incident") }}
"""
tests = {
"payload": {
"endsAt": "0001-01-01T00:00:00Z",
@ -257,7 +253,6 @@ tests = {
"group_distinction": "c6bf5494a2d3052459b4dac837e41455",
"is_resolve_signal": False,
"is_acknowledge_signal": False,
"group_verbose_name": "Incident",
}
# Miscellaneous


@ -120,8 +120,6 @@ resolve_condition = """\
acknowledge_condition = None
group_verbose_name = "Incident"
tests = {
"payload": {
"endsAt": "0001-01-01T00:00:00Z",


@ -26,6 +26,4 @@ resolve_condition = '{{ payload.get("is_resolve", False) == True }}'
acknowledge_condition = None
group_verbose_name = '{{ payload.get("title", "Title") }}'
example_payload = {"foo": "bar"}


@ -49,5 +49,3 @@ grouping_id = '{{ payload.get("title", "")}}'
resolve_condition = '{{ payload.get("state", "").upper() == "OK" }}'
acknowledge_condition = None
group_verbose_name = web_title


@ -56,8 +56,6 @@ resolve_condition = '{{ payload.get("level", "").startswith("OK") }}'
acknowledge_condition = None
group_verbose_name = '{{ payload.get("id", "") }}'
example_payload = {
"id": "TestAlert",
"message": "This alert was sent by user for the demonstration purposes",


@ -49,5 +49,3 @@ grouping_id = None
resolve_condition = None
acknowledge_condition = None
group_verbose_name = "Incident"


@ -58,5 +58,3 @@ grouping_id = """{{ payload }}"""
resolve_condition = None
acknowledge_condition = None
group_verbose_name = web_title


@ -39,6 +39,4 @@ resolve_condition = None
acknowledge_condition = None
group_verbose_name = '<#{{ payload.get("channel", "") }}>'
source_link = '{{ payload.get("amixr_mixin", {}).get("permalink", "")}}'


@ -60,6 +60,4 @@ resolve_condition = """\
{%- endif %}"""
acknowledge_condition = None
group_verbose_name = web_title
example_payload = {"message": "This alert was sent by user for the demonstration purposes"}


@ -52,6 +52,7 @@ FEATURE_LIVE_SETTINGS_ENABLED = getenv_boolean("FEATURE_LIVE_SETTINGS_ENABLED",
FEATURE_TELEGRAM_INTEGRATION_ENABLED = getenv_boolean("FEATURE_TELEGRAM_INTEGRATION_ENABLED", default=True)
FEATURE_EMAIL_INTEGRATION_ENABLED = getenv_boolean("FEATURE_EMAIL_INTEGRATION_ENABLED", default=False)
FEATURE_SLACK_INTEGRATION_ENABLED = getenv_boolean("FEATURE_SLACK_INTEGRATION_ENABLED", default=True)
FEATURE_WEB_SCHEDULES_ENABLED = getenv_boolean("FEATURE_WEB_SCHEDULES_ENABLED", default=False)
GRAFANA_CLOUD_ONCALL_HEARTBEAT_ENABLED = getenv_boolean("GRAFANA_CLOUD_ONCALL_HEARTBEAT_ENABLED", default=True)
GRAFANA_CLOUD_NOTIFICATIONS_ENABLED = getenv_boolean("GRAFANA_CLOUD_NOTIFICATIONS_ENABLED", default=True)


@ -139,6 +139,8 @@ CELERY_TASK_ROUTES = {
"apps.schedules.tasks.drop_cached_ical.drop_cached_ical_for_custom_events_for_organization": {"queue": "critical"},
"apps.schedules.tasks.drop_cached_ical.drop_cached_ical_task": {"queue": "critical"},
# LONG
"apps.alerts.tasks.alert_group_web_title_cache.update_web_title_cache_for_alert_receive_channel": {"queue": "long"},
"apps.alerts.tasks.alert_group_web_title_cache.update_web_title_cache": {"queue": "long"},
"apps.alerts.tasks.check_escalation_finished.check_escalation_finished_task": {"queue": "long"},
"apps.grafana_plugin.tasks.sync.start_sync_organizations": {"queue": "long"},
"apps.grafana_plugin.tasks.sync.sync_organization_async": {"queue": "long"},


@ -14,6 +14,7 @@ module.exports = {
'react/jsx-key': 'warn',
'react/no-unescaped-entities': 'warn',
'react/jsx-no-target-blank': 'warn',
'react-hooks/exhaustive-deps': 'warn',
'no-restricted-imports': 'warn',
eqeqeq: 'warn',
'no-duplicate-imports': 'warn',


@ -1,5 +1,17 @@
# Change Log
## v1.0.35 (2022-09-07)
- Bug fixes
## v1.0.34 (2022-09-06)
- Fix schedule notification spam
## v1.0.33 (2022-09-06)
- Add raw alert view
- Add GitHub star button for OSS installations
- Restore alert group search functionality
- Bug fixes
## v1.0.32 (2022-09-01)
- Bug fixes


@ -39,22 +39,39 @@
"author": "Grafana Labs",
"license": "Apache-2.0",
"devDependencies": {
"@grafana/data": "7.5.7",
"@grafana/runtime": "7.5.7",
"@grafana/toolkit": "7.5.7",
"@grafana/ui": "8.2.1",
"@types/dompurify": "^2.0.2",
"@types/lodash-es": "^4.17.3",
"@types/moment-timezone": "^0.5.12",
"@types/react-copy-to-clipboard": "^4.3.0",
"@types/react-responsive": "^8.0.2",
"@types/react-router-dom": "^5.1.5",
"@types/throttle-debounce": "^2.1.0",
"copy-webpack-plugin": "5.1.2",
"@babel/plugin-proposal-class-properties": "^7.18.6",
"@babel/plugin-proposal-decorators": "^7.18.10",
"@babel/plugin-proposal-nullish-coalescing-operator": "^7.18.6",
"@babel/plugin-proposal-object-rest-spread": "^7.18.9",
"@babel/plugin-proposal-optional-chaining": "^7.18.9",
"@babel/plugin-syntax-decorators": "^7.18.6",
"@babel/plugin-syntax-dynamic-import": "^7.8.3",
"@babel/plugin-transform-react-constant-elements": "^7.18.12",
"@babel/plugin-transform-typescript": "^7.18.12",
"@babel/preset-env": "^7.18.10",
"@babel/preset-react": "^7.18.6",
"@babel/preset-typescript": "^7.18.6",
"@grafana/data": "^9.1.1",
"@grafana/runtime": "^9.1.1",
"@grafana/toolkit": "^9.1.1",
"@grafana/ui": "^9.1.1",
"@types/dompurify": "^2.3.4",
"@types/lodash-es": "^4.17.6",
"@types/react-copy-to-clipboard": "^5.0.4",
"@types/react-dom": "^18.0.6",
"@types/react-responsive": "^8.0.5",
"@types/react-router-dom": "^5.3.3",
"@types/throttle-debounce": "^5.0.0",
"copy-webpack-plugin": "^11.0.0",
"dompurify": "^2.3.12",
"eslint-plugin-rulesdir": "^0.2.1",
"lint-staged": "^10.2.11",
"lodash-es": "^4.17.21",
"moment-timezone": "^0.5.35",
"plop": "^2.7.4",
"webpack-bundle-analyzer": "^4.4.2"
"postcss-loader": "^7.0.1",
"ts-loader": "^9.3.1",
"webpack-bundle-analyzer": "^4.6.1"
},
"engines": {
"node": ">=14"
@ -63,12 +80,9 @@
"@types/query-string": "^6.3.0",
"change-case": "^4.1.1",
"circular-dependency-plugin": "^5.2.2",
"dompurify": "^2.0.12",
"eslint-plugin-import": "^2.25.4",
"lodash-es": "^4.17.15",
"mobx": "^5.13.0",
"mobx-react": "^6.1.1",
"moment-timezone": "^0.5.34",
"mobx": "5.13.0",
"mobx-react": "6.1.1",
"rc-table": "^7.17.1",
"react-copy-to-clipboard": "^5.0.2",
"react-emoji-render": "^1.2.4",
@ -76,6 +90,7 @@
"react-router-dom": "^5.2.0",
"react-sortable-hoc": "^1.11.0",
"react-string-replace": "^0.4.4",
"sass-loader": "^13.0.2",
"stylelint": "^13.13.1",
"stylelint-config-standard": "^22.0.0",
"throttle-debounce": "^2.1.0"


@ -57,3 +57,7 @@
.autoresolve-label {
margin-bottom: 0 !important;
}
.web-title-message {
margin-top: 8px;
}


@ -197,7 +197,7 @@ const AlertTemplatesForm = (props: AlertTemplatesFormProps) => {
<VerticalGroup>
<Text type="secondary">
<p>
<a href="https://jinja.palletsprojects.com/en/3.0.x/" target="_blank">
<a href="https://jinja.palletsprojects.com/en/3.0.x/" target="_blank" rel="noreferrer">
Jinja2
</a>
{activeGroup === 'slack' && ', Slack markdown'}
@ -240,6 +240,15 @@ const AlertTemplatesForm = (props: AlertTemplatesFormProps) => {
<Text type="secondary">
Press <Text keyboard>Ctrl</Text>+<Text keyboard>Space</Text> to get suggestions
</Text>
{activeGroup === 'web' && activeTemplate.name == 'web_title_template' && (
<div className={cx('web-title-message')}>
<Text type="secondary" size="small">
Please note that after changing the web title template, new alert groups will be searchable by the
new title. Alert groups created before the template was changed will still be searchable by the
old title only.
</Text>
</div>
)}
</div>
</div>
))}


@ -15,6 +15,7 @@ interface CollapseProps {
className?: string;
contentClassName?: string;
headerWithBackground?: boolean;
children?: any;
}
const cx = cn.bind(styles);


@ -11,6 +11,7 @@ interface PluginLinkProps extends LocationUpdate {
disabled?: boolean;
className?: string;
wrap?: boolean;
children: any;
}
const cx = cn.bind(styles);


@ -10,6 +10,7 @@ const cx = cn.bind(styles);
interface PolicyNoteProps {
type?: 'success' | 'info' | 'danger';
children?: any;
}
function getIcon(type: PolicyNoteProps['type']) {


@ -13,6 +13,7 @@ const cx = cn.bind(styles);
interface SourceCodeProps {
noMaxHeight?: boolean;
showCopyToClipboard?: boolean;
children?: any;
}
const SourceCode: FC<SourceCodeProps> = (props) => {


@ -7,6 +7,7 @@ import styles from 'components/Tag/Tag.module.css';
interface TagProps {
color: string;
className?: string;
children?: any;
}
const cx = cn.bind(styles);


@ -1,70 +0,0 @@
.root {
display: inline;
}
.type_secondary {
color: var(--secondary-text-color);
}
.type_primary {
color: var(--primary-text-color);
}
.type_disabled {
color: var(--disabled-text-color);
}
.type_warning {
color: var(--warning-text-color);
}
.type_link {
color: var(--primary-text-link);
cursor: pointer;
}
.type_success {
color: #6ccf8e;
}
.strong {
font-weight: bold;
}
.underline {
text-decoration: underline;
}
.no-wrap {
white-space: nowrap;
}
.keyboard {
margin: 0 0.2em;
padding: 0.15em 0.4em 0.1em;
font-size: 90%;
background: hsla(0, 0%, 58.8%, 0.06);
border: solid hsla(0, 0%, 39.2%, 0.2);
border-width: 1px 1px 2px;
border-radius: 3px;
}
.link {
text-decoration: underline;
}
.size_small {
font-size: 12px;
}
.title {
margin: 0;
}
.icon-button {
margin-left: 4px;
}
.size_large {
font-size: 20px;
}


@ -0,0 +1,59 @@
.root {
display: inline;
}
.text {
&--primary {
color: var(--primary-text-color);
}
&--secondary {
color: var(--secondary-text-color);
}
&--disabled {
color: var(--disabled-text-color);
}
&--warning {
color: var(--warning-text-color);
}
&--link {
color: var(--primary-text-link);
text-decoration: underline;
}
&--success {
color: var(--green-5);
}
&--strong {
font-weight: bold;
}
&--underline {
text-decoration: underline;
}
&--small {
font-size: 12px;
}
&--large {
font-size: 20px;
}
}
.no-wrap {
white-space: nowrap;
}
.keyboard {
margin: 0 0.2em;
padding: 0.15em 0.4em 0.1em;
font-size: 90%;
background: hsla(0, 0%, 58.8%, 0.06);
border: solid hsla(0, 0%, 39.2%, 0.2);
border-width: 1px 1px 2px;
border-radius: 3px;
}
.title {
margin: 0;
}
.icon-button {
margin-left: 4px;
}


@ -1,15 +1,12 @@
import React, { FC, HTMLAttributes, ChangeEvent, useState, useCallback } from 'react';
import { IconButton, Modal, Field, Input, HorizontalGroup, Button, Icon, VerticalGroup } from '@grafana/ui';
import { IconButton, Modal, Input, HorizontalGroup, Button, VerticalGroup } from '@grafana/ui';
import cn from 'classnames/bind';
import CopyToClipboard from 'react-copy-to-clipboard';
import { TimelineProps } from 'components/Timeline/Timeline';
import { TimelineItemProps } from 'components/Timeline/TimelineItem';
import { useStore } from 'state/useStore';
import { openNotification } from 'utils';
import styles from './Text.module.css';
import styles from './Text.module.scss';
interface TextProps extends HTMLAttributes<HTMLElement> {
type?: 'primary' | 'secondary' | 'disabled' | 'link' | 'success' | 'warning';
@ -78,13 +75,13 @@ const Text: TextType = (props) => {
return (
<span
onClick={onClick}
className={cx('root', className, {
[`type_${type}`]: true,
[`size_${size}`]: true,
strong,
underline,
keyboard,
className={cx('root', 'text', className, {
[`text--${type}`]: true,
[`text--${size}`]: true,
'text--strong': strong,
'text--underline': underline,
'no-wrap': !wrap,
keyboard,
})}
>
{hidden ? PLACEHOLDER : children}


@ -2,7 +2,6 @@ import React, { useCallback, useEffect, useMemo, useState } from 'react';
import { HorizontalGroup, TimeOfDayPicker } from '@grafana/ui';
import cn from 'classnames/bind';
import { Moment } from 'moment';
import moment from 'moment-timezone';
import styles from './TimeRange.module.css';
@ -39,7 +38,7 @@ function getMoments(from: string, to: string) {
return [fromMoment, toMoment];
}
function getRangeStrings(from: Moment, to: Moment) {
function getRangeStrings(from: moment.Moment, to: moment.Moment) {
const fromString = from.clone().utc().format('HH:mm:00');
const toString = to.clone().utc().format('HH:mm:00');
@ -49,8 +48,10 @@ function getRangeStrings(from: Moment, to: Moment) {
const TimeRange = (props: TimeRangeProps) => {
const { className, from: f, to: t, onChange, disabled } = props;
const [from, setFrom] = useState<Moment>(getMoments(f, t)[0]);
const [to, setTo] = useState<Moment>(getMoments(f, t)[1]);
// @ts-ignore
const [from, setFrom] = useState<moment.Moment>(getMoments(f, t)[0]);
// @ts-ignore
const [to, setTo] = useState<moment.Moment>(getMoments(f, t)[1]);
useEffect(() => {
if (!f || !t) {
@ -59,7 +60,7 @@ const TimeRange = (props: TimeRangeProps) => {
}, []);
const handleChangeFrom = useCallback(
(value: Moment) => {
(value: moment.Moment) => {
setFrom(value);
if (value.isSame(to, 'minute')) {
@ -74,7 +75,7 @@ const TimeRange = (props: TimeRangeProps) => {
);
const handleChangeTo = useCallback(
(value: Moment) => {
(value: moment.Moment) => {
setTo(value);
if (value.isSame(from, 'minute')) {

View file

@ -10,6 +10,7 @@ const cx = cn.bind(styles);
export interface TimelineProps {
className?: string;
children?: any;
}
interface TimelineType extends React.FC<TimelineProps> {

View file

@ -12,6 +12,7 @@ export interface TimelineItemProps {
color?: string;
number?: number;
badge?: number;
children?: any;
}
const TimelineItem: React.FC<TimelineItemProps> = (props) => {

View file

@ -41,6 +41,7 @@ export default VerticalTabsBar;
interface TabProps {
id: string;
children?: any
}
export const VerticalTab: FC<TabProps> = ({ children }) => {

View file

@ -81,7 +81,7 @@ const ChannelFilterForm = observer((props: ChannelFilterFormProps) => {
description={
<>
Use{' '}
<a href="https://regex101.com/" target="_blank">
<a href="https://regex101.com/" target="_blank" rel="noreferrer">
python style
</a>{' '}
regex to filter incidents based on an expression

View file

@ -23,7 +23,9 @@ import styles from './DefaultPageLayout.module.css';
const cx = cn.bind(styles);
interface DefaultPageLayoutProps extends AppRootProps {}
interface DefaultPageLayoutProps extends AppRootProps {
children?: any;
}
enum AlertID {
CONNECTIVITY_WARNING = 'Connectivity Warning',
@ -111,7 +113,7 @@ const DefaultPageLayout: FC<DefaultPageLayoutProps> = observer((props) => {
{`Current plugin version: ${plugin.version}, current engine version: ${store.backendVersion}`}
<br />
Please see{' '}
<a href={'https://grafana.com/docs/oncall/latest/open-source/#update-grafana-oncall-oss'} target="_blank">
<a href={'https://grafana.com/docs/oncall/latest/open-source/#update-grafana-oncall-oss'} target="_blank" rel="noreferrer">
the update instructions
</a>
.

View file

@ -55,9 +55,9 @@ const EscalationChainSteps = observer((props: EscalationChainStepsProps) => {
const escalationPolicyIds = escalationPolicyStore.escalationChainToEscalationPolicy[id];
const isSlackInstalled = Boolean(store.teamStore.currentTeam?.slack_team_identity);
const isTelegramInstalled = Boolean(store.telegramChannelStore?.currentTeamToTelegramChannel?.length > 0);
return (
// @ts-ignore
<SortableList useDragHandle className={cx('steps')} axis="y" lockAxis="y" onSortEnd={handleSortEnd}>
{addonBefore}
{escalationPolicyIds ? (
@ -79,6 +79,7 @@ const EscalationChainSteps = observer((props: EscalationChainStepsProps) => {
<EscalationPolicy
key={`item-${escalationPolicy.id}`}
index={index}
// @ts-ignore
data={escalationPolicy}
number={index + offset + 1}
color={STEP_COLORS[index] || COLOR_RED}

View file

@ -107,7 +107,7 @@ const HeartbeatForm = observer(({ alertReceveChannelId, onUpdate }: HeartBeatMod
<p>
<Text>
Use the following unique Grafana link to send GET and POST requests:{' '}
<a href={heartbeat?.link} target="_blank">
<a href={heartbeat?.link} target="_blank" rel="noreferrer">
{heartbeat?.link}
</a>
</Text>

View file

@ -2,7 +2,6 @@ import React, { Component } from 'react';
import { SelectableValue, TimeRange } from '@grafana/data';
import {
HorizontalGroup,
IconButton,
InlineSwitch,
MultiSelect,
@ -10,14 +9,13 @@ import {
Select,
LoadingPlaceholder,
Input,
VerticalGroup,
Icon,
} from '@grafana/ui';
import { capitalCase } from 'change-case';
import cn from 'classnames/bind';
import { debounce, isEmpty, isUndefined, omit, omitBy, pickBy } from 'lodash-es';
import { debounce, isEmpty, isUndefined, omitBy } from 'lodash-es';
import { observer } from 'mobx-react';
import moment from 'moment';
import moment from 'moment-timezone';
import Emoji from 'react-emoji-render';
import CardButton from 'components/CardButton/CardButton';

View file

@ -55,6 +55,7 @@ const Autoresolve = ({ alertReceiveChannelId, onSwitchToTemplate, alertGroupId }
store.alertReceiveChannelStore.templates[alertReceiveChannelId],
'resolve_condition_template'
);
// @ts-ignore
if (autoresolveCondition == ['invalid template']) {
setAutoresolveConditionInvalid(true);
}

View file

@ -116,6 +116,7 @@ const PersonalNotificationSettings = observer((props: PersonalNotificationSettin
return (
<div className={cx('root')}>
{title}
{/* @ts-ignore */}
<SortableList
helperClass={cx('sortable-helper')}
className={cx('steps')}
@ -126,6 +127,7 @@ const PersonalNotificationSettings = observer((props: PersonalNotificationSettin
>
{notificationPolicies.map((notificationPolicy: NotificationPolicyType, index: number) => (
<NotificationPolicy
// @ts-ignore
userAction={userAction}
key={notificationPolicy.id}
index={index}

View file

@ -250,7 +250,7 @@ export const PluginConfigPage = (props: Props) => {
<VerticalGroup>
<Text type="secondary">
Run hobby, dev or production backend:{' '}
<a href="https://github.com/grafana/oncall#getting-started" target="_blank">
<a href="https://github.com/grafana/oncall#getting-started" target="_blank" rel="noreferrer">
<Text type="link">getting started.</Text>
</a>
</Text>
@ -259,15 +259,15 @@ export const PluginConfigPage = (props: Props) => {
<Text type="secondary">
Need help?
<br />- Talk to the OnCall team in the #grafana-oncall channel at{' '}
<a href="https://slack.grafana.com/" target="_blank">
<a href="https://slack.grafana.com/" target="_blank" rel="noreferrer">
<Text type="link">Slack</Text>
</a>
<br />- Ask questions at{' '}
<a href="https://github.com/grafana/oncall/discussions/categories/q-a" target="_blank">
<a href="https://github.com/grafana/oncall/discussions/categories/q-a" target="_blank" rel="noreferrer">
<Text type="link">GitHub Discussions</Text>
</a>{' '}
or file bugs at{' '}
<a href="https://github.com/grafana/oncall/issues" target="_blank">
<a href="https://github.com/grafana/oncall/issues" target="_blank" rel="noreferrer">
<Text type="link">GitHub Issues</Text>
</a>
</Text>
@ -285,7 +285,7 @@ Seek for such a line: “Your invite token: <<LONG TOKEN>> , use it in the Graf
>
<>
<Input id="onCallInvitationToken" onChange={handleInvitationTokenChange} />
<a href="https://github.com/grafana/oncall/blob/dev/DEVELOPER.md#frontend-setup" target="_blank">
<a href="https://github.com/grafana/oncall/blob/dev/DEVELOPER.md#frontend-setup" target="_blank" rel="noreferrer">
<Text size="small" type="link">
How to re-issue the invite token?
</Text>

View file

@ -122,7 +122,7 @@ const TelegramModal = (props: TelegramModalProps) => {
<div className={cx('telegram-instruction-container')}>
<Text>
5. Click{' '}
<a className={cx('telegram-bot')} href={botLink} target="_blank">
<a className={cx('telegram-bot')} href={botLink} target="_blank" rel="noreferrer">
{botLink}
</a>{' '}
to add the OnCall bot to your contacts. Add the bot to your channel as an Admin. Allow it to{' '}

View file

@ -37,7 +37,7 @@ const TelegramInfo = observer((props: TelegramInfoProps) => {
<>
{telegramConfigured || !store.hasFeature(AppFeature.LiveSettings) ? (
<VerticalGroup>
<a href={`${botLink}/?start=${verificationCode}`} target="_blank">
<a href={`${botLink}/?start=${verificationCode}`} target="_blank" rel="noreferrer">
<Button size="sm" fill="outline">
Connect automatically
</Button>
@ -46,7 +46,7 @@ const TelegramInfo = observer((props: TelegramInfoProps) => {
<HorizontalGroup>
<Text>
1) Go to{' '}
<a className={cx('verification-code')} href={botLink} target="_blank">
<a className={cx('verification-code')} href={botLink} target="_blank" rel="noreferrer">
{botLink}
</a>
</Text>

View file

@ -11,3 +11,8 @@ declare module '*.css';
declare module '*.jpg';
declare module '*.png';
declare module '*.svg';
declare module '*.scss' {
const content: Record<string, string>;
export default content;
}

View file

@ -414,8 +414,7 @@ export class AlertGroupStore extends BaseStore {
console.log('undoAction', undoAction);
} catch (e) {
this.updateAlert(alertId, { loading: false });
openErrorNotification(e.response.data?.detail);
openErrorNotification(e.response.data?.detail || e.response.data);
}
}

View file

@ -72,7 +72,6 @@ export interface Alert {
silenced_until: string;
started_at: string;
last_alert_at: string;
verbose_name: string;
dependent_alert_groups: Alert[];
status: IncidentStatus;
short?: boolean;

View file

@ -297,7 +297,7 @@ class IncidentPage extends React.Component<IncidentPageProps, IncidentPageState>
Copy Link
</Button>
</CopyToClipboard>
<a href={incident.permalink} target="_blank">
<a href={incident.permalink} target="_blank" rel="noreferrer">
<Button variant="primary" size="sm" icon="slack">
View in Slack
</Button>

View file

@ -1,6 +1,7 @@
import React, { useCallback, useEffect } from 'react';
import React, { useCallback } from 'react';
import { IconButton, ValuePicker, WithContextMenu, ButtonCascader } from '@grafana/ui';
import { ButtonCascader } from '@grafana/ui';
import { ComponentSize } from '@grafana/ui/types/size';
import { observer } from 'mobx-react';
import { WithPermissionControl } from 'containers/WithPermissionControl/WithPermissionControl';
@ -19,7 +20,7 @@ const SilenceDropdown = observer((props: SilenceDropdownProps) => {
const { onSelect, className, disabled = false, buttonSize } = props;
const onSelectCallback = useCallback(
([value, ...rest]) => {
([value]) => {
onSelect(Number(value));
},
[onSelect]
@ -44,7 +45,7 @@ const SilenceDropdown = observer((props: SilenceDropdownProps) => {
label: silenceOption.display_name,
}))}
value={undefined}
buttonProps={{ size: buttonSize }}
buttonProps={{ size: buttonSize as ComponentSize }}
>
Silence
</ButtonCascader>

View file

@ -91,7 +91,7 @@ class MigrationToolPage extends React.Component<MigrationToolProps, MigrationToo
<ol>
<li>
Ask all users from your Amixr.IO workspace to{' '}
<a href="https://grafana.com/auth/sign-up/create-user" target="_blank">
<a href="https://grafana.com/auth/sign-up/create-user" target="_blank" rel="noreferrer">
sign up
</a>{' '}
in the Grafana Cloud.
@ -101,7 +101,7 @@ class MigrationToolPage extends React.Component<MigrationToolProps, MigrationToo
</p>
<p>
For any technical assistance please reach out to our team in{' '}
<a href="https://slack.grafana.com/" target="_blank">
<a href="https://slack.grafana.com/" target="_blank" rel="noreferrer">
Grafana Slack channel #grafana-oncall
</a>
. Well be happy to give you a hand and help you with migration on a call.
@ -112,13 +112,13 @@ class MigrationToolPage extends React.Component<MigrationToolProps, MigrationToo
<ul>
<li>
Matvey Kukuy (ex-CEO of Amixr):{' '}
<a href="mailto:matvey.kukuy@grafana.com" target="_blank">
<a href="mailto:matvey.kukuy@grafana.com" target="_blank" rel="noreferrer">
matvey.kukuy@grafana.com
</a>
</li>
<li>
Ildar Iskhakov (ex-CTO of Amixr):{' '}
<a href="mailto:ildar.iskhakov@grafana.com" target="_blank">
<a href="mailto:ildar.iskhakov@grafana.com" target="_blank" rel="noreferrer">
ildar.iskhakov@grafana.com
</a>
</li>

View file

@ -1,19 +1,18 @@
import { Moment } from 'moment';
import moment from 'moment-timezone';
import { Schedule } from 'models/schedule/schedule.types';
const DATE_FORMAT = 'HH:mm YYYY-MM-DD';
function isToday(m: Moment, currentMoment: Moment) {
function isToday(m: moment.Moment) {
return m.isSame('day');
}
function isYesterday(m: Moment, currentMoment: Moment) {
function isYesterday(m: moment.Moment, currentMoment: moment.Moment) {
return m.diff(currentMoment, 'days') === -1;
}
function isTomorrow(m: Moment, currentMoment: Moment) {
function isTomorrow(m: moment.Moment, currentMoment: moment.Moment) {
return m.diff(currentMoment, 'days') === 1;
}
@ -25,8 +24,8 @@ export function prepareForEdit(schedule: Schedule) {
};
}
function humanize(m: Moment, currentMoment: Moment) {
if (isToday(m, currentMoment)) {
function humanize(m: moment.Moment, currentMoment: moment.Moment) {
if (isToday(m)) {
return 'Today';
}
if (isYesterday(m, currentMoment)) {

View file

@ -1,5 +1,6 @@
:root {
--maintenance-background: repeating-linear-gradient(45deg, #f6ba52, #f6ba52 20px, #ffd180 20px, #ffd180 40px);
--green-5: #6ccf8e;
--green-6: #73d13d;
--red-5: #ff4d4f;
--orange-5: #ffa940;

View file

@ -9,5 +9,6 @@
"noUnusedLocals": false,
"strict": false,
"resolveJsonModule": true,
"noImplicitAny": false
}
}

View file

@ -1,9 +1,7 @@
const path = require('path');
const fs = require('fs');
const CopyWebpackPlugin = require('copy-webpack-plugin');
const CircularDependencyPlugin = require('circular-dependency-plugin');
const BundleAnalyzerPlugin = require('webpack-bundle-analyzer').BundleAnalyzerPlugin;
const MONACO_DIR = path.resolve(__dirname, './node_modules/monaco-editor');
@ -13,20 +11,78 @@ Object.defineProperty(RegExp.prototype, 'toJSON', {
module.exports.getWebpackConfig = (config, options) => {
const cssLoader = config.module.rules.find((rule) => rule.test.toString() === '/\\.css$/');
const tsxLoader = config.module.rules.find((rule) => rule.test.toString() === '/\\.tsx?$/');
cssLoader.exclude.push(/\.module\.css$/, MONACO_DIR);
const grafanaRules = config.module.rules.filter((a) => a.test.toString() !== /\.s[ac]ss$/.toString());
const newConfig = {
...config,
module: {
...config.module,
rules: [
...config.module.rules,
...grafanaRules,
{
test: /\.(ts|tsx)$/,
exclude: /node_modules/,
use: [
{
loader: 'babel-loader',
options: {
cacheDirectory: true,
cacheCompression: false,
presets: [
[
'@babel/preset-env',
{
modules: false,
},
],
[
'@babel/preset-typescript',
{
allowNamespaces: true,
allowDeclareFields: true,
},
],
['@babel/preset-react'],
],
plugins: [
[
'@babel/plugin-transform-typescript',
{
allowNamespaces: true,
allowDeclareFields: true,
},
],
'@babel/plugin-proposal-class-properties',
[
'@babel/plugin-proposal-object-rest-spread',
{
loose: true,
},
],
[
'@babel/plugin-proposal-decorators',
{
legacy: true,
},
],
'@babel/plugin-transform-react-constant-elements',
'@babel/plugin-proposal-nullish-coalescing-operator',
'@babel/plugin-proposal-optional-chaining',
'@babel/plugin-syntax-dynamic-import',
],
},
},
'ts-loader',
],
},
{
test: /\.module\.css$/,
exclude: /node_modules/,
//use: ['style-loader', 'css-loader?modules&importLoaders=1&localIdentName=[name]__[local]___[hash:base64:5]!postcss-loader'],
use: [
'style-loader',
{
@ -41,8 +97,29 @@ module.exports.getWebpackConfig = (config, options) => {
},
],
},
{
test: /\.module\.scss$/i,
exclude: /node_modules/,
use: [
'style-loader',
{
loader: 'css-loader',
options: {
importLoaders: 1,
sourceMap: true,
modules: {
localIdentName: options.production ? '[name]__[hash:base64]' : '[path][name]__[local]',
},
},
},
'postcss-loader',
'sass-loader',
],
},
],
},
plugins: [
...config.plugins,
new CircularDependencyPlugin({
@ -56,9 +133,10 @@ module.exports.getWebpackConfig = (config, options) => {
allowAsyncCycles: false,
// set the current working directory for displaying module paths
cwd: process.cwd(),
}),
//new BundleAnalyzerPlugin(),
})
// new BundleAnalyzerPlugin(),
],
resolve: {
...config.resolve,
symlinks: false,
@ -66,7 +144,7 @@ module.exports.getWebpackConfig = (config, options) => {
},
};
/* fs.writeFile('webpack-conf.json', JSON.stringify(newConfig.resolve, null, 2), function (err) {
/* fs.writeFile('webpack-conf.json', JSON.stringify(newConfig, null, 2), function (err) {
if (err) {
return console.log(err);
}

File diff suppressed because it is too large

View file

@ -8,13 +8,13 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 1.0.4
version: 1.0.5
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "v1.0.32"
appVersion: "v1.0.35"
dependencies:
- name: cert-manager
version: v1.8.0

View file

@ -83,6 +83,34 @@ helm upgrade \
grafana/oncall
```
### Set up Slack and Telegram
You can set up Slack connection via following variables:
```
oncall:
slack:
enabled: true
command: ~
clientId: ~
clientSecret: ~
apiToken: ~
apiTokenCommon: ~
```
`oncall.slack.command` overrides the default bot slash command,
`oncall`. In Slack, the bot can then be invoked via `/<oncall.slack.command>`.
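For reference, the chart's `snippet.oncall.slack.env` helper turns these values into container environment variables on the engine and celery pods; with `enabled: true` and the remaining values left at their defaults, the rendered block looks roughly like:

```yaml
- name: FEATURE_SLACK_INTEGRATION_ENABLED
  value: "True"
- name: SLACK_SLASH_COMMAND_NAME
  value: "/oncall"
- name: SLACK_CLIENT_OAUTH_ID
  value: ""
- name: SLACK_CLIENT_OAUTH_SECRET
  value: ""
- name: SLACK_API_TOKEN
  value: ""
- name: SLACK_API_TOKEN_COMMON
  value: ""
```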
To set up the Telegram token and webhook URL, use:
```
oncall:
telegram:
enabled: true
token: ~
webhookUrl: ~
```
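Putting the two together, one convenient pattern is to keep these settings in an override file and pass it on upgrade; the credential values below are placeholders to replace with your own:

```yaml
# values-oncall.yaml — placeholder credentials, supply your own
oncall:
  slack:
    enabled: true
    command: oncall
    clientId: "<slack-oauth-client-id>"
    clientSecret: "<slack-oauth-client-secret>"
    apiToken: "<slack-api-token>"
    apiTokenCommon: "<slack-common-api-token>"
  telegram:
    enabled: true
    token: "<telegram-bot-token>"
    webhookUrl: "https://oncall.example.com"
```

and apply it with `helm upgrade -f values-oncall.yaml <release-name> grafana/oncall`, where `<release-name>` is whatever you used at install time.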
### Set up external access
Grafana OnCall can be connected to external monitoring systems, or to a Grafana instance deployed in another cluster.
Nginx Ingress Controller and Cert Manager charts are included in the Helm chart with a default configuration.

View file

@ -23,6 +23,40 @@
value: "1024"
{{- end }}
{{- define "snippet.oncall.slack.env" -}}
{{- if .Values.oncall.slack.enabled -}}
- name: FEATURE_SLACK_INTEGRATION_ENABLED
value: {{ .Values.oncall.slack.enabled | toString | title | quote }}
- name: SLACK_SLASH_COMMAND_NAME
value: "/{{ .Values.oncall.slack.commandName | default "oncall" }}"
- name: SLACK_CLIENT_OAUTH_ID
value: {{ .Values.oncall.slack.clientId | default "" | quote }}
- name: SLACK_CLIENT_OAUTH_SECRET
value: {{ .Values.oncall.slack.clientSecret | default "" | quote }}
- name: SLACK_API_TOKEN
value: {{ .Values.oncall.slack.apiToken | default "" | quote }}
- name: SLACK_API_TOKEN_COMMON
value: {{ .Values.oncall.slack.apiTokenCommon | default "" | quote }}
{{- else -}}
- name: FEATURE_SLACK_INTEGRATION_ENABLED
value: {{ .Values.oncall.slack.enabled | toString | title | quote }}
{{- end -}}
{{- end }}
{{- define "snippet.oncall.telegram.env" -}}
{{- if .Values.oncall.telegram.enabled -}}
- name: FEATURE_TELEGRAM_INTEGRATION_ENABLED
value: {{ .Values.oncall.telegram.enabled | toString | title | quote }}
- name: TELEGRAM_WEBHOOK_URL
value: {{ .Values.oncall.telegram.webhookUrl | default "" | quote }}
- name: TELEGRAM_TOKEN
value: {{ .Values.oncall.telegram.token | default "" | quote }}
{{- else -}}
- name: FEATURE_TELEGRAM_INTEGRATION_ENABLED
value: {{ .Values.oncall.telegram.enabled | toString | title | quote }}
{{- end -}}
{{- end }}
{{- define "snippet.celery.env" -}}
- name: CELERY_WORKER_QUEUE
value: "default,critical,long,slack,telegram,webhook,celery"

View file

@ -39,6 +39,8 @@ spec:
env:
{{- include "snippet.celery.env" . | nindent 12 }}
{{- include "snippet.oncall.env" . | nindent 12 }}
{{- include "snippet.oncall.slack.env" . | nindent 12 }}
{{- include "snippet.oncall.telegram.env" . | nindent 12 }}
{{- include "snippet.mysql.env" . | nindent 12 }}
{{- include "snippet.rabbitmq.env" . | nindent 12 }}
{{- include "snippet.redis.env" . | nindent 12 }}

View file

@ -45,6 +45,8 @@ spec:
protocol: TCP
env:
{{- include "snippet.oncall.env" . | nindent 12 }}
{{- include "snippet.oncall.slack.env" . | nindent 12 }}
{{- include "snippet.oncall.telegram.env" . | nindent 12 }}
{{- include "snippet.mysql.env" . | nindent 12 }}
{{- include "snippet.rabbitmq.env" . | nindent 12 }}
{{- include "snippet.redis.env" . | nindent 12 }}

View file

@ -40,6 +40,19 @@ celery:
# cpu: 100m
# memory: 128Mi
oncall:
slack:
enabled: false
command: ~
clientId: ~
clientSecret: ~
apiToken: ~
apiTokenCommon: ~
telegram:
enabled: false
token: ~
webhookUrl: ~
# Whether to run django database migrations automatically
migrate:
enabled: true

View file

@ -20,7 +20,7 @@ Resources that can be migrated using this tool:
1. Make sure you have `docker` installed
2. Build the docker image: `docker build -t pd-oncall-migrator .`
3. Obtain a PagerDuty API token: https://support.pagerduty.com/docs/api-access-keys
3. Obtain a PagerDuty API user token: https://support.pagerduty.com/docs/api-access-keys#generate-a-user-token-rest-api-key
4. Obtain a Grafana OnCall API token and API URL on the "Settings" page of your Grafana OnCall instance
## Migration plan
@ -84,4 +84,4 @@ It's possible to specify a default contact method type for user notification rul
* Connect integrations (press the "How to connect" button on the integration page)
* Make sure users connect their phone numbers, Slack accounts, etc. in their user settings
* At some point you will probably want to recreate the schedules using Google Calendar or Terraform, so that the migrated on-call schedules can be modified in Grafana OnCall
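Once the tokens from steps 3 and 4 are in hand, running the image built in step 2 can be sketched as below. The environment variable names here are assumptions for illustration, not taken from this document — consult the migrator's own README for the authoritative ones. The command is echoed rather than executed, since actually running it requires Docker plus live PagerDuty and Grafana OnCall credentials:

```shell
# Hypothetical invocation of the migrator image from step 2.
PD_TOKEN="<pagerduty-api-user-token>"           # from step 3
ONCALL_TOKEN="<oncall-api-token>"               # from step 4
ONCALL_URL="https://oncall.example.com/api/v1"  # from step 4

# Assumed env variable names — verify against the tool's README.
CMD="docker run --rm -e PAGERDUTY_API_TOKEN=$PD_TOKEN -e ONCALL_API_TOKEN=$ONCALL_TOKEN -e ONCALL_API_URL=$ONCALL_URL pd-oncall-migrator"

# Print the command for review instead of running it.
echo "$CMD"
```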

Some files were not shown because too many files have changed in this diff