Merge branch 'dev' into 63-raw-response-content

Rares Mardare 2022-08-26 13:55:54 +03:00
commit c28af7a2cc
103 changed files with 1570 additions and 1979 deletions


@@ -1,5 +1,15 @@
# Change Log
## v1.0.25 (2022-08-24)
- Bug fixes
## v1.0.24 (2022-08-24)
- Insight logs
- Default DATA_UPLOAD_MAX_MEMORY_SIZE to 1mb
## v1.0.23 (2022-08-23)
- Bug fixes
## v1.0.22 (2022-08-16)
- Make STATIC_URL configurable from environment variable


@@ -16,15 +16,22 @@ weight: 300
# Telegram integration for Grafana OnCall
You can use Telegram to deliver alert group notifications to a dedicated channel and allow users to perform notification actions.
You can manage alerts either directly in your personal Telegram DMs or in a dedicated team channel.
Each alert group notification is assigned a dedicated discussion. Users can perform notification actions (acknowledge, resolve, silence), create reports, and discuss alerts in the comments section of the discussions.
## Configure Telegram user settings in Grafana OnCall
If an integration route is not configured to use a Telegram channel, users receive messages with alert group contents, logs, and actions in their DMs.
To receive alert group contents and escalation logs, and to perform actions (acknowledge, resolve, silence) in Telegram DMs, follow these steps:
## Connect to Telegram
1. In your profile, find the Telegram setting and click **Connect**.
1. Click **Connect automatically** for the bot to message you and to bring up your Telegram account.
1. Click **Start** when the OnCall bot messages you and wait for the connection confirmation.
1. Done! Now you can receive alerts directly in your Telegram DMs.
Connect your organization's Telegram account to your Grafana OnCall instance by following the instructions provided in OnCall. You can use the following steps as a reference.
If you want to connect manually, you can click the URL provided and then **SEND MESSAGE**. In your Telegram account, click **Start**.
## (Optional) Connect to a Telegram channel
If you want to manage alerts in a dedicated Telegram channel, use the following steps as a reference.
> **NOTE:** Only Grafana users with the administrator role can configure OnCall settings.
@@ -42,10 +49,5 @@ Connect your organization's Telegram account to your Grafana OnCall instance by
1. In OnCall, send the provided verification code to the channel.
1. Make sure users connect to Telegram in their OnCall user profile.
## Configure Telegram user settings in OnCall
1. In your profile, find the Telegram setting and click **Connect**.
1. Click **Connect automatically** for the bot to message you and to bring up your Telegram account.
1. Click **Start** when the OnCall bot messages you.
If you want to connect manually, you can click the URL provided and then **SEND MESSAGE**. In your Telegram account, click **Start**.
Each alert group is assigned a dedicated discussion. Users can perform actions (acknowledge, resolve, silence) and discuss alerts in the comments section of the discussions.
If an integration route is not configured to use a Telegram channel, users receive messages with alert group contents, logs, and actions in their DMs.


@@ -166,13 +166,11 @@ lt --port 8080 -s pretty-turkey-83 --print-requests
The Telegram integration for Grafana OnCall is designed for collaborative team work and improved incident response. Refer to the following steps to configure the Telegram integration:
1. Ensure your OnCall environment is up and running.
1. Request a key from [BotFather](https://t.me/BotFather), then add it as `TELEGRAM_TOKEN` in your Grafana OnCall **Env Variables**.
1. Set `TELEGRAM_WEBHOOK_HOST` with your external URL for your Grafana OnCall.
1. From the **ChatOps** tab in Grafana OnCall, click **Telegram**.
1. Ensure your Grafana OnCall environment is up and running.
2. Create a Telegram bot using [BotFather](https://t.me/BotFather) and save the token it provides. Make sure to disable **Group Privacy** for the bot (Bot Settings -> Group Privacy -> Turn off).
3. Paste the token provided by BotFather to the `TELEGRAM_TOKEN` variable on the **Env Variables** page of your Grafana OnCall instance.
4. Set the `TELEGRAM_WEBHOOK_HOST` variable to the external address of your Grafana OnCall instance. Please note that `TELEGRAM_WEBHOOK_HOST` must start with `https://` and be publicly available (meaning that it can be reached by Telegram servers). If your host is private or local, consider using a reverse proxy (e.g. [ngrok](https://ngrok.com)).
5. Now you can connect Telegram accounts on the **Users** page and receive alert groups to Telegram direct messages. Alternatively, in case you want to connect Telegram channels to your Grafana OnCall environment, navigate to the **ChatOps** tab.
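The `TELEGRAM_WEBHOOK_HOST` requirement above (must start with `https://` and be reachable from the outside) can be checked before startup. The following is a minimal sketch; the helper name `validate_telegram_env` is illustrative and not part of Grafana OnCall:

```python
import os
from urllib.parse import urlparse


def validate_telegram_env(env=os.environ):
    """Sanity-check the Telegram variables described above before the
    engine starts with a webhook Telegram can never call back."""
    token = env.get("TELEGRAM_TOKEN")
    if not token:
        raise ValueError("TELEGRAM_TOKEN is not set; request one from BotFather")

    host = env.get("TELEGRAM_WEBHOOK_HOST", "")
    parsed = urlparse(host)
    if parsed.scheme != "https":
        raise ValueError(
            "TELEGRAM_WEBHOOK_HOST must start with https:// "
            "(use a reverse proxy such as ngrok for private or local hosts)"
        )
    if not parsed.netloc:
        raise ValueError("TELEGRAM_WEBHOOK_HOST is missing a hostname")
    return True
```

Running such a check once at boot fails fast with a hint instead of leaving the bot silently unreachable.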
## Grafana OSS-Cloud Setup


@@ -659,9 +659,7 @@ class IncidentLogBuilder:
# last passed step order + 1
notification_policy_order = last_user_log.notification_policy.order + 1
notification_policies = UserNotificationPolicy.objects.get_or_create_for_user(
user=user_to_notify, important=important
)
notification_policies = UserNotificationPolicy.objects.filter(user=user_to_notify, important=important)
for notification_policy in notification_policies:
future_notification = notification_policy.order >= notification_policy_order


@@ -196,9 +196,9 @@ class Alert(models.Model):
if grouping_id_template is not None:
group_distinction, _ = apply_jinja_template(grouping_id_template, raw_request_data)
# Insert demo uuid to prevent grouping of demo alerts.
if is_demo:
group_distinction = cls.insert_demo_uuid(group_distinction)
# Insert random uuid to prevent grouping of demo alerts or alerts with group_distinction=None
if is_demo or not group_distinction:
group_distinction = cls.insert_random_uuid(group_distinction)
if group_distinction is not None:
group_distinction = hashlib.md5(str(group_distinction).encode()).hexdigest()
@@ -224,7 +224,7 @@
)
@staticmethod
def insert_demo_uuid(distinction):
def insert_random_uuid(distinction):
if distinction is not None:
distinction += str(uuid4())
else:
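The hunks above rename `insert_demo_uuid` to `insert_random_uuid` so that demo alerts, and alerts whose grouping template renders to `None`, each land in their own group. A standalone sketch of that logic (the surrounding Django model is omitted; `compute_group_distinction` is an illustrative wrapper, not a function from the diff):

```python
import hashlib
from uuid import uuid4


def insert_random_uuid(distinction):
    # Append a random suffix so two alerts can never share this distinction.
    if distinction is not None:
        distinction += str(uuid4())
    else:
        distinction = str(uuid4())
    return distinction


def compute_group_distinction(rendered, is_demo=False):
    """Mirror the hunk above: demo alerts and empty renderings are made
    unique first, then the distinction is md5-hashed for storage."""
    if is_demo or not rendered:
        rendered = insert_random_uuid(rendered)
    return hashlib.md5(str(rendered).encode()).hexdigest()
```

Two real alerts that render the same grouping id still hash identically, while demo or template-less alerts never collide with anything.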


@@ -27,9 +27,9 @@ from apps.integrations.tasks import create_alert, create_alertmanager_alerts
from apps.slack.constants import SLACK_RATE_LIMIT_DELAY, SLACK_RATE_LIMIT_TIMEOUT
from apps.slack.tasks import post_slack_rate_limit_message
from apps.slack.utils import post_message_to_channel
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.utils import create_engine_url
from common.exceptions import TeamCanNotBeChangedError, UnableToSendDemoAlert
from common.insight_log import EntityEvent, write_resource_insight_log
from common.public_primary_keys import generate_public_primary_key, increase_public_primary_key_length
logger = logging.getLogger(__name__)
@@ -342,66 +342,6 @@ class AlertReceiveChannel(IntegrationOptionsMixin, MaintainableObject):
self.save(update_fields=["rate_limit_message_task_id", "rate_limited_in_slack_at"])
post_slack_rate_limit_message.apply_async((self.pk,), countdown=delay, task_id=task_id)
@property
def repr_settings_for_client_side_logging(self):
"""
Example of execution:
name: Grafana :blush:, team: example, auto resolve allowed: Yes
templates:
Slack title: *<{{ grafana_oncall_link }}|#{{ grafana_oncall_id }} Custom title>* via {{ integration_name }}
{% if source_link %}
(*<{{ source_link }}|source>*)
{%- endif %},
Slack message: default,
Slack image url: default,
SMS title: default,
Phone call title: default,
Web title: default,
Web message: default,
Web image url: default,
Email title: default,
Email message: default,
Telegram title: default,
Telegram message: default,
Telegram image url: default,
Source link: default,
Grouping id: default,
Resolve condition: default,
Acknowledge condition: default
"""
result = f"name: {self.verbal_name}, team: {self.team.name if self.team else 'No team'}"
if self.is_able_to_autoresolve:
result += f", auto resolve allowed: {'Yes' if self.allow_source_based_resolving else 'No'}"
if self.integration == AlertReceiveChannel.INTEGRATION_SLACK_CHANNEL:
slack_channel = None
if self.integration_slack_channel_id:
SlackChannel = apps.get_model("slack", "SlackChannel")
slack_channel = SlackChannel.objects.filter(
slack_team_identity=self.organization.slack_team_identity,
slack_id=self.integration_slack_channel_id,
).first()
result += f", slack channel: {slack_channel.name if slack_channel else 'not selected'}"
result += (
f"\ntemplates:\nSlack title: {self.slack_title_template or 'default'},\n"
f"Slack message: {self.slack_message_template or 'default'},\n"
f"Slack image url: {self.slack_image_url_template or 'default'},\n"
f"SMS title: {self.sms_title_template or 'default'},\n"
f"Phone call title: {self.phone_call_title_template or 'default'},\n"
f"Web title: {self.web_title_template or 'default'},\n"
f"Web message: {self.web_message_template or 'default'},\n"
f"Web image url: {self.web_image_url_template or 'default'},\n"
f"Email title: {self.email_title_template or 'default'},\n"
f"Email message: {self.email_message_template or 'default'},\n"
f"Telegram title: {self.telegram_title_template or 'default'},\n"
f"Telegram message: {self.telegram_message_template or 'default'},\n"
f"Telegram image url: {self.telegram_image_url_template or 'default'},\n"
f"Source link: {self.source_link_template or 'default'},\n"
f"Grouping id: {self.grouping_id_template or 'default'},\n"
f"Resolve condition: {self.resolve_condition_template or 'default'},\n"
f"Acknowledge condition: {self.acknowledge_condition_template or 'default'}"
)
return result
@property
def alert_groups_count(self):
return self.alert_groups.count()
@@ -658,6 +598,55 @@ class AlertReceiveChannel(IntegrationOptionsMixin, MaintainableObject):
AlertReceiveChannel.INTEGRATION_GRAFANA_ALERTING,
)
# Insight logs
@property
def insight_logs_type_verbal(self):
return "integration"
@property
def insight_logs_verbal(self):
return self.verbal_name
@property
def insight_logs_serialized(self):
result = {
"name": self.verbal_name,
"allow_source_based_resolving": self.allow_source_based_resolving,
"slack_title": self.slack_title_template or "default",
"slack_message": self.slack_message_template or "default",
"slack_image_url": self.slack_image_url_template or "default",
"sms_title": self.sms_title_template or "default",
"phone_call_title": self.phone_call_title_template or "default",
"web_title": self.web_title_template or "default",
"web_message": self.web_message_template or "default",
"web_image_url_template": self.web_image_url_template or "default",
"email_title_template": self.email_title_template or "default",
"email_message": self.email_message_template or "default",
"telegram_title": self.telegram_title_template or "default",
"telegram_message": self.telegram_message_template or "default",
"telegram_image_url": self.telegram_image_url_template or "default",
"source_link": self.source_link_template or "default",
"grouping_id": self.grouping_id_template or "default",
"resolve_condition": self.resolve_condition_template or "default",
"acknowledge_condition": self.acknowledge_condition_template or "default",
}
if self.team:
result["team"] = self.team.name
result["team_id"] = self.team.public_primary_key
else:
result["team"] = "General"
return result
@property
def insight_logs_metadata(self):
result = {}
if self.team:
result["team"] = self.team.name
result["team_id"] = self.team.public_primary_key
else:
result["team"] = "General"
return result
@receiver(post_save, sender=AlertReceiveChannel)
def listen_for_alertreceivechannel_model_save(sender, instance, created, *args, **kwargs):
@@ -665,30 +654,15 @@ def listen_for_alertreceivechannel_model_save(sender, instance, created, *args,
IntegrationHeartBeat = apps.get_model("heartbeat", "IntegrationHeartBeat")
if created:
description = f"New integration {instance.verbal_name} was created"
create_organization_log(
instance.organization,
instance.author,
type=OrganizationLogType.TYPE_INTEGRATION_CREATED,
description=description,
)
write_resource_insight_log(instance=instance, author=instance.author, event=EntityEvent.CREATED)
default_filter = ChannelFilter(alert_receive_channel=instance, filtering_term=None, is_default=True)
default_filter.save()
filter_verbal = default_filter.verbal_name_for_clients.capitalize()
description = f"{filter_verbal} was created for integration {instance.verbal_name}"
create_organization_log(
instance.organization,
None,
OrganizationLogType.TYPE_CHANNEL_FILTER_CREATED,
description,
)
write_resource_insight_log(instance=default_filter, author=instance.author, event=EntityEvent.CREATED)
TEN_MINUTES = 600 # this is timeout for cloud heartbeats
if instance.is_available_for_integration_heartbeat:
IntegrationHeartBeat.objects.create(alert_receive_channel=instance, timeout_seconds=TEN_MINUTES)
description = f"Heartbeat for integration {instance.verbal_name} was created"
create_organization_log(
instance.organization, None, OrganizationLogType.TYPE_HEARTBEAT_CREATED, description
)
heartbeat = IntegrationHeartBeat.objects.create(alert_receive_channel=instance, timeout_seconds=TEN_MINUTES)
write_resource_insight_log(instance=heartbeat, author=instance.author, event=EntityEvent.CREATED)
if instance.integration == AlertReceiveChannel.INTEGRATION_GRAFANA_ALERTING:
if created:


@@ -129,45 +129,57 @@ class ChannelFilter(OrderedModel):
else:
return self.slack_channel_id
@property
def repr_settings_for_client_side_logging(self):
"""
Example of execution:
term: .*, order: 0, slack notification allowed: Yes, telegram notification allowed: Yes,
slack channel: without_amixr_general_channel, telegram channel: default
"""
result = (
f"term: {self.str_for_clients}, order: {self.order}, slack notification allowed: "
f"{'Yes' if self.notify_in_slack else 'No'}, telegram notification allowed: "
f"{'Yes' if self.notify_in_telegram else 'No'}"
)
if self.notification_backends:
for backend_id, backend in self.notification_backends.items():
result += f", {backend_id} notification allowed: {'Yes' if backend.get('enabled') else 'No'}"
slack_channel = None
if self.slack_channel_id:
SlackChannel = apps.get_model("slack", "SlackChannel")
sti = self.alert_receive_channel.organization.slack_team_identity
slack_channel = SlackChannel.objects.filter(slack_team_identity=sti, slack_id=self.slack_channel_id).first()
result += f", slack channel: {slack_channel.name if slack_channel else 'default'}"
result += f", telegram channel: {self.telegram_channel.channel_name if self.telegram_channel else 'default'}"
if self.notification_backends:
for backend_id, backend in self.notification_backends.items():
channel = backend.get("channel_id") or "default"
result += f", {backend_id} channel: {channel}"
result += f", escalation chain: {self.escalation_chain.name if self.escalation_chain else 'not selected'}"
return result
@property
def str_for_clients(self):
if self.filtering_term is None:
return "default"
return str(self.filtering_term).replace("`", "")
@property
def verbal_name_for_clients(self):
return "default route" if self.is_default else f"route `{self.str_for_clients}`"
def send_demo_alert(self):
integration = self.alert_receive_channel
integration.send_demo_alert(force_route_id=self.pk)
# Insight logs
@property
def insight_logs_type_verbal(self):
return "route"
@property
def insight_logs_verbal(self):
return f"{self.str_for_clients} for {self.alert_receive_channel.insight_logs_verbal}"
@property
def insight_logs_serialized(self):
result = {
"filtering_term": self.str_for_clients,
"order": self.order,
"slack_notification_enabled": self.notify_in_slack,
"telegram_notification_enabled": self.notify_in_telegram,
# TODO: use names instead of pks, it's needed to rework messaging backends for that
}
# TODO: use names instead of pks, it's needed to rework messaging backends for that
if self.slack_channel_id:
if self.slack_channel_id:
SlackChannel = apps.get_model("slack", "SlackChannel")
sti = self.alert_receive_channel.organization.slack_team_identity
slack_channel = SlackChannel.objects.filter(
slack_team_identity=sti, slack_id=self.slack_channel_id
).first()
result["slack_channel"] = slack_channel.name
if self.telegram_channel:
result["telegram_channel"] = self.telegram_channel.public_primary_key
if self.escalation_chain:
result["escalation_chain"] = self.escalation_chain.insight_logs_verbal
result["escalation_chain_id"] = self.escalation_chain.public_primary_key
if self.notification_backends:
for backend_id, backend in self.notification_backends.items():
channel = backend.get("channel_id") or "default"
result[backend_id] = channel
return result
@property
def insight_logs_metadata(self):
return {
"integration": self.alert_receive_channel.insight_logs_verbal,
"integration_id": self.alert_receive_channel.public_primary_key,
}


@@ -94,19 +94,6 @@ class CustomButton(models.Model):
def hard_delete(self):
super().delete()
@property
def repr_settings_for_client_side_logging(self):
"""
Example of execution:
name: example, team: example, webhook: https://example.com, user: None, password: None,
authorization header: None, data: None
"""
return (
f"name: {self.name}, team: {self.team.name if self.team else 'No team'}, webhook: {self.webhook}, "
f"user: {self.user}, password: {self.password}, authorization header: {self.authorization_header}, "
f"data: {self.data}, forward_whole_payload {self.forward_whole_payload}"
)
def build_post_kwargs(self, alert):
post_kwargs = {}
if self.user and self.password:
@@ -148,6 +135,44 @@
"""
return json.dumps(string)[1:-1]
# Insight logs
@property
def insight_logs_type_verbal(self):
return "outgoing_webhook"
@property
def insight_logs_verbal(self):
return self.name
@property
def insight_logs_serialized(self):
result = {
"name": self.name,
"webhook": self.webhook,
"user": self.user,
"password": self.password,
"authorization_header": self.authorization_header,
"data": self.data,
"forward_whole_payload": self.forward_whole_payload,
}
if self.team:
result["team"] = self.team.name
result["team_id"] = self.team.public_primary_key
else:
result["team"] = "General"
return result
@property
def insight_logs_metadata(self):
result = {}
if self.team:
result["team"] = self.team.name
result["team_id"] = self.team.public_primary_key
else:
result["team"] = "General"
return result
class EscapeDoubleQuotesDict(dict):
"""


@@ -46,10 +46,6 @@ class EscalationChain(models.Model):
def __str__(self):
return f"{self.pk}: {self.name}"
@property
def repr_settings_for_client_side_logging(self):
return f"name: {self.name}, team: {self.team.name if self.team else 'No team'}"
def make_copy(self, copy_name: str):
with transaction.atomic():
copied_chain = EscalationChain.objects.create(
@@ -68,3 +64,35 @@ class EscalationChain(models.Model):
escalation_policy.save()
escalation_policy.notify_to_users_queue.set(notify_to_users_queue)
return copied_chain
# Insight logs
@property
def insight_logs_type_verbal(self):
return "escalation_chain"
@property
def insight_logs_verbal(self):
return self.name
@property
def insight_logs_serialized(self):
result = {
"name": self.name,
}
if self.team:
result["team"] = self.team.name
result["team_id"] = self.team.public_primary_key
else:
result["team"] = "General"
return result
@property
def insight_logs_metadata(self):
result = {}
if self.team:
result["team"] = self.team.name
result["team_id"] = self.team.public_primary_key
else:
result["team"] = "General"
return result
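The `team`/`team_id` block recurs verbatim in every `insight_logs_serialized` and `insight_logs_metadata` property added by this diff. As a design note, it could be factored into one helper; the sketch below is hypothetical (the helper name and the duck-typed `team` attribute are assumptions, not code from the diff):

```python
def team_insight_fields(obj):
    """Serialize the team block the way the insight-log properties do:
    team name plus its public key, or the literal "General" fallback."""
    if getattr(obj, "team", None):
        return {"team": obj.team.name, "team_id": obj.team.public_primary_key}
    return {"team": "General"}
```

Each property body would then shrink to `result.update(team_insight_fields(self))`, keeping the fallback consistent across integrations, webhooks, and escalation chains.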


@@ -299,47 +299,6 @@ class EscalationPolicy(OrderedModel):
def step_type_verbal(self):
return self.STEP_CHOICES[self.step][1] if self.step is not None else "Empty"
@property
def repr_settings_for_client_side_logging(self):
"""
Example of execution:
step: 'Notify multiple Users', order: 0, important: No, users: Alex, Bob
Another example:
step: 'Continue escalation only if time is from', order: 4, from time: 09:40:00 (UTC), to time: 15:40:00 (UTC)
"""
result = f"step: '{self.step_type_verbal}', order: {self.order}"
if self.step not in EscalationPolicy.STEPS_WITH_NO_IMPORTANT_VERSION_SET:
result += f", important: {'Yes' if self.step in EscalationPolicy.IMPORTANT_STEPS_SET else 'No'}"
if self.step == EscalationPolicy.STEP_WAIT:
result += f", wait: {self.get_wait_delay_display() if self.wait_delay else 'default'}"
elif self.step in [EscalationPolicy.STEP_NOTIFY_GROUP, EscalationPolicy.STEP_NOTIFY_GROUP_IMPORTANT]:
result += f", user group: {self.notify_to_group.name if self.notify_to_group else 'not selected'}"
elif self.step in [EscalationPolicy.STEP_NOTIFY_SCHEDULE, EscalationPolicy.STEP_NOTIFY_SCHEDULE_IMPORTANT]:
result += f", on-call schedule: {self.notify_schedule.name if self.notify_schedule else 'not selected'}"
elif self.step == EscalationPolicy.STEP_TRIGGER_CUSTOM_BUTTON:
result += f", action: {self.custom_button_trigger.name if self.custom_button_trigger else 'not selected'}"
elif self.step in [
EscalationPolicy.STEP_NOTIFY_USERS_QUEUE,
EscalationPolicy.STEP_NOTIFY_MULTIPLE_USERS,
EscalationPolicy.STEP_NOTIFY_MULTIPLE_USERS_IMPORTANT,
]:
if self.notify_to_users_queue:
users_verbal = ", ".join([user.username for user in self.sorted_users_queue])
else:
users_verbal = "not selected"
result += f", users: {users_verbal}"
elif self.step == EscalationPolicy.STEP_NOTIFY_IF_TIME:
if self.from_time:
from_time_verbal = self.from_time.isoformat() + " (UTC)"
else:
from_time_verbal = "not selected"
if self.to_time:
to_time_verbal = self.to_time.isoformat() + " (UTC)"
else:
to_time_verbal = "not selected"
result += f", from time: {from_time_verbal}, to time: {to_time_verbal}"
return result
@property
def sorted_users_queue(self):
return sorted(self.notify_to_users_queue.all(), key=lambda user: (user.username or "", user.pk))
@@ -359,3 +318,57 @@ class EscalationPolicy(OrderedModel):
step_name = step_choice[1]
break
return step_name
# Insight logs
@property
def insight_logs_type_verbal(self):
return "escalation_policy"
@property
def insight_logs_verbal(self):
return f"Escalation Policy {self.order} in {self.escalation_chain.insight_logs_verbal}"
@property
def insight_logs_serialized(self):
result = {
"type": self.step_type_verbal,
"order": self.order,
}
if self.step == EscalationPolicy.STEP_WAIT:
if self.wait_delay:
result["wait_delay"] = self.get_wait_delay_display()
elif self.step in [EscalationPolicy.STEP_NOTIFY_GROUP, EscalationPolicy.STEP_NOTIFY_GROUP_IMPORTANT]:
if self.notify_to_group:
result["user_group"] = self.notify_to_group.name
result["user_group_id"] = self.notify_to_group.public_primary_key
elif self.step in [EscalationPolicy.STEP_NOTIFY_SCHEDULE, EscalationPolicy.STEP_NOTIFY_SCHEDULE_IMPORTANT]:
if self.notify_schedule:
result["on-call_schedule"] = self.notify_schedule.insight_logs_verbal
result["on-call_schedule_id"] = self.notify_schedule.public_primary_key
elif self.step == EscalationPolicy.STEP_TRIGGER_CUSTOM_BUTTON:
if self.custom_button_trigger:
result["outgoing_webhook"] = self.custom_button_trigger.insight_logs_verbal
result["outgoing_webhook_id"] = self.custom_button_trigger.public_primary_key
elif self.step in [
EscalationPolicy.STEP_NOTIFY_USERS_QUEUE,
EscalationPolicy.STEP_NOTIFY_MULTIPLE_USERS,
EscalationPolicy.STEP_NOTIFY_MULTIPLE_USERS_IMPORTANT,
]:
if self.notify_to_users_queue:
result["notify_users"] = [user.username for user in self.sorted_users_queue]
result["notify_users_ids"] = [user.public_primary_key for user in self.sorted_users_queue]
elif self.step == EscalationPolicy.STEP_NOTIFY_IF_TIME:
if self.from_time:
result["from_time"] = self.from_time.isoformat() + " (UTC)"
if self.to_time:
result["to_time"] = self.to_time.isoformat() + " (UTC)"
return result
@property
def insight_logs_metadata(self):
return {
"escalation_chain": self.escalation_chain.insight_logs_verbal,
"escalation_chain_id": self.escalation_chain.public_primary_key,
}


@@ -7,8 +7,8 @@ from django.db import models, transaction
from django.utils import timezone
from apps.slack.scenarios.scenario_step import ScenarioStep
from apps.user_management.organization_log_creator import create_organization_log
from common.exceptions import MaintenanceCouldNotBeStartedError
from common.insight_log import MaintenanceEvent, write_maintenance_insight_log
class MaintainableObject(models.Model):
@@ -82,7 +82,6 @@ class MaintainableObject(models.Model):
AlertGroup = apps.get_model("alerts", "AlertGroup")
AlertReceiveChannel = apps.get_model("alerts", "AlertReceiveChannel")
Alert = apps.get_model("alerts", "Alert")
OrganizationLogRecord = apps.get_model("base", "OrganizationLogRecord")
with transaction.atomic():
_self = self.__class__.objects.select_for_update().get(pk=self.pk)
@@ -105,6 +104,7 @@
organization=organization,
team=team,
integration=AlertReceiveChannel.INTEGRATION_MAINTENANCE,
author=user,
)
maintenance_uuid = _self.start_disable_maintenance_task(maintenance_duration)
@@ -152,11 +152,7 @@ class MaintainableObject(models.Model):
},
)
alert.save()
# create team log
log_type, object_verbal = OrganizationLogRecord.get_log_type_and_maintainable_object_verbal(self, mode, verbal)
description = f"{self.get_maintenance_mode_display()} of {object_verbal} started for {duration_verbal}"
create_organization_log(organization, user, log_type, description)
write_maintenance_insight_log(self, user, MaintenanceEvent.STARTED)
if mode == AlertReceiveChannel.MAINTENANCE:
self.send_maintenance_incident(organization, group, alert)
self.notify_about_maintenance_action(


@@ -30,13 +30,34 @@ def call_ack_url(ack_url, alert_group_pk, channel, http_method="GET"):
else None
)
text = "{}".format(debug_message)
footer = "{}".format(info_message)
blocks = [
{
"type": "section",
"block_id": "alert",
"text": {
"type": "mrkdwn",
"text": text,
},
},
{"type": "divider"},
{
"type": "section",
"block_id": "alert",
"text": {
"type": "mrkdwn",
"text": footer,
},
},
]
if channel is not None:
result = sc.api_call(
"chat.postMessage",
channel=channel,
attachments=[
{"callback_id": "alert", "text": "{}".format(debug_message), "footer": "{}".format(info_message)},
],
text=text,
blocks=blocks,
thread_ts=alert_group.slack_message.slack_id,
mrkdwn=True,
)
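The hunk above migrates from Slack's legacy `attachments` to Block Kit `blocks`. The repeated section dict can be built by a small helper; this sketch is illustrative (the helper name is mine, not from the diff):

```python
def build_alert_blocks(debug_message, info_message):
    """Build the Block Kit layout used above: alert text,
    a divider, then the footer text as a second section."""
    def section(text):
        return {
            "type": "section",
            "block_id": "alert",
            "text": {"type": "mrkdwn", "text": text},
        }

    return [section(debug_message), {"type": "divider"}, section(info_message)]
```

Note that the diff reuses `block_id: "alert"` for both sections; Slack recommends unique `block_id` values within a message, so distinct ids would be safer in practice.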


@@ -4,8 +4,8 @@ from django.db import transaction
from django.db.models import ExpressionWrapper, F, fields
from django.utils import timezone
from apps.user_management.organization_log_creator import create_organization_log
from common.custom_celery_tasks import shared_dedicated_queue_retry_task
from common.insight_log import MaintenanceEvent, write_maintenance_insight_log
from .task_logger import task_logger
@@ -15,7 +15,6 @@ from .task_logger import task_logger
)
def disable_maintenance(*args, **kwargs):
AlertGroup = apps.get_model("alerts", "AlertGroup")
OrganizationLogRecord = apps.get_model("base", "OrganizationLogRecord")
User = apps.get_model("user_management", "User")
Organization = apps.get_model("user_management", "Organization")
user = None
@@ -25,7 +24,6 @@ def disable_maintenance(*args, **kwargs):
user = User.objects.get(pk=user_id)
force = kwargs.get("force", False)
with transaction.atomic():
if "alert_receive_channel_id" in kwargs:
AlertReceiveChannel = apps.get_model("alerts", "AlertReceiveChannel")
@@ -52,23 +50,8 @@
if object_under_maintenance is not None and (
disable_maintenance.request.id == object_under_maintenance.maintenance_uuid or force
):
verbal = object_under_maintenance.get_verbal()
log_type, object_verbal = OrganizationLogRecord.get_log_type_and_maintainable_object_verbal(
object_under_maintenance,
object_under_maintenance.maintenance_mode,
verbal,
stopped=True,
)
description = (
f"{object_under_maintenance.get_maintenance_mode_display()} of {object_verbal} "
f"stopped{' by user' if user else ''}"
)
organization = (
object_under_maintenance
if isinstance(object_under_maintenance, Organization)
else object_under_maintenance.organization
)
create_organization_log(organization, user, log_type, description)
organization = object_under_maintenance.get_organization()
write_maintenance_insight_log(object_under_maintenance, user, MaintenanceEvent.FINISHED)
if object_under_maintenance.maintenance_mode == object_under_maintenance.MAINTENANCE:
mode_verbal = "Maintenance"
maintenance_incident = AlertGroup.all_objects.get(
@@ -82,7 +65,7 @@ def disable_maintenance(*args, **kwargs):
if organization.slack_team_identity:
transaction.on_commit(
lambda: object_under_maintenance.notify_about_maintenance_action(
f"{mode_verbal} of {verbal} finished."
f"{mode_verbal} of {object_under_maintenance.get_verbal()} finished."
)
)


@@ -58,7 +58,7 @@ def notify_group_task(alert_group_pk, escalation_policy_snapshot_order=None):
if not user.is_notification_allowed:
continue
notification_policies = UserNotificationPolicy.objects.get_or_create_for_user(
notification_policies = UserNotificationPolicy.objects.filter(
user=user,
important=escalation_policy_step == EscalationPolicy.STEP_NOTIFY_GROUP_IMPORTANT,
)


@@ -73,9 +73,12 @@ def notify_user_task(
user_has_notification = UserHasNotification.objects.filter(pk=user_has_notification.pk).select_for_update()[0]
if previous_notification_policy_pk is None:
notification_policy = UserNotificationPolicy.objects.get_or_create_for_user(
user=user, important=important
).first()
notification_policy = UserNotificationPolicy.objects.filter(user=user, important=important).first()
if notification_policy is None:
task_logger.info(
f"notify_user_task: Failed to notify. No notification policies. user_id={user_pk} alert_group_id={alert_group_pk} important={important}"
)
return
# Here we collect a brief overview of notification steps configured for user to send it to thread.
collected_steps_ids = []
next_notification_policy = notification_policy.next()
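Both notify tasks in this diff switch from `get_or_create_for_user` to a plain `filter`, so a user with no notification policies is now skipped with a log line instead of having default policies created mid-escalation. A framework-free sketch of that control flow (function and field names here are illustrative, not OnCall's API):

```python
import logging

logger = logging.getLogger(__name__)


def pick_notification_policy(policies, user_id, important=False):
    """Return the first matching policy or None, logging the miss --
    the behavior the hunk above introduces (no implicit creation)."""
    matching = [
        p for p in policies
        if p["user_id"] == user_id and p["important"] == important
    ]
    if not matching:
        logger.info(
            "notify_user_task: no notification policies for user_id=%s important=%s",
            user_id, important,
        )
        return None
    return matching[0]
```

The early `return None` keeps escalation moving past unconfigured users rather than raising or mutating their settings.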


@@ -33,29 +33,6 @@ def test_channel_filter_select_filter(make_organization, make_alert_receive_chan
assert satisfied_filter == channel_filter
@pytest.mark.django_db
def test_channel_filter_notification_backends_repr(make_organization, make_alert_receive_channel, make_channel_filter):
organization = make_organization()
alert_receive_channel = make_alert_receive_channel(organization)
# extra backend is enabled
channel_filter = make_channel_filter(
alert_receive_channel,
notification_backends={"BACKEND": {"channel_id": "foobar", "enabled": True}},
)
assert "BACKEND notification allowed: Yes" in channel_filter.repr_settings_for_client_side_logging
assert "BACKEND channel: foobar" in channel_filter.repr_settings_for_client_side_logging
# backend is disabled
channel_filter_disabled_backend = make_channel_filter(
alert_receive_channel,
notification_backends={"BACKEND": {"channel_id": "foobar", "enabled": False}},
)
assert "BACKEND notification allowed: No" in channel_filter_disabled_backend.repr_settings_for_client_side_logging
assert "BACKEND channel: foobar" in channel_filter_disabled_backend.repr_settings_for_client_side_logging
@mock.patch("apps.integrations.tasks.create_alert.apply_async", return_value=None)
@pytest.mark.django_db
def test_send_demo_alert(


@@ -22,7 +22,7 @@ def test_start_maintenance_integration(
organization, user = maintenance_test_setup
alert_receive_channel = make_alert_receive_channel(
organization, integration=AlertReceiveChannel.INTEGRATION_GRAFANA
organization, integration=AlertReceiveChannel.INTEGRATION_GRAFANA, author=user
)
mode = AlertReceiveChannel.MAINTENANCE
duration = AlertReceiveChannel.DURATION_ONE_HOUR.seconds
@@ -43,11 +43,13 @@ def test_start_maintenance_integration_multiple_previous_instances(
organization, user = maintenance_test_setup
alert_receive_channel = make_alert_receive_channel(
organization, integration=AlertReceiveChannel.INTEGRATION_GRAFANA
organization, integration=AlertReceiveChannel.INTEGRATION_GRAFANA, author=user
)
# 2 maintenance integrations were created in the past
for i in range(2):
AlertReceiveChannel.create(organization=organization, integration=AlertReceiveChannel.INTEGRATION_MAINTENANCE)
AlertReceiveChannel.create(
organization=organization, integration=AlertReceiveChannel.INTEGRATION_MAINTENANCE, author=user
)
mode = AlertReceiveChannel.MAINTENANCE
duration = AlertReceiveChannel.DURATION_ONE_HOUR.seconds
@@ -68,7 +70,7 @@ def test_maintenance_integration_will_not_start_twice(
organization, user = maintenance_test_setup
alert_receive_channel = make_alert_receive_channel(
organization, integration=AlertReceiveChannel.INTEGRATION_GRAFANA
organization, integration=AlertReceiveChannel.INTEGRATION_GRAFANA, author=user
)
mode = AlertReceiveChannel.MAINTENANCE
duration = AlertReceiveChannel.DURATION_ONE_HOUR.seconds


@@ -147,7 +147,7 @@ class CurrentOrganizationSerializer(OrganizationSerializer):
else:
verbal_time_saved_by_amixr = None
res = {
result = {
"grouped_percent": obj.cached_grouped_percent,
"alerts_count": obj.cached_alerts_count,
"noise_reduction": obj.cached_noise_reduction,
@@ -155,7 +155,7 @@ class CurrentOrganizationSerializer(OrganizationSerializer):
"verbal_time_saved_by_amixr": verbal_time_saved_by_amixr,
}
return res
return result
def update(self, instance, validated_data):
current_archive_date = instance.archive_alerts_from


@@ -1,38 +0,0 @@
from emoji import emojize
from rest_framework import serializers
from apps.base.models import OrganizationLogRecord
from common.api_helpers.mixins import EagerLoadingMixin
class OrganizationLogRecordSerializer(EagerLoadingMixin, serializers.ModelSerializer):
id = serializers.CharField(read_only=True, source="public_primary_key")
author = serializers.SerializerMethodField()
description = serializers.SerializerMethodField()
class Meta:
model = OrganizationLogRecord
fields = [
"id",
"author",
"created_at",
"description",
"labels",
]
read_only_fields = fields.copy()
PREFETCH_RELATED = [
"author__organization",
# "author__slack_user_identities__slack_team_identity__amixr_team",
]
SELECT_RELATED = ["author", "organization"]
def get_author(self, obj):
if obj.author:
user_data = obj.author.short()
return user_data
def get_description(self, obj):
return emojize(obj.description, use_aliases=True).replace("\n", "<br>")


@@ -1,242 +0,0 @@
from unittest.mock import patch
import pytest
from django.urls import reverse
from rest_framework import status
from rest_framework.response import Response
from rest_framework.test import APIClient
from apps.base.models import OrganizationLogRecord
from apps.user_management.organization_log_creator import OrganizationLogType
from common.constants.role import Role
@pytest.mark.django_db
@pytest.mark.parametrize(
"role,expected_status",
[
(Role.ADMIN, status.HTTP_200_OK),
(Role.EDITOR, status.HTTP_200_OK),
(Role.VIEWER, status.HTTP_200_OK),
],
)
def test_organization_log_records_permissions(
make_organization_and_user_with_plugin_token, make_user_auth_headers, role, expected_status
):
_, user, token = make_organization_and_user_with_plugin_token(role)
client = APIClient()
url = reverse("api-internal:organization_log-list")
with patch(
"apps.api.views.organization_log_record.OrganizationLogRecordView.list",
return_value=Response(
status=status.HTTP_200_OK,
),
):
response = client.get(url, format="json", **make_user_auth_headers(user, token))
assert response.status_code == expected_status
@pytest.mark.django_db
@pytest.mark.parametrize(
"role,expected_status",
[
(Role.ADMIN, status.HTTP_200_OK),
(Role.EDITOR, status.HTTP_200_OK),
(Role.VIEWER, status.HTTP_200_OK),
],
)
def test_organization_log_records_filters_permissions(
make_organization_and_user_with_plugin_token, make_user_auth_headers, role, expected_status
):
_, user, token = make_organization_and_user_with_plugin_token(role)
client = APIClient()
url = reverse("api-internal:organization_log-filters")
with patch(
"apps.api.views.organization_log_record.OrganizationLogRecordView.filters",
return_value=Response(
status=status.HTTP_200_OK,
),
):
response = client.get(url, format="json", **make_user_auth_headers(user, token))
assert response.status_code == expected_status
@pytest.mark.django_db
@pytest.mark.parametrize(
"role,expected_status",
[
(Role.ADMIN, status.HTTP_200_OK),
(Role.EDITOR, status.HTTP_200_OK),
(Role.VIEWER, status.HTTP_200_OK),
],
)
def test_organization_log_records_label_options_permissions(
make_organization_and_user_with_plugin_token, make_user_auth_headers, role, expected_status
):
_, user, token = make_organization_and_user_with_plugin_token(role)
client = APIClient()
url = reverse("api-internal:organization_log-label-options")
with patch(
"apps.api.views.organization_log_record.OrganizationLogRecordView.label_options",
return_value=Response(
status=status.HTTP_200_OK,
),
):
response = client.get(url, format="json", **make_user_auth_headers(user, token))
assert response.status_code == expected_status
@pytest.mark.django_db
def test_get_filter_created_at(
make_organization_and_user_with_plugin_token,
make_organization_log_record,
make_user_auth_headers,
):
organization, user, token = make_organization_and_user_with_plugin_token()
client = APIClient()
make_organization_log_record(organization, user)
url = reverse("api-internal:organization_log-list")
response = client.get(
url + "?created_at=1970-01-01T00:00:00/2099-01-01T23:59:59",
format="json",
**make_user_auth_headers(user, token),
)
assert response.status_code == status.HTTP_200_OK
assert len(response.data["results"]) == 1
@pytest.mark.django_db
def test_get_filter_created_at_empty_result(
make_organization_and_user_with_plugin_token,
make_organization_log_record,
make_user_auth_headers,
):
organization, user, token = make_organization_and_user_with_plugin_token()
client = APIClient()
make_organization_log_record(organization, user)
url = reverse("api-internal:organization_log-list")
response = client.get(
f"{url}?created_at=1970-01-01T00:00:00/1970-01-01T23:59:59",
format="json",
**make_user_auth_headers(user, token),
)
assert response.status_code == status.HTTP_200_OK
assert len(response.data["results"]) == 0
@pytest.mark.django_db
def test_get_filter_created_at_invalid_format(
make_organization_and_user_with_plugin_token,
make_user_auth_headers,
):
organization, user, token = make_organization_and_user_with_plugin_token()
client = APIClient()
url = reverse("api-internal:organization_log-list")
response = client.get(f"{url}?created_at=invalid_date_format", format="json", **make_user_auth_headers(user, token))
assert response.status_code == status.HTTP_400_BAD_REQUEST
@pytest.mark.django_db
def test_get_filter_by_labels(
make_organization_and_user_with_plugin_token,
make_organization_log_record,
make_user_auth_headers,
):
organization, user, token = make_organization_and_user_with_plugin_token()
client = APIClient()
# create log that contains LABEL_SLACK and LABEL_DEFAULT_CHANNEL
make_organization_log_record(organization, user, type=OrganizationLogType.TYPE_SLACK_DEFAULT_CHANNEL_CHANGED)
# create log that contains LABEL_SLACK but does not contain LABEL_DEFAULT_CHANNEL
make_organization_log_record(organization, user, type=OrganizationLogType.TYPE_SLACK_WORKSPACE_DISCONNECTED)
# create log that does not contain labels from search
make_organization_log_record(organization, user, type=OrganizationLogType.TYPE_INTEGRATION_CREATED)
url = reverse("api-internal:organization_log-list")
# search by one label: LABEL_SLACK
response = client.get(
f"{url}?labels={OrganizationLogRecord.LABEL_SLACK}", format="json", **make_user_auth_headers(user, token)
)
assert response.status_code == status.HTTP_200_OK
assert len(response.data["results"]) == 2
response_log_labels = [log["labels"] for log in response.data["results"]]
for labels in response_log_labels:
assert OrganizationLogRecord.LABEL_SLACK in labels
# search by two labels: LABEL_SLACK and LABEL_DEFAULT_CHANNEL
response = client.get(
f"{url}?labels={OrganizationLogRecord.LABEL_SLACK}&labels={OrganizationLogRecord.LABEL_DEFAULT_CHANNEL}",
format="json",
**make_user_auth_headers(user, token),
)
assert response.status_code == status.HTTP_200_OK
assert len(response.data["results"]) == 1
response_log_labels = [log["labels"] for log in response.data["results"]]
for labels in response_log_labels:
assert OrganizationLogRecord.LABEL_SLACK in labels
assert OrganizationLogRecord.LABEL_DEFAULT_CHANNEL in labels
@pytest.mark.django_db
def test_get_filter_author(
make_organization_and_user_with_plugin_token,
make_user_for_organization,
make_organization_log_record,
make_user_auth_headers,
):
client = APIClient()
organization, first_user, token = make_organization_and_user_with_plugin_token()
second_user = make_user_for_organization(organization)
make_organization_log_record(organization, first_user)
url = reverse("api-internal:organization_log-list")
first_response = client.get(
f"{url}?author={first_user.public_primary_key}", format="json", **make_user_auth_headers(first_user, token)
)
assert first_response.status_code == status.HTTP_200_OK
assert len(first_response.data["results"]) == 1
second_response = client.get(
f"{url}?author={second_user.public_primary_key}", format="json", **make_user_auth_headers(first_user, token)
)
assert second_response.status_code == status.HTTP_200_OK
assert len(second_response.data["results"]) == 0
@pytest.mark.django_db
def test_get_filter_author_multiple_values(
make_organization_and_user_with_plugin_token,
make_user_for_organization,
make_organization_log_record,
make_user_auth_headers,
):
client = APIClient()
organization, first_user, token = make_organization_and_user_with_plugin_token()
second_user = make_user_for_organization(organization)
third_user = make_user_for_organization(organization)
make_organization_log_record(organization, first_user)
make_organization_log_record(organization, second_user)
url = reverse("api-internal:organization_log-list")
first_response = client.get(
f"{url}?author={first_user.public_primary_key}&author={second_user.public_primary_key}",
format="json",
**make_user_auth_headers(first_user, token),
)
assert first_response.status_code == status.HTTP_200_OK
assert len(first_response.data["results"]) == 2
second_response = client.get(
f"{url}?author={first_user.public_primary_key}&author={third_user.public_primary_key}",
format="json",
**make_user_auth_headers(first_user, token),
)
assert second_response.status_code == status.HTTP_200_OK
assert len(second_response.data["results"]) == 1


@@ -912,7 +912,7 @@ def test_merging_same_shift_events(
"is_gap": False,
"priority_level": 1,
"start": start_date + timezone.timedelta(hours=10),
"users": [user_a.username, user_b.username],
"users": sorted([user_a.username, user_b.username]),
"missing_users": [user_c.username],
}
]
@@ -929,7 +929,7 @@ def test_merging_same_shift_events(
"is_gap": e["is_gap"],
"priority_level": e["priority_level"],
"start": e["start"],
"users": [u["display_name"] for u in e["users"]] if e["users"] else None,
"users": sorted([u["display_name"] for u in e["users"]]) if e["users"] else None,
"missing_users": e["missing_users"],
}
for e in response.data["events"]
@@ -950,7 +950,7 @@ def test_merging_same_shift_events(
"is_gap": e["is_gap"],
"priority_level": e["priority_level"],
"start": e["start"],
"users": [u["display_name"] for u in e["users"]] if e["users"] else None,
"users": sorted([u["display_name"] for u in e["users"]]) if e["users"] else None,
"missing_users": e["missing_users"],
}
for e in response.data["events"]


@@ -25,7 +25,6 @@ from .views.organization import (
GetTelegramVerificationCode,
SetGeneralChannel,
)
from .views.organization_log_record import OrganizationLogRecordView
from .views.preview_template_options import PreviewTemplateOptionsView
from .views.public_api_tokens import PublicApiTokenView
from .views.resolution_note import ResolutionNoteView
@@ -65,7 +64,6 @@ router.register(r"telegram_channels", TelegramChannelViewSet, basename="telegram
router.register(r"slack_channels", SlackChannelView, basename="slack_channel")
router.register(r"user_groups", UserGroupViewSet, basename="user_group")
router.register(r"heartbeats", IntegrationHeartBeatView, basename="integration_heartbeat")
router.register(r"organization_logs", OrganizationLogRecordView, basename="organization_log")
router.register(r"tokens", PublicApiTokenView, basename="api_token")
router.register(r"live_settings", LiveSettingViewSet, basename="live_settings")
router.register(r"oncall_shifts", OnCallShiftView, basename="oncall_shifts")


@@ -17,7 +17,6 @@ from apps.api.serializers.alert_receive_channel import (
)
from apps.api.throttlers import DemoAlertThrottler
from apps.auth_token.auth import PluginAuthentication
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.exceptions import BadRequest
from common.api_helpers.mixins import (
FilterSerializerMixin,
@@ -26,6 +25,7 @@ from common.api_helpers.mixins import (
UpdateSerializerMixin,
)
from common.exceptions import TeamCanNotBeChangedError, UnableToSendDemoAlert
from common.insight_log import EntityEvent, write_resource_insight_log
class AlertReceiveChannelFilter(filters.FilterSet):
@@ -96,21 +96,22 @@ class AlertReceiveChannelView(
return Response(data="invalid integration", status=status.HTTP_400_BAD_REQUEST)
def perform_update(self, serializer):
old_state = serializer.instance.repr_settings_for_client_side_logging
prev_state = serializer.instance.insight_logs_serialized
serializer.save()
new_state = serializer.instance.repr_settings_for_client_side_logging
description = f"Integration settings was changed from:\n{old_state}\nto:\n{new_state}"
create_organization_log(
serializer.instance.organization,
self.request.user,
OrganizationLogType.TYPE_INTEGRATION_CHANGED,
description,
new_state = serializer.instance.insight_logs_serialized
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
def perform_destroy(self, instance):
description = f"Integration {instance.verbal_name} was deleted"
create_organization_log(
instance.organization, self.request.user, OrganizationLogType.TYPE_INTEGRATION_DELETED, description
write_resource_insight_log(
instance=instance,
author=self.request.user,
event=EntityEvent.DELETED,
)
instance.delete()


@@ -5,8 +5,8 @@ from apps.alerts.models import AlertReceiveChannel
from apps.api.permissions import MODIFY_ACTIONS, READ_ACTIONS, ActionPermission, AnyRole, IsAdmin
from apps.api.serializers.alert_receive_channel import AlertReceiveChannelTemplatesSerializer
from apps.auth_token.auth import PluginAuthentication
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.mixins import PublicPrimaryKeyMixin
from common.insight_log import EntityEvent, write_resource_insight_log
class AlertReceiveChannelTemplateView(
@@ -35,18 +35,15 @@ class AlertReceiveChannelTemplateView(
def update(self, request, *args, **kwargs):
instance = self.get_object()
old_state = instance.repr_settings_for_client_side_logging
prev_state = instance.insight_logs_serialized
result = super().update(request, *args, **kwargs)
instance = self.get_object()
new_state = instance.repr_settings_for_client_side_logging
if new_state != old_state:
description = f"Integration settings was changed from:\n{old_state}\nto:\n{new_state}"
create_organization_log(
instance.organization,
self.request.user,
OrganizationLogType.TYPE_INTEGRATION_CHANGED,
description,
)
new_state = instance.insight_logs_serialized
write_resource_insight_log(
instance=instance,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
return result


@@ -15,10 +15,10 @@ from apps.api.serializers.channel_filter import (
from apps.api.throttlers import DemoAlertThrottler
from apps.auth_token.auth import PluginAuthentication
from apps.slack.models import SlackChannel
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.exceptions import BadRequest
from common.api_helpers.mixins import CreateSerializerMixin, PublicPrimaryKeyMixin, UpdateSerializerMixin
from common.exceptions import UnableToSendDemoAlert
from common.insight_log import EntityEvent, write_resource_insight_log
class ChannelFilterView(PublicPrimaryKeyMixin, CreateSerializerMixin, UpdateSerializerMixin, ModelViewSet):
@@ -59,70 +59,59 @@ class ChannelFilterView(PublicPrimaryKeyMixin, CreateSerializerMixin, UpdateSeri
return queryset
def destroy(self, request, *args, **kwargs):
user = request.user
instance = self.get_object()
if instance.is_default:
raise BadRequest(detail="Unable to delete default filter")
else:
alert_receive_channel = instance.alert_receive_channel
route_verbal = instance.verbal_name_for_clients.capitalize()
description = f"{route_verbal} for integration {alert_receive_channel.verbal_name} was deleted"
create_organization_log(
user.organization, user, OrganizationLogType.TYPE_CHANNEL_FILTER_DELETED, description
write_resource_insight_log(
instance=instance,
author=self.request.user,
event=EntityEvent.DELETED,
)
self.perform_destroy(instance)
return Response(status=status.HTTP_204_NO_CONTENT)
def perform_create(self, serializer):
user = self.request.user
serializer.save()
instance = serializer.instance
alert_receive_channel = instance.alert_receive_channel
route_verbal = instance.verbal_name_for_clients.capitalize()
description = f"{route_verbal} was created for integration {alert_receive_channel.verbal_name}"
create_organization_log(user.organization, user, OrganizationLogType.TYPE_CHANNEL_FILTER_CREATED, description)
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.CREATED,
)
def perform_update(self, serializer):
user = self.request.user
old_state = serializer.instance.repr_settings_for_client_side_logging
prev_state = serializer.instance.insight_logs_serialized
serializer.save()
new_state = serializer.instance.repr_settings_for_client_side_logging
alert_receive_channel = serializer.instance.alert_receive_channel
route_verbal = serializer.instance.verbal_name_for_clients
description = (
f"Settings for {route_verbal} of integration {alert_receive_channel.verbal_name} "
f"was changed from:\n{old_state}\nto:\n{new_state}"
new_state = serializer.instance.insight_logs_serialized
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
create_organization_log(user.organization, user, OrganizationLogType.TYPE_CHANNEL_FILTER_CHANGED, description)
@action(detail=True, methods=["put"])
def move_to_position(self, request, pk):
position = request.query_params.get("position", None)
if position is not None:
try:
source_filter = ChannelFilter.objects.get(public_primary_key=pk)
instance = ChannelFilter.objects.get(public_primary_key=pk)
except ChannelFilter.DoesNotExist:
raise BadRequest(detail="Channel filter does not exist")
try:
if source_filter.is_default:
if instance.is_default:
raise BadRequest(detail="Unable to change position for default filter")
user = self.request.user
old_state = source_filter.repr_settings_for_client_side_logging
prev_state = instance.insight_logs_serialized
instance.to(int(position))
new_state = instance.insight_logs_serialized
source_filter.to(int(position))
new_state = source_filter.repr_settings_for_client_side_logging
alert_receive_channel = source_filter.alert_receive_channel
route_verbal = source_filter.verbal_name_for_clients
description = (
f"Settings for {route_verbal} of integration {alert_receive_channel.verbal_name} "
f"was changed from:\n{old_state}\nto:\n{new_state}"
)
create_organization_log(
user.organization,
user,
OrganizationLogType.TYPE_CHANNEL_FILTER_CHANGED,
description,
write_resource_insight_log(
instance=instance,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
return Response(status=status.HTTP_200_OK)
except ValueError as e:


@@ -11,9 +11,9 @@ from apps.alerts.tasks.custom_button_result import custom_button_result
from apps.api.permissions import MODIFY_ACTIONS, READ_ACTIONS, ActionPermission, AnyRole, IsAdmin, IsAdminOrEditor
from apps.api.serializers.custom_button import CustomButtonSerializer
from apps.auth_token.auth import PluginAuthentication
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.exceptions import BadRequest
from common.api_helpers.mixins import PublicPrimaryKeyMixin
from common.insight_log import EntityEvent, write_resource_insight_log
class CustomButtonView(PublicPrimaryKeyMixin, ModelViewSet):
@@ -55,26 +55,30 @@ class CustomButtonView(PublicPrimaryKeyMixin, ModelViewSet):
def perform_create(self, serializer):
serializer.save()
instance = serializer.instance
organization = self.request.auth.organization
user = self.request.user
description = f"Custom action {instance.name} was created"
create_organization_log(organization, user, OrganizationLogType.TYPE_CUSTOM_ACTION_CREATED, description)
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.CREATED,
)
def perform_update(self, serializer):
organization = self.request.auth.organization
user = self.request.user
old_state = serializer.instance.repr_settings_for_client_side_logging
prev_state = serializer.instance.insight_logs_serialized
serializer.save()
new_state = serializer.instance.repr_settings_for_client_side_logging
description = f"Custom action {serializer.instance.name} was changed " f"from:\n{old_state}\nto:\n{new_state}"
create_organization_log(organization, user, OrganizationLogType.TYPE_CUSTOM_ACTION_CHANGED, description)
new_state = serializer.instance.insight_logs_serialized
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
def perform_destroy(self, instance):
organization = self.request.auth.organization
user = self.request.user
description = f"Custom action {instance.name} was deleted"
create_organization_log(organization, user, OrganizationLogType.TYPE_CUSTOM_ACTION_DELETED, description)
write_resource_insight_log(
instance=instance,
author=self.request.user,
event=EntityEvent.DELETED,
)
instance.delete()
@action(detail=True, methods=["post"])


@@ -10,9 +10,9 @@ from apps.alerts.models import EscalationChain
from apps.api.permissions import MODIFY_ACTIONS, READ_ACTIONS, ActionPermission, AnyRole, IsAdmin
from apps.api.serializers.escalation_chain import EscalationChainListSerializer, EscalationChainSerializer
from apps.auth_token.auth import PluginAuthentication
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.exceptions import BadRequest
from common.api_helpers.mixins import ListSerializerMixin, PublicPrimaryKeyMixin
from common.insight_log import EntityEvent, write_resource_insight_log
class EscalationChainViewSet(PublicPrimaryKeyMixin, ListSerializerMixin, viewsets.ModelViewSet):
@@ -56,45 +56,31 @@ class EscalationChainViewSet(PublicPrimaryKeyMixin, ListSerializerMixin, viewset
def perform_create(self, serializer):
serializer.save()
instance = serializer.instance
description = f"Escalation chain {instance.name} was created"
create_organization_log(
instance.organization,
self.request.user,
OrganizationLogType.TYPE_ESCALATION_CHAIN_CREATED,
description,
)
write_resource_insight_log(instance=serializer.instance, author=self.request.user, event=EntityEvent.CREATED)
def perform_destroy(self, instance):
write_resource_insight_log(
instance=instance,
author=self.request.user,
event=EntityEvent.DELETED,
)
instance.delete()
description = f"Escalation chain {instance.name} was deleted"
create_organization_log(
instance.organization,
self.request.user,
OrganizationLogType.TYPE_ESCALATION_CHAIN_DELETED,
description,
)
def perform_update(self, serializer):
instance = serializer.instance
old_state = instance.repr_settings_for_client_side_logging
prev_state = serializer.instance.insight_logs_serialized
serializer.save()
new_state = serializer.instance.insight_logs_serialized
new_state = instance.repr_settings_for_client_side_logging
description = f"Escalation chain {instance.name} was changed from:\n{old_state}\nto:\n{new_state}"
create_organization_log(
instance.organization,
self.request.user,
OrganizationLogType.TYPE_ESCALATION_CHAIN_CHANGED,
description,
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
@action(methods=["post"], detail=True)
def copy(self, request, pk):
user = request.user
name = request.data.get("name")
if name is None:
raise BadRequest(detail={"name": ["This field may not be null."]})
@@ -105,8 +91,11 @@ class EscalationChainViewSet(PublicPrimaryKeyMixin, ListSerializerMixin, viewset
obj = self.get_object()
copy = obj.make_copy(name)
serializer = self.get_serializer(copy)
description = f"Escalation chain {obj.name} was copied with new name {name}"
create_organization_log(copy.organization, user, OrganizationLogType.TYPE_CHANNEL_FILTER_CHANGED, description)
write_resource_insight_log(
instance=copy,
author=self.request.user,
event=EntityEvent.CREATED,
)
return Response(serializer.data)
@action(methods=["get"], detail=True)


@@ -14,9 +14,9 @@ from apps.api.serializers.escalation_policy import (
EscalationPolicyUpdateSerializer,
)
from apps.auth_token.auth import PluginAuthentication
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.exceptions import BadRequest
from common.api_helpers.mixins import CreateSerializerMixin, PublicPrimaryKeyMixin, UpdateSerializerMixin
from common.insight_log import EntityEvent, write_resource_insight_log
class EscalationPolicyView(PublicPrimaryKeyMixin, CreateSerializerMixin, UpdateSerializerMixin, ModelViewSet):
@@ -66,37 +66,31 @@ class EscalationPolicyView(PublicPrimaryKeyMixin, CreateSerializerMixin, UpdateS
def perform_create(self, serializer):
serializer.save()
instance = serializer.instance
organization = self.request.user.organization
user = self.request.user
description = (
f"Escalation step '{instance.step_type_verbal}' with order {instance.order} "
f"was created for escalation chain '{instance.escalation_chain.name}'"
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.CREATED,
)
create_organization_log(organization, user, OrganizationLogType.TYPE_ESCALATION_STEP_CREATED, description)
def perform_update(self, serializer):
organization = self.request.user.organization
user = self.request.user
old_state = serializer.instance.repr_settings_for_client_side_logging
prev_state = serializer.instance.insight_logs_serialized
serializer.save()
new_state = serializer.instance.repr_settings_for_client_side_logging
escalation_chain_name = serializer.instance.escalation_chain.name
new_state = serializer.instance.insight_logs_serialized
description = (
f"Settings for escalation step of escalation chain '{escalation_chain_name}' "
f"was changed from:\n{old_state}\nto:\n{new_state}"
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
create_organization_log(organization, user, OrganizationLogType.TYPE_ESCALATION_STEP_CHANGED, description)
def perform_destroy(self, instance):
organization = self.request.user.organization
user = self.request.user
description = (
f"Escalation step '{instance.step_type_verbal}' with order {instance.order} of "
f"of escalation chain '{instance.escalation_chain.name}' was deleted"
write_resource_insight_log(
instance=instance,
author=self.request.user,
event=EntityEvent.DELETED,
)
create_organization_log(organization, user, OrganizationLogType.TYPE_ESCALATION_STEP_DELETED, description)
instance.delete()
@action(detail=True, methods=["put"])
@@ -104,29 +98,22 @@ class EscalationPolicyView(PublicPrimaryKeyMixin, CreateSerializerMixin, UpdateS
position = request.query_params.get("position", None)
if position is not None:
try:
source_step = EscalationPolicy.objects.get(public_primary_key=pk)
instance = EscalationPolicy.objects.get(public_primary_key=pk)
except EscalationPolicy.DoesNotExist:
raise BadRequest(detail="Step does not exist")
try:
user = self.request.user
old_state = source_step.repr_settings_for_client_side_logging
prev_state = instance.insight_logs_serialized
position = int(position)
source_step.to(position)
instance.to(position)
new_state = instance.insight_logs_serialized
new_state = source_step.repr_settings_for_client_side_logging
escalation_chain_name = source_step.escalation_chain.name
description = (
f"Settings for escalation step of escalation chain '{escalation_chain_name}' "
f"was changed from:\n{old_state}\nto:\n{new_state}"
write_resource_insight_log(
instance=instance,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
create_organization_log(
user.organization,
user,
OrganizationLogType.TYPE_ESCALATION_STEP_CHANGED,
description,
)
return Response(status=status.HTTP_200_OK)
except ValueError as e:
raise BadRequest(detail=f"{e}")


@@ -7,8 +7,8 @@ from apps.api.permissions import MODIFY_ACTIONS, READ_ACTIONS, ActionPermission,
from apps.api.serializers.integration_heartbeat import IntegrationHeartBeatSerializer
from apps.auth_token.auth import PluginAuthentication
from apps.heartbeat.models import IntegrationHeartBeat
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.mixins import PublicPrimaryKeyMixin
from common.insight_log import EntityEvent, write_resource_insight_log
class IntegrationHeartBeatView(
@@ -45,29 +45,22 @@ class IntegrationHeartBeatView(
def perform_create(self, serializer):
serializer.save()
instance = serializer.instance
description = f"Heartbeat for integration {instance.alert_receive_channel.verbal_name} was created"
create_organization_log(
instance.alert_receive_channel.organization,
self.request.user,
OrganizationLogType.TYPE_HEARTBEAT_CREATED,
description,
write_resource_insight_log(
instance=instance,
author=self.request.user,
event=EntityEvent.CREATED,
)
def perform_update(self, serializer):
old_state = serializer.instance.repr_settings_for_client_side_logging
prev_state = serializer.instance.insight_logs_serialized
serializer.save()
new_state = serializer.instance.repr_settings_for_client_side_logging
alert_receive_channel = serializer.instance.alert_receive_channel
description = (
f"Settings for heartbeat of integration "
f"{alert_receive_channel.verbal_name} was changed "
f"from:\n{old_state}\nto:\n{new_state}"
)
create_organization_log(
alert_receive_channel.organization,
self.request.user,
OrganizationLogType.TYPE_HEARTBEAT_CHANGED,
description,
new_state = serializer.instance.insight_logs_serialized
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
@action(detail=False, methods=["get"])


@@ -10,10 +10,10 @@ from apps.api.permissions import MODIFY_ACTIONS, READ_ACTIONS, ActionPermission,
from apps.api.serializers.on_call_shifts import OnCallShiftSerializer, OnCallShiftUpdateSerializer
from apps.auth_token.auth import PluginAuthentication
from apps.schedules.models import CustomOnCallShift
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.mixins import PublicPrimaryKeyMixin, UpdateSerializerMixin
from common.api_helpers.paginators import FiftyPageSizePaginator
from common.api_helpers.utils import get_date_range_from_request
from common.insight_log import EntityEvent, write_resource_insight_log
class OnCallShiftView(PublicPrimaryKeyMixin, UpdateSerializerMixin, ModelViewSet):
@@ -52,31 +52,30 @@ class OnCallShiftView(PublicPrimaryKeyMixin, UpdateSerializerMixin, ModelViewSet
    def perform_create(self, serializer):
        serializer.save()
        instance = serializer.instance
-       organization = self.request.auth.organization
-       user = self.request.user
-       description = (
-           f"Custom on-call shift with params: {instance.repr_settings_for_client_side_logging} "
-           f"was created"  # todo
+       write_resource_insight_log(
+           instance=serializer.instance,
+           author=self.request.user,
+           event=EntityEvent.CREATED,
        )
-       create_organization_log(organization, user, OrganizationLogType.TYPE_ON_CALL_SHIFT_CREATED, description)

    def perform_update(self, serializer):
-       organization = self.request.auth.organization
-       user = self.request.user
-       old_state = serializer.instance.repr_settings_for_client_side_logging
+       prev_state = serializer.instance.insight_logs_serialized
        serializer.save()
-       new_state = serializer.instance.repr_settings_for_client_side_logging
-       description = f"Settings of custom on-call shift was changed " f"from:\n{old_state}\nto:\n{new_state}"
-       create_organization_log(organization, user, OrganizationLogType.TYPE_ON_CALL_SHIFT_CHANGED, description)
+       new_state = serializer.instance.insight_logs_serialized
+       write_resource_insight_log(
+           instance=serializer.instance,
+           author=self.request.user,
+           event=EntityEvent.UPDATED,
+           prev_state=prev_state,
+           new_state=new_state,
+       )

    def perform_destroy(self, instance):
-       organization = self.request.auth.organization
-       user = self.request.user
-       description = (
-           f"Custom on-call shift " f"with params: {instance.repr_settings_for_client_side_logging} was deleted"
+       write_resource_insight_log(
+           instance=instance,
+           author=self.request.user,
+           event=EntityEvent.DELETED,
        )
-       create_organization_log(organization, user, OrganizationLogType.TYPE_ON_CALL_SHIFT_DELETED, description)
        instance.delete()
@action(detail=False, methods=["post"])

View file

@@ -11,7 +11,7 @@ from apps.api.serializers.organization import CurrentOrganizationSerializer
from apps.auth_token.auth import PluginAuthentication
from apps.base.messaging import get_messaging_backend_from_id
from apps.telegram.client import TelegramClient
-from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
+from common.insight_log import EntityEvent, write_resource_insight_log

class CurrentOrganizationView(APIView):
@@ -27,16 +27,19 @@ class CurrentOrganizationView(APIView):
    def put(self, request):
        organization = self.request.auth.organization
-       old_state = organization.repr_settings_for_client_side_logging
+       prev_state = organization.insight_logs_serialized
        serializer = CurrentOrganizationSerializer(
            instance=organization, data=request.data, context={"request": request}
        )
        serializer.is_valid(raise_exception=True)
        serializer.save()
-       new_state = serializer.instance.repr_settings_for_client_side_logging
-       description = f"Organization settings was changed from:\n{old_state}\nto:\n{new_state}"
-       create_organization_log(
-           organization, request.user, OrganizationLogType.TYPE_ORGANIZATION_SETTINGS_CHANGED, description
+       new_state = serializer.instance.insight_logs_serialized
+       write_resource_insight_log(
+           instance=serializer.instance,
+           author=self.request.user,
+           event=EntityEvent.UPDATED,
+           prev_state=prev_state,
+           new_state=new_state,
        )
return Response(serializer.data)

View file

@@ -1,128 +0,0 @@
from datetime import timedelta
from django.db.models import Q
from django.utils import timezone
from django_filters import rest_framework as filters
from django_filters.rest_framework import DjangoFilterBackend
from rest_framework import mixins, viewsets
from rest_framework.decorators import action
from rest_framework.filters import SearchFilter
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from apps.api.serializers.organization_log_record import OrganizationLogRecordSerializer
from apps.auth_token.auth import PluginAuthentication
from apps.base.models import OrganizationLogRecord
from apps.user_management.models import User
from common.api_helpers.filters import DateRangeFilterMixin, ModelFieldFilterMixin
from common.api_helpers.paginators import FiftyPageSizePaginator
LABEL_CHOICES = [[label, label] for label in OrganizationLogRecord.LABELS]
def get_user_queryset(request):
if request is None:
return User.objects.none()
return User.objects.filter(organization=request.user.organization).distinct()
class OrganizationLogRecordFilter(DateRangeFilterMixin, ModelFieldFilterMixin, filters.FilterSet):
author = filters.ModelMultipleChoiceFilter(
field_name="author",
queryset=get_user_queryset,
to_field_name="public_primary_key",
method=ModelFieldFilterMixin.filter_model_field.__name__,
)
created_at = filters.CharFilter(field_name="created_at", method=DateRangeFilterMixin.filter_date_range.__name__)
labels = filters.MultipleChoiceFilter(choices=LABEL_CHOICES, method="filter_labels")
class Meta:
model = OrganizationLogRecord
fields = ["author", "labels", "created_at"]
def filter_labels(self, queryset, name, value):
if not value:
return queryset
q_objects = Q()
for item in value:
q_objects &= Q(_labels__contains=item)
queryset = queryset.filter(q_objects)
return queryset
class OrganizationLogRecordView(mixins.ListModelMixin, viewsets.GenericViewSet):
authentication_classes = (PluginAuthentication,)
permission_classes = (IsAuthenticated,)
serializer_class = OrganizationLogRecordSerializer
pagination_class = FiftyPageSizePaginator
filter_backends = (
SearchFilter,
DjangoFilterBackend,
)
search_fields = ("description",)
filterset_class = OrganizationLogRecordFilter
def get_queryset(self):
queryset = OrganizationLogRecord.objects.filter(organization=self.request.auth.organization).order_by(
"-created_at"
)
queryset = self.serializer_class.setup_eager_loading(queryset)
return queryset
@action(detail=False, methods=["get"])
def filters(self, request):
filter_name = request.query_params.get("filter_name", None)
api_root = "/api/internal/v1/"
filter_options = [
{
"name": "search",
"type": "search",
},
{
"name": "author",
"type": "options",
"href": api_root + "users/?filters=true&roles=0&roles=1&roles=2",
},
{
"name": "labels",
"type": "options",
"options": [
{
"display_name": label,
"value": label,
}
for label in OrganizationLogRecord.LABELS
],
},
{
"name": "created_at",
"type": "daterange",
"default": f"{timezone.datetime.now() - timedelta(days=7):%Y-%m-%d}/{timezone.datetime.now():%Y-%m-%d}",
},
]
if filter_name is not None:
filter_options = list(filter(lambda f: f["name"].startswith(filter_name), filter_options))
return Response(filter_options)
@action(detail=False, methods=["get"])
def label_options(self, request):
return Response(
[
{
"display_name": label,
"value": label,
}
for label in OrganizationLogRecord.LABELS
]
)

View file

@@ -7,8 +7,8 @@ from apps.api.serializers.public_api_token import PublicApiTokenSerializer
from apps.auth_token.auth import PluginAuthentication
from apps.auth_token.constants import MAX_PUBLIC_API_TOKENS_PER_USER
from apps.auth_token.models import ApiAuthToken
-from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.exceptions import BadRequest
+from common.insight_log import EntityEvent, write_resource_insight_log
class PublicApiTokenView(
@@ -30,10 +30,8 @@ class PublicApiTokenView(
        return ApiAuthToken.objects.filter(user=self.request.user, organization=self.request.user.organization)

    def destroy(self, request, *args, **kwargs):
-       user = request.user
        instance = self.get_object()
-       description = f"API token {instance.name} was revoked"
-       create_organization_log(user.organization, user, OrganizationLogType.TYPE_CHANNEL_FILTER_DELETED, description)
+       write_resource_insight_log(instance=instance, author=instance.author, event=EntityEvent.DELETED)
        self.perform_destroy(instance)
        return Response(status=status.HTTP_204_NO_CONTENT)

@@ -51,5 +49,5 @@ class PublicApiTokenView(
            raise BadRequest("Invalid token name")
        instance, token = ApiAuthToken.create_auth_token(user, user.organization, token_name)
        data = {"id": instance.pk, "token": token, "name": instance.name, "created_at": instance.created_at}
+       write_resource_insight_log(instance=instance, author=user, event=EntityEvent.CREATED)
        return Response(data, status=status.HTTP_201_CREATED)

View file

@@ -25,7 +25,6 @@ from apps.auth_token.models import ScheduleExportAuthToken
from apps.schedules.models import OnCallSchedule
from apps.slack.models import SlackChannel
from apps.slack.tasks import update_slack_user_group_for_schedules
-from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.exceptions import BadRequest, Conflict
from common.api_helpers.mixins import (
    CreateSerializerMixin,
@@ -34,6 +33,7 @@ from common.api_helpers.mixins import (
    UpdateSerializerMixin,
)
from common.api_helpers.utils import create_engine_url, get_date_range_from_request
+from common.insight_log import EntityEvent, write_resource_insight_log
EVENTS_FILTER_BY_ROTATION = "rotation"
EVENTS_FILTER_BY_OVERRIDE = "override"
@@ -136,38 +136,32 @@ class ScheduleView(
        return super().get_object()

    def perform_create(self, serializer):
-       schedule = serializer.save()
-       if schedule.user_group is not None:
-           update_slack_user_group_for_schedules.apply_async((schedule.user_group.pk,))
-       organization = self.request.auth.organization
-       user = self.request.user
-       description = f"Schedule {schedule.name} was created"
-       create_organization_log(organization, user, OrganizationLogType.TYPE_SCHEDULE_CREATED, description)
+       serializer.save()
+       write_resource_insight_log(instance=serializer.instance, author=self.request.user, event=EntityEvent.CREATED)

    def perform_update(self, serializer):
-       organization = self.request.auth.organization
-       user = self.request.user
-       old_schedule = serializer.instance
-       old_state = old_schedule.repr_settings_for_client_side_logging
+       prev_state = serializer.instance.insight_logs_serialized
        old_user_group = serializer.instance.user_group
-       updated_schedule = serializer.save()
+       serializer.save()
        if old_user_group is not None:
            update_slack_user_group_for_schedules.apply_async((old_user_group.pk,))
-       if updated_schedule.user_group is not None and updated_schedule.user_group != old_user_group:
-           update_slack_user_group_for_schedules.apply_async((updated_schedule.user_group.pk,))
-       new_state = updated_schedule.repr_settings_for_client_side_logging
-       description = f"Schedule {updated_schedule.name} was changed from:\n{old_state}\nto:\n{new_state}"
-       create_organization_log(organization, user, OrganizationLogType.TYPE_SCHEDULE_CHANGED, description)
+       if serializer.instance.user_group is not None and serializer.instance.user_group != old_user_group:
+           update_slack_user_group_for_schedules.apply_async((serializer.instance.user_group.pk,))
+       new_state = serializer.instance.insight_logs_serialized
+       write_resource_insight_log(
+           instance=serializer.instance,
+           author=self.request.user,
+           event=EntityEvent.UPDATED,
+           prev_state=prev_state,
+           new_state=new_state,
+       )

    def perform_destroy(self, instance):
-       organization = self.request.auth.organization
-       user = self.request.user
-       description = f"Schedule {instance.name} was deleted"
-       create_organization_log(organization, user, OrganizationLogType.TYPE_SCHEDULE_DELETED, description)
+       write_resource_insight_log(
+           instance=instance,
+           author=self.request.user,
+           event=EntityEvent.DELETED,
+       )
        instance.delete()
if instance.user_group is not None:
@@ -331,6 +325,7 @@ class ScheduleView(
            instance, token = ScheduleExportAuthToken.create_auth_token(
                request.user, request.user.organization, schedule
            )
+           write_resource_insight_log(instance=instance, author=self.request.user, event=EntityEvent.CREATED)
        except IntegrityError:
            raise Conflict("Schedule export token for user already exists")
@@ -346,6 +341,7 @@ class ScheduleView(
        if self.request.method == "DELETE":
            try:
                token = ScheduleExportAuthToken.objects.get(user_id=self.request.user.id, schedule_id=schedule.id)
+               write_resource_insight_log(instance=token, author=self.request.user, event=EntityEvent.DELETED)
                token.delete()
except ScheduleExportAuthToken.DoesNotExist:
raise NotFound

View file

@@ -6,7 +6,7 @@ from apps.api.permissions import AnyRole, IsAdmin, MethodPermission
from apps.api.serializers.organization_slack_settings import OrganizationSlackSettingsSerializer
from apps.auth_token.auth import PluginAuthentication
from apps.user_management.models import Organization
-from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
+from common.insight_log import EntityEvent, write_resource_insight_log
class SlackTeamSettingsAPIView(views.APIView):
@@ -27,14 +27,17 @@ class SlackTeamSettingsAPIView(views.APIView):
    def put(self, request):
        organization = self.request.auth.organization
-       old_state = organization.repr_settings_for_client_side_logging
+       prev_state = organization.insight_logs_serialized
        serializer = self.serializer_class(organization, data=request.data)
        serializer.is_valid(raise_exception=True)
        serializer.save()
-       new_state = serializer.instance.repr_settings_for_client_side_logging
-       description = f"Organization settings was changed from:\n{old_state}\nto:\n{new_state}"
-       create_organization_log(
-           organization, request.user, OrganizationLogType.TYPE_ORGANIZATION_SETTINGS_CHANGED, description
+       new_state = serializer.instance.insight_logs_serialized
+       write_resource_insight_log(
+           instance=serializer.instance,
+           author=self.request.user,
+           event=EntityEvent.UPDATED,
+           prev_state=prev_state,
+           new_state=new_state,
        )
return Response(serializer.data)

View file

@@ -7,8 +7,8 @@ from rest_framework.response import Response
from apps.api.permissions import MODIFY_ACTIONS, READ_ACTIONS, ActionPermission, AnyRole, IsAdmin
from apps.api.serializers.telegram import TelegramToOrganizationConnectorSerializer
from apps.auth_token.auth import PluginAuthentication
-from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.mixins import PublicPrimaryKeyMixin
+from common.insight_log.chatops_insight_logs import ChatOpsEvent, ChatOpsType, write_chatops_insight_log
class TelegramChannelViewSet(
@@ -41,8 +41,10 @@ class TelegramChannelViewSet(
    def perform_destroy(self, instance):
        user = self.request.user
-       organization = user.organization
-       description = f"Telegram channel @{instance.channel_name} was disconnected from organization"
-       create_organization_log(organization, user, OrganizationLogType.TYPE_TELEGRAM_CHANNEL_DISCONNECTED, description)
+       write_chatops_insight_log(
+           author=user,
+           event_name=ChatOpsEvent.CHANNEL_DISCONNECTED,
+           chatops_type=ChatOpsType.TELEGRAM,
+           channel_name=instance.channel_name,
+       )
        instance.delete()

View file

@@ -40,12 +40,18 @@ from apps.telegram.models import TelegramVerificationCode
from apps.twilioapp.phone_manager import PhoneManager
from apps.twilioapp.twilio_client import twilio_client
from apps.user_management.models import User
-from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.exceptions import Conflict
from common.api_helpers.mixins import FilterSerializerMixin, PublicPrimaryKeyMixin
from common.api_helpers.paginators import HundredPageSizePaginator
from common.api_helpers.utils import create_engine_url
from common.constants.role import Role
+from common.insight_log import (
+    ChatOpsEvent,
+    ChatOpsType,
+    EntityEvent,
+    write_chatops_insight_log,
+    write_resource_insight_log,
+)
logger = logging.getLogger(__name__)
@@ -259,41 +265,37 @@ class UserView(
    def verify_number(self, request, pk):
        target_user = self.get_object()
        code = request.query_params.get("token", None)
-       old_state = target_user.repr_settings_for_client_side_logging
+       prev_state = target_user.insight_logs_serialized
        phone_manager = PhoneManager(target_user)
        verified, error = phone_manager.verify_phone_number(code)
        if not verified:
            return Response(error, status=status.HTTP_400_BAD_REQUEST)
-       organization = request.auth.organization
-       new_state = target_user.repr_settings_for_client_side_logging
-       description = f"User settings for user {target_user.username} was changed from:\n{old_state}\nto:\n{new_state}"
-       create_organization_log(
-           organization,
-           request.user,
-           OrganizationLogType.TYPE_USER_SETTINGS_CHANGED,
-           description,
+       new_state = target_user.insight_logs_serialized
+       write_resource_insight_log(
+           instance=target_user,
+           author=self.request.user,
+           event=EntityEvent.UPDATED,
+           prev_state=prev_state,
+           new_state=new_state,
        )
        return Response(status=status.HTTP_200_OK)
    @action(detail=True, methods=["put"])
    def forget_number(self, request, pk):
        target_user = self.get_object()
-       old_state = target_user.repr_settings_for_client_side_logging
+       prev_state = target_user.insight_logs_serialized
        phone_manager = PhoneManager(target_user)
        forget = phone_manager.forget_phone_number()
        if forget:
-           organization = request.auth.organization
-           new_state = target_user.repr_settings_for_client_side_logging
-           description = (
-               f"User settings for user {target_user.username} was changed from:\n{old_state}\nto:\n{new_state}"
-           )
-           create_organization_log(
-               organization,
-               request.user,
-               OrganizationLogType.TYPE_USER_SETTINGS_CHANGED,
-               description,
+           new_state = target_user.insight_logs_serialized
+           write_resource_insight_log(
+               instance=target_user,
+               author=self.request.user,
+               event=EntityEvent.UPDATED,
+               prev_state=prev_state,
+               new_state=new_state,
            )
        return Response(status=status.HTTP_200_OK)
@@ -352,25 +354,23 @@ class UserView(
    def unlink_telegram(self, request, pk):
        user = self.get_object()
        TelegramToUserConnector = apps.get_model("telegram", "TelegramToUserConnector")
        try:
            connector = TelegramToUserConnector.objects.get(user=user)
            connector.delete()
+           write_chatops_insight_log(
+               author=request.user,
+               event_name=ChatOpsEvent.USER_UNLINKED,
+               chatops_type=ChatOpsType.TELEGRAM,
+               linked_user=user.username,
+               linked_user_id=user.public_primary_key,
+           )
        except TelegramToUserConnector.DoesNotExist:
            return Response(status=status.HTTP_400_BAD_REQUEST)
-       description = f"Telegram account of user {user.username} was disconnected"
-       create_organization_log(
-           user.organization,
-           user,
-           OrganizationLogType.TYPE_TELEGRAM_FROM_USER_DISCONNECTED,
-           description,
-       )
        return Response(status=status.HTTP_200_OK)
    @action(detail=True, methods=["post"])
    def unlink_backend(self, request, pk):
+       # TODO: insight logs support
        backend_id = request.query_params.get("backend")
        backend = get_messaging_backend_from_id(backend_id)
        if backend is None:
@@ -379,17 +379,15 @@ class UserView(
        user = self.get_object()
        try:
            backend.unlink_user(user)
+           write_chatops_insight_log(
+               author=request.user,
+               event_name=ChatOpsEvent.USER_UNLINKED,
+               chatops_type=backend.backend_id,
+               linked_user=user.username,
+               linked_user_id=user.public_primary_key,
+           )
        except ObjectDoesNotExist:
            return Response(status=status.HTTP_400_BAD_REQUEST)
-       description = f"{backend.label} account of user {user.username} was disconnected"
-       create_organization_log(
-           user.organization,
-           user,
-           OrganizationLogType.TYPE_MESSAGING_BACKEND_USER_DISCONNECTED,
-           description,
-       )
        return Response(status=status.HTTP_200_OK)
@action(detail=True, methods=["get", "post", "delete"])
@@ -412,6 +410,7 @@ class UserView(
        if self.request.method == "POST":
            try:
                instance, token = UserScheduleExportAuthToken.create_auth_token(user, user.organization)
+               write_resource_insight_log(instance=instance, author=self.request.user, event=EntityEvent.CREATED)
            except IntegrityError:
                raise Conflict("Schedule export token for user already exists")
@@ -426,10 +425,10 @@ class UserView(
        if self.request.method == "DELETE":
            try:
                token = UserScheduleExportAuthToken.objects.get(user=user)
+               write_resource_insight_log(instance=token, author=self.request.user, event=EntityEvent.DELETED)
                token.delete()
            except UserScheduleExportAuthToken.DoesNotExist:
                raise NotFound
        return Response(status=status.HTTP_204_NO_CONTENT)
@action(detail=True, methods=["get", "post", "delete"])

View file

@@ -24,9 +24,10 @@ from apps.base.messaging import get_messaging_backend_from_id
from apps.base.models import UserNotificationPolicy
from apps.base.models.user_notification_policy import BUILT_IN_BACKENDS, NotificationChannelAPIOptions
from apps.user_management.models import User
-from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.exceptions import BadRequest
from common.api_helpers.mixins import UpdateSerializerMixin
+from common.exceptions import UserNotificationPolicyCouldNotBeDeleted
+from common.insight_log import EntityEvent, write_resource_insight_log
class UserNotificationPolicyView(UpdateSerializerMixin, ModelViewSet):
@@ -55,14 +56,14 @@ class UserNotificationPolicyView(UpdateSerializerMixin, ModelViewSet):
        except ValueError:
            raise BadRequest(detail="Invalid user param")
        if user_id is None or user_id == self.request.user.public_primary_key:
-           queryset = self.model.objects.get_or_create_for_user(user=self.request.user, important=important)
+           queryset = self.model.objects.filter(user=self.request.user, important=important)
        else:
            try:
                target_user = User.objects.get(public_primary_key=user_id)
            except User.DoesNotExist:
                raise BadRequest(detail="User does not exist")
-           queryset = self.model.objects.get_or_create_for_user(user=target_user, important=important)
+           queryset = self.model.objects.filter(user=target_user, important=important)

        queryset = self.serializer_class.setup_eager_loading(queryset)
@@ -83,45 +84,45 @@ class UserNotificationPolicyView(UpdateSerializerMixin, ModelViewSet):
        return obj

    def perform_create(self, serializer):
-       organization = self.request.auth.organization
        user = serializer.validated_data.get("user") or self.request.user
-       old_state = user.repr_settings_for_client_side_logging
+       prev_state = user.insight_logs_serialized
        serializer.save()
-       new_state = user.repr_settings_for_client_side_logging
-       description = f"User settings for user {user.username} was changed from:\n{old_state}\nto:\n{new_state}"
-       create_organization_log(
-           organization,
-           self.request.user,
-           OrganizationLogType.TYPE_USER_SETTINGS_CHANGED,
-           description,
+       new_state = user.insight_logs_serialized
+       write_resource_insight_log(
+           instance=user,
+           author=self.request.user,
+           event=EntityEvent.UPDATED,
+           prev_state=prev_state,
+           new_state=new_state,
        )

    def perform_update(self, serializer):
-       organization = self.request.auth.organization
        user = serializer.validated_data.get("user") or self.request.user
-       old_state = user.repr_settings_for_client_side_logging
+       prev_state = user.insight_logs_serialized
        serializer.save()
-       new_state = user.repr_settings_for_client_side_logging
-       description = f"User settings for user {user.username} was changed from:\n{old_state}\nto:\n{new_state}"
-       create_organization_log(
-           organization,
-           self.request.user,
-           OrganizationLogType.TYPE_USER_SETTINGS_CHANGED,
-           description,
+       new_state = user.insight_logs_serialized
+       write_resource_insight_log(
+           instance=user,
+           author=self.request.user,
+           event=EntityEvent.UPDATED,
+           prev_state=prev_state,
+           new_state=new_state,
        )

    def perform_destroy(self, instance):
-       organization = self.request.auth.organization
        user = instance.user
-       old_state = user.repr_settings_for_client_side_logging
-       instance.delete()
-       new_state = user.repr_settings_for_client_side_logging
-       description = f"User settings for user {user.username} was changed from:\n{old_state}\nto:\n{new_state}"
-       create_organization_log(
-           organization,
-           self.request.user,
-           OrganizationLogType.TYPE_USER_SETTINGS_CHANGED,
-           description,
+       prev_state = user.insight_logs_serialized
+       try:
+           instance.delete()
+       except UserNotificationPolicyCouldNotBeDeleted:
+           raise BadRequest(detail="Can't delete last user notification policy")
+       new_state = user.insight_logs_serialized
+       write_resource_insight_log(
+           instance=user,
+           author=self.request.user,
+           event=EntityEvent.UPDATED,
+           prev_state=prev_state,
+           new_state=new_state,
        )
@action(detail=True, methods=["put"])
@@ -176,7 +177,8 @@ class UserNotificationPolicyView(UpdateSerializerMixin, ModelViewSet):
            continue
        # extra backends may be enabled per organization
-       if notification_channel.name not in BUILT_IN_BACKENDS:
+       built_in_backend_names = {b[0] for b in BUILT_IN_BACKENDS}
+       if notification_channel.name not in built_in_backend_names:
extra_messaging_backend = get_messaging_backend_from_id(notification_channel.name)
if extra_messaging_backend is None:
continue
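The last hunk above fixes a membership test: assuming `BUILT_IN_BACKENDS` holds tuples (the `b[0]` access suggests id/value pairs), comparing a plain backend name against the tuple list always fails, while a set of first elements behaves as intended. A small sketch with hypothetical values:

```python
# Hypothetical (id, value) pairs standing in for the real BUILT_IN_BACKENDS.
BUILT_IN_BACKENDS = [("TELEGRAM", 10), ("SLACK", 20)]

name = "TELEGRAM"
# A string never equals a tuple, so the old check misclassifies built-ins.
print(name not in BUILT_IN_BACKENDS)  # True

# The fixed check compares names against names.
built_in_backend_names = {b[0] for b in BUILT_IN_BACKENDS}
print(name not in built_in_backend_names)  # False
```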

View file

@@ -5,7 +5,6 @@ from django.db import models
from apps.auth_token import constants, crypto
from apps.auth_token.models.base_auth_token import BaseAuthToken
from apps.user_management.models import Organization, User
-from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
class ApiAuthToken(BaseAuthToken):
@@ -27,6 +26,22 @@ class ApiAuthToken(BaseAuthToken):
            organization=organization,
            name=name,
        )
-       description = f"API token {instance.name} was created"
-       create_organization_log(organization, user, OrganizationLogType.TYPE_API_TOKEN_CREATED, description)
        return instance, token_string
# Insight logs
@property
def insight_logs_type_verbal(self):
return "public_api_token"
@property
def insight_logs_verbal(self):
return self.name
@property
def insight_logs_serialized(self):
# API tokens are not modifiable, so return empty dict to implement InsightLoggable interface
return {}
@property
def insight_logs_metadata(self):
return {}

View file

@@ -6,7 +6,6 @@ from apps.auth_token import constants, crypto
from apps.auth_token.models.base_auth_token import BaseAuthToken
from apps.schedules.models import OnCallSchedule
from apps.user_management.models import Organization, User
-from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log

class ScheduleExportAuthToken(BaseAuthToken):
@@ -38,8 +37,22 @@ class ScheduleExportAuthToken(BaseAuthToken):
            organization=organization,
            schedule=schedule,
        )
-       description = "Schedule export token was created by user {0} for schedule {1}".format(
-           user.username, schedule.name
-       )
-       create_organization_log(organization, user, OrganizationLogType.TYPE_SCHEDULE_EXPORT_TOKEN_CREATED, description)
        return instance, token_string
# Insight logs
@property
def insight_logs_type_verbal(self):
return "schedule_export_token"
@property
def insight_logs_verbal(self):
return f"Schedule export token for {self.schedule.insight_logs_verbal}"
@property
def insight_logs_serialized(self):
# Schedule export tokens are not modifiable, return empty dict to implement InsightLoggable interface
return {}
@property
def insight_logs_metadata(self):
return {}

View file

@@ -5,7 +5,6 @@ from django.db import models
from apps.auth_token import constants, crypto
from apps.auth_token.models.base_auth_token import BaseAuthToken
from apps.user_management.models import Organization, User
-from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log

class UserScheduleExportAuthToken(BaseAuthToken):
@@ -31,6 +30,22 @@ class UserScheduleExportAuthToken(BaseAuthToken):
            user=user,
            organization=organization,
        )
-       description = "User schedule export token was created by user {0}".format(user.username)
-       create_organization_log(organization, user, OrganizationLogType.TYPE_SCHEDULE_EXPORT_TOKEN_CREATED, description)
        return instance, token_string
# Insight logs
@property
def insight_logs_type_verbal(self):
return "user_schedule_export_token"
@property
def insight_logs_verbal(self):
return f"User schedule export token for {self.user.username}"
@property
def insight_logs_serialized(self):
# Schedule export tokens are not modifiable, return empty dict to implement InsightLoggable interface
return {}
@property
def insight_logs_metadata(self):
return {}

View file

@@ -9,6 +9,9 @@ class BaseMessagingBackend:
    available_for_use = False
    templater = None

+   def __init__(self, *args, **kwargs):
+       self.notification_channel_id = kwargs.get("notification_channel_id")
+
    def get_templater_class(self):
        if self.templater:
            return import_string(self.templater)
@@ -46,16 +49,16 @@ class BaseMessagingBackend:
        raise NotImplementedError("notify_user method missing implementation")


-def load_backend(path):
-    return import_string(path)()
+def load_backend(path, *args, **kwargs):
+    return import_string(path)(*args, **kwargs)


def get_messaging_backends():
    global _messaging_backends
    if _messaging_backends is None:
        _messaging_backends = {}
-       for backend_path in settings.EXTRA_MESSAGING_BACKENDS:
-           backend = load_backend(backend_path)
+       for (backend_path, notification_channel_id) in settings.EXTRA_MESSAGING_BACKENDS:
+           backend = load_backend(backend_path, notification_channel_id=notification_channel_id)
            _messaging_backends[backend.backend_id] = backend
    return _messaging_backends.items()
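The diff above changes the `EXTRA_MESSAGING_BACKENDS` contract from a list of import paths to `(path, notification_channel_id)` pairs that are forwarded into the backend constructor. A self-contained sketch of the new loading flow, with `DemoBackend` and the `registry` dict standing in for `django.utils.module_loading.import_string`:

```python
class DemoBackend:
    # Minimal stand-in for a messaging backend subclass.
    backend_id = "DEMO"

    def __init__(self, *args, **kwargs):
        self.notification_channel_id = kwargs.get("notification_channel_id")

def load_backend(path, *args, **kwargs):
    # Stand-in for import_string(path)(*args, **kwargs).
    registry = {"demo.DemoBackend": DemoBackend}
    return registry[path](*args, **kwargs)

# New setting shape: (import path, notification channel id) pairs.
EXTRA_MESSAGING_BACKENDS = [("demo.DemoBackend", 8)]

backends = {}
for backend_path, notification_channel_id in EXTRA_MESSAGING_BACKENDS:
    backend = load_backend(backend_path, notification_channel_id=notification_channel_id)
    backends[backend.backend_id] = backend

print(backends["DEMO"].notification_channel_id)  # 8
```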

View file

@@ -1,7 +1,6 @@
# Generated by Django 3.2.5 on 2022-05-31 14:46
import apps.base.models.live_setting
-import apps.base.models.organization_log_record
import apps.base.models.user_notification_policy
import datetime
import django.core.validators
@@ -51,7 +50,7 @@ class Migration(migrations.Migration):
            name='OrganizationLogRecord',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
-               ('public_primary_key', models.CharField(default=apps.base.models.organization_log_record.generate_public_primary_key_for_organization_log, max_length=20, unique=True, validators=[django.core.validators.MinLengthValidator(13)])),
+               ('public_primary_key', models.CharField(max_length=20, null=True, default=None)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('description', models.TextField(default=None, null=True)),
                ('_labels', models.JSONField(default=list)),

View file

@@ -0,0 +1,16 @@
# Generated by Django 3.2.5 on 2022-08-23 12:03
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('base', '0002_squashed_initial'),
]
operations = [
migrations.DeleteModel(
name='OrganizationLogRecord',
),
]

View file

@@ -1,6 +1,5 @@
from .dynamic_setting import DynamicSetting  # noqa: F401
from .failed_to_invoke_celery_task import FailedToInvokeCeleryTask  # noqa: F401
from .live_setting import LiveSetting  # noqa: F401
-from .organization_log_record import OrganizationLogRecord  # noqa: F401
from .user_notification_policy import UserNotificationPolicy  # noqa: F401
from .user_notification_policy import UserNotificationPolicy # noqa: F401
from .user_notification_policy_log_record import UserNotificationPolicyLogRecord # noqa: F401

View file

@@ -1,317 +0,0 @@
from django.apps import apps
from django.conf import settings
from django.core.validators import MinLengthValidator
from django.db import models
from django.db.models import JSONField
from emoji import emojize
from apps.alerts.models.maintainable_object import MaintainableObject
from apps.user_management.organization_log_creator import OrganizationLogType
from common.public_primary_keys import generate_public_primary_key, increase_public_primary_key_length
def generate_public_primary_key_for_organization_log():
prefix = "V"
new_public_primary_key = generate_public_primary_key(prefix)
failure_counter = 0
while OrganizationLogRecord.objects.filter(public_primary_key=new_public_primary_key).exists():
new_public_primary_key = increase_public_primary_key_length(
failure_counter=failure_counter, prefix=prefix, model_name="OrganizationLogRecord"
)
failure_counter += 1
return new_public_primary_key
class OrganizationLogRecordManager(models.Manager):
def create(self, organization, author, type, description):
# set labels
labels = OrganizationLogRecord.LABELS_FOR_TYPE[type]
return super().create(
organization=organization,
author=author,
description=description,
_labels=labels,
)
class OrganizationLogRecord(models.Model):
objects = OrganizationLogRecordManager()
LABEL_ORGANIZATION = "organization"
LABEL_SLACK = "slack"
LABEL_TELEGRAM = "telegram"
LABEL_DEFAULT_CHANNEL = "default channel"
LABEL_SLACK_WORKSPACE_CONNECTED = "slack workspace connected"
LABEL_SLACK_WORKSPACE_DISCONNECTED = "slack workspace disconnected"
LABEL_TELEGRAM_CHANNEL_CONNECTED = "telegram channel connected"
LABEL_TELEGRAM_CHANNEL_DISCONNECTED = "telegram channel disconnected"
LABEL_INTEGRATION = "integration"
LABEL_INTEGRATION_CREATED = "integration created"
LABEL_INTEGRATION_DELETED = "integration deleted"
LABEL_INTEGRATION_CHANGED = "integration changed"
LABEL_INTEGRATION_HEARTBEAT = "integration heartbeat"
LABEL_INTEGRATION_HEARTBEAT_CREATED = "integration heartbeat created"
LABEL_INTEGRATION_HEARTBEAT_CHANGED = "integration heartbeat changed"
LABEL_MAINTENANCE = "maintenance"
LABEL_MAINTENANCE_STARTED = "maintenance started"
LABEL_MAINTENANCE_STOPPED = "maintenance stopped"
LABEL_DEBUG = "debug"
LABEL_DEBUG_STARTED = "debug started"
LABEL_DEBUG_STOPPED = "debug stopped"
LABEL_CHANNEL_FILTER = "route"
LABEL_CHANNEL_FILTER_CREATED = "route created"
LABEL_CHANNEL_FILTER_CHANGED = "route changed"
LABEL_CHANNEL_FILTER_DELETED = "route deleted"
LABEL_ESCALATION_CHAIN = "escalation chain"
LABEL_ESCALATION_CHAIN_CREATED = "escalation chain created"
LABEL_ESCALATION_CHAIN_DELETED = "escalation chain deleted"
LABEL_ESCALATION_CHAIN_CHANGED = "escalation chain changed"
LABEL_ESCALATION_POLICY = "escalation policy"
LABEL_ESCALATION_POLICY_CREATED = "escalation policy created"
LABEL_ESCALATION_POLICY_DELETED = "escalation policy deleted"
LABEL_ESCALATION_POLICY_CHANGED = "escalation policy changed"
LABEL_CUSTOM_ACTION = "custom action"
LABEL_CUSTOM_ACTION_CREATED = "custom action created"
LABEL_CUSTOM_ACTION_DELETED = "custom action deleted"
LABEL_CUSTOM_ACTION_CHANGED = "custom action changed"
LABEL_SCHEDULE = "schedule"
LABEL_SCHEDULE_CREATED = "schedule created"
LABEL_SCHEDULE_DELETED = "schedule deleted"
LABEL_SCHEDULE_CHANGED = "schedule changed"
LABEL_ON_CALL_SHIFT = "on-call shift"
LABEL_ON_CALL_SHIFT_CREATED = "on-call shift created"
LABEL_ON_CALL_SHIFT_DELETED = "on-call shift deleted"
LABEL_ON_CALL_SHIFT_CHANGED = "on-call shift changed"
LABEL_USER = "user"
LABEL_USER_CREATED = "user created"
LABEL_USER_SETTINGS_CHANGED = "user changed"
LABEL_ORGANIZATION_SETTINGS_CHANGED = "organization settings changed"
LABEL_TELEGRAM_TO_USER_CONNECTED = "telegram to user connected"
LABEL_TELEGRAM_FROM_USER_DISCONNECTED = "telegram from user disconnected"
LABEL_API_TOKEN = "api token"
LABEL_API_TOKEN_CREATED = "api token created"
LABEL_API_TOKEN_REVOKED = "api token revoked"
LABEL_ESCALATION_CHAIN_COPIED = "escalation chain copied"
LABEL_SCHEDULE_EXPORT_TOKEN = "schedule export token"
LABEL_SCHEDULE_EXPORT_TOKEN_CREATED = "schedule export token created"
LABEL_MESSAGING_BACKEND_CHANNEL_CHANGED = "messaging backend channel changed"
LABEL_MESSAGING_BACKEND_CHANNEL_DELETED = "messaging backend channel deleted"
LABEL_MESSAGING_BACKEND_USER_DISCONNECTED = "messaging backend user disconnected"
LABELS = [
LABEL_ORGANIZATION,
LABEL_SLACK,
LABEL_TELEGRAM,
LABEL_DEFAULT_CHANNEL,
LABEL_SLACK_WORKSPACE_CONNECTED,
LABEL_SLACK_WORKSPACE_DISCONNECTED,
LABEL_TELEGRAM_CHANNEL_CONNECTED,
LABEL_TELEGRAM_CHANNEL_DISCONNECTED,
LABEL_INTEGRATION,
LABEL_INTEGRATION_CREATED,
LABEL_INTEGRATION_DELETED,
LABEL_INTEGRATION_CHANGED,
LABEL_INTEGRATION_HEARTBEAT,
LABEL_INTEGRATION_HEARTBEAT_CREATED,
LABEL_INTEGRATION_HEARTBEAT_CHANGED,
LABEL_MAINTENANCE,
LABEL_MAINTENANCE_STARTED,
LABEL_MAINTENANCE_STOPPED,
LABEL_DEBUG,
LABEL_DEBUG_STARTED,
LABEL_DEBUG_STOPPED,
LABEL_CHANNEL_FILTER,
LABEL_CHANNEL_FILTER_CREATED,
LABEL_CHANNEL_FILTER_CHANGED,
LABEL_CHANNEL_FILTER_DELETED,
LABEL_ESCALATION_CHAIN,
LABEL_ESCALATION_CHAIN_CREATED,
LABEL_ESCALATION_CHAIN_DELETED,
LABEL_ESCALATION_CHAIN_CHANGED,
LABEL_ESCALATION_POLICY,
LABEL_ESCALATION_POLICY_CREATED,
LABEL_ESCALATION_POLICY_DELETED,
LABEL_ESCALATION_POLICY_CHANGED,
LABEL_CUSTOM_ACTION,
LABEL_CUSTOM_ACTION_CREATED,
LABEL_CUSTOM_ACTION_DELETED,
LABEL_CUSTOM_ACTION_CHANGED,
LABEL_SCHEDULE,
LABEL_SCHEDULE_CREATED,
LABEL_SCHEDULE_DELETED,
LABEL_SCHEDULE_CHANGED,
LABEL_ON_CALL_SHIFT,
LABEL_ON_CALL_SHIFT_CREATED,
LABEL_ON_CALL_SHIFT_DELETED,
LABEL_ON_CALL_SHIFT_CHANGED,
LABEL_USER,
LABEL_USER_CREATED,
LABEL_USER_SETTINGS_CHANGED,
LABEL_ORGANIZATION_SETTINGS_CHANGED,
LABEL_TELEGRAM_TO_USER_CONNECTED,
LABEL_TELEGRAM_FROM_USER_DISCONNECTED,
LABEL_API_TOKEN,
LABEL_API_TOKEN_CREATED,
LABEL_API_TOKEN_REVOKED,
LABEL_ESCALATION_CHAIN_COPIED,
LABEL_SCHEDULE_EXPORT_TOKEN,
LABEL_MESSAGING_BACKEND_CHANNEL_CHANGED,
LABEL_MESSAGING_BACKEND_CHANNEL_DELETED,
LABEL_MESSAGING_BACKEND_USER_DISCONNECTED,
]
LABELS_FOR_TYPE = {
OrganizationLogType.TYPE_SLACK_DEFAULT_CHANNEL_CHANGED: [LABEL_SLACK, LABEL_DEFAULT_CHANNEL],
OrganizationLogType.TYPE_SLACK_WORKSPACE_CONNECTED: [LABEL_SLACK, LABEL_SLACK_WORKSPACE_CONNECTED],
OrganizationLogType.TYPE_SLACK_WORKSPACE_DISCONNECTED: [LABEL_SLACK, LABEL_SLACK_WORKSPACE_DISCONNECTED],
OrganizationLogType.TYPE_TELEGRAM_DEFAULT_CHANNEL_CHANGED: [LABEL_TELEGRAM, LABEL_DEFAULT_CHANNEL],
OrganizationLogType.TYPE_TELEGRAM_CHANNEL_CONNECTED: [LABEL_TELEGRAM, LABEL_TELEGRAM_CHANNEL_CONNECTED],
OrganizationLogType.TYPE_TELEGRAM_CHANNEL_DISCONNECTED: [LABEL_TELEGRAM, LABEL_TELEGRAM_CHANNEL_DISCONNECTED],
OrganizationLogType.TYPE_INTEGRATION_CREATED: [LABEL_INTEGRATION, LABEL_INTEGRATION_CREATED],
OrganizationLogType.TYPE_INTEGRATION_DELETED: [LABEL_INTEGRATION, LABEL_INTEGRATION_DELETED],
OrganizationLogType.TYPE_INTEGRATION_CHANGED: [LABEL_INTEGRATION, LABEL_INTEGRATION_CHANGED],
OrganizationLogType.TYPE_HEARTBEAT_CREATED: [LABEL_INTEGRATION_HEARTBEAT, LABEL_INTEGRATION_HEARTBEAT_CREATED],
OrganizationLogType.TYPE_HEARTBEAT_CHANGED: [LABEL_INTEGRATION_HEARTBEAT, LABEL_INTEGRATION_HEARTBEAT_CHANGED],
OrganizationLogType.TYPE_CHANNEL_FILTER_CREATED: [LABEL_CHANNEL_FILTER, LABEL_CHANNEL_FILTER_CREATED],
OrganizationLogType.TYPE_CHANNEL_FILTER_DELETED: [LABEL_CHANNEL_FILTER, LABEL_CHANNEL_FILTER_DELETED],
OrganizationLogType.TYPE_CHANNEL_FILTER_CHANGED: [LABEL_CHANNEL_FILTER, LABEL_CHANNEL_FILTER_CHANGED],
OrganizationLogType.TYPE_ESCALATION_CHAIN_CREATED: [LABEL_ESCALATION_CHAIN, LABEL_ESCALATION_CHAIN_CREATED],
OrganizationLogType.TYPE_ESCALATION_CHAIN_DELETED: [LABEL_ESCALATION_CHAIN, LABEL_ESCALATION_CHAIN_DELETED],
OrganizationLogType.TYPE_ESCALATION_CHAIN_CHANGED: [LABEL_ESCALATION_CHAIN, LABEL_ESCALATION_CHAIN_CHANGED],
OrganizationLogType.TYPE_ESCALATION_STEP_CREATED: [LABEL_ESCALATION_POLICY, LABEL_ESCALATION_POLICY_CREATED],
OrganizationLogType.TYPE_ESCALATION_STEP_DELETED: [LABEL_ESCALATION_POLICY, LABEL_ESCALATION_POLICY_DELETED],
OrganizationLogType.TYPE_ESCALATION_STEP_CHANGED: [LABEL_ESCALATION_POLICY, LABEL_ESCALATION_POLICY_CHANGED],
OrganizationLogType.TYPE_MAINTENANCE_STARTED_FOR_ORGANIZATION: [
LABEL_MAINTENANCE,
LABEL_MAINTENANCE_STARTED,
LABEL_ORGANIZATION,
],
OrganizationLogType.TYPE_MAINTENANCE_STARTED_FOR_INTEGRATION: [
LABEL_MAINTENANCE,
LABEL_MAINTENANCE_STARTED,
LABEL_INTEGRATION,
],
OrganizationLogType.TYPE_MAINTENANCE_STOPPED_FOR_ORGANIZATION: [
LABEL_MAINTENANCE,
LABEL_MAINTENANCE_STOPPED,
LABEL_ORGANIZATION,
],
OrganizationLogType.TYPE_MAINTENANCE_STOPPED_FOR_INTEGRATION: [
LABEL_MAINTENANCE,
LABEL_MAINTENANCE_STOPPED,
LABEL_INTEGRATION,
],
OrganizationLogType.TYPE_MAINTENANCE_DEBUG_STARTED_FOR_ORGANIZATION: [
LABEL_DEBUG,
LABEL_DEBUG_STARTED,
LABEL_ORGANIZATION,
],
OrganizationLogType.TYPE_MAINTENANCE_DEBUG_STARTED_FOR_INTEGRATION: [
LABEL_DEBUG,
LABEL_DEBUG_STARTED,
LABEL_INTEGRATION,
],
OrganizationLogType.TYPE_MAINTENANCE_DEBUG_STOPPED_FOR_ORGANIZATION: [
LABEL_DEBUG,
LABEL_DEBUG_STOPPED,
LABEL_ORGANIZATION,
],
OrganizationLogType.TYPE_MAINTENANCE_DEBUG_STOPPED_FOR_INTEGRATION: [
LABEL_DEBUG,
LABEL_DEBUG_STOPPED,
LABEL_INTEGRATION,
],
OrganizationLogType.TYPE_CUSTOM_ACTION_CREATED: [LABEL_CUSTOM_ACTION, LABEL_CUSTOM_ACTION_CREATED],
OrganizationLogType.TYPE_CUSTOM_ACTION_DELETED: [LABEL_CUSTOM_ACTION, LABEL_CUSTOM_ACTION_DELETED],
OrganizationLogType.TYPE_CUSTOM_ACTION_CHANGED: [LABEL_CUSTOM_ACTION, LABEL_CUSTOM_ACTION_CHANGED],
OrganizationLogType.TYPE_SCHEDULE_CREATED: [LABEL_SCHEDULE, LABEL_SCHEDULE_CREATED],
OrganizationLogType.TYPE_SCHEDULE_DELETED: [LABEL_SCHEDULE, LABEL_SCHEDULE_DELETED],
OrganizationLogType.TYPE_SCHEDULE_CHANGED: [LABEL_SCHEDULE, LABEL_SCHEDULE_CHANGED],
OrganizationLogType.TYPE_ON_CALL_SHIFT_CREATED: [LABEL_ON_CALL_SHIFT, LABEL_ON_CALL_SHIFT_CREATED],
OrganizationLogType.TYPE_ON_CALL_SHIFT_DELETED: [LABEL_ON_CALL_SHIFT, LABEL_ON_CALL_SHIFT_DELETED],
OrganizationLogType.TYPE_ON_CALL_SHIFT_CHANGED: [LABEL_ON_CALL_SHIFT, LABEL_ON_CALL_SHIFT_CHANGED],
OrganizationLogType.TYPE_NEW_USER_ADDED: [LABEL_USER, LABEL_USER_CREATED],
OrganizationLogType.TYPE_ORGANIZATION_SETTINGS_CHANGED: [
LABEL_ORGANIZATION,
LABEL_ORGANIZATION_SETTINGS_CHANGED,
],
OrganizationLogType.TYPE_USER_SETTINGS_CHANGED: [LABEL_USER, LABEL_USER_SETTINGS_CHANGED],
OrganizationLogType.TYPE_TELEGRAM_TO_USER_CONNECTED: [LABEL_TELEGRAM, LABEL_TELEGRAM_TO_USER_CONNECTED],
OrganizationLogType.TYPE_TELEGRAM_FROM_USER_DISCONNECTED: [
LABEL_TELEGRAM,
LABEL_TELEGRAM_FROM_USER_DISCONNECTED,
],
OrganizationLogType.TYPE_API_TOKEN_CREATED: [LABEL_API_TOKEN, LABEL_API_TOKEN_CREATED],
OrganizationLogType.TYPE_API_TOKEN_REVOKED: [LABEL_API_TOKEN, LABEL_API_TOKEN_REVOKED],
OrganizationLogType.TYPE_ESCALATION_CHAIN_COPIED: [LABEL_ESCALATION_CHAIN, LABEL_ESCALATION_CHAIN_COPIED],
OrganizationLogType.TYPE_SCHEDULE_EXPORT_TOKEN_CREATED: [
LABEL_SCHEDULE_EXPORT_TOKEN,
LABEL_SCHEDULE_EXPORT_TOKEN_CREATED,
],
OrganizationLogType.TYPE_MESSAGING_BACKEND_CHANNEL_CHANGED: [LABEL_MESSAGING_BACKEND_CHANNEL_CHANGED],
OrganizationLogType.TYPE_MESSAGING_BACKEND_CHANNEL_DELETED: [LABEL_MESSAGING_BACKEND_CHANNEL_DELETED],
OrganizationLogType.TYPE_MESSAGING_BACKEND_USER_DISCONNECTED: [LABEL_MESSAGING_BACKEND_USER_DISCONNECTED],
}
public_primary_key = models.CharField(
max_length=20,
validators=[MinLengthValidator(settings.PUBLIC_PRIMARY_KEY_MIN_LENGTH + 1)],
unique=True,
default=generate_public_primary_key_for_organization_log,
)
organization = models.ForeignKey(
"user_management.Organization", on_delete=models.CASCADE, related_name="log_records"
)
author = models.ForeignKey(
"user_management.User",
on_delete=models.SET_NULL,
related_name="team_log_records",
default=None,
null=True,
)
created_at = models.DateTimeField(auto_now_add=True)
description = models.TextField(null=True, default=None)
_labels = JSONField(default=list)
@property
def labels(self):
return self._labels
@staticmethod
def get_log_type_and_maintainable_object_verbal(maintainable_obj, mode, verbal, stopped=False):
AlertReceiveChannel = apps.get_model("alerts", "AlertReceiveChannel")
Organization = apps.get_model("user_management", "Organization")
object_verbal_map = {
AlertReceiveChannel: f"integration {emojize(verbal, use_aliases=True)}",
Organization: "organization",
}
if stopped:
log_type_map = {
AlertReceiveChannel: {
MaintainableObject.DEBUG_MAINTENANCE: OrganizationLogType.TYPE_MAINTENANCE_DEBUG_STOPPED_FOR_INTEGRATION,
MaintainableObject.MAINTENANCE: OrganizationLogType.TYPE_MAINTENANCE_STOPPED_FOR_INTEGRATION,
},
Organization: {
MaintainableObject.DEBUG_MAINTENANCE: OrganizationLogType.TYPE_MAINTENANCE_DEBUG_STOPPED_FOR_ORGANIZATION,
MaintainableObject.MAINTENANCE: OrganizationLogType.TYPE_MAINTENANCE_STOPPED_FOR_ORGANIZATION,
},
}
else:
log_type_map = {
AlertReceiveChannel: {
MaintainableObject.DEBUG_MAINTENANCE: OrganizationLogType.TYPE_MAINTENANCE_DEBUG_STARTED_FOR_INTEGRATION,
MaintainableObject.MAINTENANCE: OrganizationLogType.TYPE_MAINTENANCE_STARTED_FOR_INTEGRATION,
},
Organization: {
MaintainableObject.DEBUG_MAINTENANCE: OrganizationLogType.TYPE_MAINTENANCE_DEBUG_STARTED_FOR_ORGANIZATION,
MaintainableObject.MAINTENANCE: OrganizationLogType.TYPE_MAINTENANCE_STARTED_FOR_ORGANIZATION,
},
}
log_type = log_type_map[type(maintainable_obj)][mode]
object_verbal = object_verbal_map[type(maintainable_obj)]
return log_type, object_verbal

View file

@@ -1,16 +1,17 @@
from enum import unique
from typing import Tuple
from django.conf import settings
from django.core.exceptions import ValidationError
from django.core.validators import MinLengthValidator
from django.db import models, transaction
from django.db import models
from django.db.models import Q, QuerySet
from django.utils import timezone
from ordered_model.models import OrderedModel
from apps.base.messaging import get_messaging_backends
from apps.user_management.models import User
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.exceptions import UserNotificationPolicyCouldNotBeDeleted
from common.public_primary_keys import generate_public_primary_key, increase_public_primary_key_length
@@ -30,13 +31,13 @@ def generate_public_primary_key_for_notification_policy():
# base supported notification backends
BUILT_IN_BACKENDS = (
"SLACK",
"SMS",
"PHONE_CALL",
"TELEGRAM",
"EMAIL",
"MOBILE_PUSH_GENERAL",
"MOBILE_PUSH_CRITICAL",
("SLACK", 0),
("SMS", 1),
("PHONE_CALL", 2),
("TELEGRAM", 3),
("EMAIL", 4),
("MOBILE_PUSH_GENERAL", 5),
("MOBILE_PUSH_CRITICAL", 6),
)
@@ -49,10 +50,10 @@ def _notification_channel_choices():
# use NotificationChannelOptions.AVAILABLE_FOR_USE instead.
supported_backends = list(BUILT_IN_BACKENDS)
for backend_id, _ in get_messaging_backends():
supported_backends.append(backend_id)
for backend_id, backend in get_messaging_backends():
supported_backends.append((backend_id, backend.notification_channel_id))
channels_enum = models.IntegerChoices("NotificationChannel", supported_backends, start=0)
channels_enum = unique(models.IntegerChoices("NotificationChannel", supported_backends))
return channels_enum
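The hunk above replaces bare backend names with explicit `(name, value)` pairs and wraps the resulting enum in `enum.unique`. Outside Django, the same idea can be sketched with a plain `IntEnum` (Django's `models.IntegerChoices` builds on the same functional enum API); the pairs below are the ones shown in the diff, and the duplicate-id check is the point of the `unique` wrapper:

```python
from enum import IntEnum, unique

# Explicit (name, value) pairs keep channel ids stable even if the tuple
# is reordered; previously the ids were implied by position in the tuple.
BUILT_IN_BACKENDS = [
    ("SLACK", 0),
    ("SMS", 1),
    ("PHONE_CALL", 2),
    ("TELEGRAM", 3),
    ("EMAIL", 4),
    ("MOBILE_PUSH_GENERAL", 5),
    ("MOBILE_PUSH_CRITICAL", 6),
]

# Stand-in for models.IntegerChoices: the functional enum API accepts
# (name, value) pairs, and unique() raises ValueError if two members share
# a value, e.g. a messaging backend reusing a built-in channel id.
NotificationChannel = unique(IntEnum("NotificationChannel", BUILT_IN_BACKENDS))
```

With this shape, a clashing `notification_channel_id` coming from `get_messaging_backends()` fails loudly at definition time instead of silently renumbering existing channels.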
@@ -69,33 +70,6 @@ def validate_channel_choice(value):
class UserNotificationPolicyQuerySet(models.QuerySet):
def get_or_create_for_user(self, user: User, important: bool) -> "QuerySet[UserNotificationPolicy]":
with transaction.atomic():
User.objects.select_for_update().get(pk=user.pk)
return self._get_or_create_for_user(user, important)
def _get_or_create_for_user(self, user: User, important: bool) -> "QuerySet[UserNotificationPolicy]":
notification_policies = super().filter(user=user, important=important)
if notification_policies.exists():
return notification_policies
old_state = user.repr_settings_for_client_side_logging
if important:
policies = self.create_important_policies_for_user(user)
else:
policies = self.create_default_policies_for_user(user)
new_state = user.repr_settings_for_client_side_logging
description = f"User settings for user {user.username} was changed from:\n{old_state}\nto:\n{new_state}"
create_organization_log(
user.organization,
None,
OrganizationLogType.TYPE_USER_SETTINGS_CHANGED,
description,
)
return policies
def create_default_policies_for_user(self, user: User) -> "QuerySet[UserNotificationPolicy]":
model = self.model
@@ -206,6 +180,12 @@ class UserNotificationPolicy(OrderedModel):
else:
return "Not set"
def delete(self):
if UserNotificationPolicy.objects.filter(important=self.important, user=self.user).count() == 1:
raise UserNotificationPolicyCouldNotBeDeleted("Can't delete last user notification policy")
else:
super().delete()
class NotificationChannelOptions:
"""

View file

@@ -1,6 +1,6 @@
import factory
from apps.base.models import LiveSetting, OrganizationLogRecord, UserNotificationPolicy, UserNotificationPolicyLogRecord
from apps.base.models import LiveSetting, UserNotificationPolicy, UserNotificationPolicyLogRecord
class UserNotificationPolicyFactory(factory.DjangoModelFactory):
@@ -13,13 +13,6 @@ class UserNotificationPolicyLogRecordFactory(factory.DjangoModelFactory):
model = UserNotificationPolicyLogRecord
class OrganizationLogRecordFactory(factory.DjangoModelFactory):
description = factory.Faker("sentence", nb_words=4)
class Meta:
model = OrganizationLogRecord
class LiveSettingFactory(factory.DjangoModelFactory):
class Meta:
model = LiveSetting

View file

@@ -1,18 +0,0 @@
import pytest
from apps.base.models import OrganizationLogRecord
@pytest.mark.django_db
def test_organization_log_set_general_log_channel(
make_organization_with_slack_team_identity, make_user_for_organization, make_slack_channel
):
organization, slack_team_identity = make_organization_with_slack_team_identity()
user = make_user_for_organization(organization)
slack_channel = make_slack_channel(slack_team_identity)
organization.set_general_log_channel(slack_channel.slack_id, slack_channel.name, user)
assert organization.log_records.filter(
_labels=[OrganizationLogRecord.LABEL_SLACK, OrganizationLogRecord.LABEL_DEFAULT_CHANNEL]
).exists()

View file

@@ -9,6 +9,7 @@ from apps.base.models.user_notification_policy import (
validate_channel_choice,
)
from apps.base.tests.messaging_backend import TestOnlyBackend
from common.exceptions import UserNotificationPolicyCouldNotBeDeleted
@pytest.mark.parametrize(
@@ -80,3 +81,25 @@ def test_extra_messaging_backends_details():
)
assert validate_channel_choice(channel_choice) is None
@pytest.mark.django_db
def test_unable_to_delete_last_notification_policy(
make_organization,
make_user_for_organization,
make_user_notification_policy,
):
organization = make_organization()
user = make_user_for_organization(organization)
first_policy = make_user_notification_policy(
user, UserNotificationPolicy.Step.NOTIFY, notify_by=UserNotificationPolicy.NotificationChannel.SLACK
)
second_policy = make_user_notification_policy(
user, UserNotificationPolicy.Step.WAIT, wait_delay=timedelta(minutes=5)
)
first_policy.delete()
with pytest.raises(UserNotificationPolicyCouldNotBeDeleted):
second_policy.delete()

View file

@@ -1,7 +1,6 @@
import logging
from urllib.parse import urljoin
import humanize
from django.conf import settings
from django.core.validators import MinLengthValidator
from django.db import models, transaction
@@ -171,14 +170,6 @@ class IntegrationHeartBeat(BaseHeartBeat):
"alerts.AlertReceiveChannel", on_delete=models.CASCADE, related_name="integration_heartbeat"
)
@property
def repr_settings_for_client_side_logging(self):
"""
Example of execution:
timeout: 30 minutes
"""
return f"timeout: {humanize.naturaldelta(self.timeout_seconds)}"
@property
def is_expired(self):
if self.last_heartbeat_time is not None:
@@ -242,3 +233,25 @@ class IntegrationHeartBeat(BaseHeartBeat):
(43200, "12 hours"),
(86400, "1 day"),
)
# Insight logs
@property
def insight_logs_type_verbal(self):
return "integration_heartbeat"
@property
def insight_logs_verbal(self):
return f"Integration Heartbeat for {self.alert_receive_channel.insight_logs_verbal}"
@property
def insight_logs_serialized(self):
return {
"timeout": self.timeout_seconds,
}
@property
def insight_logs_metadata(self):
return {
"integration": self.alert_receive_channel.insight_logs_verbal,
"integration_id": self.alert_receive_channel.public_primary_key,
}

View file

@@ -6,10 +6,10 @@ from apps.alerts.models import CustomButton
from apps.auth_token.auth import ApiTokenAuthentication
from apps.public_api.serializers.action import ActionCreateSerializer, ActionUpdateSerializer
from apps.public_api.throttlers.user_throttle import UserThrottle
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.filters import ByTeamFilter
from common.api_helpers.mixins import PublicPrimaryKeyMixin, RateLimitHeadersMixin, UpdateSerializerMixin
from common.api_helpers.paginators import FiftyPageSizePaginator
from common.insight_log import EntityEvent, write_resource_insight_log
class ActionView(RateLimitHeadersMixin, PublicPrimaryKeyMixin, UpdateSerializerMixin, ModelViewSet):
@@ -36,24 +36,28 @@ class ActionView(RateLimitHeadersMixin, PublicPrimaryKeyMixin, UpdateSerializerM
def perform_create(self, serializer):
serializer.save()
instance = serializer.instance
organization = self.request.auth.organization
user = self.request.user
description = f"Custom action {instance.name} was created"
create_organization_log(organization, user, OrganizationLogType.TYPE_CUSTOM_ACTION_CREATED, description)
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.CREATED,
)
def perform_update(self, serializer):
organization = self.request.auth.organization
user = self.request.user
old_state = serializer.instance.repr_settings_for_client_side_logging
prev_state = serializer.instance.insight_logs_serialized
serializer.save()
new_state = serializer.instance.repr_settings_for_client_side_logging
description = f"Custom action {serializer.instance.name} was changed " f"from:\n{old_state}\nto:\n{new_state}"
create_organization_log(organization, user, OrganizationLogType.TYPE_CUSTOM_ACTION_CHANGED, description)
new_state = serializer.instance.insight_logs_serialized
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
def perform_destroy(self, instance):
organization = self.request.auth.organization
user = self.request.user
description = f"Custom action {instance.name} was deleted"
create_organization_log(organization, user, OrganizationLogType.TYPE_CUSTOM_ACTION_DELETED, description)
write_resource_insight_log(
instance=instance,
author=self.request.user,
event=EntityEvent.DELETED,
)
instance.delete()

View file

@@ -8,10 +8,10 @@ from apps.auth_token.auth import ApiTokenAuthentication
from apps.public_api.serializers import EscalationChainSerializer
from apps.public_api.serializers.escalation_chains import EscalationChainUpdateSerializer
from apps.public_api.throttlers.user_throttle import UserThrottle
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.filters import ByTeamFilter
from common.api_helpers.mixins import RateLimitHeadersMixin, UpdateSerializerMixin
from common.api_helpers.paginators import FiftyPageSizePaginator
from common.insight_log import EntityEvent, write_resource_insight_log
class EscalationChainView(RateLimitHeadersMixin, UpdateSerializerMixin, ModelViewSet):
@@ -48,38 +48,29 @@ class EscalationChainView(RateLimitHeadersMixin, UpdateSerializerMixin, ModelVie
def perform_create(self, serializer):
serializer.save()
instance = serializer.instance
description = f"Escalation chain {instance.name} was created"
create_organization_log(
instance.organization,
self.request.user,
OrganizationLogType.TYPE_ESCALATION_CHAIN_CREATED,
description,
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.CREATED,
)
def perform_destroy(self, instance):
instance.delete()
description = f"Escalation chain {instance.name} was deleted"
create_organization_log(
instance.organization,
self.request.user,
OrganizationLogType.TYPE_ESCALATION_CHAIN_DELETED,
description,
write_resource_insight_log(
instance=instance,
author=self.request.user,
event=EntityEvent.DELETED,
)
instance.delete()
def perform_update(self, serializer):
instance = serializer.instance
old_state = instance.repr_settings_for_client_side_logging
prev_state = instance.insight_logs_serialized
serializer.save()
new_state = instance.repr_settings_for_client_side_logging
description = f"Escalation chain {instance.name} was changed from:\n{old_state}\nto:\n{new_state}"
create_organization_log(
instance.organization,
self.request.user,
OrganizationLogType.TYPE_ESCALATION_CHAIN_CHANGED,
description,
new_state = instance.insight_logs_serialized
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)

View file

@@ -7,9 +7,9 @@ from apps.alerts.models import EscalationPolicy
from apps.auth_token.auth import ApiTokenAuthentication
from apps.public_api.serializers import EscalationPolicySerializer, EscalationPolicyUpdateSerializer
from apps.public_api.throttlers.user_throttle import UserThrottle
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.mixins import RateLimitHeadersMixin, UpdateSerializerMixin
from common.api_helpers.paginators import FiftyPageSizePaginator
from common.insight_log import EntityEvent, write_resource_insight_log
class EscalationPolicyView(RateLimitHeadersMixin, UpdateSerializerMixin, ModelViewSet):
@@ -50,36 +50,28 @@ class EscalationPolicyView(RateLimitHeadersMixin, UpdateSerializerMixin, ModelVi
def perform_create(self, serializer):
serializer.save()
instance = serializer.instance
organization = self.request.auth.organization
user = self.request.user
escalation_chain = instance.escalation_chain
description = (
f"Escalation step '{instance.step_type_verbal}' with order {instance.order} was created for "
f"escalation chain '{escalation_chain.name}'"
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.CREATED,
)
create_organization_log(organization, user, OrganizationLogType.TYPE_ESCALATION_STEP_CREATED, description)
def perform_update(self, serializer):
organization = self.request.auth.organization
user = self.request.user
old_state = serializer.instance.repr_settings_for_client_side_logging
prev_state = serializer.instance.insight_logs_serialized
serializer.save()
new_state = serializer.instance.repr_settings_for_client_side_logging
escalation_chain = serializer.instance.escalation_chain
description = (
f"Settings for escalation step of escalation chain '{escalation_chain.name}' was changed "
f"from:\n{old_state}\nto:\n{new_state}"
new_state = serializer.instance.insight_logs_serialized
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
create_organization_log(organization, user, OrganizationLogType.TYPE_ESCALATION_STEP_CHANGED, description)
def perform_destroy(self, instance):
organization = self.request.auth.organization
user = self.request.user
escalation_chain = instance.escalation_chain
description = (
f"Escalation step '{instance.step_type_verbal}' with order {instance.order} of "
f"escalation chain '{escalation_chain.name}' was deleted"
write_resource_insight_log(
instance=instance,
author=self.request.user,
event=EntityEvent.DELETED,
)
create_organization_log(organization, user, OrganizationLogType.TYPE_ESCALATION_STEP_DELETED, description)
instance.delete()

View file

@@ -8,10 +8,10 @@ from apps.alerts.models import AlertReceiveChannel
from apps.auth_token.auth import ApiTokenAuthentication
from apps.public_api.serializers import IntegrationSerializer, IntegrationUpdateSerializer
from apps.public_api.throttlers.user_throttle import UserThrottle
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.filters import ByTeamFilter
from common.api_helpers.mixins import FilterSerializerMixin, RateLimitHeadersMixin, UpdateSerializerMixin
from common.api_helpers.paginators import FiftyPageSizePaginator
from common.insight_log import EntityEvent, write_resource_insight_log
from .maintaiable_object_mixin import MaintainableObjectMixin
@@ -58,20 +58,17 @@ class IntegrationView(
raise NotFound
def perform_update(self, serializer):
old_state = serializer.instance.repr_settings_for_client_side_logging
prev_state = serializer.instance.insight_logs_serialized
serializer.save()
new_state = serializer.instance.repr_settings_for_client_side_logging
description = f"Integration settings was changed from:\n{old_state}\nto:\n{new_state}"
create_organization_log(
serializer.instance.organization,
self.request.user,
OrganizationLogType.TYPE_INTEGRATION_CHANGED,
description,
new_state = serializer.instance.insight_logs_serialized
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
def perform_destroy(self, instance):
organization = instance.organization
user = self.request.user
description = f"Integration {instance.verbal_name} was deleted"
create_organization_log(organization, user, OrganizationLogType.TYPE_INTEGRATION_DELETED, description)
write_resource_insight_log(instance=instance, author=self.request.user, event=EntityEvent.DELETED)
instance.delete()

View file

@@ -7,10 +7,10 @@ from apps.auth_token.auth import ApiTokenAuthentication
from apps.public_api.serializers import CustomOnCallShiftSerializer, CustomOnCallShiftUpdateSerializer
from apps.public_api.throttlers.user_throttle import UserThrottle
from apps.schedules.models import CustomOnCallShift
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.filters import ByTeamFilter
from common.api_helpers.mixins import RateLimitHeadersMixin, UpdateSerializerMixin
from common.api_helpers.paginators import FiftyPageSizePaginator
from common.insight_log import EntityEvent, write_resource_insight_log
class CustomOnCallShiftView(RateLimitHeadersMixin, UpdateSerializerMixin, ModelViewSet):
@@ -52,28 +52,28 @@ class CustomOnCallShiftView(RateLimitHeadersMixin, UpdateSerializerMixin, ModelV
def perform_create(self, serializer):
serializer.save()
instance = serializer.instance
organization = self.request.auth.organization
user = self.request.user
description = (
f"Custom on-call shift with params: {instance.repr_settings_for_client_side_logging} " f"was created"
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.CREATED,
)
create_organization_log(organization, user, OrganizationLogType.TYPE_ON_CALL_SHIFT_CREATED, description)
def perform_update(self, serializer):
organization = self.request.auth.organization
user = self.request.user
old_state = serializer.instance.repr_settings_for_client_side_logging
prev_state = serializer.instance.insight_logs_serialized
serializer.save()
new_state = serializer.instance.repr_settings_for_client_side_logging
description = f"Settings of custom on-call shift was changed " f"from:\n{old_state}\nto:\n{new_state}"
create_organization_log(organization, user, OrganizationLogType.TYPE_ON_CALL_SHIFT_CHANGED, description)
new_state = serializer.instance.insight_logs_serialized
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
def perform_destroy(self, instance):
organization = self.request.auth.organization
user = self.request.user
description = (
f"Custom on-call shift " f"with params: {instance.repr_settings_for_client_side_logging} was deleted"
write_resource_insight_log(
instance=instance,
author=self.request.user,
event=EntityEvent.DELETED,
)
create_organization_log(organization, user, OrganizationLogType.TYPE_ON_CALL_SHIFT_DELETED, description)
instance.delete()

View file

@@ -9,10 +9,11 @@ from apps.base.models import UserNotificationPolicy
from apps.public_api.serializers import PersonalNotificationRuleSerializer, PersonalNotificationRuleUpdateSerializer
from apps.public_api.throttlers.user_throttle import UserThrottle
from apps.user_management.models import User
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.exceptions import BadRequest
from common.api_helpers.mixins import RateLimitHeadersMixin, UpdateSerializerMixin
from common.api_helpers.paginators import FiftyPageSizePaginator
from common.exceptions import UserNotificationPolicyCouldNotBeDeleted
from common.insight_log import EntityEvent, write_resource_insight_log
class PersonalNotificationView(RateLimitHeadersMixin, UpdateSerializerMixin, ModelViewSet):
@@ -72,45 +73,43 @@ class PersonalNotificationView(RateLimitHeadersMixin, UpdateSerializerMixin, Mod
return Response(status=status.HTTP_204_NO_CONTENT)
def perform_destroy(self, instance):
organization = self.request.auth.organization
user = self.request.user
old_state = user.repr_settings_for_client_side_logging
instance.delete()
new_state = user.repr_settings_for_client_side_logging
description = f"User settings for user {user.username} was changed from:\n{old_state}\nto:\n{new_state}"
create_organization_log(
organization,
user,
OrganizationLogType.TYPE_USER_SETTINGS_CHANGED,
description,
prev_state = user.insight_logs_serialized
try:
instance.delete()
except UserNotificationPolicyCouldNotBeDeleted:
raise BadRequest(detail="Can't delete last user notification policy")
new_state = user.insight_logs_serialized
write_resource_insight_log(
instance=user,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
def perform_create(self, serializer):
organization = self.request.auth.organization
author = self.request.user
user = serializer.validated_data["user"]
old_state = user.repr_settings_for_client_side_logging
prev_state = user.insight_logs_serialized
serializer.save()
new_state = user.repr_settings_for_client_side_logging
description = f"User settings for user {user.username} was changed from:\n{old_state}\nto:\n{new_state}"
create_organization_log(
organization,
author,
OrganizationLogType.TYPE_USER_SETTINGS_CHANGED,
description,
new_state = user.insight_logs_serialized
write_resource_insight_log(
instance=user,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
def perform_update(self, serializer):
organization = self.request.auth.organization
user = self.request.user
old_state = user.repr_settings_for_client_side_logging
prev_state = user.insight_logs_serialized
serializer.save()
new_state = user.repr_settings_for_client_side_logging
description = f"User settings for user {user.username} was changed from:\n{old_state}\nto:\n{new_state}"
create_organization_log(
organization,
user,
OrganizationLogType.TYPE_USER_SETTINGS_CHANGED,
description,
new_state = user.insight_logs_serialized
write_resource_insight_log(
instance=user,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
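Each update handler snapshots `insight_logs_serialized` before and after `serializer.save()`. A consumer of those two snapshots can derive exactly which fields changed; a minimal sketch (the `diff_states` helper is hypothetical, not part of this codebase):

```python
def diff_states(prev_state: dict, new_state: dict) -> dict:
    """Return {field: (old, new)} for every field that differs."""
    keys = set(prev_state) | set(new_state)
    return {
        k: (prev_state.get(k), new_state.get(k))
        for k in keys
        if prev_state.get(k) != new_state.get(k)
    }

prev = {"name": "rule-1", "important": False, "order": 0}
new = {"name": "rule-1", "important": True, "order": 0}
assert diff_states(prev, new) == {"important": (False, True)}
```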

View file

@ -9,10 +9,10 @@ from apps.alerts.models import ChannelFilter
from apps.auth_token.auth import ApiTokenAuthentication
from apps.public_api.serializers import ChannelFilterSerializer, ChannelFilterUpdateSerializer
from apps.public_api.throttlers.user_throttle import UserThrottle
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.exceptions import BadRequest
from common.api_helpers.mixins import RateLimitHeadersMixin, UpdateSerializerMixin
from common.api_helpers.paginators import TwentyFivePageSizePaginator
from common.insight_log import EntityEvent, write_resource_insight_log
class ChannelFilterView(RateLimitHeadersMixin, UpdateSerializerMixin, ModelViewSet):
@ -60,43 +60,30 @@ class ChannelFilterView(RateLimitHeadersMixin, UpdateSerializerMixin, ModelViewS
if instance.is_default:
raise BadRequest(detail="Unable to delete default filter")
else:
alert_receive_channel = instance.alert_receive_channel
user = self.request.user
route_verbal = instance.verbal_name_for_clients.capitalize()
description = f"{route_verbal} of integration {alert_receive_channel.verbal_name} was deleted"
create_organization_log(
alert_receive_channel.organization,
user,
OrganizationLogType.TYPE_CHANNEL_FILTER_DELETED,
description,
write_resource_insight_log(
instance=instance,
author=self.request.user,
event=EntityEvent.DELETED,
)
self.perform_destroy(instance)
return Response(status=status.HTTP_204_NO_CONTENT)
def perform_create(self, serializer):
serializer.save()
instance = serializer.instance
alert_receive_channel = instance.alert_receive_channel
user = self.request.user
route_verbal = instance.verbal_name_for_clients.capitalize()
description = f"{route_verbal} was created for integration {alert_receive_channel.verbal_name}"
create_organization_log(
alert_receive_channel.organization,
user,
OrganizationLogType.TYPE_CHANNEL_FILTER_CREATED,
description,
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.CREATED,
)
def perform_update(self, serializer):
organization = self.request.auth.organization
user = self.request.user
old_state = serializer.instance.repr_settings_for_client_side_logging
prev_state = serializer.instance.insight_logs_serialized
serializer.save()
new_state = serializer.instance.repr_settings_for_client_side_logging
alert_receive_channel = serializer.instance.alert_receive_channel
route_verbal = serializer.instance.verbal_name_for_clients.capitalize()
description = (
f"Settings for {route_verbal} of integration {alert_receive_channel.verbal_name} "
f"was changed from:\n{old_state}\nto:\n{new_state}"
new_state = serializer.instance.insight_logs_serialized
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
create_organization_log(organization, user, OrganizationLogType.TYPE_CHANNEL_FILTER_CHANGED, description)

View file

@ -13,11 +13,11 @@ from apps.public_api.throttlers.user_throttle import UserThrottle
from apps.schedules.ical_utils import ical_export_from_schedule
from apps.schedules.models import OnCallSchedule, OnCallScheduleWeb
from apps.slack.tasks import update_slack_user_group_for_schedules
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.api_helpers.exceptions import BadRequest
from common.api_helpers.filters import ByTeamFilter
from common.api_helpers.mixins import RateLimitHeadersMixin, UpdateSerializerMixin
from common.api_helpers.paginators import FiftyPageSizePaginator
from common.insight_log import EntityEvent, write_resource_insight_log
class OnCallScheduleChannelView(RateLimitHeadersMixin, UpdateSerializerMixin, ModelViewSet):
@ -65,18 +65,17 @@ class OnCallScheduleChannelView(RateLimitHeadersMixin, UpdateSerializerMixin, Mo
if instance.user_group is not None:
update_slack_user_group_for_schedules.apply_async((instance.user_group.pk,))
organization = self.request.auth.organization
user = self.request.user
description = f"Schedule {instance.name} was created"
create_organization_log(organization, user, OrganizationLogType.TYPE_SCHEDULE_CREATED, description)
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.CREATED,
)
def perform_update(self, serializer):
if isinstance(serializer.instance, OnCallScheduleWeb):
raise BadRequest(detail="Web schedule update is not enabled through API")
organization = self.request.auth.organization
user = self.request.user
old_state = serializer.instance.repr_settings_for_client_side_logging
prev_state = serializer.instance.insight_logs_serialized
old_user_group = serializer.instance.user_group
updated_schedule = serializer.save()
@ -87,15 +86,21 @@ class OnCallScheduleChannelView(RateLimitHeadersMixin, UpdateSerializerMixin, Mo
if updated_schedule.user_group is not None and updated_schedule.user_group != old_user_group:
update_slack_user_group_for_schedules.apply_async((updated_schedule.user_group.pk,))
new_state = serializer.instance.repr_settings_for_client_side_logging
description = f"Schedule {serializer.instance.name} was changed from:\n{old_state}\nto:\n{new_state}"
create_organization_log(organization, user, OrganizationLogType.TYPE_SCHEDULE_CHANGED, description)
new_state = serializer.instance.insight_logs_serialized
write_resource_insight_log(
instance=serializer.instance,
author=self.request.user,
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
def perform_destroy(self, instance):
organization = self.request.auth.organization
user = self.request.user
description = f"Schedule {instance.name} was deleted"
create_organization_log(organization, user, OrganizationLogType.TYPE_SCHEDULE_DELETED, description)
write_resource_insight_log(
instance=instance,
author=self.request.user,
event=EntityEvent.DELETED,
)
instance.delete()

View file

@ -381,23 +381,6 @@ class CustomOnCallShift(models.Model):
days_for_next_event += next_month_days
next_event_start = current_event_start + timezone.timedelta(days=days_for_next_event)
end_date = None
# get the period for calculating the current rotation end date for long events with frequency weekly and monthly
if self.frequency == CustomOnCallShift.FREQUENCY_WEEKLY:
DAYS_IN_A_WEEK = 7
days_diff = 0
# get the last day of the week with respect to the week_start
if next_event_start.weekday() != self.week_start:
days_diff = DAYS_IN_A_WEEK + next_event_start.weekday() - self.week_start
days_diff %= DAYS_IN_A_WEEK
end_date = next_event_start + timezone.timedelta(days=DAYS_IN_A_WEEK - days_diff - ONE_DAY)
elif self.frequency == CustomOnCallShift.FREQUENCY_MONTHLY:
# get the last day of the month
current_day_number = next_event_start.day
number_of_days = monthrange(next_event_start.year, next_event_start.month)[1]
days_diff = number_of_days - current_day_number
end_date = next_event_start + timezone.timedelta(days=days_diff)
next_event = None
# repetitions generate the next event shift according to the recurrence rules
repetitions = UnfoldableCalendar(current_event).RepeatedEvent(
@ -405,21 +388,12 @@ class CustomOnCallShift(models.Model):
)
ical_iter = repetitions.__iter__()
for event in ical_iter:
if end_date: # end_date exists for long events with frequency weekly and monthly
if end_date >= event.start >= next_event_start:
if event.start >= self.rotation_start:
next_event = event
break
else:
break
else:
if event.start >= next_event_start:
next_event = event
break
if event.start >= next_event_start:
next_event = event
break
next_event_dt = next_event.start if next_event is not None else None
next_event_dt = next_event.start if next_event is not None else next_event_start
if self.until and next_event_dt > self.until:
if self.until and next_event_dt and next_event_dt > self.until:
return
return next_event_dt
@ -539,3 +513,65 @@ class CustomOnCallShift(models.Model):
name = f"{schedule.name}-{shift_type_name}-{priority_level}-"
name += "".join(random.choice(string.ascii_lowercase) for _ in range(5))
return name
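The name generation above appends five random lowercase letters to a schedule/type/priority prefix. A self-contained sketch of the same idea (the function name and prefix layout here are illustrative, not the exact method from the model):

```python
import random
import string

def make_shift_name(schedule_name: str, shift_type: str, priority: int) -> str:
    # A random 5-letter suffix keeps auto-generated shift names
    # distinguishable without coordinating a counter.
    suffix = "".join(random.choice(string.ascii_lowercase) for _ in range(5))
    return f"{schedule_name}-{shift_type}-{priority}-{suffix}"

name = make_shift_name("primary", "rolling_users", 1)
assert name.startswith("primary-rolling_users-1-")
assert len(name.rsplit("-", 1)[1]) == 5
```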
# Insight logs
@property
def insight_logs_type_verbal(self):
return "oncall_shift"
@property
def insight_logs_verbal(self):
return self.name
@property
def insight_logs_serialized(self):
users_verbal = []
if self.type == CustomOnCallShift.TYPE_ROLLING_USERS_EVENT:
if self.rolling_users is not None:
for users_dict in self.rolling_users:
users = self.organization.users.filter(public_primary_key__in=users_dict.values())
users_verbal.extend([user.username for user in users])
else:
users = self.users.all()
users_verbal = [user.username for user in users]
result = {
"name": self.name,
"source": self.get_source_display(),
"type": self.get_type_display(),
"users": users_verbal,
"start": self.start.isoformat(),
"duration": self.duration.seconds,
"priority_level": self.priority_level,
}
if self.type not in (CustomOnCallShift.TYPE_SINGLE_EVENT, CustomOnCallShift.TYPE_OVERRIDE):
result["frequency"] = self.get_frequency_display()
result["interval"] = self.interval
result["week_start"] = self.week_start
result["by_day"] = self.by_day
result["by_month"] = self.by_month
result["by_monthday"] = self.by_monthday
result["rotation_start"] = self.rotation_start.isoformat()
if self.until:
result["until"] = self.until.isoformat()
if self.team:
result["team"] = self.team.name
result["team_id"] = self.team.public_primary_key
else:
result["team"] = "General"
if self.time_zone:
result["time_zone"] = self.time_zone
return result
@property
def insight_logs_metadata(self):
result = {}
if self.team:
result["team"] = self.team.name
result["team_id"] = self.team.public_primary_key
else:
result["team"] = "General"
if self.schedule:
result["schedule"] = self.schedule.insight_logs_verbal
result["schedule_id"] = self.schedule.public_primary_key
return result
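One detail worth noting in the serialized state above: `self.duration.seconds` only captures the sub-day remainder of a `timedelta`, so a shift lasting a day or more would serialize misleadingly; `total_seconds()` avoids that wrap-around:

```python
from datetime import timedelta

short = timedelta(hours=2)
long = timedelta(days=1, hours=2)

assert short.seconds == 7200
# .seconds wraps at one day: the day component is dropped entirely
assert long.seconds == 7200
assert long.total_seconds() == 93600.0
```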

View file

@ -133,36 +133,6 @@ class OnCallSchedule(PolymorphicModel):
class Meta:
unique_together = ("name", "organization")
@property
def repr_settings_for_client_side_logging(self):
"""
Example of execution:
name: test, team: example, url: None
slack reminder settings: notification frequency: Each shift, current shift notification: Yes,
next shift notification: No, action for slot when no one is on-call: Notify all people in the channel
"""
result = f"name: {self.name}, team: {self.team.name if self.team else 'No team'}"
if self.organization.slack_team_identity:
if self.channel:
SlackChannel = apps.get_model("slack", "SlackChannel")
sti = self.organization.slack_team_identity
slack_channel = SlackChannel.objects.filter(slack_team_identity=sti, slack_id=self.channel).first()
if slack_channel:
result += f", slack channel: {slack_channel.name}"
if self.user_group is not None:
result += f", user group: {self.user_group.handle}"
result += (
f"\nslack reminder settings: "
f"notification frequency: {self.get_notify_oncall_shift_freq_display()}, "
f"current shift notification: {'Yes' if self.mention_oncall_start else 'No'}, "
f"next shift notification: {'Yes' if self.mention_oncall_next else 'No'}, "
f"action for slot when no one is on-call: {self.get_notify_empty_oncall_display()}"
)
return result
def get_icalendars(self):
"""Returns list of calendars. Primary calendar should always be the first"""
calendar_primary = None
@ -368,6 +338,47 @@ class OnCallSchedule(PolymorphicModel):
resolved.sort(key=lambda e: (e["start"], e["shift"]["pk"]))
return resolved
# Insight logs
@property
def insight_logs_verbal(self):
return self.name
@property
def insight_logs_serialized(self):
result = {
"name": self.name,
}
if self.team:
result["team"] = self.team.name
result["team_id"] = self.team.public_primary_key
else:
result["team"] = "General"
if self.organization.slack_team_identity:
if self.channel:
SlackChannel = apps.get_model("slack", "SlackChannel")
sti = self.organization.slack_team_identity
slack_channel = SlackChannel.objects.filter(slack_team_identity=sti, slack_id=self.channel).first()
if slack_channel:
result["slack_channel"] = slack_channel.name
if self.user_group is not None:
result["user_group"] = self.user_group.handle
result["notification_frequency"] = self.get_notify_oncall_shift_freq_display()
result["current_shift_notification"] = self.mention_oncall_start
result["next_shift_notification"] = self.mention_oncall_next
result["notify_empty_oncall"] = self.get_notify_empty_oncall_display()
return result
@property
def insight_logs_metadata(self):
result = {}
if self.team:
result["team"] = self.team.name
result["team_id"] = self.team.public_primary_key
else:
result["team"] = "General"
return result
class OnCallScheduleICal(OnCallSchedule):
# For the ical schedule both primary and overrides icals are imported via ical url
@ -421,13 +432,17 @@ class OnCallScheduleICal(OnCallSchedule):
)
self.save(update_fields=["cached_ical_file_overrides", "prev_ical_file_overrides", "ical_file_error_overrides"])
# Insight logs
@property
def repr_settings_for_client_side_logging(self):
result = super().repr_settings_for_client_side_logging
result += (
f", primary calendar url: {self.ical_url_primary}, " f"overrides calendar url: {self.ical_url_overrides}"
)
return result
def insight_logs_serialized(self):
res = super().insight_logs_serialized
res["primary_calendar_url"] = self.ical_url_primary
res["overrides_calendar_url"] = self.ical_url_overrides
return res
@property
def insight_logs_type_verbal(self):
return "ical_schedule"
class OnCallScheduleCalendar(OnCallSchedule):
@ -501,10 +516,14 @@ class OnCallScheduleCalendar(OnCallSchedule):
return ical
@property
def repr_settings_for_client_side_logging(self):
result = super().repr_settings_for_client_side_logging
result += f", overrides calendar url: {self.ical_url_overrides}"
return result
def insight_logs_type_verbal(self):
return "calendar_schedule"
@property
def insight_logs_serialized(self):
res = super().insight_logs_serialized
res["overrides_calendar_url"] = self.ical_url_overrides
return res
class OnCallScheduleWeb(OnCallSchedule):
@ -598,3 +617,14 @@ class OnCallScheduleWeb(OnCallSchedule):
setattr(self, ical_attr, original_value)
return shift_events, final_events
# Insight logs
@property
def insight_logs_type_verbal(self):
return "web_schedule"
@property
def insight_logs_serialized(self):
res = super().insight_logs_serialized
res["time_zone"] = self.time_zone
return res

View file

@ -583,20 +583,22 @@ def test_rolling_users_with_diff_start_and_rotation_start_weekly_by_day(
user_3 = make_user_for_organization(organization)
schedule = make_schedule(organization, schedule_class=OnCallScheduleWeb)
now = timezone.now().replace(microsecond=0)
now = timezone.now().replace(hour=0, minute=0, second=0, microsecond=0)
today_weekday = now.weekday()
weekdays = [(today_weekday + 1) % 7, (today_weekday + 3) % 7]
next_week_monday = now + timezone.timedelta(days=(0 - today_weekday) % 7)
# SAT, SUN
weekdays = [5, 6]
by_day = [CustomOnCallShift.ICAL_WEEKDAY_MAP[day] for day in weekdays]
data = {
"priority_level": 1,
"start": now,
"week_start": today_weekday,
"rotation_start": now + timezone.timedelta(days=8, hours=1),
"week_start": 0,
"rotation_start": next_week_monday,
"duration": timezone.timedelta(seconds=1800),
"frequency": CustomOnCallShift.FREQUENCY_WEEKLY,
"schedule": schedule,
"until": now + timezone.timedelta(days=23, minutes=1),
"until": next_week_monday + timezone.timedelta(days=30, minutes=1),
"by_day": by_day,
}
rolling_users = [[user_1], [user_2], [user_3]]
@ -605,22 +607,16 @@ def test_rolling_users_with_diff_start_and_rotation_start_weekly_by_day(
)
on_call_shift.add_rolling_users(rolling_users)
date = now + timezone.timedelta(minutes=5)
first_sat = next_week_monday + timezone.timedelta(days=5) + timezone.timedelta(minutes=5)
# week 1: weekdays[0] - no (+1 day from start) ; weekdays[1] - no (+3 days from start) user_1
# week 2: weekdays[0] - no (+8 days from start) ; weekdays[1] - yes (+10 days from start) user_2
# week 3: weekdays[0] - yes (+15 days from start) ; weekdays[1] - yes (+17 days from start) user_3
# week 4: weekdays[0] - yes (+22 days from start) ; weekdays[1] - no (+24 days from start) user_1
user_1_on_call_dates = [date + timezone.timedelta(days=22)]
user_2_on_call_dates = [date + timezone.timedelta(days=10)]
user_3_on_call_dates = [date + timezone.timedelta(days=15), date + timezone.timedelta(days=17)]
user_1_on_call_dates = [first_sat + timezone.timedelta(days=15)]
user_2_on_call_dates = [first_sat, first_sat + timezone.timedelta(days=22)]
user_3_on_call_dates = [first_sat + timezone.timedelta(days=7), first_sat + timezone.timedelta(days=8)]
nobody_on_call_dates = [
date, # less than rotation start
date + timezone.timedelta(days=1), # less than rotation start
date + timezone.timedelta(days=3), # less than rotation start
date + timezone.timedelta(days=8), # less than rotation start
date + timezone.timedelta(days=9), # weekday value not in by_day
date + timezone.timedelta(days=24), # higher than until
now, # less than rotation start
first_sat - timezone.timedelta(days=7), # before rotation start
first_sat + timezone.timedelta(days=9), # weekday value not in by_day
first_sat + timezone.timedelta(days=30), # higher than until
]
for dt in user_1_on_call_dates:

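The rewritten test pins the rotation to concrete weekdays instead of offsets from `now`. The expression `(0 - today_weekday) % 7` gives the number of days until the coming Monday — note that despite the `next_week_monday` name, it yields the same day when `now` already falls on a Monday:

```python
from datetime import datetime, timedelta

def next_monday(now: datetime) -> datetime:
    # (0 - weekday) % 7 counts days forward to Monday (weekday() == 0),
    # returning 0 when `now` is itself a Monday.
    return now + timedelta(days=(0 - now.weekday()) % 7)

wed = datetime(2022, 8, 24)  # a Wednesday, weekday() == 2
assert next_monday(wed) == datetime(2022, 8, 29)
# Already a Monday: no shift
assert next_monday(datetime(2022, 8, 29)) == datetime(2022, 8, 29)
```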
View file

@ -137,8 +137,15 @@ class SlackMessage(models.Model):
else:
text = "{}\nInviting {} to look at incident.".format(alert_group.long_verbose_name, user_verbal)
attachments = [
{"color": "#c6c000", "callback_id": "alert", "text": text}, # yellow
blocks = [
{
"type": "section",
"block_id": "alert",
"text": {
"type": "mrkdwn",
"text": text,
},
}
]
sc = SlackClientWithErrorHandling(self.slack_team_identity.bot_access_token)
channel_id = slack_message.channel_id
@ -147,7 +154,8 @@ class SlackMessage(models.Model):
result = sc.api_call(
"chat.postMessage",
channel=channel_id,
attachments=attachments,
text=text,
blocks=blocks,
thread_ts=slack_message.slack_id,
unfurl_links=True,
)
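This hunk swaps a legacy `attachments` payload for Block Kit `blocks`, keeping the plain `text` as the notification fallback. The conversion it applies can be sketched as a small helper (hypothetical, not a function in the diff); note the attachment `color` has no direct Block Kit equivalent and is simply dropped:

```python
def legacy_attachment_to_block(attachment: dict) -> dict:
    """Turn a legacy attachment's text into a Block Kit section block.
    The color bar is dropped; callback_id carries over as block_id."""
    return {
        "type": "section",
        "block_id": attachment.get("callback_id", "alert"),
        "text": {"type": "mrkdwn", "text": attachment["text"]},
    }

block = legacy_attachment_to_block(
    {"color": "#c6c000", "callback_id": "alert", "text": "Inviting bob to look at incident."}
)
assert block["type"] == "section"
assert block["text"]["text"] == "Inviting bob to look at incident."
```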

View file

@ -7,8 +7,8 @@ from django.db.models import JSONField
from apps.slack.constants import SLACK_INVALID_AUTH_RESPONSE, SLACK_WRONG_TEAM_NAMES
from apps.slack.slack_client import SlackClientWithErrorHandling
from apps.slack.slack_client.exceptions import SlackAPIException, SlackAPITokenException
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.constants.role import Role
from common.insight_log.chatops_insight_logs import ChatOpsEvent, ChatOpsType, write_chatops_insight_log
logger = logging.getLogger(__name__)
@ -63,8 +63,9 @@ class SlackTeamIdentity(models.Model):
self.cached_reinstall_data = None
self.installed_via_granular_permissions = True
self.save()
description = f"Slack workspace {self.cached_name} was connected to organization"
create_organization_log(organization, user, OrganizationLogType.TYPE_SLACK_WORKSPACE_CONNECTED, description)
write_chatops_insight_log(
author=user, event_name=ChatOpsEvent.WORKSPACE_CONNECTED, chatops_type=ChatOpsType.SLACK
)
def get_cached_channels(self, search_term=None, slack_id=None):
queryset = self.cached_channels

View file

@ -6,8 +6,8 @@ from jinja2 import TemplateSyntaxError
from rest_framework.response import Response
from apps.slack.scenarios import scenario_step
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.constants.role import Role
from common.insight_log import EntityEvent, write_resource_insight_log
from common.jinja_templater import jinja_template_env
from .step_mixins import CheckAlertIsUnarchivedMixin, IncidentActionsAccessControlMixin
@ -233,7 +233,7 @@ class UpdateAppearanceStep(scenario_step.ScenarioStep):
alert_group = AlertGroup.all_objects.filter(pk=alert_group_pk).select_for_update().get()
integration = alert_group.channel.integration
alert_receive_channel = alert_group.channel
old_state = alert_receive_channel.repr_settings_for_client_side_logging
prev_state = alert_receive_channel.insight_logs_serialized
for templatizable_attr in ["title", "message", "image_url"]:
for notification_channel in ["slack", "web", "sms", "phone_call", "email", "telegram"]:
@ -308,12 +308,15 @@ class UpdateAppearanceStep(scenario_step.ScenarioStep):
headers={"content-type": "application/json"},
)
new_state = alert_receive_channel.repr_settings_for_client_side_logging
new_state = alert_receive_channel.insight_logs_serialized
if new_state != old_state:
description = f"Integration settings was changed from:\n{old_state}\nto:\n{new_state}"
create_organization_log(
self.organization, self.user, OrganizationLogType.TYPE_INTEGRATION_CHANGED, description
if new_state != prev_state:
write_resource_insight_log(
instance=alert_receive_channel,
author=slack_user_identity.get_user(alert_receive_channel.organization),
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
attachments = alert_group.render_slack_attachments()

View file

@ -192,6 +192,7 @@ class AlertShootingStep(scenario_step.ScenarioStep):
self._slack_client.api_call(
"chat.postMessage",
channel=channel_id,
text=text,
attachments=[],
thread_ts=alert_group.slack_message.slack_id,
mrkdwn=True,
@ -480,10 +481,8 @@ class AttachGroupStep(
alert_group = log_record.alert_group
if log_record.type == AlertGroupLogRecord.TYPE_ATTACHED and log_record.alert_group.is_maintenance_incident:
attachments = [
{"callback_id": "alert", "text": "{}".format(log_record.rendered_log_line_action(for_slack=True))},
]
self._publish_message_to_thread(alert_group, attachments)
text = f"{log_record.rendered_log_line_action(for_slack=True)}"
self.publish_message_to_thread(alert_group, text=text)
if log_record.type == AlertGroupLogRecord.TYPE_FAILED_ATTACHMENT:
ephemeral_text = log_record.rendered_log_line_action(for_slack=True)
@ -629,9 +628,9 @@ class CustomButtonProcessStep(
f"according to escalation policy with the result `{result_message}`"
)
attachments = [
{"callback_id": "alert", "text": debug_message, "footer": text},
{"callback_id": "alert", "text": debug_message},
]
self._publish_message_to_thread(alert_group, attachments)
self.publish_message_to_thread(alert_group, attachments=attachments, text=text)
class ResolveGroupStep(
@ -763,23 +762,27 @@ class UnAcknowledgeGroupStep(
message_attachments = [
{
"callback_id": "alert",
"text": f"{user_verbal} hasn't responded to an acknowledge timeout reminder."
f" Incident is unacknowledged automatically",
"text": "",
"footer": "Escalation started again...",
},
]
text = (
f"{user_verbal} hasn't responded to an acknowledge timeout reminder."
f" Incident is unacknowledged automatically"
)
if alert_group.slack_message.ack_reminder_message_ts:
try:
self._slack_client.api_call(
"chat.update",
channel=channel_id,
ts=alert_group.slack_message.ack_reminder_message_ts,
text=text,
attachments=message_attachments,
)
except SlackAPIException as e:
# post to thread if ack reminder message was deleted in Slack
if e.response["error"] == "message_not_found":
self._publish_message_to_thread(alert_group, message_attachments)
self.publish_message_to_thread(alert_group, attachments=message_attachments, text=text)
elif e.response["error"] == "account_inactive":
logger.info(
f"Skip unacknowledge slack message for alert_group {alert_group.pk} due to account_inactive"
@ -787,7 +790,7 @@ class UnAcknowledgeGroupStep(
else:
raise
else:
self._publish_message_to_thread(alert_group, message_attachments)
self.publish_message_to_thread(alert_group, attachments=message_attachments, text=text)
self._update_slack_message(alert_group)
logger.debug(f"Finished process_signal in UnAcknowledgeGroupStep for alert_group {alert_group.pk}")
@ -806,18 +809,12 @@ class AcknowledgeConfirmationStep(AcknowledgeGroupStep):
if alert_group.acknowledged_by == AlertGroup.USER:
if self.user == alert_group.acknowledged_by_user:
user_verbal = alert_group.acknowledged_by_user.get_user_verbal_for_team_for_slack()
attachments = [
{
"color": "#c6c000",
"callback_id": "alert",
"text": f"{user_verbal} is confirmed to be working on this incident",
},
]
text = f"{user_verbal} confirmed that the incident is still acknowledged"
self._slack_client.api_call(
"chat.update",
channel=channel,
ts=message_ts,
attachments=attachments,
text=text,
)
alert_group.acknowledged_by_confirmed = datetime.utcnow()
alert_group.save(update_fields=["acknowledged_by_confirmed"])
@ -830,18 +827,12 @@ class AcknowledgeConfirmationStep(AcknowledgeGroupStep):
)
elif alert_group.acknowledged_by == AlertGroup.SOURCE:
user_verbal = self.user.get_user_verbal_for_team_for_slack()
attachments = [
{
"color": "#c6c000",
"callback_id": "alert",
"text": f"{user_verbal} is confirmed to be working on this incident",
},
]
text = f"{user_verbal} confirmed that the incident is still acknowledged"
self._slack_client.api_call(
"chat.update",
channel=channel,
ts=message_ts,
attachments=attachments,
text=text,
)
alert_group.acknowledged_by_confirmed = datetime.utcnow()
alert_group.save(update_fields=["acknowledged_by_confirmed"])
@ -865,12 +856,13 @@ class AcknowledgeConfirmationStep(AcknowledgeGroupStep):
alert_group = log_record.alert_group
channel_id = alert_group.slack_message.channel_id
user_verbal = log_record.author.get_user_verbal_for_team_for_slack(mention=True)
text = f"{user_verbal}, please confirm that you're still working on this incident."
if alert_group.channel.organization.unacknowledge_timeout != Organization.UNACKNOWLEDGE_TIMEOUT_NEVER:
attachments = [
{
"fallback": "Are you still working on this incident?",
"text": f"{user_verbal}, please confirm that you're still working on this incident.",
"text": text,
"callback_id": "alert",
"attachment_type": "default",
"footer": "This is a reminder that the incident is still acknowledged"
@ -896,6 +888,7 @@ class AcknowledgeConfirmationStep(AcknowledgeGroupStep):
response = self._slack_client.api_call(
"chat.postMessage",
channel=channel_id,
text=text,
attachments=attachments,
thread_ts=alert_group.slack_message.slack_id,
)
@ -932,14 +925,8 @@ class AcknowledgeConfirmationStep(AcknowledgeGroupStep):
alert_group.slack_message.ack_reminder_message_ts = response["ts"]
alert_group.slack_message.save(update_fields=["ack_reminder_message_ts"])
else:
attachments = [
{
"callback_id": "alert",
"text": f"This is a reminder that the incident is still acknowledged by {user_verbal}"
f" and not resolved.",
},
]
self._publish_message_to_thread(alert_group, attachments)
text = f"This is a reminder that the incident is still acknowledged by {user_verbal}"
self.publish_message_to_thread(alert_group, text=text)
class WipeGroupStep(scenario_step.ScenarioStep):
@ -953,15 +940,8 @@ class WipeGroupStep(scenario_step.ScenarioStep):
def process_signal(self, log_record):
alert_group = log_record.alert_group
user_verbal = log_record.author.get_user_verbal_for_team_for_slack()
attachments = [
{
"color": "warning",
"callback_id": "alert",
"footer": "Incident wiped",
"text": "Wiped by {}.".format(user_verbal),
},
]
self._publish_message_to_thread(alert_group, attachments)
text = f"Wiped by {user_verbal}"
self.publish_message_to_thread(alert_group, text=text)
self._update_slack_message(alert_group)
@ -1069,21 +1049,15 @@ class UpdateLogReportMessageStep(scenario_step.ScenarioStep):
logger.info(f"Cannot post log message for alert_group {alert_group.pk} because SlackMessage doesn't exist")
return None
attachments = [
{
"text": "Building escalation plan... :thinking_face:",
}
]
text = "Building escalation plan... :thinking_face:"
slack_log_message = alert_group.slack_log_message
if slack_log_message is None:
logger.debug(f"Start posting new log message for alert_group {alert_group.pk}")
try:
result = self._slack_client.api_call(
"chat.postMessage",
channel=slack_message.channel_id,
thread_ts=slack_message.slack_id,
attachments=attachments,
"chat.postMessage", channel=slack_message.channel_id, thread_ts=slack_message.slack_id, text=text
)
except SlackAPITokenException as e:
print(e)
@ -1148,6 +1122,7 @@ class UpdateLogReportMessageStep(scenario_step.ScenarioStep):
self._slack_client.api_call(
"chat.update",
channel=slack_message.channel_id,
text="Alert Group log",
ts=slack_log_message.slack_id,
attachments=attachments,
)

View file

@ -34,14 +34,3 @@ class EscalationDeliveryStep(scenario_step.ScenarioStep):
user_mention_as = user_verbal
notify_by = " by {}".format(UserNotificationPolicy.NotificationChannel(notification_channel).label)
return "Inviting {}{} to look at incident.".format(user_mention_as, notify_by)
def notify_thread_about_action(self, alert_group, text, footer=None, color=None):
attachments = [
{
"callback_id": "alert",
"footer": footer,
"text": text,
"color": color,
},
]
self._publish_message_to_thread(alert_group, attachments)

View file

@ -62,16 +62,34 @@ class NotificationDeliveryStep(scenario_step.ScenarioStep):
)
def post_message_to_channel(self, text, channel, color=None, footer=None):
color_id = self.get_color_id(color)
attachments = [
{"color": color_id, "callback_id": "alert", "footer": footer, "text": text},
# TODO: No color in blocks, use prefix emoji?
# color_id = self.get_color_id(color)
blocks = [
{
"type": "section",
"block_id": "alert",
"text": {
"type": "mrkdwn",
"text": text,
},
},
{"type": "divider"},
{
"type": "section",
"block_id": "alert",
"text": {
"type": "mrkdwn",
"text": footer,
},
},
]
try:
# TODO: slack-onprem, check exceptions
self._slack_client.api_call(
"chat.postMessage",
channel=channel,
attachments=attachments,
text=text,
blocks=blocks,
unfurl_links=True,
)
except SlackAPITokenException as e:

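In `post_message_to_channel` above, the footer is rendered as a second section block even when `footer` is `None`, which Block Kit validation would reject (section `text` must be a string). A defensive sketch of the same layout — the helper name is hypothetical:

```python
def build_notification_blocks(text, footer=None):
    """Block Kit payload mirroring the old attachment text + footer layout."""
    blocks = [
        {"type": "section", "block_id": "alert",
         "text": {"type": "mrkdwn", "text": text}},
    ]
    if footer:
        # Only append the divider and footer section when footer text exists
        blocks += [
            {"type": "divider"},
            {"type": "section", "block_id": "alert",
             "text": {"type": "mrkdwn", "text": footer}},
        ]
    return blocks

assert len(build_notification_blocks("CPU high")) == 1
assert len(build_notification_blocks("CPU high", "escalation step 2")) == 3
```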
View file

@ -287,7 +287,7 @@ class ScenarioStep(object):
raise e
logger.info(f"Finished _update_slack_message for alert_group {alert_group.pk}")
def _publish_message_to_thread(self, alert_group, attachments, mrkdwn=True, unfurl_links=True):
def publish_message_to_thread(self, alert_group, attachments=[], mrkdwn=True, unfurl_links=True, text=None):
# TODO: refactor checking the possibility of sending message to slack
# do not try to post message to slack if integration is rate limited
if alert_group.channel.is_rate_limited_in_slack:
@ -300,6 +300,7 @@ class ScenarioStep(object):
result = self._slack_client.api_call(
"chat.postMessage",
channel=channel_id,
text=text,
attachments=attachments,
thread_ts=slack_message.slack_id,
mrkdwn=mrkdwn,

View file

@ -6,7 +6,7 @@ from django.utils import timezone
from apps.schedules.models import OnCallSchedule
from apps.slack.scenarios import scenario_step
from apps.slack.utils import format_datetime_to_slack
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.insight_log import EntityEvent, write_resource_insight_log
class EditScheduleShiftNotifyStep(scenario_step.ScenarioStep):
@ -57,16 +57,16 @@ class EditScheduleShiftNotifyStep(scenario_step.ScenarioStep):
private_metadata = json.loads(payload["view"]["private_metadata"])
schedule_id = private_metadata["schedule_id"]
schedule = OnCallSchedule.objects.get(pk=schedule_id)
old_state = schedule.repr_settings_for_client_side_logging
prev_state = schedule.insight_logs_serialized
setattr(schedule, action["block_id"], int(action["selected_option"]["value"]))
schedule.save()
new_state = schedule.repr_settings_for_client_side_logging
description = f"Schedule {schedule.name} was changed from:\n{old_state}\nto:\n{new_state}"
create_organization_log(
schedule.organization,
slack_user_identity.get_user(schedule.organization),
OrganizationLogType.TYPE_SCHEDULE_CHANGED,
description,
new_state = schedule.insight_logs_serialized
write_resource_insight_log(
instance=schedule,
author=slack_user_identity.get_user(schedule.organization),
event=EntityEvent.UPDATED,
prev_state=prev_state,
new_state=new_state,
)
def get_modal_blocks(self, schedule_id):


@ -16,7 +16,7 @@ class AlertGroupLogSlackRenderer:
attachments = []
# get rendered logs
result = "Alert Group log:\n\n"
result = ""
for log_record in all_log_records: # list of AlertGroupLogRecord and UserNotificationPolicyLogRecord logs
if type(log_record) == AlertGroupLogRecord:
result += f"{log_record.rendered_incident_log_line(for_slack=True)}\n"


@ -36,16 +36,24 @@ class IncidentActionsAccessControlMixin(AccessControl):
thread_ts = payload["message_ts"]
except KeyError:
thread_ts = payload["message"]["ts"]
text = "Attempted to {} by {}, but failed due to a lack of permissions.".format(
self.ACTION_VERBOSE,
self.user.get_user_verbal_for_team_for_slack(),
)
self._slack_client.api_call(
"chat.postMessage",
channel=payload["channel"]["id"],
attachments=[
text=text,
blocks=[
{
"callback_id": "alert",
"text": "Attempted to {} by {}, but failed due to a lack of permissions.".format(
self.ACTION_VERBOSE,
self.user.get_user_verbal_for_team_for_slack(),
),
"type": "section",
"block_id": "alert",
"text": {
"type": "mrkdwn",
"text": text,
},
},
],
thread_ts=None if self.send_denied_message_to_channel(payload) else thread_ts,
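The hunks above migrate legacy Slack `attachments` to Block Kit `blocks`. A minimal sketch of the payload shape being built (the helper name is illustrative, not part of the codebase):

```python
def build_alert_blocks(text, footer=None):
    """Build a Block Kit payload: a section block for the alert text,
    plus a divider and a footer section when a footer is given."""
    blocks = [
        {
            "type": "section",
            "block_id": "alert",
            "text": {"type": "mrkdwn", "text": text},
        }
    ]
    if footer:
        blocks.append({"type": "divider"})
        blocks.append(
            {
                "type": "section",
                "text": {"type": "mrkdwn", "text": footer},
            }
        )
    return blocks
```

Note that the hunk above reuses `block_id: "alert"` for both sections; Slack's documentation recommends block_ids be unique within a message, so this sketch sets it only on the first section.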


@ -98,9 +98,10 @@ def check_slack_message_exists_before_post_message_to_thread(
slack_message = alert_group.get_slack_message()
if slack_message is not None:
EscalationDeliveryStep(slack_team_identity, alert_group.channel.organization).notify_thread_about_action(
alert_group, text
EscalationDeliveryStep(slack_team_identity, alert_group.channel.organization).publish_message_to_thread(
alert_group, text=text
)
# check how much time has passed since alert group was created
# to prevent eternal loop of restarting check_slack_message_before_post_message_to_thread
elif timezone.now() < alert_group.started_at + timezone.timedelta(hours=retry_timeout_hours):
@ -239,12 +240,7 @@ def send_message_to_thread_if_bot_not_in_channel(alert_group_pk, slack_team_iden
members = slack_team_identity.get_conversation_members(sc, channel_id)
if bot_user_id not in members:
text = f"Please invite <@{bot_user_id}> to this channel to make all features " f"available :wink:"
attachments = [
{
"text": text,
}
]
ScenarioStep(slack_team_identity)._publish_message_to_thread(alert_group, attachments)
ScenarioStep(slack_team_identity).publish_message_to_thread(alert_group, text=text)
@shared_dedicated_queue_retry_task(autoretry_for=(Exception,), retry_backoff=True, max_retries=1)
@ -269,6 +265,7 @@ def send_debug_message_to_thread(alert_group_pk, slack_team_identity_pk):
result = sc.api_call(
"chat.postMessage",
channel=channel_id,
text=text,
attachments=[],
thread_ts=current_alert_group.slack_message.slack_id,
mrkdwn=True,


@ -51,7 +51,7 @@ from apps.slack.scenarios.slack_usergroup import STEPS_ROUTING as SLACK_USERGROU
from apps.slack.slack_client import SlackClientWithErrorHandling
from apps.slack.slack_client.exceptions import SlackAPIException, SlackAPITokenException
from apps.slack.tasks import clean_slack_integration_leftovers, unpopulate_slack_user_identities
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.insight_log import ChatOpsEvent, ChatOpsType, write_chatops_insight_log
from .models import SlackActionRecord, SlackMessage, SlackTeamIdentity, SlackUserIdentity
@ -286,7 +286,6 @@ class SlackEventApiEndpointView(APIView):
or payload["event"]["subtype"] == EVENT_SUBTYPE_MESSAGE_DELETED
)
):
print("Inside channel.messages event")
for route in SCENARIOS_ROUTES:
if (
"message_channel_type" in route
@ -538,9 +537,10 @@ class ResetSlackView(APIView):
slack_team_identity = organization.slack_team_identity
if slack_team_identity is not None:
clean_slack_integration_leftovers.apply_async((organization.pk,))
description = f"Slack workspace {slack_team_identity.cached_name} was disconnected from organization"
create_organization_log(
organization, request.user, OrganizationLogType.TYPE_SLACK_WORKSPACE_DISCONNECTED, description
write_chatops_insight_log(
author=request.user,
event_name=ChatOpsEvent.WORKSPACE_DISCONNECTED,
chatops_type=ChatOpsType.SLACK,
)
unpopulate_slack_user_identities(organization.pk, True)
response = Response(status=200)


@ -12,6 +12,7 @@ from common.constants.slack_auth import (
SLACK_AUTH_SLACK_USER_ALREADY_CONNECTED_ERROR,
SLACK_AUTH_WRONG_WORKSPACE_ERROR,
)
from common.insight_log import ChatOpsEvent, ChatOpsType, write_chatops_insight_log
logger = logging.getLogger(__name__)
@ -66,6 +67,14 @@ def connect_user_to_slack(response, backend, strategy, user, organization, *args
"cached_slack_email": response["user"]["email"],
},
)
write_chatops_insight_log(
author=user,
event_name=ChatOpsEvent.USER_LINKED,
chatops_type=ChatOpsType.SLACK,
linked_user=user.username,
linked_user_id=user.public_primary_key,
)
user.slack_user_identity = slack_user_identity
user.save(update_fields=["slack_user_identity"])


@ -1,7 +1,8 @@
import logging
from typing import Optional, Tuple, Union
from telegram import Bot, InlineKeyboardMarkup, Message, ParseMode
from telegram.error import InvalidToken, Unauthorized
from telegram.error import BadRequest, InvalidToken, Unauthorized
from telegram.utils.request import Request
from apps.alerts.models import AlertGroup
@ -11,6 +12,8 @@ from apps.telegram.renderers.keyboard import TelegramKeyboardRenderer
from apps.telegram.renderers.message import TelegramMessageRenderer
from common.api_helpers.utils import create_engine_url
logger = logging.getLogger(__name__)
class TelegramClient:
ALLOWED_UPDATES = ("message", "callback_query")
@ -67,14 +70,19 @@ class TelegramClient:
keyboard: Optional[InlineKeyboardMarkup] = None,
reply_to_message_id: Optional[int] = None,
) -> Message:
message = self.api_client.send_message(
chat_id=chat_id,
text=text,
reply_markup=keyboard,
reply_to_message_id=reply_to_message_id,
parse_mode=self.PARSE_MODE,
disable_web_page_preview=False,
)
try:
message = self.api_client.send_message(
chat_id=chat_id,
text=text,
reply_markup=keyboard,
reply_to_message_id=reply_to_message_id,
parse_mode=self.PARSE_MODE,
disable_web_page_preview=False,
)
except BadRequest as e:
logger.warning("Telegram BadRequest: {}".format(e.message))
raise
return message
def edit_message(self, message: TelegramMessage) -> TelegramMessage:


@ -10,7 +10,7 @@ from telegram import error
from apps.alerts.models import AlertGroup
from apps.telegram.client import TelegramClient
from apps.telegram.models import TelegramMessage
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.insight_log.chatops_insight_logs import ChatOpsEvent, ChatOpsType, write_chatops_insight_log
from common.public_primary_keys import generate_public_primary_key, increase_public_primary_key_length
logger = logging.getLogger(__name__)
@ -99,17 +99,12 @@ class TelegramToOrganizationConnector(models.Model):
self.is_default_channel = True
self.save(update_fields=["is_default_channel"])
description = (
f"The default channel for incidents in Telegram was changed "
f"{f'from @{old_default_channel.channel_name} ' if old_default_channel else ''}"
f"to @{self.channel_name}"
)
create_organization_log(
self.organization,
author,
OrganizationLogType.TYPE_TELEGRAM_DEFAULT_CHANNEL_CHANGED,
description,
write_chatops_insight_log(
author=author,
event_name=ChatOpsEvent.DEFAULT_CHANNEL_CHANGED,
chatops_type=ChatOpsType.TELEGRAM,
prev_channel=old_default_channel.channel_name if old_default_channel else None,
new_channel=self.channel_name,
)
def send_alert_group_message(self, alert_group: AlertGroup) -> None:


@ -6,7 +6,7 @@ from django.db import models
from django.utils import timezone
from apps.telegram.models import TelegramToOrganizationConnector
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.insight_log.chatops_insight_logs import ChatOpsEvent, ChatOpsType, write_chatops_insight_log
class TelegramChannelVerificationCode(models.Model):
@ -50,21 +50,19 @@ class TelegramChannelVerificationCode(models.Model):
},
)
description = f"Telegram channel @{channel_name} was connected to organization"
create_organization_log(
verification_code.organization,
verification_code.author,
OrganizationLogType.TYPE_TELEGRAM_CHANNEL_CONNECTED,
description,
write_chatops_insight_log(
author=verification_code.author,
event_name=ChatOpsEvent.CHANNEL_CONNECTED,
chatops_type=ChatOpsType.TELEGRAM,
channel_name=channel_name,
)
if not connector_exists:
description = f"The default channel for incidents in Telegram was changed to @{channel_name}"
create_organization_log(
verification_code.organization,
verification_code.author,
OrganizationLogType.TYPE_TELEGRAM_DEFAULT_CHANNEL_CHANGED,
description,
write_chatops_insight_log(
author=verification_code.author,
event_name=ChatOpsEvent.DEFAULT_CHANNEL_CHANGED,
chatops_type=ChatOpsType.TELEGRAM,
prev_channel=None,
new_channel=channel_name,
)
return connector, created


@ -2,11 +2,11 @@ from typing import Optional, Tuple
from uuid import uuid4
from django.core.exceptions import ValidationError
from django.db import models
from django.db import IntegrityError, models
from django.utils import timezone
from apps.telegram.models import TelegramToUserConnector
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from common.insight_log import ChatOpsEvent, ChatOpsType, write_chatops_insight_log
class TelegramVerificationCode(models.Model):
@ -30,17 +30,16 @@ class TelegramVerificationCode(models.Model):
user = verification_code.user
connector, created = TelegramToUserConnector.objects.get_or_create(
user=user, telegram_chat_id=telegram_chat_id, defaults={"telegram_nick_name": telegram_nick_name}
user=user, defaults={"telegram_nick_name": telegram_nick_name, "telegram_chat_id": telegram_chat_id}
)
description = f"Telegram account of user {user.username} was connected"
create_organization_log(
user.organization,
user,
OrganizationLogType.TYPE_TELEGRAM_TO_USER_CONNECTED,
description,
write_chatops_insight_log(
author=user,
event_name=ChatOpsEvent.USER_LINKED,
chatops_type=ChatOpsType.TELEGRAM,
linked_user=user.username,
linked_user_id=user.public_primary_key,
)
return connector, created
except (ValidationError, cls.DoesNotExist):
except (ValidationError, cls.DoesNotExist, IntegrityError):
return None, False


@ -105,6 +105,10 @@ def send_link_to_channel_message_or_fallback_to_full_incident(
f"Most probably it was deleted while escalation was in progress."
f"alert_group {alert_group_pk}"
)
except UserNotificationPolicy.DoesNotExist:
logger.warning(
f"UserNotificationPolicy {notification_policy_pk} does not exist for alert group {alert_group_pk}"
)
@shared_dedicated_queue_retry_task(
@ -132,20 +136,30 @@ def send_log_and_actions_message(self, channel_chat_id, group_chat_id, channel_m
with OkToRetry(
task=self, exc=(error.RetryAfter, error.TimedOut), compute_countdown=lambda e: getattr(e, "retry_after", 3)
):
if not log_message_sent:
telegram_client.send_message(
chat_id=group_chat_id,
message_type=TelegramMessage.LOG_MESSAGE,
alert_group=alert_group,
reply_to_message_id=reply_to_message_id,
)
if not actions_message_sent:
telegram_client.send_message(
chat_id=group_chat_id,
message_type=TelegramMessage.ACTIONS_MESSAGE,
alert_group=alert_group,
reply_to_message_id=reply_to_message_id,
)
try:
if not log_message_sent:
telegram_client.send_message(
chat_id=group_chat_id,
message_type=TelegramMessage.LOG_MESSAGE,
alert_group=alert_group,
reply_to_message_id=reply_to_message_id,
)
if not actions_message_sent:
telegram_client.send_message(
chat_id=group_chat_id,
message_type=TelegramMessage.ACTIONS_MESSAGE,
alert_group=alert_group,
reply_to_message_id=reply_to_message_id,
)
except error.BadRequest as e:
if e.message == "Chat not found":
logger.warning(
f"Could not send log and actions messages to Telegram group with id {group_chat_id} "
f"due to 'Chat not found'. alert_group {alert_group.pk}"
)
return
else:
raise
@shared_dedicated_queue_retry_task(


@ -22,3 +22,24 @@ def test_user_verification_handler_process_update_another_account_already_linked
assert created
assert connector.telegram_chat_id == chat_id
assert connector.user == user_2
@pytest.mark.django_db
def test_user_verification_handler_process_update_user_already_linked(
make_organization,
make_user_for_organization,
make_telegram_user_connector,
make_telegram_verification_code,
):
organization = make_organization()
chat_id = 123
user_1 = make_user_for_organization(organization)
make_telegram_user_connector(user_1, telegram_chat_id=chat_id)
other_chat_id = 321
code = make_telegram_verification_code(user_1)
connector, created = TelegramVerificationCode.verify_user(code.uuid, other_chat_id, "nickname")
assert created is False
assert connector.user == user_1
assert connector.telegram_chat_id == chat_id


@ -4,7 +4,7 @@ from apps.telegram.updates.update_handlers import UpdateHandler
from apps.telegram.utils import is_verification_message
USER_CONNECTED_TEXT = "Done! This Telegram account is now linked to <b>{username}</b> 🎉"
RELINK_ACCOUNT_TEXT = """This Telegram account is already connected to Grafana OnCall user <b>{username}</b>
RELINK_ACCOUNT_TEXT = """This user is already connected to a Telegram account.
Please unlink the Telegram account in the profile settings of user <b>{username}</b>, or contact Grafana OnCall support."""
WRONG_VERIFICATION_CODE = "Verification failed: wrong verification code"
@ -38,11 +38,10 @@ class PersonalVerificationCodeHandler(UpdateHandler):
if created:
reply_text = USER_CONNECTED_TEXT.format(username=connector.user.username)
elif connector is not None:
reply_text = RELINK_ACCOUNT_TEXT.format(username=connector.user.username)
else:
if connector is not None:
reply_text = RELINK_ACCOUNT_TEXT.format(username=connector.user.username)
else:
reply_text = WRONG_VERIFICATION_CODE
reply_text = WRONG_VERIFICATION_CODE
telegram_client = TelegramClient()
telegram_client.send_raw_message(chat_id=user.id, text=reply_text)


@ -10,8 +10,8 @@ from mirage import fields as mirage_fields
from apps.alerts.models import MaintainableObject
from apps.alerts.tasks import disable_maintenance
from apps.slack.utils import post_message_to_channel
from apps.user_management.organization_log_creator import OrganizationLogType, create_organization_log
from apps.user_management.subscription_strategy import FreePublicBetaSubscriptionStrategy
from common.insight_log import ChatOpsEvent, ChatOpsType, write_chatops_insight_log
from common.public_primary_keys import generate_public_primary_key, increase_public_primary_key_length
logger = logging.getLogger(__name__)
@ -232,31 +232,13 @@ class Organization(MaintainableObject):
old_channel_name = old_general_log_channel_id.name if old_general_log_channel_id else None
self.general_log_channel_id = channel_id
self.save(update_fields=["general_log_channel_id"])
description = (
f"The default channel for incidents in Slack changed "
f"{f'from #{old_channel_name} ' if old_channel_name else ''}to #{channel_name}"
write_chatops_insight_log(
author=user,
event_name=ChatOpsEvent.DEFAULT_CHANNEL_CHANGED,
chatops_type=ChatOpsType.SLACK,
prev_channel=old_channel_name,
new_channel=channel_name,
)
create_organization_log(self, user, OrganizationLogType.TYPE_SLACK_DEFAULT_CHANNEL_CHANGED, description)
@property
def repr_settings_for_client_side_logging(self):
"""
Example of execution:
# TODO: 770: check format
name: Test, archive alerts from date: 2019-10-24, require resolution note: No
acknowledge remind settings: Never remind about ack-ed incidents, and never unack
"""
result = (
f"name: {self.org_title}, "
f"archive alerts from date: {self.archive_alerts_from.isoformat()}, "
f"require resolution note: {'Yes' if self.is_resolution_note_required else 'No'}"
)
if self.slack_team_identity:
result += (
f"\nacknowledge remind settings: {self.get_acknowledge_remind_timeout_display()}, "
f"{self.get_unacknowledge_timeout_display()}, "
)
return result
@property
def web_link(self):
@ -264,3 +246,24 @@ class Organization(MaintainableObject):
def __str__(self):
return f"{self.pk}: {self.org_title}"
# Insight logs
@property
def insight_logs_type_verbal(self):
return "organization"
@property
def insight_logs_verbal(self):
return self.org_title
@property
def insight_logs_serialized(self):
return {
"name": self.org_title,
"is_resolution_note_required": self.is_resolution_note_required,
"archive_alerts_from": self.archive_alerts_from.isoformat(),
}
@property
def insight_logs_metadata(self):
return {}


@ -208,31 +208,6 @@ class User(models.Model):
return verbal
@property
def repr_settings_for_client_side_logging(self):
"""
Example of execution:
username: Alex, role: Admin, verified phone number: not added, unverified phone number: not added,
telegram connected: No,
notification policies: default: SMS - 5 min - :telephone:, important: :telephone:
"""
UserNotificationPolicy = apps.get_model("base", "UserNotificationPolicy")
default, important = UserNotificationPolicy.get_short_verbals_for_user(user=self)
notification_policies_verbal = f"default: {' - '.join(default)}, important: {' - '.join(important)}"
notification_policies_verbal = demojize(notification_policies_verbal)
result = (
f"username: {self.username}, role: {self.get_role_display()}, "
f"verified phone number: "
f"{self.verified_phone_number if self.verified_phone_number else 'not added'}, "
f"unverified phone number: "
f"{self.unverified_phone_number if self.unverified_phone_number else 'not added'}, "
f"telegram connected: {'Yes' if self.is_telegram_connected else 'No'}"
f"\nnotification policies: {notification_policies_verbal}"
)
return result
@property
def timezone(self):
if self._timezone:
@ -250,10 +225,44 @@ class User(models.Model):
def short(self):
return {"username": self.username, "pk": self.public_primary_key, "avatar": self.avatar_url}
# Insight logs
@property
def insight_logs_type_verbal(self):
return "user"
@property
def insight_logs_verbal(self):
return self.username
@property
def insight_logs_serialized(self):
UserNotificationPolicy = apps.get_model("base", "UserNotificationPolicy")
default, important = UserNotificationPolicy.get_short_verbals_for_user(user=self)
notification_policies_verbal = f"default: {' - '.join(default)}, important: {' - '.join(important)}"
notification_policies_verbal = demojize(notification_policies_verbal)
result = {
"username": self.username,
"role": self.get_role_display(),
"notification_policies": notification_policies_verbal,
}
if self.verified_phone_number:
result["verified_phone_number"] = self.verified_phone_number
if self.unverified_phone_number:
result["unverified_phone_number"] = self.unverified_phone_number
return result
@property
def insight_logs_metadata(self):
return {}
# TODO: check whether this signal can be moved to save method of the model
@receiver(post_save, sender=User)
def listen_for_user_model_save(sender, instance, created, *args, **kwargs):
if created:
instance.notification_policies.create_default_policies_for_user(instance)
instance.notification_policies.create_important_policies_for_user(instance)
drop_cached_ical_for_custom_events_for_organization.apply_async(
(instance.organization_id,),
)


@ -1,2 +0,0 @@
from .create_organization_log import create_organization_log # noqa: F401
from .organization_log_type import OrganizationLogType # noqa: F401


@ -1,11 +0,0 @@
from django.apps import apps
def create_organization_log(organization, author, type, description):
OrganizationLogRecord = apps.get_model("base", "OrganizationLogRecord")
OrganizationLogRecord.objects.create(
organization=organization,
author=author,
type=type,
description=description,
)


@ -1,52 +0,0 @@
class OrganizationLogType:
(
TYPE_SLACK_DEFAULT_CHANNEL_CHANGED,
TYPE_SLACK_WORKSPACE_CONNECTED,
TYPE_SLACK_WORKSPACE_DISCONNECTED,
TYPE_TELEGRAM_DEFAULT_CHANNEL_CHANGED,
TYPE_TELEGRAM_CHANNEL_CONNECTED,
TYPE_TELEGRAM_CHANNEL_DISCONNECTED,
TYPE_INTEGRATION_CREATED,
TYPE_INTEGRATION_DELETED,
TYPE_INTEGRATION_CHANGED,
TYPE_HEARTBEAT_CREATED,
TYPE_HEARTBEAT_CHANGED,
TYPE_CHANNEL_FILTER_CREATED,
TYPE_CHANNEL_FILTER_DELETED,
TYPE_CHANNEL_FILTER_CHANGED,
TYPE_ESCALATION_CHAIN_CREATED,
TYPE_ESCALATION_CHAIN_DELETED,
TYPE_ESCALATION_CHAIN_CHANGED,
TYPE_ESCALATION_STEP_CREATED,
TYPE_ESCALATION_STEP_DELETED,
TYPE_ESCALATION_STEP_CHANGED,
TYPE_MAINTENANCE_STARTED_FOR_ORGANIZATION,
TYPE_MAINTENANCE_STARTED_FOR_INTEGRATION,
TYPE_MAINTENANCE_STOPPED_FOR_ORGANIZATION,
TYPE_MAINTENANCE_STOPPED_FOR_INTEGRATION,
TYPE_MAINTENANCE_DEBUG_STARTED_FOR_ORGANIZATION,
TYPE_MAINTENANCE_DEBUG_STARTED_FOR_INTEGRATION,
TYPE_MAINTENANCE_DEBUG_STOPPED_FOR_ORGANIZATION,
TYPE_MAINTENANCE_DEBUG_STOPPED_FOR_INTEGRATION,
TYPE_CUSTOM_ACTION_CREATED,
TYPE_CUSTOM_ACTION_DELETED,
TYPE_CUSTOM_ACTION_CHANGED,
TYPE_SCHEDULE_CREATED,
TYPE_SCHEDULE_DELETED,
TYPE_SCHEDULE_CHANGED,
TYPE_ON_CALL_SHIFT_CREATED,
TYPE_ON_CALL_SHIFT_DELETED,
TYPE_ON_CALL_SHIFT_CHANGED,
TYPE_NEW_USER_ADDED,
TYPE_ORGANIZATION_SETTINGS_CHANGED,
TYPE_USER_SETTINGS_CHANGED,
TYPE_TELEGRAM_TO_USER_CONNECTED,
TYPE_TELEGRAM_FROM_USER_DISCONNECTED,
TYPE_API_TOKEN_CREATED,
TYPE_API_TOKEN_REVOKED,
TYPE_ESCALATION_CHAIN_COPIED,
TYPE_SCHEDULE_EXPORT_TOKEN_CREATED,
TYPE_MESSAGING_BACKEND_CHANNEL_CHANGED,
TYPE_MESSAGING_BACKEND_CHANNEL_DELETED,
TYPE_MESSAGING_BACKEND_USER_DISCONNECTED,
) = range(49)


@ -24,7 +24,6 @@ def test_organization_delete(
make_escalation_chain,
make_escalation_policy,
make_channel_filter,
make_organization_log_record,
make_user_notification_policy,
make_telegram_user_connector,
make_telegram_channel,
@ -74,8 +73,6 @@ def test_organization_delete(
alert_receive_channel = make_alert_receive_channel(organization=organization, author=user_1)
channel_filter = make_channel_filter(alert_receive_channel, is_default=True, escalation_chain=escalation_chain)
organization_log_record = make_organization_log_record(organization=organization, user=user_1)
alert_group = make_alert_group(
alert_receive_channel=alert_receive_channel,
acknowledged_by_user=user_1,
@ -142,7 +139,6 @@ def test_organization_delete(
escalation_policy,
alert_receive_channel,
channel_filter,
organization_log_record,
alert_group,
alert,
alert_group_log_record,


@ -1 +1,6 @@
from .exceptions import MaintenanceCouldNotBeStartedError, TeamCanNotBeChangedError, UnableToSendDemoAlert # noqa: F401
from .exceptions import ( # noqa: F401
MaintenanceCouldNotBeStartedError,
TeamCanNotBeChangedError,
UnableToSendDemoAlert,
UserNotificationPolicyCouldNotBeDeleted,
)


@ -17,3 +17,7 @@ class TeamCanNotBeChangedError(OperationCouldNotBePerformedError):
class UnableToSendDemoAlert(OperationCouldNotBePerformedError):
pass
class UserNotificationPolicyCouldNotBeDeleted(OperationCouldNotBePerformedError):
pass


@ -0,0 +1,3 @@
from .chatops_insight_logs import ChatOpsEvent, ChatOpsType, write_chatops_insight_log # noqa
from .maintenance_insight_log import MaintenanceEvent, write_maintenance_insight_log # noqa
from .resource_insight_logs import EntityEvent, write_resource_insight_log # noqa


@ -0,0 +1,45 @@
import enum
import json
import logging
from .insight_logs_enabled_check import is_insight_logs_enabled
insight_logger = logging.getLogger("insight_logger")
logger = logging.getLogger(__name__)
class ChatOpsEvent(enum.Enum):
WORKSPACE_CONNECTED = "workspace_connected"
WORKSPACE_DISCONNECTED = "workspace_disconnected"
CHANNEL_CONNECTED = "channel_connected"
CHANNEL_DISCONNECTED = "channel_disconnected"
USER_LINKED = "user_linked"
USER_UNLINKED = "user_unlinked"
DEFAULT_CHANNEL_CHANGED = "default_channel_changed"
class ChatOpsType(enum.Enum):
# Keep in sync with messaging backends' ids.
# In a perfect world backend_ids would be used instead of these enums.
# That can happen once Slack and Telegram are refactored onto the messaging_backend system.
SLACK = "SLACK"
MSTEAMS = "MSTEAMS"
TELEGRAM = "TELEGRAM"
def write_chatops_insight_log(author, event_name: ChatOpsEvent, chatops_type: ChatOpsType, **kwargs):
try:
organization = author.organization
if is_insight_logs_enabled(organization):
tenant_id = organization.stack_id
user_id = author.public_primary_key
username = json.dumps(author.username)
log_line = f"tenant_id={tenant_id} author_id={user_id} author={username} action_type=chat_ops action_name={event_name.value} chat_ops_type={chatops_type.value}" # noqa
for k, v in kwargs.items():
log_line += f" {k}={json.dumps(v)}"
insight_logger.info(log_line)
except Exception as e:
logger.warning(f"insight_log.failed_to_write_chatops_insight_log exception={e}")
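The writer above emits a single logfmt-style line. A Django-free sketch of just the formatting step (the helper name is illustrative; field names mirror the function above):

```python
import json


def format_chatops_log_line(tenant_id, author_id, author_username, event_name, chatops_type, **kwargs):
    # json.dumps quotes the username so spaces survive logfmt parsing
    username = json.dumps(author_username)
    log_line = (
        f"tenant_id={tenant_id} author_id={author_id} author={username} "
        f"action_type=chat_ops action_name={event_name} chat_ops_type={chatops_type}"
    )
    # extra key/value pairs are appended the same way, values JSON-quoted
    for k, v in kwargs.items():
        log_line += f" {k}={json.dumps(v)}"
    return log_line
```

For example, `format_chatops_log_line(1, "U1", "alice", "user_linked", "TELEGRAM", linked_user="alice")` yields a line starting with `tenant_id=1 author_id=U1 author="alice"`.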


@ -0,0 +1,15 @@
from django.apps import apps
def is_insight_logs_enabled(organization):
"""
is_insight_logs_enabled checks whether insight logs are enabled for the given organization.
"""
DynamicSetting = apps.get_model("base", "DynamicSetting")
org_id_to_enable_insight_logs, _ = DynamicSetting.objects.get_or_create(
name="org_id_to_enable_insight_logs",
defaults={"json_value": []},
)
log_all = "all" in org_id_to_enable_insight_logs.json_value
insight_logs_enabled = organization.id in org_id_to_enable_insight_logs.json_value
return log_all or insight_logs_enabled
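Stripped of the `DynamicSetting` lookup, the check above reduces to simple membership logic. A Django-free sketch (the function and parameter names here are illustrative):

```python
def insight_logs_enabled_for(org_id, enabled_org_ids):
    # "all" acts as a wildcard that enables insight logs for every organization;
    # otherwise the organization id must appear in the allow-list
    return "all" in enabled_org_ids or org_id in enabled_org_ids
```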


@ -0,0 +1,38 @@
import enum
import json
import logging
from .insight_logs_enabled_check import is_insight_logs_enabled
insight_logger = logging.getLogger("insight_logger")
logger = logging.getLogger(__name__)
class MaintenanceEvent(enum.Enum):
STARTED = "started"
FINISHED = "finished"
def write_maintenance_insight_log(instance, user, event: MaintenanceEvent):
try:
organization = instance.get_organization()
tenant_id = organization.stack_id
team = instance.get_team()
entity_name = json.dumps(instance.insight_logs_verbal)
entity_id = instance.public_primary_key
maintenance_mode = instance.get_maintenance_mode_display()
if is_insight_logs_enabled(organization):
log_line = f"tenant_id={tenant_id} action_type=maintenance action_name={event.value} maintenance_mode={maintenance_mode} resource_id={entity_id} resource_name={entity_name}" # noqa
if team:
log_line += f" team={json.dumps(team.name)} team_id={team.public_primary_key}"
else:
log_line += f' team="General"'
if user:
username = json.dumps(user.username)
user_id = user.public_primary_key
log_line += f" user_id={user_id} username={username} "
insight_logger.info(log_line)
except Exception as e:
logger.warning(f"insight_log.failed_to_write_maintenance_insight_log exception={e}")


@ -0,0 +1,126 @@
import enum
import json
import logging
import re
from abc import ABC, abstractmethod
from .insight_logs_enabled_check import is_insight_logs_enabled
insight_logger = logging.getLogger("insight_logger")
logger = logging.getLogger(__name__)
class EntityEvent(enum.Enum):
CREATED = "created"
UPDATED = "updated"
DELETED = "deleted"
class InsightLoggable(ABC):
@property
@abstractmethod
def public_primary_key(self):
pass
@property
@abstractmethod
def insight_logs_verbal(self) -> str:
"""
insight_logs_verbal returns the resource name for the insight log
"""
pass
@property
@abstractmethod
def insight_logs_type_verbal(self) -> str:
"""
insight_logs_type_verbal returns the resource type for the insight log
"""
pass
@property
@abstractmethod
def insight_logs_serialized(self) -> dict:
"""
insight_logs_serialized returns resource, serialized for insight_log
"""
pass
@property
@abstractmethod
def insight_logs_metadata(self) -> dict:
"""
insight_logs_metadata returns the resource's fields which should always be present in the insight log line,
even if they weren't changed
"""
pass
def write_resource_insight_log(instance: InsightLoggable, author, event: EntityEvent, prev_state=None, new_state=None):
try:
organization = author.organization
if is_insight_logs_enabled(organization):
tenant_id = organization.stack_id
author_id = author.public_primary_key
author = json.dumps(author.username)
entity_type = instance.insight_logs_type_verbal
try:
entity_id = instance.public_primary_key
except AttributeError:
# Fallback for entities that have no public_primary_key, e.g. public API tokens, schedule export tokens
entity_id = instance.id
entity_name = json.dumps(instance.insight_logs_verbal)
metadata = instance.insight_logs_metadata
log_line = f"tenant_id={tenant_id} author_id={author_id} author={author} action_type=resource action={event.value} resource_type={entity_type} resource_id={entity_id} resource_name={entity_name}" # noqa
for k, v in metadata.items():
log_line += f" {k}={json.dumps(v)}"
if prev_state and new_state:
prev_state, new_state = state_diff_finder(prev_state, new_state)
prev_state = escape_json_str_for_insight_log(json.dumps(format_state_for_insight_log(prev_state)))
new_state = escape_json_str_for_insight_log(json.dumps(format_state_for_insight_log(new_state)))
log_line += f' prev_state="{prev_state}"'
log_line += f' new_state="{new_state}"'
insight_logger.info(log_line)
except Exception as e:
logger.warning(f"insight_log.failed_to_write_entity_insight_log exception={e}")
def state_diff_finder(prev_state: dict, new_state: dict):
"""
state_diff_finder returns the diff between two serialized representations of the resource
"""
before_diff = {}
after_diff = {}
for k, v in prev_state.items():
if k not in new_state:
before_diff[k] = v
continue
if new_state[k] != v:
before_diff[k] = prev_state[k]
after_diff[k] = new_state[k]
for k, v in new_state.items():
if k not in prev_state:
after_diff[k] = v
return before_diff, after_diff
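To illustrate the diff logic above, here is a self-contained copy of `state_diff_finder` with a worked example:

```python
def state_diff_finder(prev_state, new_state):
    """Return (before, after): the old values that changed or disappeared,
    and the new values that changed or appeared."""
    before_diff, after_diff = {}, {}
    for k, v in prev_state.items():
        if k not in new_state:
            before_diff[k] = v            # key was removed
        elif new_state[k] != v:
            before_diff[k] = v            # key changed: record both sides
            after_diff[k] = new_state[k]
    for k, v in new_state.items():
        if k not in prev_state:
            after_diff[k] = v             # key was added
    return before_diff, after_diff


before, after = state_diff_finder(
    {"name": "Schedule A", "slack_channel": "#alerts"},
    {"name": "Schedule B", "slack_channel": "#alerts", "team": "SRE"},
)
# before == {"name": "Schedule A"}
# after == {"name": "Schedule B", "team": "SRE"}
```

Unchanged keys (`slack_channel` here) are dropped from both sides, so the insight log line records only what actually changed.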
def escape_json_str_for_insight_log(string):
"""
escape_json_str_for_insight_log escapes double quotes around keys and values in a JSON string
"""
return re.sub(r"(?<!\\)(\")", r"\\\1", string)
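A quick demonstration of the escaping above: the negative lookbehind skips quotes that are already backslash-escaped, so the substitution is safe to apply once.

```python
import re


def escape_json_str_for_insight_log(string):
    # (?<!\\) ensures already-escaped quotes are left untouched
    return re.sub(r"(?<!\\)(\")", r"\\\1", string)


escaped = escape_json_str_for_insight_log('{"name": "test"}')
# escaped == '{\\"name\\": \\"test\\"}' (a backslash before each quote)
```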
def format_state_for_insight_log(diff_dict):
"""
format_state_for_insight_log formats serialized resource data for the insight log.
It hides and prunes fields which shouldn't be exposed
"""
fields_to_prune = ()
fields_to_hide = ("verified_phone_number", "unverified_phone_number")
for k, v in diff_dict.items():
if k in fields_to_prune:
diff_dict[k] = "Diff not supported"
if k in fields_to_hide:
diff_dict[k] = "*****"
return diff_dict
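And the masking above in action (self-contained copy; `fields_to_hide` matches the tuple in the function):

```python
def format_state_for_insight_log(diff_dict):
    fields_to_prune = ()  # fields whose diffs can't be rendered meaningfully
    fields_to_hide = ("verified_phone_number", "unverified_phone_number")
    for k in diff_dict:
        if k in fields_to_prune:
            diff_dict[k] = "Diff not supported"
        if k in fields_to_hide:
            diff_dict[k] = "*****"  # never write phone numbers to the log
    return diff_dict


state = format_state_for_insight_log(
    {"username": "alice", "verified_phone_number": "+15551234"}
)
# state == {"username": "alice", "verified_phone_number": "*****"}
```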


@ -29,7 +29,6 @@ def generate_public_primary_key(prefix, length=settings.PUBLIC_PRIMARY_KEY_MIN_L
"H": ("slack", "SlackChannel"),
"Z": ("telegram", "TelegramToOrganizationConnector"),
"L": ("base", "LiveSetting"),
"V": ("base", "OrganizationLogRecord"),
"X": ("extensions", "Other models from extensions apps"),
:param length:
:return:


@ -102,6 +102,17 @@ def getenv_boolean(variable_name: str, default: bool) -> bool:
return value.lower() in ("true", "1")
def getenv_integer(variable_name: str, default: int) -> int:
value = os.environ.get(variable_name)
if value is None:
return default
try:
value = int(value)
except ValueError:
return default
return value
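The new `getenv_integer` helper above can be exercised like this (standalone copy; the environment variable names are illustrative):

```python
import os


def getenv_integer(variable_name, default):
    # fall back to the default when the variable is unset or not a valid integer
    value = os.environ.get(variable_name)
    if value is None:
        return default
    try:
        return int(value)
    except ValueError:
        return default


os.environ["DATA_UPLOAD_MAX_MEMORY_SIZE"] = "1048576"
assert getenv_integer("DATA_UPLOAD_MAX_MEMORY_SIZE", 0) == 1048576

os.environ["DATA_UPLOAD_MAX_MEMORY_SIZE"] = "not-a-number"
assert getenv_integer("DATA_UPLOAD_MAX_MEMORY_SIZE", 42) == 42
```

This is what lets settings such as DATA_UPLOAD_MAX_MEMORY_SIZE (mentioned in the changelog) default sanely even when the environment value is malformed.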
def batch_queryset(qs, batch_size=1000):
qs_count = qs.count()
for start in range(0, qs_count, batch_size):

View file

@@ -41,7 +41,6 @@ from apps.base.models.user_notification_policy_log_record import (
)
from apps.base.tests.factories import (
    LiveSettingFactory,
    OrganizationLogRecordFactory,
    UserNotificationPolicyFactory,
    UserNotificationPolicyLogRecordFactory,
)
@@ -69,7 +68,7 @@ from apps.telegram.tests.factories import (
    TelegramVerificationCodeFactory,
)
from apps.twilioapp.tests.factories import PhoneCallFactory, SMSFactory
from apps.user_management.organization_log_creator import OrganizationLogType
from apps.user_management.models.user import User, listen_for_user_model_save
from apps.user_management.tests.factories import OrganizationFactory, TeamFactory, UserFactory
from common.constants.role import Role
@@ -77,7 +76,6 @@ register(OrganizationFactory)
register(UserFactory)
register(TeamFactory)
register(OrganizationLogRecordFactory)
register(AlertReceiveChannelFactory)
register(ChannelFilterFactory)
@@ -153,7 +151,9 @@ def make_organization():
@pytest.fixture
def make_user_for_organization():
    def _make_user_for_organization(organization, role=Role.ADMIN, **kwargs):
        post_save.disconnect(listen_for_user_model_save, sender=User)
        user = UserFactory(organization=organization, role=role, **kwargs)
        post_save.disconnect(listen_for_user_model_save, sender=User)
        return user

    return _make_user_for_organization
@@ -657,16 +657,6 @@ def make_integration_heartbeat():
    return _make_integration_heartbeat


@pytest.fixture()
def make_organization_log_record():
    def _make_organization_log_record(organization, user, **kwargs):
        if "type" not in kwargs:
            kwargs["type"] = OrganizationLogType.TYPE_SLACK_DEFAULT_CHANNEL_CHANGED
        return OrganizationLogRecordFactory(organization=organization, author=user, **kwargs)

    return _make_organization_log_record


@pytest.fixture()
def load_slack_urls(settings):
    clear_url_caches()
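The `make_user_for_organization` fixture detaches `listen_for_user_model_save` from `post_save` so that creating users via the factory does not fire the production save handler during tests. A minimal pure-Python stand-in for that pattern (the `Signal` class below is a toy substitute for `django.db.models.signals.post_save`, not Django's implementation):

```python
class Signal:
    # Toy dispatcher mimicking the connect/disconnect/send surface used here.
    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        if receiver not in self._receivers:
            self._receivers.append(receiver)

    def disconnect(self, receiver):
        if receiver in self._receivers:
            self._receivers.remove(receiver)

    def send(self, instance):
        for receiver in self._receivers:
            receiver(instance)


post_save = Signal()
side_effects = []


def listen_for_user_model_save(instance):
    side_effects.append(instance)


post_save.connect(listen_for_user_model_save)

# Fixture pattern: detach the handler, then "save" — the handler never fires.
post_save.disconnect(listen_for_user_model_save)
post_save.send("user-1")
print(side_effects)  # []
```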

View file

@@ -3,7 +3,7 @@ from random import randrange
from celery.schedules import crontab
from common.utils import getenv_boolean
from common.utils import getenv_boolean, getenv_integer
VERSION = "dev-oss"
# Indicates if instance is OSS installation.
@@ -175,7 +175,7 @@ LOGGING = {
    "filters": {"request_id": {"()": "log_request_id.filters.RequestIDFilter"}},
    "formatters": {
        "standard": {"format": "source=engine:app google_trace_id=%(request_id)s logger=%(name)s %(message)s"},
        "insight_logger": {"format": "insight_logs=true logger=%(name)s %(message)s"},
        "insight_logger": {"format": "insight=true logger=%(name)s %(message)s"},
    },
    "handlers": {
        "console": {
@@ -451,7 +451,7 @@ SELF_HOSTED_SETTINGS = {
GRAFANA_INCIDENT_STATIC_API_KEY = os.environ.get("GRAFANA_INCIDENT_STATIC_API_KEY", None)
DATA_UPLOAD_MAX_MEMORY_SIZE = 5242880
DATA_UPLOAD_MAX_MEMORY_SIZE = getenv_integer("DATA_UPLOAD_MAX_MEMORY_SIZE", 1_048_576)  # 1mb by default
# Log inbound/outbound calls as slow=1 if they exceed threshold
SLOW_THRESHOLD_SECONDS = 2.0

View file

@@ -39,4 +39,4 @@ SENDGRID_SECRET_KEY = "dummy_sendgrid_secret_key"
TWILIO_ACCOUNT_SID = "dummy_twilio_account_sid"
TWILIO_AUTH_TOKEN = "dummy_twilio_auth_token"
EXTRA_MESSAGING_BACKENDS = ["apps.base.tests.messaging_backend.TestOnlyBackend"]
EXTRA_MESSAGING_BACKENDS = [("apps.base.tests.messaging_backend.TestOnlyBackend", 42)]
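The test setting changes each entry from a bare dotted class path to a `(dotted path, backend id)` tuple. A hypothetical sketch of how such entries might be split for loading — `split_backend_entries` and its output shape are assumptions for illustration, not OnCall's actual backend loader:

```python
EXTRA_MESSAGING_BACKENDS = [("apps.base.tests.messaging_backend.TestOnlyBackend", 42)]


def split_backend_entries(entries):
    # Split each dotted class path into (module_path, class_name, backend_id),
    # the pieces an importlib-based loader would need.
    result = []
    for dotted_path, backend_id in entries:
        module_path, _, class_name = dotted_path.rpartition(".")
        result.append((module_path, class_name, backend_id))
    return result


print(split_backend_entries(EXTRA_MESSAGING_BACKENDS))
# [('apps.base.tests.messaging_backend', 'TestOnlyBackend', 42)]
```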

Some files were not shown because too many files have changed in this diff.