main -> dev (#5458)

# What this PR does

## Which issue(s) this PR closes

Related to [issue link here]

<!--
*Note*: If you want the issue to be auto-closed once the PR is merged,
change "Related to" to "Closes" in the line above.
If you have more than one GitHub issue that this PR closes, be sure to preface
each issue link with a
[closing keyword](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/using-keywords-in-issues-and-pull-requests#linking-a-pull-request-to-an-issue).
This ensures that the issue(s) are auto-closed once the PR has been merged.
-->

## Checklist

- [ ] Unit, integration, and e2e (if applicable) tests updated
- [ ] Documentation added (or `pr:no public docs` PR label added if not required)
- [ ] Added the relevant release notes label (see labels prefixed w/ `release:`).
  These labels dictate how your PR will show up in the autogenerated release notes.

---------

Co-authored-by: Matias Bordese <mbordese@gmail.com>
Co-authored-by: GitHub Actions <actions@github.com>
Co-authored-by: grafana-irm-app[bot] <165293418+grafana-irm-app[bot]@users.noreply.github.com>
Co-authored-by: Michael Derynck <michael.derynck@grafana.com>
Commit 34eec39209 by Joey Orlando, 2025-02-18 13:15:28 -05:00, committed by GitHub
(GPG key ID: B5690EEEBB952194; no known key found for this signature in database).
10 changed files with 747 additions and 103 deletions


@@ -2,8 +2,8 @@ apiVersion: v2
name: oncall
description: Developer-friendly incident response with brilliant Slack integration
type: application
-version: 1.14.1
-appVersion: v1.14.1
+version: 1.14.4
+appVersion: v1.14.4
dependencies:
- name: cert-manager
version: v1.8.0


@@ -13,8 +13,8 @@ Currently the migration tool supports migrating from:
2. Build the docker image: `docker build -t oncall-migrator .`
3. Obtain a Grafana OnCall API token and API URL on the "Settings" page of your Grafana OnCall instance
4. Depending on which tool you are migrating from, see more specific instructions there:
- [PagerDuty](#prerequisites)
- [Splunk OnCall](#prerequisites-1)
5. Run a [migration plan](#migration-plan)
6. If you are pleased with the results of the migration plan, run the tool in [migrate mode](#migration)
@@ -47,12 +47,12 @@ docker run --rm \
oncall-migrator
```
Please read the generated report carefully: depending on its content, some resources
may not be migrated and some existing Grafana OnCall resources may be deleted.
```text
User notification rules report:
-✅ John Doe (john.doe@example.com) (existing notification rules will be deleted)
+✅ John Doe (john.doe@example.com) (existing notification rules will be preserved)
❌ Ben Thompson (ben@example.com) — no Grafana OnCall user found with this email
Schedule report:
@@ -223,18 +223,24 @@ oncall-migrator
Configuration is done via environment variables passed to the docker container.
| Name | Description | Type | Default |
| --------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------- | ------- |
| `MIGRATING_FROM` | Set to `pagerduty` | String | N/A |
| `PAGERDUTY_API_TOKEN` | PagerDuty API **user token**. To create a token, refer to [PagerDuty docs](https://support.pagerduty.com/docs/api-access-keys#generate-a-user-token-rest-api-key). | String | N/A |
| `ONCALL_API_URL` | Grafana OnCall API URL. This can be found on the "Settings" page of your Grafana OnCall instance. | String | N/A |
| `ONCALL_API_TOKEN` | Grafana OnCall API Token. To create a token, navigate to the "Settings" page of your Grafana OnCall instance. | String | N/A |
| `MODE` | Migration mode (plan vs actual migration). | String (choices: `plan`, `migrate`) | `plan` |
| `SCHEDULE_MIGRATION_MODE` | Determines how on-call schedules are migrated. | String (choices: `ical`, `web`) | `ical` |
| `UNSUPPORTED_INTEGRATION_TO_WEBHOOKS` | When set to `true`, integrations with unsupported type will be migrated to Grafana OnCall integrations with type "webhook". When set to `false`, integrations with unsupported type won't be migrated. | Boolean | `false` |
| `EXPERIMENTAL_MIGRATE_EVENT_RULES` | Migrate global event rulesets to Grafana OnCall integrations. | Boolean | `false` |
| `EXPERIMENTAL_MIGRATE_EVENT_RULES_LONG_NAMES` | Include service & integrations names from PD in migrated integrations (only effective when `EXPERIMENTAL_MIGRATE_EVENT_RULES` is `true`). | Boolean | `false` |
| `MIGRATE_USERS`                               | If `false`, the tool will import all objects while ignoring user references in schedules and escalation policies, and will also skip importing user notification rules. This may be helpful when you are unable to import your list of Grafana users but would like to experiment with OnCall using your existing PagerDuty setup as a starting point. | Boolean | `true` |
| `PAGERDUTY_FILTER_TEAM` | Filter resources by team name. Only resources associated with this team will be migrated. | String | N/A |
| `PAGERDUTY_FILTER_USERS` | Filter resources by PagerDuty user IDs (comma-separated). Only resources associated with these users will be migrated. | String | N/A |
| `PAGERDUTY_FILTER_SCHEDULE_REGEX` | Filter schedules by name using a regex pattern. Only schedules whose names match this pattern will be migrated. | String | N/A |
| `PAGERDUTY_FILTER_ESCALATION_POLICY_REGEX` | Filter escalation policies by name using a regex pattern. Only policies whose names match this pattern will be migrated. | String | N/A |
| `PAGERDUTY_FILTER_INTEGRATION_REGEX` | Filter integrations by name using a regex pattern. Only integrations whose names match this pattern will be migrated. | String | N/A |
| `PRESERVE_EXISTING_USER_NOTIFICATION_RULES` | Whether to preserve existing notification rules when migrating users | Boolean | `true` |
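The three `*_REGEX` filters above are applied with Python's `re.match` (as used in this PR's `migrate.py`), which anchors at the start of the name. A minimal standalone sketch with hypothetical schedule names:

```python
import re

# re.match anchors at the beginning of the string, so "prod-" and "^prod-"
# behave the same here; use ".*payments" to match anywhere in the name.
names = ["prod-payments", "staging-payments", "prod-search"]  # hypothetical
pattern = "^prod-"
kept = [n for n in names if re.match(pattern, n)]
print(kept)  # ['prod-payments', 'prod-search']
```

Note that `staging-payments` is excluded even though it contains `payments`, because the match must start at the beginning of the name.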
### Resources
@@ -246,7 +252,11 @@ taken into account and will be migrated to both default and important notificati
for each user. Note that delays between notification rules may be slightly different in Grafana OnCall,
see [Limitations](#limitations) for more info.
-When running the migration, existing notification rules in Grafana OnCall will be deleted for every affected user.
+By default (when `PRESERVE_EXISTING_USER_NOTIFICATION_RULES` is `true`), existing notification rules in Grafana OnCall will
+be preserved and PagerDuty rules won't be imported for users who already have notification rules configured in Grafana OnCall.
+If you want to replace existing notification rules with ones from PagerDuty, set `PRESERVE_EXISTING_USER_NOTIFICATION_RULES`
+to `false`.
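The preserve-vs-overwrite behavior reduces to a small predicate. A hedged sketch of that decision table (not the repo's code; the function name is made up for illustration):

```python
def should_import_pd_rules(preserve_existing: bool, has_oncall_rules: bool) -> bool:
    # PagerDuty rules are skipped only when preservation is enabled AND the
    # user already has notification rules configured in Grafana OnCall.
    return not (preserve_existing and has_oncall_rules)

print(should_import_pd_rules(True, True))   # False: existing OnCall rules kept
print(should_import_pd_rules(False, True))  # True: existing rules replaced
print(should_import_pd_rules(True, False))  # True: nothing to preserve
```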
See [Migrating Users](#migrating-users) for some more information on how users are migrated.
@@ -290,6 +300,20 @@ For every service in PD, the tool will migrate all integrations to Grafana OnCal
Any services that reference escalation policies that cannot be migrated won't be migrated either.
Any integrations with unsupported type won't be migrated unless `UNSUPPORTED_INTEGRATION_TO_WEBHOOKS` is set to `true`.
The following integration types are supported:
- Datadog
- Pingdom
- Prometheus
- PRTG
- Stackdriver
- UptimeRobot
- New Relic
- Zabbix Webhook (for 5.0 and 5.2)
- Elastic Alerts
- Firebase
- Amazon CloudWatch (maps to Amazon SNS integration in Grafana OnCall)
#### Event rules (global event rulesets)
The tool is capable of migrating global event rulesets from PagerDuty to Grafana OnCall integrations. This feature is
@@ -319,7 +343,7 @@ Resources that can be migrated using this tool:
- Escalation Policies
- On-Call Schedules (including Rotations + Scheduled Overrides)
-- Teams + team memberships
+<!-- - Teams + team memberships TODO: uncomment out once we support teams-->
- User Paging Policies
### Limitations
@@ -337,14 +361,14 @@ Resources that can be migrated using this tool:
Configuration is done via environment variables passed to the docker container.
| Name | Description | Type | Default |
| ------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------- | ------- |
| `MIGRATING_FROM` | Set to `splunk` | String | N/A |
| `SPLUNK_API_KEY` | Splunk API **key**. To create an API Key, refer to [Splunk OnCall docs](https://help.victorops.com/knowledge-base/api/#:~:text=currently%20in%20place.-,API%20Configuration%20in%20Splunk%20On%2DCall,-To%20access%20the). | String | N/A |
| `SPLUNK_API_ID` | Splunk API **ID**. To retrieve this ID, refer to [Splunk OnCall docs](https://help.victorops.com/knowledge-base/api/#:~:text=currently%20in%20place.-,API%20Configuration%20in%20Splunk%20On%2DCall,-To%20access%20the). | String | N/A |
| `ONCALL_API_URL` | Grafana OnCall API URL. This can be found on the "Settings" page of your Grafana OnCall instance. | String | N/A |
| `ONCALL_API_TOKEN` | Grafana OnCall API Token. To create a token, navigate to the "Settings" page of your Grafana OnCall instance. | String | N/A |
| `MODE` | Migration mode (plan vs actual migration). | String (choices: `plan`, `migrate`) | `plan` |
### Resources
@@ -359,7 +383,7 @@ unmatched users or schedules that cannot be migrated won't be migrated as well.
##### Caveats
- delays between escalation steps may be slightly different in Grafana OnCall, see [Limitations](#limitations-1) for
more info.
- the following Splunk OnCall escalation step types are not supported and will not be migrated:
- "Notify the next user(s) in the current on-duty shift"
- "Notify the previous user(s) in the current on-duty shift"
@@ -391,9 +415,9 @@ See [Migrating Users](#migrating-users) for some more information on how users a
##### Caveats
- The WhatsApp escalation type is not supported and will not be migrated to the Grafana OnCall
user's personal notification policy
- Note that delays between escalation steps may be slightly different in Grafana OnCall,
see [Limitations](#limitations-1) for more info.
## Migrating Users


@@ -20,6 +20,7 @@ PAGERDUTY_TO_ONCALL_VENDOR_MAP = {
"Zabbix Webhook (for 5.0 and 5.2)": "zabbix",
"Elastic Alerts": "elastalert",
"Firebase": "fabric",
"Amazon CloudWatch": "amazon_sns",
}
# Experimental feature to migrate PD rulesets to OnCall integrations
@@ -38,3 +39,25 @@ UNSUPPORTED_INTEGRATION_TO_WEBHOOKS = (
)
MIGRATE_USERS = os.getenv("MIGRATE_USERS", "true").lower() == "true"
# Filter resources by team
PAGERDUTY_FILTER_TEAM = os.getenv("PAGERDUTY_FILTER_TEAM")
# Filter resources by users (comma-separated list of PagerDuty user IDs)
PAGERDUTY_FILTER_USERS = [
user_id.strip()
for user_id in os.getenv("PAGERDUTY_FILTER_USERS", "").split(",")
if user_id.strip()
]
# Filter resources by name regex patterns
PAGERDUTY_FILTER_SCHEDULE_REGEX = os.getenv("PAGERDUTY_FILTER_SCHEDULE_REGEX")
PAGERDUTY_FILTER_ESCALATION_POLICY_REGEX = os.getenv(
"PAGERDUTY_FILTER_ESCALATION_POLICY_REGEX"
)
PAGERDUTY_FILTER_INTEGRATION_REGEX = os.getenv("PAGERDUTY_FILTER_INTEGRATION_REGEX")
# Whether to preserve existing notification rules when migrating users
PRESERVE_EXISTING_USER_NOTIFICATION_RULES = (
os.getenv("PRESERVE_EXISTING_USER_NOTIFICATION_RULES", "true").lower() == "true"
)
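The `PAGERDUTY_FILTER_USERS` parsing above tolerates stray whitespace and trailing commas. A standalone sketch of the same comprehension, using hypothetical user IDs:

```python
import os

# Hypothetical IDs; note the extra spaces and trailing comma.
os.environ["PAGERDUTY_FILTER_USERS"] = " PABC123, PDEF456, "

user_ids = [
    user_id.strip()
    for user_id in os.getenv("PAGERDUTY_FILTER_USERS", "").split(",")
    if user_id.strip()  # drops empty fragments produced by trailing commas
]
print(user_ids)  # ['PABC123', 'PDEF456']
```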


@@ -1,4 +1,5 @@
import datetime
import re
from pdpyras import APISession
@@ -11,6 +12,11 @@ from lib.pagerduty.config import (
MODE,
MODE_PLAN,
PAGERDUTY_API_TOKEN,
PAGERDUTY_FILTER_ESCALATION_POLICY_REGEX,
PAGERDUTY_FILTER_INTEGRATION_REGEX,
PAGERDUTY_FILTER_SCHEDULE_REGEX,
PAGERDUTY_FILTER_TEAM,
PAGERDUTY_FILTER_USERS,
)
from lib.pagerduty.report import (
escalation_policy_report,
@@ -43,6 +49,136 @@ from lib.pagerduty.resources.users import (
)
def filter_schedules(schedules):
"""Filter schedules based on configured filters"""
filtered_schedules = []
filtered_out = 0
for schedule in schedules:
should_include = True
reason = None
# Filter by team
if PAGERDUTY_FILTER_TEAM:
teams = schedule.get("teams", [])
if not any(team["summary"] == PAGERDUTY_FILTER_TEAM for team in teams):
should_include = False
reason = f"No teams found for team filter: {PAGERDUTY_FILTER_TEAM}"
# Filter by users
if should_include and PAGERDUTY_FILTER_USERS:
schedule_users = set()
for layer in schedule.get("schedule_layers", []):
for user in layer.get("users", []):
schedule_users.add(user["user"]["id"])
if not any(user_id in schedule_users for user_id in PAGERDUTY_FILTER_USERS):
should_include = False
reason = f"No users found for user filter: {','.join(PAGERDUTY_FILTER_USERS)}"
# Filter by name regex
if should_include and PAGERDUTY_FILTER_SCHEDULE_REGEX:
if not re.match(PAGERDUTY_FILTER_SCHEDULE_REGEX, schedule["name"]):
should_include = False
reason = f"Schedule regex filter: {PAGERDUTY_FILTER_SCHEDULE_REGEX}"
if should_include:
filtered_schedules.append(schedule)
else:
filtered_out += 1
print(f"{TAB}Schedule {schedule['id']}: {reason}")
if filtered_out > 0:
print(f"Filtered out {filtered_out} schedules")
return filtered_schedules
def filter_escalation_policies(policies):
"""Filter escalation policies based on configured filters"""
filtered_policies = []
filtered_out = 0
for policy in policies:
should_include = True
reason = None
# Filter by team
if PAGERDUTY_FILTER_TEAM:
teams = policy.get("teams", [])
if not any(team["summary"] == PAGERDUTY_FILTER_TEAM for team in teams):
should_include = False
reason = f"No teams found for team filter: {PAGERDUTY_FILTER_TEAM}"
# Filter by users
if should_include and PAGERDUTY_FILTER_USERS:
policy_users = set()
for rule in policy.get("escalation_rules", []):
for target in rule.get("targets", []):
if target["type"] == "user":
policy_users.add(target["id"])
if not any(user_id in policy_users for user_id in PAGERDUTY_FILTER_USERS):
should_include = False
reason = f"No users found for user filter: {','.join(PAGERDUTY_FILTER_USERS)}"
# Filter by name regex
if should_include and PAGERDUTY_FILTER_ESCALATION_POLICY_REGEX:
if not re.match(PAGERDUTY_FILTER_ESCALATION_POLICY_REGEX, policy["name"]):
should_include = False
reason = f"Escalation policy regex filter: {PAGERDUTY_FILTER_ESCALATION_POLICY_REGEX}"
if should_include:
filtered_policies.append(policy)
else:
filtered_out += 1
print(f"{TAB}Policy {policy['id']}: {reason}")
if filtered_out > 0:
print(f"Filtered out {filtered_out} escalation policies")
return filtered_policies
def filter_integrations(integrations):
"""Filter integrations based on configured filters"""
filtered_integrations = []
filtered_out = 0
for integration in integrations:
should_include = True
reason = None
# Filter by team
if PAGERDUTY_FILTER_TEAM:
teams = integration["service"].get("teams", [])
if not any(team["summary"] == PAGERDUTY_FILTER_TEAM for team in teams):
should_include = False
reason = f"No teams found for team filter: {PAGERDUTY_FILTER_TEAM}"
# Filter by name regex
if should_include and PAGERDUTY_FILTER_INTEGRATION_REGEX:
integration_name = (
f"{integration['service']['name']} - {integration['name']}"
)
if not re.match(PAGERDUTY_FILTER_INTEGRATION_REGEX, integration_name):
should_include = False
reason = (
f"Integration regex filter: {PAGERDUTY_FILTER_INTEGRATION_REGEX}"
)
if should_include:
filtered_integrations.append(integration)
else:
filtered_out += 1
print(f"{TAB}Integration {integration['id']}: {reason}")
if filtered_out > 0:
print(f"Filtered out {filtered_out} integrations")
return filtered_integrations
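All three filter functions share the same team check: a resource passes only if one of its `teams` entries has a `summary` equal to the configured filter. A standalone sketch with hypothetical data (the real functions also print a per-resource reason, omitted here):

```python
# Hypothetical PagerDuty-style payloads.
schedules = [
    {"id": "S1", "teams": [{"summary": "Team 1"}]},
    {"id": "S2", "teams": []},  # no teams: filtered out
    {"id": "S3", "teams": [{"summary": "Team 2"}]},
]
team_filter = "Team 1"
kept = [
    s
    for s in schedules
    if any(t["summary"] == team_filter for t in s.get("teams", []))
]
print([s["id"] for s in kept])  # ['S1']
```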
def migrate() -> None:
session = APISession(PAGERDUTY_API_TOKEN)
session.timeout = 20
@@ -59,9 +195,13 @@ def migrate() -> None:
print("▶ Fetching schedules...")
# Fetch schedules from PagerDuty
schedules = session.list_all(
-    "schedules", params={"include[]": "schedule_layers", "time_zone": "UTC"}
+    "schedules",
+    params={"include[]": ["schedule_layers", "teams"], "time_zone": "UTC"},
)
# Apply filters to schedules
schedules = filter_schedules(schedules)
# Fetch overrides from PagerDuty
since = datetime.datetime.now(datetime.timezone.utc)
until = since + datetime.timedelta(
@@ -78,11 +218,19 @@ def migrate() -> None:
oncall_schedules = OnCallAPIClient.list_all("schedules")
print("▶ Fetching escalation policies...")
-escalation_policies = session.list_all("escalation_policies")
+escalation_policies = session.list_all(
+    "escalation_policies", params={"include[]": "teams"}
+)
# Apply filters to escalation policies
escalation_policies = filter_escalation_policies(escalation_policies)
oncall_escalation_chains = OnCallAPIClient.list_all("escalation_chains")
print("▶ Fetching integrations...")
-services = session.list_all("services", params={"include[]": "integrations"})
+services = session.list_all(
+    "services", params={"include[]": ["integrations", "teams"]}
+)
vendors = session.list_all("vendors")
integrations = []
@@ -92,6 +240,9 @@ def migrate() -> None:
integration["service"] = service
integrations.append(integration)
# Apply filters to integrations
integrations = filter_integrations(integrations)
oncall_integrations = OnCallAPIClient.list_all("integrations")
rulesets = None


@@ -1,4 +1,5 @@
from lib.common.report import ERROR_SIGN, SUCCESS_SIGN, TAB, WARNING_SIGN
from lib.pagerduty.config import PRESERVE_EXISTING_USER_NOTIFICATION_RULES
def format_user(user: dict) -> str:
@@ -88,8 +89,22 @@ def user_report(users: list[dict]) -> str:
for user in sorted(users, key=lambda u: bool(u["oncall_user"]), reverse=True):
result += "\n" + TAB + format_user(user)
-if user["oncall_user"] and user["notification_rules"]:
-    result += " (existing notification rules will be deleted)"
+if user["oncall_user"]:
if (
user["oncall_user"]["notification_rules"]
and PRESERVE_EXISTING_USER_NOTIFICATION_RULES
):
# already has user notification rules defined in OnCall.. we won't touch these
result += " (existing notification rules will be preserved due to the PRESERVE_EXISTING_USER_NOTIFICATION_RULES being set to True and this user already having notification rules defined in OnCall)"
elif (
user["oncall_user"]["notification_rules"]
and not PRESERVE_EXISTING_USER_NOTIFICATION_RULES
):
# already has user notification rules defined in OnCall.. we will overwrite these
result += " (existing notification rules will be overwritten due to the PRESERVE_EXISTING_USER_NOTIFICATION_RULES being set to False)"
elif user["notification_rules"]:
# user has notification rules defined in PagerDuty, but none defined in OnCall, we will migrate these
result += " (existing PagerDuty notification rules will be migrated due to this user not having any notification rules defined in OnCall)"
return result


@@ -17,6 +17,10 @@ def match_escalation_policy_for_integration(
policy_id = integration["service"]["escalation_policy"]["id"]
policy = find_by_id(escalation_policies, policy_id)
if policy is None:
integration["is_escalation_policy_flawed"] = True
return
integration["is_escalation_policy_flawed"] = bool(
policy["unmatched_users"] or policy["flawed_schedules"]
)


@@ -1,7 +1,10 @@
import copy
from lib.oncall.api_client import OnCallAPIClient
-from lib.pagerduty.config import PAGERDUTY_TO_ONCALL_CONTACT_METHOD_MAP
+from lib.pagerduty.config import (
+    PAGERDUTY_TO_ONCALL_CONTACT_METHOD_MAP,
+    PRESERVE_EXISTING_USER_NOTIFICATION_RULES,
+)
from lib.utils import remove_duplicates, transform_wait_delay
@@ -23,6 +26,13 @@ def remove_duplicate_rules_between_waits(rules: list[dict]) -> list[dict]:
def migrate_notification_rules(user: dict) -> None:
if (
PRESERVE_EXISTING_USER_NOTIFICATION_RULES
and user["oncall_user"]["notification_rules"]
):
print(f"Preserving existing notification rules for {user['email']}")
return
notification_rules = [
rule for rule in user["notification_rules"] if rule["urgency"] == "high"
]


@@ -1330,7 +1330,7 @@ expected_integrations_result = [
"scheduled_actions": [],
},
"oncall_integration": None,
-"oncall_type": None,
+"oncall_type": "amazon_sns",
"is_escalation_policy_flawed": False,
},
{
@@ -1420,7 +1420,7 @@ expected_integrations_result = [
"scheduled_actions": [],
},
"oncall_integration": None,
-"oncall_type": None,
+"oncall_type": "amazon_sns",
"is_escalation_policy_flawed": True,
},
{
@@ -1510,7 +1510,7 @@ expected_integrations_result = [
"scheduled_actions": [],
},
"oncall_integration": None,
-"oncall_type": None,
+"oncall_type": "amazon_sns",
"is_escalation_policy_flawed": True,
},
{


@@ -1,6 +1,11 @@
from unittest.mock import call, patch
-from lib.pagerduty.migrate import migrate
+from lib.pagerduty.migrate import (
+    filter_escalation_policies,
+    filter_integrations,
+    filter_schedules,
+    migrate,
+)
@patch("lib.pagerduty.migrate.MIGRATE_USERS", False)
@@ -17,11 +22,281 @@ def test_users_are_skipped_when_migrate_users_is_false(
# Assert no user-related fetching or migration occurs
assert mock_session.list_all.call_args_list == [
-call("schedules", params={"include[]": "schedule_layers", "time_zone": "UTC"}),
-call("escalation_policies"),
-call("services", params={"include[]": "integrations"}),
+call(
+    "schedules",
+    params={"include[]": ["schedule_layers", "teams"], "time_zone": "UTC"},
+),
+call("escalation_policies", params={"include[]": "teams"}),
+call("services", params={"include[]": ["integrations", "teams"]}),
call("vendors"),
# no user notification rules fetching
]
mock_oncall_client.list_users_with_notification_rules.assert_not_called()
class TestPagerDutyFiltering:
def setup_method(self):
self.mock_schedule = {
"id": "SCHEDULE1",
"name": "Test Schedule",
"teams": [{"summary": "Team 1"}],
"schedule_layers": [
{
"users": [
{"user": {"id": "USER1"}},
{"user": {"id": "USER2"}},
]
}
],
}
self.mock_policy = {
"id": "POLICY1",
"name": "Test Policy",
"teams": [{"summary": "Team 1"}],
"escalation_rules": [
{
"targets": [
{"type": "user", "id": "USER1"},
{"type": "user", "id": "USER2"},
]
}
],
}
self.mock_integration = {
"id": "INTEGRATION1",
"name": "Test Integration",
"service": {
"name": "Service 1",
"teams": [{"summary": "Team 1"}],
},
}
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_TEAM", "Team 1")
def test_filter_schedules_by_team(self):
schedules = [
self.mock_schedule,
{**self.mock_schedule, "teams": [{"summary": "Team 2"}]},
]
filtered = filter_schedules(schedules)
assert len(filtered) == 1
assert filtered[0]["id"] == "SCHEDULE1"
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_USERS", ["USER1"])
def test_filter_schedules_by_users(self):
schedules = [
self.mock_schedule,
{
**self.mock_schedule,
"schedule_layers": [{"users": [{"user": {"id": "USER3"}}]}],
},
]
filtered = filter_schedules(schedules)
assert len(filtered) == 1
assert filtered[0]["id"] == "SCHEDULE1"
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_SCHEDULE_REGEX", "^Test")
def test_filter_schedules_by_regex(self):
schedules = [
self.mock_schedule,
{**self.mock_schedule, "name": "Production Schedule"},
]
filtered = filter_schedules(schedules)
assert len(filtered) == 1
assert filtered[0]["id"] == "SCHEDULE1"
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_TEAM", "Team 1")
def test_filter_escalation_policies_by_team(self):
policies = [
self.mock_policy,
{**self.mock_policy, "teams": [{"summary": "Team 2"}]},
]
filtered = filter_escalation_policies(policies)
assert len(filtered) == 1
assert filtered[0]["id"] == "POLICY1"
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_USERS", ["USER1"])
def test_filter_escalation_policies_by_users(self):
policies = [
self.mock_policy,
{
**self.mock_policy,
"escalation_rules": [{"targets": [{"type": "user", "id": "USER3"}]}],
},
]
filtered = filter_escalation_policies(policies)
assert len(filtered) == 1
assert filtered[0]["id"] == "POLICY1"
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_ESCALATION_POLICY_REGEX", "^Test")
def test_filter_escalation_policies_by_regex(self):
policies = [
self.mock_policy,
{**self.mock_policy, "name": "Production Policy"},
]
filtered = filter_escalation_policies(policies)
assert len(filtered) == 1
assert filtered[0]["id"] == "POLICY1"
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_TEAM", "Team 1")
def test_filter_integrations_by_team(self):
integrations = [
self.mock_integration,
{
**self.mock_integration,
"service": {"teams": [{"summary": "Team 2"}]},
},
]
filtered = filter_integrations(integrations)
assert len(filtered) == 1
assert filtered[0]["id"] == "INTEGRATION1"
@patch(
"lib.pagerduty.migrate.PAGERDUTY_FILTER_INTEGRATION_REGEX", "^Service 1 - Test"
)
def test_filter_integrations_by_regex(self):
integrations = [
self.mock_integration,
{
**self.mock_integration,
"service": {"name": "Service 2"},
"name": "Production Integration",
},
]
filtered = filter_integrations(integrations)
assert len(filtered) == 1
assert filtered[0]["id"] == "INTEGRATION1"
class TestPagerDutyMigrationFiltering:
@patch("lib.pagerduty.migrate.filter_schedules")
@patch("lib.pagerduty.migrate.filter_escalation_policies")
@patch("lib.pagerduty.migrate.filter_integrations")
@patch("lib.pagerduty.migrate.APISession")
@patch("lib.pagerduty.migrate.OnCallAPIClient")
def test_migrate_calls_filters(
self,
MockOnCallAPIClient,
MockAPISession,
mock_filter_integrations,
mock_filter_policies,
mock_filter_schedules,
):
# Setup mock returns
mock_session = MockAPISession.return_value
mock_session.list_all.side_effect = [
[{"id": "U1", "name": "Test User", "email": "test@example.com"}], # users
[{"id": "S1"}], # schedules
[{"id": "P1"}], # policies
[{"id": "SVC1", "integrations": []}], # services
[{"id": "V1"}], # vendors
]
mock_session.jget.return_value = {"overrides": []} # Mock schedule overrides
mock_oncall_client = MockOnCallAPIClient.return_value
mock_oncall_client.list_all.return_value = []
# Run migration
migrate()
# Verify filters were called with correct data
mock_filter_schedules.assert_called_once_with([{"id": "S1"}])
mock_filter_policies.assert_called_once_with([{"id": "P1"}])
mock_filter_integrations.assert_called_once() # Service data is transformed, so just check it was called
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_TEAM", "Team 1")
@patch("lib.pagerduty.migrate.filter_schedules")
@patch("lib.pagerduty.migrate.filter_escalation_policies")
@patch("lib.pagerduty.migrate.filter_integrations")
@patch("lib.pagerduty.migrate.APISession")
@patch("lib.pagerduty.migrate.OnCallAPIClient")
def test_migrate_with_team_filter(
self,
MockOnCallAPIClient,
MockAPISession,
mock_filter_integrations,
mock_filter_policies,
mock_filter_schedules,
):
# Setup mock returns
mock_session = MockAPISession.return_value
mock_session.list_all.side_effect = [
[{"id": "U1", "name": "Test User", "email": "test@example.com"}], # users
[{"id": "S1", "teams": [{"summary": "Team 1"}]}], # schedules
[{"id": "P1", "teams": [{"summary": "Team 1"}]}], # policies
[
{"id": "SVC1", "teams": [{"summary": "Team 1"}], "integrations": []}
], # services
[{"id": "V1"}], # vendors
]
mock_session.jget.return_value = {"overrides": []} # Mock schedule overrides
mock_oncall_client = MockOnCallAPIClient.return_value
mock_oncall_client.list_all.return_value = []
# Run migration
migrate()
# Verify filters were called and filtered by team
mock_filter_schedules.assert_called_once()
mock_filter_policies.assert_called_once()
mock_filter_integrations.assert_called_once()
# Verify team parameter was included in API calls
assert mock_session.list_all.call_args_list == [
call("users", params={"include[]": "notification_rules"}),
call(
"schedules",
params={"include[]": ["schedule_layers", "teams"], "time_zone": "UTC"},
),
call("escalation_policies", params={"include[]": "teams"}),
call("services", params={"include[]": ["integrations", "teams"]}),
call("vendors"),
]
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_USERS", ["USER1"])
@patch("lib.pagerduty.migrate.filter_schedules")
@patch("lib.pagerduty.migrate.filter_escalation_policies")
@patch("lib.pagerduty.migrate.filter_integrations")
@patch("lib.pagerduty.migrate.APISession")
@patch("lib.pagerduty.migrate.OnCallAPIClient")
def test_migrate_with_users_filter(
self,
MockOnCallAPIClient,
MockAPISession,
mock_filter_integrations,
mock_filter_policies,
mock_filter_schedules,
):
# Setup mock returns
mock_session = MockAPISession.return_value
mock_session.list_all.side_effect = [
[{"id": "U1", "name": "Test User", "email": "test@example.com"}], # users
[
{
"id": "S1",
"schedule_layers": [{"users": [{"user": {"id": "USER1"}}]}],
}
], # schedules
[
{
"id": "P1",
"escalation_rules": [
{"targets": [{"type": "user", "id": "USER1"}]}
],
}
], # policies
[{"id": "SVC1", "integrations": []}], # services
[{"id": "V1"}], # vendors
]
mock_session.jget.return_value = {"overrides": []} # Mock schedule overrides
mock_oncall_client = MockOnCallAPIClient.return_value
mock_oncall_client.list_all.return_value = []
# Run migration
migrate()
# Verify filters were called and filtered by users
mock_filter_schedules.assert_called_once()
mock_filter_policies.assert_called_once()
mock_filter_integrations.assert_called_once()
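The users filter exercised above can be sketched as a standalone predicate. This is a hypothetical illustration of the behavior the test asserts (a schedule survives the filter when at least one of its layers references an allow-listed user), not the actual `filter_schedules` implementation from `lib.pagerduty.migrate`:

```python
def filter_schedules_by_users(schedules, allowed_user_ids):
    """Keep schedules where at least one layer references an allowed user.

    Hypothetical sketch of the filtering the test above exercises; the real
    filter lives in the migrator's resource helpers.
    """
    allowed = set(allowed_user_ids)
    kept = []
    for schedule in schedules:
        # Collect every user id referenced by any layer of this schedule.
        layer_users = {
            entry["user"]["id"]
            for layer in schedule.get("schedule_layers", [])
            for entry in layer.get("users", [])
        }
        if layer_users & allowed:
            kept.append(schedule)
    return kept
```

With the mock data above, a schedule whose only layer references `USER1` passes a `["USER1"]` filter, while schedules referencing other users are dropped.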

View file

@@ -1,14 +1,144 @@
from unittest.mock import call, patch
from lib.oncall.api_client import OnCallAPIClient
from lib.pagerduty.resources.notification_rules import migrate_notification_rules


class TestNotificationRulesPreservation:
def setup_method(self):
self.pd_user = {
"id": "U1",
"name": "Test User",
"email": "test@example.com",
"notification_rules": [
{
"id": "PD1",
"urgency": "high",
"start_delay_in_minutes": 0,
"contact_method": {"type": "email_contact_method"},
}
],
}
self.oncall_user = {
"id": "OC1",
"email": "test@example.com",
"notification_rules": [],
}
self.pd_user["oncall_user"] = self.oncall_user

    @patch(
"lib.pagerduty.resources.notification_rules.PRESERVE_EXISTING_USER_NOTIFICATION_RULES",
True,
)
@patch("lib.pagerduty.resources.notification_rules.OnCallAPIClient")
def test_existing_notification_rules_are_preserved(self, MockOnCallAPIClient):
# Setup user with existing notification rules
self.oncall_user["notification_rules"] = [{"id": "NR1"}]
# Run migration
migrate_notification_rules(self.pd_user)
# Verify no notification rules were migrated
MockOnCallAPIClient.create.assert_not_called()
MockOnCallAPIClient.delete.assert_not_called()

    @patch(
"lib.pagerduty.resources.notification_rules.PRESERVE_EXISTING_USER_NOTIFICATION_RULES",
True,
)
@patch("lib.pagerduty.resources.notification_rules.OnCallAPIClient")
def test_notification_rules_migrated_when_none_exist(self, MockOnCallAPIClient):
# Run migration
migrate_notification_rules(self.pd_user)
# Verify notification rules were migrated for both important and non-important cases
expected_calls = [
call(
"personal_notification_rules",
{"user_id": "OC1", "type": "notify_by_email", "important": False},
),
call(
"personal_notification_rules",
{"user_id": "OC1", "type": "notify_by_email", "important": True},
),
]
MockOnCallAPIClient.create.assert_has_calls(expected_calls)
MockOnCallAPIClient.delete.assert_not_called()

    @patch(
"lib.pagerduty.resources.notification_rules.PRESERVE_EXISTING_USER_NOTIFICATION_RULES",
False,
)
@patch("lib.pagerduty.resources.notification_rules.OnCallAPIClient")
def test_existing_notification_rules_are_replaced_when_preserve_is_false(
self, MockOnCallAPIClient
):
# Setup user with existing notification rules
self.oncall_user["notification_rules"] = [
{"id": "NR1", "important": False},
{"id": "NR2", "important": True},
]
# Run migration
migrate_notification_rules(self.pd_user)
# Verify old rules were deleted
expected_delete_calls = [
call("personal_notification_rules/NR1"),
call("personal_notification_rules/NR2"),
]
MockOnCallAPIClient.delete.assert_has_calls(
expected_delete_calls, any_order=True
)
# Verify new rules were created
expected_create_calls = [
call(
"personal_notification_rules",
{"user_id": "OC1", "type": "notify_by_email", "important": False},
),
call(
"personal_notification_rules",
{"user_id": "OC1", "type": "notify_by_email", "important": True},
),
]
MockOnCallAPIClient.create.assert_has_calls(expected_create_calls)

    @patch(
"lib.pagerduty.resources.notification_rules.PRESERVE_EXISTING_USER_NOTIFICATION_RULES",
False,
)
@patch("lib.pagerduty.resources.notification_rules.OnCallAPIClient")
def test_notification_rules_migrated_when_none_exist_and_preserve_is_false(
self, MockOnCallAPIClient
):
# Run migration
migrate_notification_rules(self.pd_user)
# Verify no rules were deleted (since none existed)
MockOnCallAPIClient.delete.assert_not_called()
# Verify new rules were created
expected_create_calls = [
call(
"personal_notification_rules",
{"user_id": "OC1", "type": "notify_by_email", "important": False},
),
call(
"personal_notification_rules",
{"user_id": "OC1", "type": "notify_by_email", "important": True},
),
]
MockOnCallAPIClient.create.assert_has_calls(expected_create_calls)

    @patch(
"lib.pagerduty.resources.notification_rules.PRESERVE_EXISTING_USER_NOTIFICATION_RULES",
False,
)
@patch("lib.pagerduty.resources.notification_rules.OnCallAPIClient")
def test_complex_notification_rules_migration(self, MockOnCallAPIClient):
# Test a more complex case with multiple notification methods and delays
user = {
"email": "test@example.com",
"notification_rules": [
{
"contact_method": {"type": "sms_contact_method"},
@@ -29,57 +159,69 @@ def test_migrate_notification_rules(api_client_create_mock, api_client_delete_mo
],
},
}
migrate_notification_rules(user)
# Verify old rules were deleted
expected_delete_calls = [
call("personal_notification_rules/EXISTING_RULE_ID_1"),
call("personal_notification_rules/EXISTING_RULE_ID_2"),
]
MockOnCallAPIClient.delete.assert_has_calls(
expected_delete_calls, any_order=True
)
# Verify new rules were created in correct order with correct delays
expected_create_calls = [
call(
"personal_notification_rules",
{
"user_id": "EXISTING_USER_ID",
"type": "notify_by_sms",
"important": False,
},
),
call(
"personal_notification_rules",
{
"user_id": "EXISTING_USER_ID",
"type": "wait",
"duration": 300,
"important": False,
},
),
call(
"personal_notification_rules",
{
"user_id": "EXISTING_USER_ID",
"type": "notify_by_mobile_app",
"important": False,
},
),
call(
"personal_notification_rules",
{
"user_id": "EXISTING_USER_ID",
"type": "notify_by_sms",
"important": True,
},
),
call(
"personal_notification_rules",
{
"user_id": "EXISTING_USER_ID",
"type": "wait",
"duration": 300,
"important": True,
},
),
call(
"personal_notification_rules",
{
"user_id": "EXISTING_USER_ID",
"type": "notify_by_mobile_app",
"important": True,
},
),
]
MockOnCallAPIClient.create.assert_has_calls(expected_create_calls)
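The ordering asserted in the complex test above implies a translation roughly like the following. This is a hypothetical sketch, not the actual `migrate_notification_rules` implementation: each PagerDuty contact method type maps to an OnCall step type, a non-zero `start_delay_in_minutes` becomes a preceding `wait` step (duration in seconds), and the whole chain is emitted twice, once for `important=False` and once for `important=True`:

```python
# Assumed mapping; the real table lives in
# lib/pagerduty/resources/notification_rules.
CONTACT_METHOD_MAP = {
    "email_contact_method": "notify_by_email",
    "sms_contact_method": "notify_by_sms",
    "push_notification_contact_method": "notify_by_mobile_app",
}


def build_oncall_rules(pd_rules, user_id):
    """Translate PagerDuty notification rules into OnCall personal rule
    payloads, duplicated for the important=False and important=True chains."""
    payloads = []
    for important in (False, True):
        for rule in pd_rules:
            delay_minutes = rule.get("start_delay_in_minutes", 0)
            if delay_minutes:
                # A delayed rule is preceded by a wait step in seconds.
                payloads.append({
                    "user_id": user_id,
                    "type": "wait",
                    "duration": delay_minutes * 60,
                    "important": important,
                })
            payloads.append({
                "user_id": user_id,
                "type": CONTACT_METHOD_MAP[rule["contact_method"]["type"]],
                "important": important,
            })
    return payloads
```

Under this sketch, an SMS rule at delay 0 followed by a push rule at delay 5 yields `notify_by_sms`, `wait` (300 s), `notify_by_mobile_app`, repeated for the important chain, matching the order the test asserts.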