feat: add opsgenie to migrator script (#5495)

This PR adds support for migrating data from OpsGenie to Grafana IRM.

Closes https://github.com/grafana/irm/issues/1179
Joey Orlando, 2025-04-07 08:47:27 -04:00 (committed by GitHub)
parent 4c72781d6d
commit e4728ea69f
GPG key ID: B5690EEEBB952194 (no known key found for this signature in database)
59 changed files with 4261 additions and 1496 deletions

@@ -0,0 +1,100 @@
---
description: A structured approach to task planning and execution for PlanIt tasks
globs:
alwaysApply: false
---
# PlanIt Mode
A structured approach to task planning and execution that emphasizes thorough analysis before action.
## Core Philosophy
Before diving into solutions:
1. Take time to understand the full scope of the problem
2. Look for existing similar solutions in the codebase
3. Consider different approaches and their tradeoffs
4. Think about potential edge cases and complications
5. Question your initial assumptions
## Execution Flow
When a user message starts with "PlanIt:", ALWAYS follow this exact sequence:
0. STOP AND THINK FIRST
- When you see "PlanIt:", this is a signal to pause and analyze
- DO NOT jump to conclusions or start planning immediately
- Take time to:
* Understand the full context
* Look for similar existing solutions
* Consider different approaches
* Question your assumptions
* Think about potential complications
- Only proceed to planning once you have a thorough understanding
1. Initial Prompt Refinement:
- Review and analyze the initial prompt for clarity and completeness
- Look for ambiguities or unstated requirements
- Consider edge cases and potential complications
- Suggest improvements if needed
- Seek confirmation before proceeding with any suggested revisions
2. Thoughtful Analysis Phase:
Before taking any action:
- Analyze task requirements thoroughly
- Review relevant parts of the codebase
- Look for similar existing solutions
- Consider different implementation approaches
- Document understanding and assumptions
- List potential challenges or edge cases
- Confirm understanding with user before proceeding
3. Structured Planning and Progress Tracking:
- Create a detailed action plan in `.cursor_tasks.md` using this format:
([Timestamp] should include the date and the time in hh:mm:ss format)
```markdown
# Task: [Task Name]
Created: [Timestamp]
## Action Plan
- [ ] Step 1
- [ ] Step 2
- [ ] Substep 2.1
- [ ] Substep 2.2
- [ ] Step 3
## Progress Notes
- [Timestamp] Started implementation of...
- [Timestamp] Completed step 1...
```
- After creating the plan, STOP and ask the user: "Does this plan look good to you? Should I proceed with implementation?"
- Only proceed with implementation after explicit user approval
- Update the plan continuously as tasks progress
- Document any new steps identified during execution
4. Continuous Learning and Adaptation:
- CRITICAL! If you make a mistake or get feedback, create or update cursor rules with your corrections!
- Document learnings and improvements
- Update approach based on new information
## Best Practices
1. Never rush to implementation
2. Question your initial assumptions
3. Look for existing solutions first
4. Consider multiple approaches
5. Think about edge cases early
6. Maintain clear and specific communication
7. Provide context for all decisions
8. Use iterative refinement when needed
9. Document all significant decisions and changes
10. Keep the user informed of progress
11. Seek clarification when requirements are ambiguous
12. ALWAYS get user approval before starting implementation
## Task Execution Flow
1. Initial analysis and understanding
2. Prompt refinement if needed
3. Thorough exploration of existing solutions
4. Create/update `.cursor_tasks.md`
5. GET USER APPROVAL OF PLAN
6. Execute planned steps
7. Document progress and learnings
8. Update plan as needed
9. Seek user feedback at key points

tools/migrators/.gitignore (vendored, new file)
@@ -0,0 +1,3 @@
.session
__pycache__/
*.pyc

@@ -7,4 +7,12 @@ COPY requirements.txt requirements.txt
RUN python3 -m pip install -r requirements.txt
COPY . .
# Create data directory and generate session ID
RUN mkdir -p /app/data && \
python3 -c "import uuid; open('/app/data/.session', 'w').write(str(uuid.uuid4()))"
# Set session file location
ENV SESSION_FILE=/app/data/.session
CMD ["python3", "main.py"]

@@ -6,6 +6,7 @@ Currently the migration tool supports migrating from:
- PagerDuty
- Splunk OnCall (VictorOps)
- OpsGenie
## Getting Started
@@ -15,6 +16,7 @@ Currently the migration tool supports migrating from:
4. Depending on which tool you are migrating from, see more specific instructions there:
- [PagerDuty](#prerequisites)
- [Splunk OnCall](#prerequisites-1)
- [OpsGenie](#prerequisites-2)
5. Run a [migration plan](#migration-plan)
6. If you are pleased with the results of the migration plan, run the tool in [migrate mode](#migration)
@@ -47,6 +49,18 @@ docker run --rm \
oncall-migrator
```
#### OpsGenie
```shell
docker run --rm \
-e MIGRATING_FROM="opsgenie" \
-e MODE="plan" \
-e ONCALL_API_URL="<ONCALL_API_URL>" \
-e ONCALL_API_TOKEN="<ONCALL_API_TOKEN>" \
-e OPSGENIE_API_KEY="<OPSGENIE_API_KEY>" \
oncall-migrator
```
Please read the generated report carefully: depending on its contents, some resources
may not be migrated and some existing Grafana OnCall resources may be deleted.
@@ -104,6 +118,18 @@ docker run --rm \
oncall-migrator
```
#### OpsGenie
```shell
docker run --rm \
-e MIGRATING_FROM="opsgenie" \
-e MODE="migrate" \
-e ONCALL_API_URL="<ONCALL_API_URL>" \
-e ONCALL_API_TOKEN="<ONCALL_API_TOKEN>" \
-e OPSGENIE_API_KEY="<OPSGENIE_API_KEY>" \
oncall-migrator
```
When performing a migration, only resources marked with ✅ or ⚠️ in the plan stage will be migrated.
The migrator is designed to be idempotent, so it's safe to run it multiple times. On every run, the tool
checks whether each resource already exists in Grafana OnCall and deletes it before creating a new one.
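The delete-then-recreate behaviour described above can be sketched as follows; `FakeClient` and its methods are illustrative stand-ins, not the migrator's actual API:

```python
class FakeClient:
    """In-memory stand-in for a Grafana OnCall API client (illustration only)."""

    def __init__(self):
        self._resources = {}
        self._next_id = 1

    def list(self):
        return list(self._resources.values())

    def delete(self, resource_id):
        del self._resources[resource_id]

    def create(self, resource):
        created = {"id": self._next_id, **resource}
        self._resources[self._next_id] = created
        self._next_id += 1
        return created


def idempotent_migrate(client, resource):
    # Delete any existing resource with the same name, then recreate it,
    # so repeated runs converge to the same end state.
    for existing in client.list():
        if existing["name"] == resource["name"]:
            client.delete(existing["id"])
    return client.create(resource)


client = FakeClient()
idempotent_migrate(client, {"name": "payments-schedule"})
idempotent_migrate(client, {"name": "payments-schedule"})  # safe to repeat
print(len(client.list()))  # -> 1
```

Because the resource is looked up by name and replaced, rerunning a failed or partial migration never produces duplicates.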
@@ -557,6 +583,126 @@ See [Migrating Users](#migrating-users) for some more information on how users a
- Note that delays between escalation steps may be slightly different in Grafana OnCall,
see [Limitations](#limitations-1) for more info.
## OpsGenie
### Overview
Resources that can be migrated using this tool:
- User notification rules
- On-call schedules (including rotations and overrides)
- Escalation policies
- Integrations
### Limitations
- Not all integration types are supported
- Not all Escalation Policy rule types are supported
- OpsGenie schedules with time restrictions (time-of-day or weekday-and-time-of-day) are not supported
- Delays between migrated notification/escalation rules may differ slightly from the originals
### Prerequisites
- Obtain an OpsGenie API key: <https://docs.opsgenie.com/docs/api-key-management>
### Configuration
Configuration is done via environment variables passed to the docker container.
| Name | Description | Type | Default |
| --------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------- | ------- |
| `MIGRATING_FROM` | Set to `opsgenie` | String | N/A |
| `OPSGENIE_API_KEY` | OpsGenie API key. To create a key, refer to [OpsGenie docs](https://docs.opsgenie.com/docs/api-key-management). | String | N/A |
| `OPSGENIE_API_URL` | OpsGenie API URL. Use `https://api.eu.opsgenie.com/v2` for EU instances. | String | `https://api.opsgenie.com/v2` |
| `ONCALL_API_URL` | Grafana OnCall API URL. This can be found on the "Settings" page of your Grafana OnCall instance. | String | N/A |
| `ONCALL_API_TOKEN` | Grafana OnCall API Token. To create a token, navigate to the "Settings" page of your Grafana OnCall instance. | String | N/A |
| `MODE` | Migration mode (plan vs actual migration). | String (choices: `plan`, `migrate`) | `plan` |
| `UNSUPPORTED_INTEGRATION_TO_WEBHOOKS` | When set to `true`, integrations with unsupported type will be migrated to Grafana OnCall integrations with type "webhook". When set to `false`, integrations with unsupported type won't be migrated. | Boolean | `false` |
| `MIGRATE_USERS` | If `false`, will allow you to import all objects while ignoring user references in schedules and escalation policies. In addition, if `false`, will also skip importing User notification rules. | Boolean | `true` |
| `OPSGENIE_FILTER_TEAM` | Filter resources by team name. Only resources associated with this team will be migrated. | String | N/A |
| `OPSGENIE_FILTER_USERS` | Filter resources by OpsGenie user IDs (comma-separated). Only resources associated with these users will be migrated. | String | N/A |
| `OPSGENIE_FILTER_SCHEDULE_REGEX` | Filter schedules by name using a regex pattern. Only schedules whose names match this pattern will be migrated. | String | N/A |
| `OPSGENIE_FILTER_ESCALATION_POLICY_REGEX` | Filter escalation policies by name using a regex pattern. Only policies whose names match this pattern will be migrated. | String | N/A |
| `OPSGENIE_FILTER_INTEGRATION_REGEX` | Filter integrations by name using a regex pattern. Only integrations whose names match this pattern will be migrated. | String | N/A |
| `PRESERVE_EXISTING_USER_NOTIFICATION_RULES` | Whether to preserve existing notification rules when migrating users. | Boolean | `true` |
### Resources
#### User notification rules
The tool is capable of migrating user notification rules from OpsGenie to Grafana OnCall.
Notification rules from OpsGenie will be migrated to both default and important notification rules in Grafana OnCall
for each user. Note that delays between notification rules may be slightly different in Grafana OnCall.
By default (when `PRESERVE_EXISTING_USER_NOTIFICATION_RULES` is `true`), existing notification rules in Grafana OnCall will
be preserved and OpsGenie rules won't be imported for users who already have notification rules configured in Grafana OnCall.
If you want to replace existing notification rules with ones from OpsGenie, set `PRESERVE_EXISTING_USER_NOTIFICATION_RULES`
to `false`.
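The `PRESERVE_EXISTING_USER_NOTIFICATION_RULES` behaviour boils down to a simple check, sketched below (a hypothetical helper, not the migrator's actual code):

```python
def should_import_opsgenie_rules(existing_rules, preserve_existing=True):
    """Return True if a user's OpsGenie notification rules should be imported.

    Mirrors PRESERVE_EXISTING_USER_NOTIFICATION_RULES: when preserving,
    users who already have rules in Grafana OnCall are left untouched.
    """
    if preserve_existing and existing_rules:
        return False
    return True


print(should_import_opsgenie_rules(["notify via mobile app"]))         # -> False
print(should_import_opsgenie_rules(["notify via mobile app"], False))  # -> True
print(should_import_opsgenie_rules([]))                                # -> True
```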
See [Migrating Users](#migrating-users) for some more information on how users are migrated.
#### On-call schedules
The tool is capable of migrating on-call schedules from OpsGenie to Grafana OnCall.
Schedules are migrated with their rotations. The following features are supported:
- Daily, weekly, and hourly rotations
- Multiple rotations per schedule
- Schedule overrides
On-call schedules will be migrated to new Grafana OnCall schedules with the same name as in OpsGenie.
Any existing schedules with the same name will be deleted before migration.
Schedules that reference unmatched users won't be migrated. OpsGenie schedules that use
time restrictions won't be migrated either, as these are not supported.
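The two skip conditions can be sketched as a filter; the schedule and rotation field names below are illustrative, not OpsGenie's exact payload shape:

```python
def migratable_schedules(schedules, matched_user_ids):
    """Keep only schedules the tool can migrate: no time restrictions,
    and every referenced user matched to a Grafana OnCall user."""
    result = []
    for schedule in schedules:
        rotations = schedule["rotations"]
        if any(r.get("timeRestriction") for r in rotations):
            continue  # time restrictions are not supported
        participants = {p for r in rotations for p in r["participants"]}
        if not participants <= matched_user_ids:
            continue  # references an unmatched user
        result.append(schedule)
    return result


schedules = [
    {"name": "primary", "rotations": [{"participants": ["u1"], "timeRestriction": None}]},
    {"name": "night", "rotations": [{"participants": ["u1"], "timeRestriction": {"type": "time-of-day"}}]},
    {"name": "ghost", "rotations": [{"participants": ["u9"], "timeRestriction": None}]},
]
print([s["name"] for s in migratable_schedules(schedules, {"u1", "u2"})])  # -> ['primary']
```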
#### Escalation policies
The tool is capable of migrating escalation policies from OpsGenie to Grafana OnCall.
Every escalation policy will be migrated to a new Grafana OnCall escalation chain with name convention of
`{team name} - {escalation policy name}`.
Caveats:
- Only the "Notify user" and "Notify on-call user(s) in schedule" rule types are supported. If an OpsGenie
escalation policy contains any other rule type, those rule steps are skipped during migration
- Any existing escalation chains with the same name in Grafana OnCall will be deleted before migration.
Note that delays between escalation steps may be slightly different in Grafana OnCall
- Migrated escalation chains are not attached to any integration or route; this must be done manually
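The naming convention above is simple enough to state as code:

```python
def escalation_chain_name(team_name: str, policy_name: str) -> str:
    """Name given to the Grafana OnCall escalation chain created for a
    migrated OpsGenie escalation policy: "{team name} - {policy name}"."""
    return f"{team_name} - {policy_name}"


print(escalation_chain_name("Payments", "High Urgency"))  # -> Payments - High Urgency
```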
#### Integrations
The tool is capable of migrating integrations from OpsGenie to Grafana OnCall.
Every OpsGenie integration is migrated to a corresponding Grafana OnCall integration.
Integrations with an unsupported type won't be migrated unless `UNSUPPORTED_INTEGRATION_TO_WEBHOOKS` is set to `true`,
in which case they are migrated as webhook integrations.
The following integration types are supported:
- Amazon CloudWatch (maps to Amazon SNS integration in Grafana OnCall)
- Amazon SNS
- AppDynamics
- Datadog
- Email
- Jira (including Jira Service Desk)
- Kapacitor
- New Relic (including legacy New Relic)
- Pingdom (including Pingdom Server Monitor (Scout))
- Prometheus (maps to Alertmanager in Grafana OnCall)
- PRTG
- Sentry
- Stackdriver
- UptimeRobot
- Webhook
- Zabbix
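The two explicit remappings (CloudWatch to Amazon SNS, Prometheus to Alertmanager) plus the webhook fallback can be sketched as a lookup. The OpsGenie type keys and Grafana OnCall slugs below are illustrative, not the migrator's actual identifiers:

```python
# Illustrative subset of the supported-types list above.
INTEGRATION_TYPE_MAP = {
    "AmazonCloudWatch": "amazon_sns",  # CloudWatch maps to Amazon SNS
    "Prometheus": "alertmanager",      # Prometheus maps to Alertmanager
    "Datadog": "datadog",
    "Webhook": "webhook",
}


def target_integration_type(opsgenie_type, unsupported_to_webhooks=False):
    """Resolve the Grafana OnCall type for an OpsGenie integration.

    Returns None (i.e. skip the integration) when the type is unsupported
    and the UNSUPPORTED_INTEGRATION_TO_WEBHOOKS fallback is disabled.
    """
    mapped = INTEGRATION_TYPE_MAP.get(opsgenie_type)
    if mapped is not None:
        return mapped
    return "webhook" if unsupported_to_webhooks else None


print(target_integration_type("Prometheus"))           # -> alertmanager
print(target_integration_type("SomeNicheTool"))        # -> None
print(target_integration_type("SomeNicheTool", True))  # -> webhook
```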
### After migration
- Connect integrations (press the "How to connect" button on the integration page)
- Make sure users connect their phone numbers, Slack accounts, etc. in their user settings
- Review and adjust any webhook integrations that were migrated from unsupported OpsGenie integration types
## Migrating Users
Note that users are matched by email, so if there are users in the report with "no Grafana OnCall user found with
@@ -608,3 +754,34 @@ docker run --rm \
-e SPLUNK_API_KEY="<SPLUNK_API_KEY>" \
oncall-migrator python /app/add_users_to_grafana.py
```
### OpsGenie
```bash
docker run --rm \
-e MIGRATING_FROM="opsgenie" \
-e GRAFANA_URL="<GRAFANA_API_URL>" \
-e GRAFANA_USERNAME="<GRAFANA_USERNAME>" \
-e GRAFANA_PASSWORD="<GRAFANA_PASSWORD>" \
-e OPSGENIE_API_KEY="<OPSGENIE_API_KEY>" \
-e OPSGENIE_API_URL="<OPSGENIE_API_URL>" \
oncall-migrator python /app/add_users_to_grafana.py
```
You can also filter which OpsGenie users are added to Grafana by using the `OPSGENIE_FILTER_USERS` environment variable:
```bash
docker run --rm \
-e MIGRATING_FROM="opsgenie" \
-e GRAFANA_URL="<GRAFANA_API_URL>" \
-e GRAFANA_USERNAME="<GRAFANA_USERNAME>" \
-e GRAFANA_PASSWORD="<GRAFANA_PASSWORD>" \
-e OPSGENIE_API_KEY="<OPSGENIE_API_KEY>" \
-e OPSGENIE_API_URL="<OPSGENIE_API_URL>" \
-e OPSGENIE_FILTER_USERS="OPSGENIE_USER_ID_1,OPSGENIE_USER_ID_2,OPSGENIE_USER_ID_3" \
oncall-migrator python /app/add_users_to_grafana.py
```
This is useful when you want to selectively add users to Grafana, such as when testing the migration process
or when you only need to add specific users from a large OpsGenie organization.
The `OPSGENIE_FILTER_USERS` variable should contain a comma-separated list of OpsGenie user IDs.

@@ -4,15 +4,19 @@ import sys
from pdpyras import APISession
from lib.grafana.api_client import GrafanaAPIClient
from lib.opsgenie.api_client import OpsGenieAPIClient
from lib.splunk.api_client import SplunkOnCallAPIClient
MIGRATING_FROM = os.environ["MIGRATING_FROM"]
PAGERDUTY = "pagerduty"
SPLUNK = "splunk"
OPSGENIE = "opsgenie"
PAGERDUTY_API_TOKEN = os.environ.get("PAGERDUTY_API_TOKEN")
SPLUNK_API_ID = os.environ.get("SPLUNK_API_ID")
SPLUNK_API_KEY = os.environ.get("SPLUNK_API_KEY")
OPSGENIE_API_KEY = os.environ.get("OPSGENIE_API_KEY")
OPSGENIE_API_URL = os.environ.get("OPSGENIE_API_URL", "https://api.opsgenie.com/v2")
GRAFANA_URL = os.environ["GRAFANA_URL"] # Example: http://localhost:3000
GRAFANA_USERNAME = os.environ["GRAFANA_USERNAME"]
@@ -25,6 +29,13 @@ if PAGERDUTY_FILTER_USERS:
else:
PAGERDUTY_FILTER_USERS = []
# Get optional filter for OpsGenie user IDs
OPSGENIE_FILTER_USERS = os.environ.get("OPSGENIE_FILTER_USERS", "")
if OPSGENIE_FILTER_USERS:
OPSGENIE_FILTER_USERS = OPSGENIE_FILTER_USERS.split(",")
else:
OPSGENIE_FILTER_USERS = []
SUCCESS_SIGN = ""
ERROR_SIGN = ""
@@ -63,6 +74,30 @@ def migrate_splunk_users():
create_grafana_user(f"{user['firstName']} {user['lastName']}", user["email"])
def migrate_opsgenie_users():
"""
Migrate users from OpsGenie to Grafana.
If OPSGENIE_FILTER_USERS is set, only users with IDs in that list will be migrated.
"""
client = OpsGenieAPIClient(OPSGENIE_API_KEY, OPSGENIE_API_URL)
all_users = client.list_users()
# Filter users if OPSGENIE_FILTER_USERS is set
if OPSGENIE_FILTER_USERS:
filtered_users = [
user for user in all_users if user["id"] in OPSGENIE_FILTER_USERS
]
skipped_count = len(all_users) - len(filtered_users)
if skipped_count > 0:
print(f"Skipping {skipped_count} users not in OPSGENIE_FILTER_USERS.")
users_to_migrate = filtered_users
else:
users_to_migrate = all_users
for user in users_to_migrate:
create_grafana_user(user["fullName"], user["username"])
def create_grafana_user(name: str, email: str):
response = grafana_client.create_user_with_random_password(name, email)
@@ -81,5 +116,7 @@ if __name__ == "__main__":
migrate_pagerduty_users()
elif MIGRATING_FROM == SPLUNK:
migrate_splunk_users()
elif MIGRATING_FROM == OPSGENIE:
migrate_opsgenie_users()
else:
raise ValueError("Invalid value for MIGRATING_FROM")

@@ -3,8 +3,9 @@ from urllib.parse import urljoin
PAGERDUTY = "pagerduty"
SPLUNK = "splunk"
OPSGENIE = "opsgenie"
MIGRATING_FROM = os.getenv("MIGRATING_FROM")
assert MIGRATING_FROM in (PAGERDUTY, SPLUNK)
assert MIGRATING_FROM in (PAGERDUTY, SPLUNK, OPSGENIE)
MODE_PLAN = "plan"
MODE_MIGRATE = "migrate"

@@ -2,3 +2,10 @@ TAB = " " * 4
SUCCESS_SIGN = ""
ERROR_SIGN = ""
WARNING_SIGN = "⚠️"  # TODO: warning sign does not render properly
def format_error_list(errors: list[str]) -> str:
"""Format a list of errors into a string with bullet points."""
if not errors:
return ""
return "\n".join(f"{TAB}- {error}" for error in errors)
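For reference, a quick usage example of the `format_error_list` helper above, restated with the four-space `TAB` from the same module so it runs standalone:

```python
TAB = " " * 4


def format_error_list(errors: list[str]) -> str:
    """Format a list of errors into a string with bullet points."""
    if not errors:
        return ""
    return "\n".join(f"{TAB}- {error}" for error in errors)


print(format_error_list(["user not found", "invalid schedule"]))
#     - user not found
#     - invalid schedule
print(repr(format_error_list([])))  # -> ''
```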

@@ -1,75 +0,0 @@
"""
Common service filtering functionality.
"""
import re
from typing import Any, Dict, List
from lib.pagerduty.config import (
PAGERDUTY_FILTER_SERVICE_REGEX,
PAGERDUTY_FILTER_TEAM,
PAGERDUTY_FILTER_USERS,
)
def filter_services(
services: List[Dict[str, Any]], tab: str = ""
) -> List[Dict[str, Any]]:
"""
Filter services based on configured filters.
Args:
services: List of service dictionaries to filter
tab: Optional indentation prefix for logging
Returns:
List of filtered services
"""
filtered_services = []
filtered_out = 0
for service in services:
should_include = True
reason = None
# Filter by team
if PAGERDUTY_FILTER_TEAM:
teams = service.get("teams", [])
if not any(team["summary"] == PAGERDUTY_FILTER_TEAM for team in teams):
should_include = False
reason = f"No teams found for team filter: {PAGERDUTY_FILTER_TEAM}"
# Filter by users (for technical services)
if (
should_include
and PAGERDUTY_FILTER_USERS
and service.get("type") != "business_service"
):
service_users = set()
# Get users from escalation policy if present
if service.get("escalation_policy"):
for rule in service["escalation_policy"].get("escalation_rules", []):
for target in rule.get("targets", []):
if target["type"] == "user":
service_users.add(target["id"])
if not any(user_id in service_users for user_id in PAGERDUTY_FILTER_USERS):
should_include = False
reason = f"No users found for user filter: {','.join(PAGERDUTY_FILTER_USERS)}"
# Filter by name regex
if should_include and PAGERDUTY_FILTER_SERVICE_REGEX:
if not re.match(PAGERDUTY_FILTER_SERVICE_REGEX, service["name"]):
should_include = False
reason = f"Service name does not match regex: {PAGERDUTY_FILTER_SERVICE_REGEX}"
if should_include:
filtered_services.append(service)
else:
filtered_out += 1
print(f"{tab}Service {service['id']}: {reason}")
if filtered_out > 0:
print(f"Filtered out {filtered_out} services")
return filtered_services

@@ -0,0 +1 @@
ONCALL_SHIFT_WEB_SOURCE = 0 # alias for "web"

@@ -1,242 +0,0 @@
"""
Migration logic for converting PagerDuty services to Grafana's service model.
This module provides functions to migrate PagerDuty services to Grafana's service model,
including creating the required 'pagerduty' Group and handling both individual and batch migrations.
"""
import json
import logging
from typing import Any, Dict, List, Optional
from lib.common.report import TAB
from lib.grafana.service_model_client import ServiceModelClient
from lib.grafana.transform import transform_service, validate_component
from lib.pagerduty.report import format_service
from lib.pagerduty.resources.business_service import BusinessService
from lib.pagerduty.resources.services import TechnicalService
# Configure logging
logging.basicConfig(
level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger(__name__)
def migrate_technical_service(
client: ServiceModelClient, service: TechnicalService, dry_run: bool = False
) -> Optional[Dict[str, Any]]:
"""
Migrate a single technical service to Grafana's service model.
Args:
client: The ServiceModelClient to use
service: The technical service to migrate
dry_run: If True, only validate and log what would be done
Returns:
The created component if successful, None otherwise
"""
try:
# Transform the service
component = transform_service(service)
# Check if component already exists
existing = client.get_component(component["metadata"]["name"])
if existing:
print(TAB + format_service(service, True) + " (preserved)")
service.preserved = True
service.migration_errors = None
return existing
# Validate the transformed component
errors = validate_component(component)
if errors:
service.migration_errors = errors
service.preserved = False
print(TAB + format_service(service, False))
return None
if dry_run:
service.migration_errors = None
service.preserved = False
print(TAB + format_service(service, True) + " (would create)")
return component
# Create the component
created = client.create_component(component)
service.migration_errors = None
service.preserved = False
print(TAB + format_service(service, True) + " (created)")
return created
except Exception as e:
service.migration_errors = str(e)
service.preserved = False
print(TAB + format_service(service, False))
return None
def migrate_business_service(
client: ServiceModelClient, service: BusinessService, dry_run: bool = False
) -> Optional[Dict[str, Any]]:
"""
Migrate a single business service to Grafana's service model.
Args:
client: The ServiceModelClient to use
service: The business service to migrate
dry_run: If True, only validate and log what would be done
Returns:
The created component if successful, None otherwise
"""
try:
# Transform the service
component = transform_service(service)
# Check if component already exists
existing = client.get_component(component["metadata"]["name"])
if existing:
print(TAB + format_service(service, True) + " (preserved)")
service.preserved = True
service.migration_errors = None
return existing
# Validate the transformed component
errors = validate_component(component)
if errors:
service.migration_errors = errors
service.preserved = False
print(TAB + format_service(service, False))
return None
if dry_run:
service.migration_errors = None
service.preserved = False
print(TAB + format_service(service, True) + " (would create)")
return component
# Create the component
created = client.create_component(component)
service.migration_errors = None
service.preserved = False
print(TAB + format_service(service, True) + " (created)")
return created
except Exception as e:
service.migration_errors = str(e)
service.preserved = False
print(TAB + format_service(service, False))
return None
def _migrate_service_batch(
client: ServiceModelClient,
services: List[Any],
migrate_func: callable,
dry_run: bool = False,
) -> Dict[str, Any]:
"""
Migrate a batch of services using the provided migration function.
Args:
client: The ServiceModelClient to use
services: List of services to migrate
migrate_func: Function to use for migrating each service
dry_run: If True, only validate and log what would be done
Returns:
Dictionary containing migration statistics and created components
"""
created_components = {}
for service in services:
component = migrate_func(client, service, dry_run)
if component:
created_components[service.id] = component
return created_components
def _update_service_dependencies(
client: ServiceModelClient,
services: List[Any],
created_components: Dict[str, Any],
dry_run: bool = False,
) -> None:
"""
Update dependencies for all services with proper refs.
Args:
client: The ServiceModelClient to use
services: List of services to update
created_components: Dictionary of created components by service ID
dry_run: If True, only validate and log what would be done
"""
for service in services:
if service.id in created_components and service.dependencies:
component_name = created_components[service.id]["metadata"]["name"]
depends_on_refs = [
{
"apiVersion": "servicemodel.ext.grafana.com/v1alpha1",
"kind": "Component",
"name": created_components[dep.id]["metadata"]["name"],
}
for dep in service.dependencies
if dep.id in created_components
]
if depends_on_refs:
# Create patch payload with only the dependsOnRefs field
patch_payload = {"spec": {"dependsOnRefs": depends_on_refs}}
if not dry_run:
try:
client.patch_component(component_name, patch_payload)
print(f"Updated dependencies for service: {service.name}")
except Exception as e:
print(
f"Failed to update dependencies for service {service.name}: {e}"
)
# Log the full error details for debugging
print(f"Patch payload: {json.dumps(patch_payload, indent=2)}")
def migrate_all_services(
client: ServiceModelClient,
technical_services: List[TechnicalService],
business_services: List[BusinessService],
dry_run: bool = False,
) -> None:
"""
Migrate all PagerDuty services to Grafana's service model.
Args:
client: The ServiceModelClient to use
technical_services: List of technical services to migrate
business_services: List of business services to migrate
dry_run: If True, only validate and log what would be done
Returns:
Dictionary containing migration statistics
"""
# Migrate technical services
tech_components = _migrate_service_batch(
client, technical_services, migrate_technical_service, dry_run
)
# Migrate business services
bus_components = _migrate_service_batch(
client, business_services, migrate_business_service, dry_run
)
# Update dependencies
created_components = {**tech_components, **bus_components}
_update_service_dependencies(
client, technical_services + business_services, created_components, dry_run
)
return

@@ -1,117 +0,0 @@
"""
Transformation logic for converting PagerDuty services to Grafana Service Model format.
This module provides functions to transform PagerDuty technical and business services
into the Backstage Catalog format used by Grafana's Service Model.
"""
from typing import Any, Dict, List, Union
from lib.pagerduty.resources.business_service import BusinessService
from lib.pagerduty.resources.services import TechnicalService
def transform_service(
service: Union[TechnicalService, BusinessService]
) -> Dict[str, Any]:
"""
Transform a PagerDuty service (technical or business) into a Backstage Component.
Args:
service: The PagerDuty service to transform (either TechnicalService or BusinessService)
Returns:
A dictionary containing the transformed service in Backstage Component format
"""
# Determine service type and required fields
is_technical = isinstance(service, TechnicalService)
service_type = "service" if is_technical else "business_service"
# Create the base component structure
component = {
"apiVersion": "servicemodel.ext.grafana.com/v1alpha1",
"kind": "Component",
"metadata": {
"name": service.name.lower().replace(
" ", "-"
), # Convert to k8s-friendly name
"annotations": {"pagerduty.com/service-id": service.id},
},
"spec": {"type": service_type, "description": service.description},
}
# Add status annotation for technical services
if is_technical and hasattr(service, "status"):
component["metadata"]["annotations"]["pagerduty.com/status"] = service.status
# Add PagerDuty URLs to annotations
if service.html_url:
component["metadata"]["annotations"][
"pagerduty.com/html-url"
] = service.html_url
if service.self_url:
component["metadata"]["annotations"]["pagerduty.com/api-url"] = service.self_url
return component
def validate_component(component: Dict[str, Any]) -> List[str]:
"""
Validate a transformed Component resource.
Args:
component: The Component resource to validate
Returns:
List of validation errors. Empty list means valid.
"""
errors = []
# Check required fields
required_fields = [
("apiVersion", str),
("kind", str),
("metadata", dict),
("spec", dict),
]
for field, field_type in required_fields:
if field not in component:
errors.append(f"Missing required field: {field}")
elif not isinstance(component[field], field_type):
errors.append(f"Field {field} must be of type {field_type.__name__}")
# If we're missing required fields, don't continue with deeper validation
if errors:
return errors
# Check metadata requirements
metadata = component["metadata"]
if "name" not in metadata:
errors.append("metadata.name is required")
elif not isinstance(metadata["name"], str):
errors.append("metadata.name must be a string")
# Check required annotations
if "annotations" not in metadata:
errors.append("metadata.annotations is required")
else:
annotations = metadata["annotations"]
if "pagerduty.com/service-id" not in annotations:
errors.append("Required annotation missing: pagerduty.com/service-id")
if (
component["spec"]["type"] == "service"
and "pagerduty.com/status" not in annotations
):
errors.append("Required annotation missing: pagerduty.com/status")
# Check spec requirements
spec = component["spec"]
if "type" not in spec:
errors.append("spec.type is required")
elif not isinstance(spec["type"], str):
errors.append("spec.type must be a string")
elif spec["type"] not in ["service", "business_service"]:
errors.append("spec.type must be either 'service' or 'business_service'")
return errors

@@ -22,7 +22,7 @@ def api_call(method: str, base_url: str, path: str, **kwargs) -> requests.Respon
response.raise_for_status()
except HTTPError as e:
if e.response.status_code == 429:
cooldown_seconds = int(e.response.headers["Retry-After"])
cooldown_seconds = float(e.response.headers.get("Retry-After", 0.2))  # float, since int(0.2) would truncate to 0
sleep(cooldown_seconds)
return api_call(method, base_url, path, **kwargs)
elif e.response.status_code == 400:

@@ -1,20 +1,28 @@
import requests
from lib.base_config import ONCALL_API_TOKEN, ONCALL_API_URL
from lib.base_config import MIGRATING_FROM, ONCALL_API_TOKEN, ONCALL_API_URL
from lib.network import api_call as _api_call
from lib.session import get_or_create_session_id
class OnCallAPIClient:
_session_id = None
@classmethod
def api_call(cls, method: str, path: str, **kwargs) -> requests.Response:
return _api_call(
method,
ONCALL_API_URL,
path,
headers={"Authorization": ONCALL_API_TOKEN},
**kwargs,
if cls._session_id is None:
cls._session_id = get_or_create_session_id()
kwargs.setdefault("headers", {})
kwargs["headers"].update(
{
"Authorization": ONCALL_API_TOKEN,
"User-Agent": f"IRM Migrator - {MIGRATING_FROM} - {cls._session_id}",
}
)
return _api_call(method, ONCALL_API_URL, path, **kwargs)
@classmethod
def list_all(cls, path: str) -> list[dict]:
response = cls.api_call("get", path)
@@ -52,9 +60,7 @@ class OnCallAPIClient:
@classmethod
def list_users_with_notification_rules(cls):
oncall_users = cls.list_all("users")
oncall_notification_rules = cls.list_all(
"personal_notification_rules/?important=false"
)
oncall_notification_rules = cls.list_all("personal_notification_rules")
for user in oncall_users:
user["notification_rules"] = [

@@ -0,0 +1,175 @@
import typing
from urllib.parse import parse_qs, urlparse
from lib.network import api_call
from lib.opsgenie.config import OPSGENIE_API_KEY, OPSGENIE_API_URL
class OpsGenieAPIClient:
DEFAULT_LIMIT = 100 # Maximum allowed by OpsGenie API
def __init__(
self, api_key: str = OPSGENIE_API_KEY, api_url: str = OPSGENIE_API_URL
):
self.api_key = api_key
self.api_url = api_url
self.headers = {
"Authorization": f"GenieKey {self.api_key}",
"Content-Type": "application/json",
}
def _make_request(
self,
method: str,
path: str,
params: typing.Optional[dict] = None,
json: typing.Optional[dict] = None,
paginate: bool = True,
) -> dict:
"""
Make a request to the OpsGenie API with automatic pagination handling.
If paginate=True and method is GET, it will automatically handle pagination
and combine all results into a single response.
NOTE: we need to be careful with rate limiting; this is handled inside lib.network.api_call
(see the HTTP 429 exception handling)
# https://docs.opsgenie.com/docs/api-rate-limiting
"""
if params is None:
params = {}
# Only handle pagination for GET requests when pagination is requested
if method.upper() != "GET" or not paginate:
response = api_call(
method,
self.api_url,
path,
headers=self.headers,
params=params,
json=json,
)
return response.json()
# Set default pagination parameters
if "limit" not in params:
params["limit"] = self.DEFAULT_LIMIT
if "offset" not in params:
params["offset"] = 0
# Initialize combined response
combined_response = None
while True:
response = api_call(
method,
self.api_url,
path,
headers=self.headers,
params=params,
json=json,
)
response_json = response.json()
if combined_response is None:
combined_response = response_json
else:
# Extend the data array with new items
combined_response["data"].extend(response_json.get("data", []))
# Check if there's more data to fetch
data = response_json.get("data", [])
if not data:
break
# Check if there's a next page in the paging information
paging = response_json.get("paging", {})
next_url = paging.get("next")
if not next_url:
break
# Parse the next URL to get the new offset
parsed_url = urlparse(next_url)
query_params = parse_qs(parsed_url.query)
try:
params["offset"] = int(query_params.get("offset", [0])[0])
except (ValueError, IndexError):
break
return combined_response
def list_users(self) -> list[dict]:
"""List all users with their notification rules."""
users = []
response = self._make_request("GET", "v2/users")
for user in response.get("data", []):
# Map username to email for compatibility with matching function
user["email"] = user["username"]
# Get notification rules for each user
user_id = user["id"]
rules_response = self._make_request(
"GET", f"v2/users/{user_id}/notification-rules"
)
# Find the create-alert notification rule
create_alert_rule = None
for rule in rules_response.get("data", []):
if rule.get("actionType") == "create-alert":
create_alert_rule = rule
break
if create_alert_rule:
# Get steps for the create-alert rule
steps_response = self._make_request(
"GET",
f"v2/users/{user_id}/notification-rules/{create_alert_rule['id']}/steps",
)
user["notification_rules"] = steps_response.get("data", [])
else:
user["notification_rules"] = []
# Get teams for each user
teams_response = self._make_request("GET", f"v2/users/{user_id}/teams")
user["teams"] = teams_response.get("data", [])
users.append(user)
return users
def list_schedules(self) -> list[dict]:
"""List all schedules with their rotations."""
response = self._make_request(
"GET", "v2/schedules", params={"expand": "rotation"}
)
schedules = response.get("data", [])
# Fetch overrides for each schedule
for schedule in schedules:
overrides_response = self._make_request(
"GET", f"v2/schedules/{schedule['id']}/overrides"
)
schedule["overrides"] = overrides_response.get("data", [])
return schedules
def list_escalation_policies(self) -> list[dict]:
"""List all escalation policies."""
response = self._make_request("GET", "v2/escalations")
return response.get("data", [])
def list_teams(self) -> list[dict]:
"""List all teams."""
response = self._make_request("GET", "v2/teams")
return response.get("data", [])
def list_integrations(self) -> list[dict]:
"""List all integrations."""
response = self._make_request("GET", "v2/integrations")
return response.get("data", [])
def list_services(self) -> list[dict]:
"""List all services."""
response = self._make_request("GET", "services")
return response.get("data", [])
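The offset-based pagination loop in `_make_request` can be exercised in isolation. The sketch below is illustrative (the helper name and the fake endpoint are not part of the migrator): merge each page's `data`, then follow the `offset` query parameter from `paging.next` until either the data or the next link runs out.

```python
from urllib.parse import parse_qs, urlparse

def combine_pages(fetch_page, limit=2):
    """Merge offset-paginated responses, mirroring _make_request's loop."""
    params = {"limit": limit, "offset": 0}
    combined = None
    while True:
        page = fetch_page(params)
        if combined is None:
            combined = page
        else:
            combined["data"].extend(page.get("data", []))
        if not page.get("data"):
            break
        next_url = page.get("paging", {}).get("next")
        if not next_url:
            break
        # The new offset is carried in the query string of paging.next
        query = parse_qs(urlparse(next_url).query)
        params["offset"] = int(query.get("offset", ["0"])[0])
    return combined

# Fake API serving five items two at a time
ITEMS = [{"id": i} for i in range(5)]

def fake_fetch(params):
    start, limit = params["offset"], params["limit"]
    page = {"data": ITEMS[start : start + limit]}
    if start + limit < len(ITEMS):
        page["paging"] = {"next": f"https://example.invalid/v2/users?offset={start + limit}"}
    return page

result = combine_pages(fake_fetch)
```

The first page's envelope (including its `paging` key) is kept as the combined response, which matches how `_make_request` initializes `combined_response`.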

View file

@ -0,0 +1,66 @@
import os
from lib.base_config import * # noqa: F401,F403
OPSGENIE_API_KEY = os.environ["OPSGENIE_API_KEY"]
OPSGENIE_API_URL = os.getenv("OPSGENIE_API_URL", "https://api.opsgenie.com/v2")
OPSGENIE_TO_ONCALL_CONTACT_METHOD_MAP = {
"sms": "notify_by_sms",
"voice": "notify_by_phone_call",
"email": "notify_by_email",
"mobile": "notify_by_mobile_app",
}
OPSGENIE_TO_ONCALL_VENDOR_MAP = {
"Amazon CloudWatch": "amazon_sns",
"AmazonSns": "amazon_sns",
"AppDynamics": "appdynamics",
"CloudWatch": "amazon_sns",
"CloudWatchEvents": "amazon_sns",
"Datadog": "datadog",
"Email": "inbound_email",
"Jira": "jira",
"JiraServiceDesk": "jira",
"Kapacitor": "kapacitor",
"NewRelic": "newrelic",
"NewRelicV2": "newrelic",
"PingdomV2": "pingdom",
"Prometheus": "alertmanager",
"Prtg": "prtg",
"Scout": "webhook",
"Sentry": "sentry",
"Stackdriver": "stackdriver",
"UptimeRobot": "uptimerobot",
"Webhook": "webhook",
"Zabbix": "zabbix",
}
# Set to true to migrate unsupported integrations to OnCall webhook integration
UNSUPPORTED_INTEGRATION_TO_WEBHOOKS = (
os.getenv("UNSUPPORTED_INTEGRATION_TO_WEBHOOKS", "false").lower() == "true"
)
MIGRATE_USERS = os.getenv("MIGRATE_USERS", "true").lower() == "true"
# Filter resources by team
OPSGENIE_FILTER_TEAM = os.getenv("OPSGENIE_FILTER_TEAM")
# Filter resources by users (comma-separated list of OpsGenie user IDs)
OPSGENIE_FILTER_USERS = [
user_id.strip()
for user_id in os.getenv("OPSGENIE_FILTER_USERS", "").split(",")
if user_id.strip()
]
# Filter resources by name regex patterns
OPSGENIE_FILTER_SCHEDULE_REGEX = os.getenv("OPSGENIE_FILTER_SCHEDULE_REGEX")
OPSGENIE_FILTER_ESCALATION_POLICY_REGEX = os.getenv(
"OPSGENIE_FILTER_ESCALATION_POLICY_REGEX"
)
OPSGENIE_FILTER_INTEGRATION_REGEX = os.getenv("OPSGENIE_FILTER_INTEGRATION_REGEX")
# Whether to preserve existing notification rules when migrating users
PRESERVE_EXISTING_USER_NOTIFICATION_RULES = (
os.getenv("PRESERVE_EXISTING_USER_NOTIFICATION_RULES", "true").lower() == "true"
)
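Most of the boolean flags above follow one parsing convention; a small helper (hypothetical, not part of the script) captures it:

```python
import os

def env_flag(name: str, default: str = "false") -> bool:
    # Mirrors the pattern used in config: any casing of "true" enables the flag
    return os.getenv(name, default).lower() == "true"

os.environ["UNSUPPORTED_INTEGRATION_TO_WEBHOOKS"] = "True"
enabled = env_flag("UNSUPPORTED_INTEGRATION_TO_WEBHOOKS")
unset = env_flag("SOME_UNSET_FLAG")
```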

View file

@ -0,0 +1,156 @@
from lib.common.report import TAB
from lib.common.resources.users import match_user
from lib.oncall.api_client import OnCallAPIClient
from lib.opsgenie.api_client import OpsGenieAPIClient
from lib.opsgenie.config import (
MIGRATE_USERS,
MODE,
MODE_PLAN,
UNSUPPORTED_INTEGRATION_TO_WEBHOOKS,
)
from lib.opsgenie.report import (
escalation_policy_report,
format_escalation_policy,
format_integration,
format_schedule,
format_user,
integration_report,
schedule_report,
user_report,
)
from lib.opsgenie.resources.escalation_policies import (
filter_escalation_policies,
match_escalation_policy,
match_users_and_schedules_for_escalation_policy,
migrate_escalation_policy,
)
from lib.opsgenie.resources.integrations import (
filter_integrations,
match_integration,
migrate_integration,
)
from lib.opsgenie.resources.notification_rules import migrate_notification_rules
from lib.opsgenie.resources.schedules import (
filter_schedules,
match_schedule,
match_users_for_schedule,
migrate_schedule,
)
from lib.opsgenie.resources.users import filter_users
def migrate() -> None:
client = OpsGenieAPIClient()
if MIGRATE_USERS:
print("▶ Fetching users...")
users = client.list_users()
users = filter_users(users)
else:
print("▶ Skipping user migration as MIGRATE_USERS is false...")
users = []
oncall_users = OnCallAPIClient.list_users_with_notification_rules()
print("▶ Fetching schedules...")
schedules = client.list_schedules()
schedules = filter_schedules(schedules)
oncall_schedules = OnCallAPIClient.list_all("schedules")
print("▶ Fetching escalation policies...")
escalation_policies = client.list_escalation_policies()
escalation_policies = filter_escalation_policies(escalation_policies)
oncall_escalation_chains = OnCallAPIClient.list_all("escalation_chains")
print("▶ Fetching integrations...")
integrations = client.list_integrations()
integrations = filter_integrations(integrations)
oncall_integrations = OnCallAPIClient.list_all("integrations")
# Match users with their Grafana OnCall counterparts
if MIGRATE_USERS:
print("\n▶ Matching users...")
for user in users:
match_user(user, oncall_users)
print(user_report(users))
# Match schedules with their Grafana OnCall counterparts
print("\n▶ Matching schedules...")
user_id_map = {
u["id"]: u["oncall_user"]["id"] for u in users if u.get("oncall_user")
}
for schedule in schedules:
match_schedule(schedule, oncall_schedules, user_id_map)
match_users_for_schedule(schedule, users)
print(schedule_report(schedules))
# Match escalation policies with their Grafana OnCall counterparts
print("\n▶ Matching escalation policies...")
for policy in escalation_policies:
match_escalation_policy(policy, oncall_escalation_chains)
match_users_and_schedules_for_escalation_policy(policy, users, schedules)
print(escalation_policy_report(escalation_policies))
# Match integrations with their Grafana OnCall counterparts
print("\n▶ Matching integrations...")
for integration in integrations:
match_integration(integration, oncall_integrations)
print(integration_report(integrations))
if MODE == MODE_PLAN:
return
# Migrate users
if MIGRATE_USERS:
print("\n▶ Migrating users...")
for user in users:
if user.get("oncall_user"):
print(f"{TAB}Migrating {format_user(user)}...")
migrate_notification_rules(user)
# Migrate schedules
print("\n▶ Migrating schedules...")
for schedule in schedules:
if not schedule.get("migration_errors"):
print(f"{TAB}Migrating {format_schedule(schedule)}...")
migrate_schedule(schedule, user_id_map)
# Migrate escalation policies
print("\n▶ Migrating escalation policies...")
for policy in escalation_policies:
if all(rule["notifyType"] != "default" for rule in policy["rules"]):
print(
f"{TAB}Skipping migrating {format_escalation_policy(policy)} because all of its rules "
"have a non-default notifyType"
)
continue
elif any(rule["notifyType"] != "default" for rule in policy["rules"]):
print(
f"{TAB}Migrating {format_escalation_policy(policy)} but some of its rules "
"have a non-default notifyType, and those rules will not be migrated"
)
else:
print(f"{TAB}Migrating {format_escalation_policy(policy)}...")
migrate_escalation_policy(policy, users, schedules)
# Migrate integrations
print("\n▶ Migrating integrations...")
for integration in integrations:
print(f"{TAB}Migrating {format_integration(integration)}...")
if (
integration["oncall_type"] is None
and not UNSUPPORTED_INTEGRATION_TO_WEBHOOKS
):
print(
f"{TAB}Skipping {format_integration(integration)} because it is not supported and UNSUPPORTED_INTEGRATION_TO_WEBHOOKS is false"
)
continue
elif integration["oncall_type"] is None and UNSUPPORTED_INTEGRATION_TO_WEBHOOKS:
print(
f"{TAB}Migrating {format_integration(integration)} as webhook because it is not supported and UNSUPPORTED_INTEGRATION_TO_WEBHOOKS is true"
)
continue
migrate_integration(integration)
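The `user_id_map` built during schedule matching only contains users that matched a Grafana OnCall account; unmatched users simply drop out of the map. A minimal illustration (all IDs are made up):

```python
users = [
    {"id": "og-1", "oncall_user": {"id": "oc-9"}},
    {"id": "og-2", "oncall_user": None},  # matched lookup ran, found nobody
    {"id": "og-3"},                       # never matched at all
]
# Same comprehension as in migrate(): OpsGenie user ID -> OnCall user ID
user_id_map = {u["id"]: u["oncall_user"]["id"] for u in users if u.get("oncall_user")}
```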

View file

@ -0,0 +1,112 @@
from lib.common.report import ERROR_SIGN, SUCCESS_SIGN, TAB, WARNING_SIGN
from lib.opsgenie.config import (
PRESERVE_EXISTING_USER_NOTIFICATION_RULES,
UNSUPPORTED_INTEGRATION_TO_WEBHOOKS,
)
from lib.opsgenie.resources.escalation_policies import determine_policy_name
def format_user(user: dict) -> str:
"""Format user for display in reports."""
return f"{user['fullName']} ({user['username']})"
def format_schedule(schedule: dict) -> str:
"""Format schedule for display in reports."""
return schedule["name"]
def format_escalation_policy(policy: dict) -> str:
"""Format escalation policy for display in reports."""
return determine_policy_name(policy)
def format_integration(integration: dict) -> str:
"""Format integration for display in reports."""
return f"{integration['name']} ({integration['type']})"
def user_report(users: list[dict]) -> str:
"""Generate report for user migration status."""
report = ["User notification rules report:"]
for user in users:
if user.get("oncall_user"):
if (
user["oncall_user"]["notification_rules"]
and PRESERVE_EXISTING_USER_NOTIFICATION_RULES
):
report.append(
f"{TAB}{WARNING_SIGN} {format_user(user)} (existing notification rules will be preserved)"
)
elif (
user["oncall_user"]["notification_rules"]
and not PRESERVE_EXISTING_USER_NOTIFICATION_RULES
):
report.append(
f"{TAB}{WARNING_SIGN} {format_user(user)} (existing notification rules will be deleted)"
)
else:
report.append(f"{TAB}{SUCCESS_SIGN} {format_user(user)}")
else:
report.append(
f"{TAB}{ERROR_SIGN} {format_user(user)} — no Grafana OnCall user found with this email"
)
return "\n".join(report)
def schedule_report(schedules: list[dict]) -> str:
"""Generate report for schedule migration status."""
report = ["Schedule report:"]
for schedule in schedules:
if schedule.get("migration_errors"):
errors = schedule["migration_errors"]
error_msg = "" + errors[0] if len(errors) == 1 else ""
report.append(f"{TAB}{ERROR_SIGN} {format_schedule(schedule)}{error_msg}")
# Add additional errors as bullet points if more than one
if len(errors) > 1:
for error in errors:
report.append(f"{TAB}{TAB}- {error}")
elif schedule.get("oncall_schedule"):
report.append(
f"{TAB}{WARNING_SIGN} {format_schedule(schedule)} (existing schedule will be deleted)"
)
else:
report.append(f"{TAB}{SUCCESS_SIGN} {format_schedule(schedule)}")
return "\n".join(report)
def escalation_policy_report(policies: list[dict]) -> str:
"""Generate report for escalation policy migration status."""
report = ["Escalation policy report:"]
for policy in policies:
if policy.get("oncall_escalation_chain"):
report.append(
f"{TAB}{WARNING_SIGN} {format_escalation_policy(policy)} (existing escalation chain will be deleted)"
)
else:
report.append(f"{TAB}{SUCCESS_SIGN} {format_escalation_policy(policy)}")
return "\n".join(report)
def integration_report(integrations: list[dict]) -> str:
"""Generate report for integration migration status."""
report = ["Integration report:"]
for integration in integrations:
if integration.get("oncall_integration"):
report.append(
f"{TAB}{WARNING_SIGN} {format_integration(integration)} (existing integration will be deleted)"
)
elif (
not integration.get("oncall_type")
and not UNSUPPORTED_INTEGRATION_TO_WEBHOOKS
):
report.append(
f"{TAB}{ERROR_SIGN} {format_integration(integration)} — unsupported integration type"
)
elif not integration.get("oncall_type") and UNSUPPORTED_INTEGRATION_TO_WEBHOOKS:
report.append(
f"{TAB}{WARNING_SIGN} {format_integration(integration)} — unsupported integration type, will be migrated as webhook"
)
else:
report.append(f"{TAB}{SUCCESS_SIGN} {format_integration(integration)}")
return "\n".join(report)

View file

@ -0,0 +1,131 @@
import re
from typing import List
from lib.oncall.api_client import OnCallAPIClient
from lib.opsgenie.config import (
OPSGENIE_FILTER_ESCALATION_POLICY_REGEX,
OPSGENIE_FILTER_TEAM,
)
from lib.utils import transform_wait_delay
def determine_policy_name(policy: dict) -> str:
"""Determine the name of the policy."""
return f"{policy['ownerTeam']['name']} - {policy['name']}"
def filter_escalation_policies(policies: list[dict]) -> list[dict]:
"""Apply filters to escalation policies."""
if OPSGENIE_FILTER_TEAM:
filtered_policies = []
for p in policies:
if p["ownerTeam"]["id"] == OPSGENIE_FILTER_TEAM:
filtered_policies.append(p)
policies = filtered_policies
if OPSGENIE_FILTER_ESCALATION_POLICY_REGEX:
pattern = re.compile(OPSGENIE_FILTER_ESCALATION_POLICY_REGEX)
policies = [p for p in policies if pattern.match(p["name"])]
return policies
def match_escalation_policy(policy: dict, oncall_escalation_chains: List[dict]) -> None:
"""
Match OpsGenie escalation policy with Grafana OnCall escalation chain.
"""
oncall_chain = None
for candidate in oncall_escalation_chains:
if (
determine_policy_name(policy).lower().strip()
== candidate["name"].lower().strip()
):
oncall_chain = candidate
policy["oncall_escalation_chain"] = oncall_chain
def match_users_and_schedules_for_escalation_policy(
policy: dict, users: List[dict], schedules: List[dict]
) -> None:
"""
Match users and schedules referenced in escalation policy.
"""
policy["matched_users"] = []
policy["matched_schedules"] = []
for rule in policy["rules"]:
recipient = rule.get("recipient", {})
if recipient.get("type") == "user":
for user in users:
if user["id"] == recipient.get("id") and user.get("oncall_user"):
policy["matched_users"].append(user)
elif recipient.get("type") == "schedule":
for schedule in schedules:
if schedule["id"] == recipient.get("id") and not schedule.get(
"migration_errors"
):
policy["matched_schedules"].append(schedule)
def migrate_escalation_policy(
policy: dict, users: List[dict], schedules: List[dict]
) -> None:
"""
Migrate OpsGenie escalation policy to Grafana OnCall.
"""
if policy["oncall_escalation_chain"]:
OnCallAPIClient.delete(
f"escalation_chains/{policy['oncall_escalation_chain']['id']}"
)
# Create new escalation chain
chain_payload = {"name": determine_policy_name(policy), "team_id": None}
chain = OnCallAPIClient.create("escalation_chains", chain_payload)
policy["oncall_escalation_chain"] = chain
# Create escalation policies for each rule
position = 0
for rule in policy["rules"]:
if rule.get("notifyType") != "default":
continue
# Convert wait duration from minutes to seconds + add wait step if there's a delay
delay = rule.get("delay", {}).get("timeAmount")
if delay:
wait_payload = {
"escalation_chain_id": chain["id"],
"position": position,
"type": "wait",
"duration": transform_wait_delay(delay),
}
OnCallAPIClient.create("escalation_policies", wait_payload)
position += 1
# Create notification step
recipient = rule["recipient"]
if recipient["type"] == "user":
user = next((u for u in users if u["id"] == recipient["id"]), None)
if user and user.get("oncall_user"):
policy_payload = {
"escalation_chain_id": chain["id"],
"position": position,
"type": "notify_persons",
"persons_to_notify": [user["oncall_user"]["id"]],
"important": False,
}
OnCallAPIClient.create("escalation_policies", policy_payload)
position += 1
elif recipient["type"] == "schedule":
schedule = next((s for s in schedules if s["id"] == recipient["id"]), None)
if schedule and schedule.get("oncall_schedule"):
policy_payload = {
"escalation_chain_id": chain["id"],
"position": position,
"type": "notify_on_call_from_schedule",
"notify_on_call_from_schedule": schedule["oncall_schedule"]["id"],
"important": False,
}
OnCallAPIClient.create("escalation_policies", policy_payload)
position += 1
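The rule loop above interleaves wait steps and notification steps while keeping `position` monotonically increasing. A standalone sketch (the `transform_wait_delay` stand-in assumes a minutes-to-seconds conversion, which is not confirmed by this diff):

```python
def build_steps(rules, transform_wait_delay=lambda minutes: minutes * 60):
    """Position bookkeeping for escalation steps, mirroring migrate_escalation_policy."""
    steps, position = [], 0
    for rule in rules:
        if rule.get("notifyType") != "default":
            continue  # non-default rules are skipped entirely
        delay = rule.get("delay", {}).get("timeAmount")
        if delay:
            steps.append({"position": position, "type": "wait",
                          "duration": transform_wait_delay(delay)})
            position += 1
        steps.append({"position": position, "type": "notify"})
        position += 1
    return steps

steps = build_steps([
    {"notifyType": "default"},
    {"notifyType": "default", "delay": {"timeAmount": 5}},
    {"notifyType": "next"},  # dropped, like in the real migrator
])
```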

View file

@ -0,0 +1,63 @@
import re
from typing import List
from lib.oncall.api_client import OnCallAPIClient
from lib.opsgenie.config import (
OPSGENIE_FILTER_INTEGRATION_REGEX,
OPSGENIE_FILTER_TEAM,
OPSGENIE_TO_ONCALL_VENDOR_MAP,
UNSUPPORTED_INTEGRATION_TO_WEBHOOKS,
)
def filter_integrations(integrations: list[dict]) -> list[dict]:
"""Apply filters to integrations."""
if OPSGENIE_FILTER_TEAM:
integrations = [
i for i in integrations if i.get("teamId") == OPSGENIE_FILTER_TEAM
]
if OPSGENIE_FILTER_INTEGRATION_REGEX:
pattern = re.compile(OPSGENIE_FILTER_INTEGRATION_REGEX)
integrations = [i for i in integrations if pattern.match(i["name"])]
return integrations
def match_integration(integration: dict, oncall_integrations: List[dict]) -> None:
"""
Match OpsGenie integration with Grafana OnCall integration + match opsgenie
integration type with Grafana OnCall integration type.
"""
oncall_integration = None
for candidate in oncall_integrations:
name = integration["name"].lower().strip()
if name == candidate["name"].lower().strip():
oncall_integration = candidate
integration["oncall_integration"] = oncall_integration
integration_type = OPSGENIE_TO_ONCALL_VENDOR_MAP.get(integration["type"])
if not integration_type and UNSUPPORTED_INTEGRATION_TO_WEBHOOKS:
integration_type = "webhook"
integration["oncall_type"] = integration_type
def migrate_integration(integration: dict) -> None:
"""Migrate OpsGenie integration to Grafana OnCall."""
if integration["oncall_integration"]:
OnCallAPIClient.delete(
f"integrations/{integration['oncall_integration']['id']}"
)
# Create new integration
payload = {
"name": integration["name"],
"type": integration["oncall_type"],
"team_id": None,
}
if integration.get("oncall_escalation_chain"):
payload["escalation_chain_id"] = integration["oncall_escalation_chain"]["id"]
integration["oncall_integration"] = OnCallAPIClient.create("integrations", payload)

View file

@ -0,0 +1,92 @@
from lib.oncall.api_client import OnCallAPIClient
from lib.opsgenie.config import (
OPSGENIE_TO_ONCALL_CONTACT_METHOD_MAP,
PRESERVE_EXISTING_USER_NOTIFICATION_RULES,
)
from lib.utils import transform_wait_delay
def migrate_notification_rules(user: dict) -> None:
"""Migrate user notification rules from OpsGenie to Grafana OnCall."""
if (
PRESERVE_EXISTING_USER_NOTIFICATION_RULES
and user["oncall_user"]["notification_rules"]
):
print(
f"Preserving existing notification rules for {user.get('email', user.get('username'))}"
)
return
# If not preserving, delete ALL existing notification rules first
if (
not PRESERVE_EXISTING_USER_NOTIFICATION_RULES
and user["oncall_user"]["notification_rules"]
):
print(
f"Deleting existing notification rules for {user.get('email', user.get('username'))}"
)
for rule in user["oncall_user"]["notification_rules"]:
OnCallAPIClient.delete(f"personal_notification_rules/{rule['id']}")
# Create notification rules for both important=False and important=True
for important in (False, True):
# Get the OnCall rules for the current importance level
oncall_rules = transform_notification_rules(
user["notification_rules"], user["oncall_user"]["id"], important
)
# Create the new rules
for rule in oncall_rules:
OnCallAPIClient.create("personal_notification_rules", rule)
def transform_notification_rules(
notification_steps: list[dict], user_id: str, important: bool
) -> list[dict]:
"""
Transform OpsGenie notification steps to OnCall personal notification rules.
If a step has timeAmount > 0, add a wait step before the notification.
"""
# Sort steps by sendAfter minutes (or 0 if not present)
sorted_steps = sorted(
notification_steps,
key=lambda step: step.get("sendAfter", {}).get("timeAmount", 0),
)
oncall_rules = []
# Process steps in order
for step in sorted_steps:
if not step.get("enabled", False):
continue
# Get the current time amount
time_amount = step.get("sendAfter", {}).get("timeAmount", 0)
# If time amount is not 0, add a wait rule
if time_amount > 0:
wait_rule = {
"user_id": user_id,
"type": "wait",
"duration": transform_wait_delay(time_amount),
"important": important,
}
oncall_rules.append(wait_rule)
# Get the method type from the contact object inside the step
contact_method = step.get("contact", {}).get("method")
# Special handling for mobile notifications when important=True
if contact_method == "mobile" and important:
oncall_type = "notify_by_mobile_app_critical"
else:
oncall_type = OPSGENIE_TO_ONCALL_CONTACT_METHOD_MAP.get(contact_method)
if not oncall_type:
continue
# Add the notification rule
notify_rule = {"user_id": user_id, "type": oncall_type, "important": important}
oncall_rules.append(notify_rule)
return oncall_rules
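A condensed, self-contained version of `transform_notification_rules` (contact map trimmed, minutes-to-seconds conversion assumed) shows the sort order, the disabled-step skip, and the critical-mobile special case:

```python
CONTACT_MAP = {"sms": "notify_by_sms", "email": "notify_by_email",
               "mobile": "notify_by_mobile_app"}

def transform(steps, user_id, important, to_seconds=lambda m: m * 60):
    rules = []
    # Steps fire in sendAfter order, earliest first
    for step in sorted(steps, key=lambda s: s.get("sendAfter", {}).get("timeAmount", 0)):
        if not step.get("enabled", False):
            continue
        wait = step.get("sendAfter", {}).get("timeAmount", 0)
        if wait > 0:
            rules.append({"user_id": user_id, "type": "wait",
                          "duration": to_seconds(wait), "important": important})
        method = step.get("contact", {}).get("method")
        rule_type = ("notify_by_mobile_app_critical"
                     if method == "mobile" and important
                     else CONTACT_MAP.get(method))
        if rule_type:
            rules.append({"user_id": user_id, "type": rule_type, "important": important})
    return rules

rules = transform(
    [
        {"enabled": True, "sendAfter": {"timeAmount": 5}, "contact": {"method": "mobile"}},
        {"enabled": True, "sendAfter": {"timeAmount": 0}, "contact": {"method": "email"}},
        {"enabled": False, "contact": {"method": "sms"}},  # disabled: skipped
    ],
    user_id="U1",
    important=True,
)
```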

View file

@ -0,0 +1,342 @@
import re
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Dict, List, Optional
from uuid import uuid4
from lib.constants import ONCALL_SHIFT_WEB_SOURCE
from lib.oncall.api_client import OnCallAPIClient
from lib.opsgenie.config import (
OPSGENIE_FILTER_SCHEDULE_REGEX,
OPSGENIE_FILTER_TEAM,
OPSGENIE_FILTER_USERS,
)
from lib.utils import dt_to_oncall_datetime, duration_to_frequency_and_interval
def filter_schedules(schedules: list[dict]) -> list[dict]:
"""Apply filters to schedules."""
if OPSGENIE_FILTER_TEAM:
filtered_schedules = []
for s in schedules:
if s["ownerTeam"]["id"] == OPSGENIE_FILTER_TEAM:
filtered_schedules.append(s)
schedules = filtered_schedules
if OPSGENIE_FILTER_USERS:
filtered_schedules = []
for schedule in schedules:
# Check if any rotation has a participant with ID in OPSGENIE_FILTER_USERS
include_schedule = False
for rotation in schedule.get("rotations", []):
for participant in rotation.get("participants", []):
if (
participant.get("type") == "user"
and participant.get("id") in OPSGENIE_FILTER_USERS
):
include_schedule = True
break
if include_schedule:
break
# Also check overrides for the filtered users
if not include_schedule:
for override in schedule.get("overrides", []):
if (
override.get("user", {}).get("type") == "user"
and override.get("user", {}).get("id") in OPSGENIE_FILTER_USERS
):
include_schedule = True
break
if include_schedule:
filtered_schedules.append(schedule)
schedules = filtered_schedules
if OPSGENIE_FILTER_SCHEDULE_REGEX:
pattern = re.compile(OPSGENIE_FILTER_SCHEDULE_REGEX)
schedules = [s for s in schedules if pattern.match(s["name"])]
return schedules
def match_schedule(
schedule: dict, oncall_schedules: List[dict], user_id_map: Dict[str, str]
) -> None:
"""
Match OpsGenie schedule with Grafana OnCall schedule.
"""
oncall_schedule = None
for candidate in oncall_schedules:
if schedule["name"].lower().strip() == candidate["name"].lower().strip():
oncall_schedule = candidate
# Check if any rotation has time restrictions
has_time_restrictions = False
for rotation in schedule.get("rotations", []):
if rotation.get("timeRestriction"):
has_time_restrictions = True
break
if has_time_restrictions:
schedule["migration_errors"] = [
"Schedule contains time restrictions which are not supported for migration"
]
return
_, errors = Schedule.from_dict(schedule).to_oncall_schedule(user_id_map)
schedule["migration_errors"] = errors
schedule["oncall_schedule"] = oncall_schedule
def match_users_for_schedule(schedule: dict, users: List[dict]) -> None:
"""
Match users referenced in schedule.
"""
schedule["matched_users"] = []
for rotation in schedule["rotations"]:
for participant in rotation["participants"]:
if participant["type"] == "user":
for user in users:
if user["id"] == participant["id"] and user.get("oncall_user"):
schedule["matched_users"].append(user)
def migrate_schedule(schedule: dict, user_id_map: Dict[str, str]) -> None:
"""
Migrate OpsGenie schedule to Grafana OnCall.
"""
if schedule["oncall_schedule"]:
OnCallAPIClient.delete(f"schedules/{schedule['oncall_schedule']['id']}")
schedule["oncall_schedule"] = Schedule.from_dict(schedule).migrate(user_id_map)
@dataclass
class Schedule:
"""
Utility class for converting an OpsGenie schedule to an OnCall schedule.
An OpsGenie schedule has multiple rotations, each with a set of participants.
"""
name: str
timezone: str
rotations: list["Rotation"]
overrides: list["Override"]
@classmethod
def from_dict(cls, schedule: dict) -> "Schedule":
"""Create a Schedule object from an OpsGenie API response for a schedule."""
rotations = []
for rotation_dict in schedule["rotations"]:
# Skip disabled rotations
if not rotation_dict.get("enabled", True):
continue
rotations.append(Rotation.from_dict(rotation_dict))
# Process overrides
overrides = []
for override_dict in schedule.get("overrides", []):
overrides.append(Override.from_dict(override_dict))
return cls(
name=schedule["name"],
timezone=schedule["timezone"],
rotations=rotations,
overrides=overrides,
)
def to_oncall_schedule(
self, user_id_map: Dict[str, str]
) -> tuple[Optional[dict], list[str]]:
"""
Convert a Schedule object to an OnCall schedule.
Note that it also returns shifts, but these are not created at the same time as the schedule.
"""
shifts = []
errors = []
for rotation in self.rotations:
# Check if all users in the rotation exist in OnCall
missing_user_ids = [
p["id"]
for p in rotation.participants
if p["type"] == "user" and p["id"] not in user_id_map
]
if missing_user_ids:
errors.append(
f"{rotation.name}: Users with IDs {missing_user_ids} not found in OnCall."
)
continue
shifts.append(rotation.to_oncall_shift(user_id_map))
# Process overrides
for override in self.overrides:
# Check if the user exists in OnCall
if override.user_id not in user_id_map:
errors.append(
f"Override: User with ID '{override.user_id}' not found in OnCall."
)
continue
shifts.append(override.to_oncall_override_shift(user_id_map))
if errors:
return None, errors
return {
"name": self.name,
"type": "web",
"team_id": None,
"time_zone": self.timezone,
"shifts": shifts,
}, []
def migrate(self, user_id_map: Dict[str, str]) -> dict:
"""
Create an OnCall schedule and its shifts.
First create the shifts, then create a schedule with shift IDs provided.
"""
schedule, errors = self.to_oncall_schedule(user_id_map)
assert not errors, "Unexpected errors: {}".format(errors)
# Create shifts in OnCall
shift_ids = []
for shift in schedule["shifts"]:
created_shift = OnCallAPIClient.create("on_call_shifts", shift)
shift_ids.append(created_shift["id"])
# Create schedule in OnCall with shift IDs provided
schedule["shifts"] = shift_ids
new_schedule = OnCallAPIClient.create("schedules", schedule)
return new_schedule
@dataclass
class Override:
"""
Utility class for representing a schedule override in OpsGenie.
"""
start_date: datetime
end_date: datetime
user_id: str
@classmethod
def from_dict(cls, override: dict) -> "Override":
"""Create an Override object from an OpsGenie API response for a schedule override."""
# Convert string dates to datetime objects
start_date = datetime.fromisoformat(
override["startDate"].replace("Z", "+00:00")
)
end_date = datetime.fromisoformat(override["endDate"].replace("Z", "+00:00"))
# Extract user ID from the user object
user_id = override.get("user", {}).get("id")
if not user_id:
raise ValueError(f"Could not extract user ID from override: {override}")
return cls(
start_date=start_date,
end_date=end_date,
user_id=user_id,
)
def to_oncall_override_shift(self, user_id_map: Dict[str, str]) -> dict:
"""Convert an Override object to an OnCall override shift."""
duration = int((self.end_date - self.start_date).total_seconds())
oncall_user_id = user_id_map[self.user_id]
return {
"name": f"Override-{uuid4().hex[:8]}",
"type": "override",
"team_id": None,
"start": dt_to_oncall_datetime(self.start_date),
"duration": duration,
"rotation_start": dt_to_oncall_datetime(self.start_date),
"users": [oncall_user_id],
"time_zone": "UTC",
"source": ONCALL_SHIFT_WEB_SOURCE,
}
@dataclass
class Rotation:
"""
Utility class for converting an OpsGenie rotation to an OnCall shift.
"""
name: str
type: str
length: int
start_date: datetime
end_date: Optional[datetime]
participants: List[dict]
@classmethod
def from_dict(cls, rotation: dict) -> "Rotation":
"""Create a Rotation object from an OpsGenie API response for a rotation."""
# Keep start_date in UTC format
start_date = datetime.fromisoformat(
rotation["startDate"].replace("Z", "+00:00")
)
end_date = None
if rotation.get("endDate"):
end_date = datetime.fromisoformat(
rotation["endDate"].replace("Z", "+00:00")
)
return cls(
name=rotation["name"],
type=rotation["type"],
length=rotation["length"],
start_date=start_date,
end_date=end_date,
participants=rotation["participants"],
)
def to_oncall_shift(self, user_id_map: Dict[str, str]) -> dict:
"""Convert a Rotation object to an OnCall shift."""
# Calculate base duration based on type and length
if self.type == "daily":
base_duration = timedelta(days=self.length)
elif self.type == "weekly":
base_duration = timedelta(weeks=self.length)
elif self.type == "hourly":
base_duration = timedelta(hours=self.length)
else:
base_duration = timedelta(days=self.length) # Default to daily
# Use duration_to_frequency_and_interval to get the natural frequency
frequency, interval = duration_to_frequency_and_interval(base_duration)
shift = {
"name": self.name or uuid4().hex,
"type": "rolling_users",
"time_zone": "UTC",
"team_id": None,
"level": 1,
"start": dt_to_oncall_datetime(self.start_date),
"duration": int(base_duration.total_seconds()),
"frequency": frequency,
"interval": interval,
"rolling_users": [
[user_id_map[p["id"]]]
for p in self.participants
if p["type"] == "user" and p["id"] in user_id_map
],
"start_rotation_from_user_index": 0,
"week_start": "MO",
"source": ONCALL_SHIFT_WEB_SOURCE,
}
if self.end_date:
shift["until"] = dt_to_oncall_datetime(self.end_date)
return shift
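`Override.from_dict` and `Rotation.from_dict` replace the trailing `Z` before calling `datetime.fromisoformat`, since `fromisoformat` only accepts a `Z` suffix from Python 3.11 onward; the override duration then falls out of plain datetime arithmetic:

```python
from datetime import datetime

# Same Z-replacement trick as in the dataclasses above
start = datetime.fromisoformat("2025-04-07T08:00:00Z".replace("Z", "+00:00"))
end = datetime.fromisoformat("2025-04-07T20:00:00Z".replace("Z", "+00:00"))
duration = int((end - start).total_seconds())  # 12 hours, expressed in seconds
```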

View file

@ -0,0 +1,16 @@
from lib.opsgenie.config import OPSGENIE_FILTER_TEAM, OPSGENIE_FILTER_USERS
def filter_users(users: list[dict]) -> list[dict]:
"""Apply filters to users."""
if OPSGENIE_FILTER_TEAM:
filtered_users = []
for u in users:
if any(t["id"] == OPSGENIE_FILTER_TEAM for t in u["teams"]):
filtered_users.append(u)
users = filtered_users
if OPSGENIE_FILTER_USERS:
users = [u for u in users if u["id"] in OPSGENIE_FILTER_USERS]
return users
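The same filters, rewritten as a parameterized function for illustration (the module above reads its values from config instead of arguments):

```python
def filter_users(users, team_id=None, user_ids=()):
    # Team filter: keep users belonging to the given team
    if team_id:
        users = [u for u in users if any(t["id"] == team_id for t in u["teams"])]
    # Explicit user-ID allowlist
    if user_ids:
        users = [u for u in users if u["id"] in user_ids]
    return users

users = [
    {"id": "u1", "teams": [{"id": "t1"}]},
    {"id": "u2", "teams": [{"id": "t2"}]},
]
by_team = filter_users(users, team_id="t1")
by_id = filter_users(users, user_ids={"u2"})
```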

View file

@ -1,13 +1,9 @@
import datetime
import re
from typing import Any, Dict, List
from pdpyras import APISession
from lib.common.report import TAB
from lib.common.resources.services import filter_services
from lib.common.resources.users import match_user
from lib.grafana.service_migrate import migrate_all_services
from lib.grafana.service_model_client import ServiceModelClient
from lib.oncall.api_client import OnCallAPIClient
from lib.pagerduty.config import (
@@ -16,13 +12,8 @@ from lib.pagerduty.config import (
MODE,
MODE_PLAN,
PAGERDUTY_API_TOKEN,
PAGERDUTY_FILTER_ESCALATION_POLICY_REGEX,
PAGERDUTY_FILTER_INTEGRATION_REGEX,
PAGERDUTY_FILTER_SCHEDULE_REGEX,
PAGERDUTY_FILTER_TEAM,
PAGERDUTY_FILTER_USERS,
PAGERDUTY_MIGRATE_SERVICES,
VERBOSE_LOGGING,
)
from lib.pagerduty.report import (
escalation_policy_report,
@@ -37,269 +28,40 @@ from lib.pagerduty.report import (
services_report,
user_report,
)
from lib.pagerduty.resources.business_service import (
BusinessService,
get_all_business_services_with_metadata,
)
from lib.pagerduty.resources.escalation_policies import (
filter_escalation_policies,
match_escalation_policy,
match_escalation_policy_for_integration,
migrate_escalation_policy,
)
from lib.pagerduty.resources.integrations import (
filter_integrations,
match_integration,
match_integration_type,
migrate_integration,
)
from lib.pagerduty.resources.notification_rules import migrate_notification_rules
from lib.pagerduty.resources.rulesets import match_ruleset, migrate_ruleset
from lib.pagerduty.resources.schedules import match_schedule, migrate_schedule
from lib.pagerduty.resources.schedules import (
filter_schedules,
match_schedule,
migrate_schedule,
)
from lib.pagerduty.resources.services import (
BusinessService,
TechnicalService,
filter_services,
get_all_business_services_with_metadata,
get_all_technical_services_with_metadata,
migrate_all_services,
)
from lib.pagerduty.resources.users import (
filter_users,
match_users_and_schedules_for_escalation_policy,
match_users_for_schedule,
)
def filter_users(users: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""
Filter users based on PAGERDUTY_FILTER_USERS.
When PAGERDUTY_FILTER_USERS is set, only users with IDs in that list will be included.
"""
if not PAGERDUTY_FILTER_USERS:
return users # No filtering, return all users
filtered_users = []
filtered_out = 0
for user in users:
if user["id"] in PAGERDUTY_FILTER_USERS:
filtered_users.append(user)
else:
filtered_out += 1
if filtered_out > 0:
summary = f"Filtered out {filtered_out} users (keeping only users specified in PAGERDUTY_FILTER_USERS)"
print(summary)
# Only print detailed info in verbose mode
if VERBOSE_LOGGING:
print(
f"{TAB}Keeping only users with IDs: {', '.join(PAGERDUTY_FILTER_USERS)}"
)
return filtered_users
def filter_schedules(schedules: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""
Filter schedules based on configured filters.
If multiple filters are specified, a schedule only needs to match one of them
to be included (OR operation between filters).
"""
if not any(
[PAGERDUTY_FILTER_TEAM, PAGERDUTY_FILTER_USERS, PAGERDUTY_FILTER_SCHEDULE_REGEX]
):
return schedules # No filters specified, return all
filtered_schedules = []
filtered_out = 0
filtered_reasons = {}
for schedule in schedules:
matches_any_filter = False
reasons = []
# Filter by team
if PAGERDUTY_FILTER_TEAM:
teams = schedule.get("teams", [])
if any(team["summary"] == PAGERDUTY_FILTER_TEAM for team in teams):
matches_any_filter = True
else:
reasons.append(
f"No teams found for team filter: {PAGERDUTY_FILTER_TEAM}"
)
# Filter by users
if PAGERDUTY_FILTER_USERS:
schedule_users = set()
for layer in schedule.get("schedule_layers", []):
for user in layer.get("users", []):
schedule_users.add(user["user"]["id"])
if any(user_id in schedule_users for user_id in PAGERDUTY_FILTER_USERS):
matches_any_filter = True
else:
reasons.append(
f"No users found for user filter: {','.join(PAGERDUTY_FILTER_USERS)}"
)
# Filter by name regex
if PAGERDUTY_FILTER_SCHEDULE_REGEX:
if re.match(PAGERDUTY_FILTER_SCHEDULE_REGEX, schedule["name"]):
matches_any_filter = True
else:
reasons.append(
f"Schedule regex filter: {PAGERDUTY_FILTER_SCHEDULE_REGEX}"
)
if matches_any_filter:
filtered_schedules.append(schedule)
else:
filtered_out += 1
filtered_reasons[schedule["id"]] = reasons
if filtered_out > 0:
summary = f"Filtered out {filtered_out} schedules"
print(summary)
# Only print detailed reasons in verbose mode
if VERBOSE_LOGGING:
for schedule_id, reasons in filtered_reasons.items():
print(f"{TAB}Schedule {schedule_id}: {', '.join(reasons)}")
return filtered_schedules
def filter_escalation_policies(policies: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""
Filter escalation policies based on configured filters.
If multiple filters are specified, a policy only needs to match one of them
to be included (OR operation between filters).
"""
if not any(
[
PAGERDUTY_FILTER_TEAM,
PAGERDUTY_FILTER_USERS,
PAGERDUTY_FILTER_ESCALATION_POLICY_REGEX,
]
):
return policies # No filters specified, return all
filtered_policies = []
filtered_out = 0
filtered_reasons = {}
for policy in policies:
matches_any_filter = False
reasons = []
# Filter by team
if PAGERDUTY_FILTER_TEAM:
teams = policy.get("teams", [])
if any(team["summary"] == PAGERDUTY_FILTER_TEAM for team in teams):
matches_any_filter = True
else:
reasons.append(
f"No teams found for team filter: {PAGERDUTY_FILTER_TEAM}"
)
# Filter by users
if PAGERDUTY_FILTER_USERS:
policy_users = set()
for rule in policy.get("escalation_rules", []):
for target in rule.get("targets", []):
if target["type"] == "user":
policy_users.add(target["id"])
if any(user_id in policy_users for user_id in PAGERDUTY_FILTER_USERS):
matches_any_filter = True
else:
reasons.append(
f"No users found for user filter: {','.join(PAGERDUTY_FILTER_USERS)}"
)
# Filter by name regex
if PAGERDUTY_FILTER_ESCALATION_POLICY_REGEX:
if re.match(PAGERDUTY_FILTER_ESCALATION_POLICY_REGEX, policy["name"]):
matches_any_filter = True
else:
reasons.append(
f"Escalation policy regex filter: {PAGERDUTY_FILTER_ESCALATION_POLICY_REGEX}"
)
if matches_any_filter:
filtered_policies.append(policy)
else:
filtered_out += 1
filtered_reasons[policy["id"]] = reasons
if filtered_out > 0:
summary = f"Filtered out {filtered_out} escalation policies"
print(summary)
# Only print detailed reasons in verbose mode
if VERBOSE_LOGGING:
for policy_id, reasons in filtered_reasons.items():
print(f"{TAB}Policy {policy_id}: {', '.join(reasons)}")
return filtered_policies
def filter_integrations(integrations: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""
Filter integrations based on configured filters.
If multiple filters are specified, an integration only needs to match one of them
to be included (OR operation between filters).
"""
if not any([PAGERDUTY_FILTER_TEAM, PAGERDUTY_FILTER_INTEGRATION_REGEX]):
return integrations # No filters specified, return all
filtered_integrations = []
filtered_out = 0
filtered_reasons = {}
for integration in integrations:
matches_any_filter = False
reasons = []
# Filter by team
if PAGERDUTY_FILTER_TEAM:
teams = integration["service"].get("teams", [])
if any(team["summary"] == PAGERDUTY_FILTER_TEAM for team in teams):
matches_any_filter = True
else:
reasons.append(
f"No teams found for team filter: {PAGERDUTY_FILTER_TEAM}"
)
# Filter by name regex
if PAGERDUTY_FILTER_INTEGRATION_REGEX:
integration_name = (
f"{integration['service']['name']} - {integration['name']}"
)
if re.match(PAGERDUTY_FILTER_INTEGRATION_REGEX, integration_name):
matches_any_filter = True
else:
reasons.append(
f"Integration regex filter: {PAGERDUTY_FILTER_INTEGRATION_REGEX}"
)
if matches_any_filter:
filtered_integrations.append(integration)
else:
filtered_out += 1
filtered_reasons[integration["id"]] = reasons
if filtered_out > 0:
summary = f"Filtered out {filtered_out} integrations"
print(summary)
# Only print detailed reasons in verbose mode
if VERBOSE_LOGGING:
for integration_id, reasons in filtered_reasons.items():
print(f"{TAB}Integration {integration_id}: {', '.join(reasons)}")
return filtered_integrations
def migrate() -> None:
# Set up API sessions and timeout
session = APISession(PAGERDUTY_API_TOKEN)
@@ -452,10 +214,10 @@ def migrate() -> None:
# Apply filters to services
filtered_technical_data = filter_services(
[service.raw_data for service in all_technical_services], TAB
[service.raw_data for service in all_technical_services]
)
filtered_business_data = filter_services(
[service.raw_data for service in all_business_services], TAB
[service.raw_data for service in all_business_services]
)
# Convert filtered data back to service objects
@@ -479,9 +241,8 @@ def migrate() -> None:
f"Escalation policies: {sum(1 for p in escalation_policies if not p.get('unmatched_users') and not p.get('flawed_schedules'))} eligible of {filtered_resources_summary['escalation_policies']} filtered"
)
print(
f"Integrations: {sum(1 for i in integrations if i.get('oncall_type') and not i.get('is_escalation_policy_flawed'))} eligible of {filtered_resources_summary['integrations']} filtered"
f"Integrations: {sum(1 for i in integrations if i.get('oncall_type') and not i.get('is_escalation_policy_flawed'))} eligible of {filtered_resources_summary['integrations']} filtered\n"
)
print("")
if MODE == MODE_PLAN:
if MIGRATE_USERS:


@@ -1,7 +1,95 @@
import re
import typing
from lib.common.report import TAB
from lib.oncall.api_client import OnCallAPIClient
from lib.pagerduty.config import (
PAGERDUTY_FILTER_ESCALATION_POLICY_REGEX,
PAGERDUTY_FILTER_TEAM,
PAGERDUTY_FILTER_USERS,
VERBOSE_LOGGING,
)
from lib.utils import find_by_id, transform_wait_delay
def filter_escalation_policies(
policies: typing.List[typing.Dict[str, typing.Any]],
) -> typing.List[typing.Dict[str, typing.Any]]:
"""
Filter escalation policies based on configured filters.
If multiple filters are specified, a policy only needs to match one of them
to be included (OR operation between filters).
"""
if not any(
[
PAGERDUTY_FILTER_TEAM,
PAGERDUTY_FILTER_USERS,
PAGERDUTY_FILTER_ESCALATION_POLICY_REGEX,
]
):
return policies # No filters specified, return all
filtered_policies = []
filtered_out = 0
filtered_reasons = {}
for policy in policies:
matches_any_filter = False
reasons = []
# Filter by team
if PAGERDUTY_FILTER_TEAM:
teams = policy.get("teams", [])
if any(team["summary"] == PAGERDUTY_FILTER_TEAM for team in teams):
matches_any_filter = True
else:
reasons.append(
f"No teams found for team filter: {PAGERDUTY_FILTER_TEAM}"
)
# Filter by users
if PAGERDUTY_FILTER_USERS:
policy_users = set()
for rule in policy.get("escalation_rules", []):
for target in rule.get("targets", []):
if target["type"] == "user":
policy_users.add(target["id"])
if any(user_id in policy_users for user_id in PAGERDUTY_FILTER_USERS):
matches_any_filter = True
else:
reasons.append(
f"No users found for user filter: {','.join(PAGERDUTY_FILTER_USERS)}"
)
# Filter by name regex
if PAGERDUTY_FILTER_ESCALATION_POLICY_REGEX:
if re.match(PAGERDUTY_FILTER_ESCALATION_POLICY_REGEX, policy["name"]):
matches_any_filter = True
else:
reasons.append(
f"Escalation policy regex filter: {PAGERDUTY_FILTER_ESCALATION_POLICY_REGEX}"
)
if matches_any_filter:
filtered_policies.append(policy)
else:
filtered_out += 1
filtered_reasons[policy["id"]] = reasons
if filtered_out > 0:
summary = f"Filtered out {filtered_out} escalation policies"
print(summary)
# Only print detailed reasons in verbose mode
if VERBOSE_LOGGING:
for policy_id, reasons in filtered_reasons.items():
print(f"{TAB}Policy {policy_id}: {', '.join(reasons)}")
return filtered_policies
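The OR semantics between filters can be condensed into a single predicate. `matches_any` below is a hypothetical distillation, not code from the diff; a policy is kept if it satisfies any one of the configured team, user, or name-regex filters:

```python
import re

def matches_any(policy, team=None, user_ids=None, name_regex=None):
    # Each configured filter contributes one boolean; the policy passes
    # if ANY of them is true (OR semantics, as in the diff).
    checks = []
    if team:
        checks.append(any(t["summary"] == team for t in policy.get("teams", [])))
    if user_ids:
        targets = {
            t["id"]
            for rule in policy.get("escalation_rules", [])
            for t in rule.get("targets", [])
            if t["type"] == "user"
        }
        checks.append(bool(targets & set(user_ids)))
    if name_regex:
        checks.append(bool(re.match(name_regex, policy["name"])))
    return any(checks)

policy = {"name": "Payments P1", "teams": [{"summary": "SRE"}], "escalation_rules": []}
assert matches_any(policy, team="SRE", name_regex="^Billing")       # team matches
assert not matches_any(policy, team="Core", name_regex="^Billing")  # nothing matches
```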
def match_escalation_policy(policy: dict, oncall_escalation_chains: list[dict]) -> None:
oncall_escalation_chain = None
for candidate in oncall_escalation_chains:


@@ -1,11 +1,78 @@
import re
import typing
from lib.common.report import TAB
from lib.oncall.api_client import OnCallAPIClient
from lib.pagerduty.config import (
PAGERDUTY_FILTER_INTEGRATION_REGEX,
PAGERDUTY_FILTER_TEAM,
PAGERDUTY_TO_ONCALL_VENDOR_MAP,
UNSUPPORTED_INTEGRATION_TO_WEBHOOKS,
VERBOSE_LOGGING,
)
from lib.utils import find_by_id
def filter_integrations(
integrations: typing.List[typing.Dict[str, typing.Any]],
) -> typing.List[typing.Dict[str, typing.Any]]:
"""
Filter integrations based on configured filters.
If multiple filters are specified, an integration only needs to match one of them
to be included (OR operation between filters).
"""
if not any([PAGERDUTY_FILTER_TEAM, PAGERDUTY_FILTER_INTEGRATION_REGEX]):
return integrations # No filters specified, return all
filtered_integrations = []
filtered_out = 0
filtered_reasons = {}
for integration in integrations:
matches_any_filter = False
reasons = []
# Filter by team
if PAGERDUTY_FILTER_TEAM:
teams = integration["service"].get("teams", [])
if any(team["summary"] == PAGERDUTY_FILTER_TEAM for team in teams):
matches_any_filter = True
else:
reasons.append(
f"No teams found for team filter: {PAGERDUTY_FILTER_TEAM}"
)
# Filter by name regex
if PAGERDUTY_FILTER_INTEGRATION_REGEX:
integration_name = (
f"{integration['service']['name']} - {integration['name']}"
)
if re.match(PAGERDUTY_FILTER_INTEGRATION_REGEX, integration_name):
matches_any_filter = True
else:
reasons.append(
f"Integration regex filter: {PAGERDUTY_FILTER_INTEGRATION_REGEX}"
)
if matches_any_filter:
filtered_integrations.append(integration)
else:
filtered_out += 1
filtered_reasons[integration["id"]] = reasons
if filtered_out > 0:
summary = f"Filtered out {filtered_out} integrations"
print(summary)
# Only print detailed reasons in verbose mode
if VERBOSE_LOGGING:
for integration_id, reasons in filtered_reasons.items():
print(f"{TAB}Integration {integration_id}: {', '.join(reasons)}")
return filtered_integrations
def match_integration(integration: dict, oncall_integrations: list[dict]) -> None:
oncall_integration = None
for candidate in oncall_integrations:


@@ -1,18 +1,99 @@
import datetime
import re
import typing
from dataclasses import dataclass
from enum import Enum
from typing import Optional
from uuid import uuid4
from lib.common.report import TAB
from lib.constants import ONCALL_SHIFT_WEB_SOURCE
from lib.oncall.api_client import OnCallAPIClient
from lib.pagerduty.config import (
PAGERDUTY_FILTER_SCHEDULE_REGEX,
PAGERDUTY_FILTER_TEAM,
PAGERDUTY_FILTER_USERS,
SCHEDULE_MIGRATION_MODE,
SCHEDULE_MIGRATION_MODE_ICAL,
SCHEDULE_MIGRATION_MODE_WEB,
VERBOSE_LOGGING,
)
from lib.utils import dt_to_oncall_datetime, duration_to_frequency_and_interval
def filter_schedules(
schedules: typing.List[typing.Dict[str, typing.Any]]
) -> typing.List[typing.Dict[str, typing.Any]]:
"""
Filter schedules based on configured filters.
If multiple filters are specified, a schedule only needs to match one of them
to be included (OR operation between filters).
"""
if not any(
[PAGERDUTY_FILTER_TEAM, PAGERDUTY_FILTER_USERS, PAGERDUTY_FILTER_SCHEDULE_REGEX]
):
return schedules # No filters specified, return all
filtered_schedules = []
filtered_out = 0
filtered_reasons = {}
for schedule in schedules:
matches_any_filter = False
reasons = []
# Filter by team
if PAGERDUTY_FILTER_TEAM:
teams = schedule.get("teams", [])
if any(team["summary"] == PAGERDUTY_FILTER_TEAM for team in teams):
matches_any_filter = True
else:
reasons.append(
f"No teams found for team filter: {PAGERDUTY_FILTER_TEAM}"
)
# Filter by users
if PAGERDUTY_FILTER_USERS:
schedule_users = set()
for layer in schedule.get("schedule_layers", []):
for user in layer.get("users", []):
schedule_users.add(user["user"]["id"])
if any(user_id in schedule_users for user_id in PAGERDUTY_FILTER_USERS):
matches_any_filter = True
else:
reasons.append(
f"No users found for user filter: {','.join(PAGERDUTY_FILTER_USERS)}"
)
# Filter by name regex
if PAGERDUTY_FILTER_SCHEDULE_REGEX:
if re.match(PAGERDUTY_FILTER_SCHEDULE_REGEX, schedule["name"]):
matches_any_filter = True
else:
reasons.append(
f"Schedule regex filter: {PAGERDUTY_FILTER_SCHEDULE_REGEX}"
)
if matches_any_filter:
filtered_schedules.append(schedule)
else:
filtered_out += 1
filtered_reasons[schedule["id"]] = reasons
if filtered_out > 0:
summary = f"Filtered out {filtered_out} schedules"
print(summary)
# Only print detailed reasons in verbose mode
if VERBOSE_LOGGING:
for schedule_id, reasons in filtered_reasons.items():
print(f"{TAB}Schedule {schedule_id}: {', '.join(reasons)}")
return filtered_schedules
def match_schedule(
schedule: dict, oncall_schedules: list[dict], user_id_map: dict[str, str]
) -> None:
@@ -243,7 +324,7 @@ class Layer:
"start_rotation_from_user_index": 0,
"week_start": "MO",
"time_zone": "UTC",
"source": 0, # 0 is alias for "web"
"source": ONCALL_SHIFT_WEB_SOURCE,
}
], None
@@ -363,7 +444,7 @@ class Layer:
"start_rotation_from_user_index": 0,
"week_start": shift[2],
"time_zone": "UTC",
"source": 0, # 0 is alias for "web"
"source": ONCALL_SHIFT_WEB_SOURCE,
}
payloads.append(payload)
return payloads, None
@@ -594,5 +675,5 @@ class Override:
"duration": duration,
"rotation_start": start,
"users": [user_id],
"source": 0, # 0 is alias for "web"
"source": ONCALL_SHIFT_WEB_SOURCE,
}


@@ -1,14 +1,112 @@
"""
PagerDuty services resource module.
This module provides functions for fetching PagerDuty services and extracting
relevant metadata for migration to Grafana's service model.
"""
from typing import Any, Dict, List
import json
import re
from typing import Any, Dict, List, Optional, Union
from pdpyras import APISession
from lib.common.report import TAB
from lib.grafana.service_model_client import ServiceModelClient
from lib.pagerduty.config import (
PAGERDUTY_FILTER_SERVICE_REGEX,
PAGERDUTY_FILTER_TEAM,
PAGERDUTY_FILTER_USERS,
)
from lib.pagerduty.report import format_service
def filter_services(services: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""
Filter services based on configured filters.
Args:
services: List of service dictionaries to filter
Returns:
List of filtered services
"""
filtered_services = []
filtered_out = 0
for service in services:
should_include = True
reason = None
# Filter by team
if PAGERDUTY_FILTER_TEAM:
teams = service.get("teams", [])
if not any(team["summary"] == PAGERDUTY_FILTER_TEAM for team in teams):
should_include = False
reason = f"No teams found for team filter: {PAGERDUTY_FILTER_TEAM}"
# Filter by users (for technical services)
if (
should_include
and PAGERDUTY_FILTER_USERS
and service.get("type") != "business_service"
):
service_users = set()
# Get users from escalation policy if present
if service.get("escalation_policy"):
for rule in service["escalation_policy"].get("escalation_rules", []):
for target in rule.get("targets", []):
if target["type"] == "user":
service_users.add(target["id"])
if not any(user_id in service_users for user_id in PAGERDUTY_FILTER_USERS):
should_include = False
reason = f"No users found for user filter: {','.join(PAGERDUTY_FILTER_USERS)}"
# Filter by name regex
if should_include and PAGERDUTY_FILTER_SERVICE_REGEX:
if not re.match(PAGERDUTY_FILTER_SERVICE_REGEX, service["name"]):
should_include = False
reason = f"Service name does not match regex: {PAGERDUTY_FILTER_SERVICE_REGEX}"
if should_include:
filtered_services.append(service)
else:
filtered_out += 1
print(f"{TAB}Service {service['id']}: {reason}")
if filtered_out > 0:
print(f"Filtered out {filtered_out} services")
return filtered_services
class BusinessService:
"""Class representing a PagerDuty business service with all necessary metadata."""
def __init__(self, service_data: Dict[str, Any]):
"""
Initialize a PagerDuty business service from API data.
Args:
service_data: Raw business service data from the PagerDuty API
"""
self.id = service_data.get("id")
self.name = service_data.get("name", "")
self.description = service_data.get("description", "")
self.point_of_contact = service_data.get("point_of_contact", "")
self.created_at = service_data.get("created_at")
self.updated_at = service_data.get("updated_at")
# URLs and permalinks
self.html_url = service_data.get("html_url")
self.self_url = service_data.get("self")
# Related entities
self.teams = service_data.get("teams", [])
# Dependencies - will be populated separately
self.dependencies = []
# Store raw data for access to any fields we might need later
self.raw_data = service_data
def __str__(self) -> str:
return f"BusinessService(id={self.id}, name={self.name})"
class TechnicalService:
"""Class representing a PagerDuty technical service with all necessary metadata for migration."""
@@ -137,6 +235,25 @@ def fetch_service_dependencies(
print(f"Completed fetching dependencies for {len(services)} services.")
def fetch_business_services(session: APISession) -> List[BusinessService]:
"""
Fetch all PagerDuty business services with their metadata.
Args:
session: Authenticated PagerDuty API session
Returns:
List of BusinessService objects
"""
# Fetch all business services
services_data = session.list_all("business_services")
# Convert to BusinessService objects
services = [BusinessService(service) for service in services_data]
return services
def get_all_technical_services_with_metadata(
session: APISession,
) -> List[TechnicalService]:
@@ -158,3 +275,411 @@ def get_all_technical_services_with_metadata(
fetch_service_dependencies(session, services)
return services
def fetch_business_service_dependencies(
session: APISession,
business_services: List[BusinessService],
technical_services: Dict[str, Any],
) -> None:
"""
Fetch and populate business service dependencies on technical services.
This function modifies the provided business services list in-place by populating
the dependencies field for each service.
Args:
session: Authenticated PagerDuty API session
business_services: List of BusinessService objects to update with dependencies
technical_services: Dictionary mapping service IDs to technical service objects
"""
print("Fetching business service dependencies...")
# Process each business service to find its dependencies
for service in business_services:
try:
# Use the business service dependencies endpoint
response = session.get(
f"service_dependencies/business_services/{service.id}"
)
# Parse the response
dependencies_data = response
if hasattr(response, "json"):
dependencies_data = response.json()
# Extract relationships from the response
if (
dependencies_data
and isinstance(dependencies_data, dict)
and "relationships" in dependencies_data
):
for relationship in dependencies_data["relationships"]:
# A dependency relationship has a supporting_service that the business service depends on
if "supporting_service" in relationship:
dep_id = relationship["supporting_service"]["id"]
if (
dep_id in technical_services
): # Only add if it's a technical service
service.dependencies.append(technical_services[dep_id])
else:
print(
f"No valid relationship data found for business service {service.name} (ID: {service.id})"
)
except Exception as e:
# Log but continue if we can't fetch dependencies for a service
print(
f"Error fetching dependencies for business service {service.name}: {e}"
)
print(
f"Completed fetching dependencies for {len(business_services)} business services."
)
def get_all_business_services_with_metadata(
session: APISession, technical_services: Dict[str, Any]
) -> List[BusinessService]:
"""
Fetch all PagerDuty business services with complete metadata including dependencies.
Args:
session: Authenticated PagerDuty API session
technical_services: Dictionary mapping service IDs to technical service objects
Returns:
List of BusinessService objects with all required metadata
"""
# Fetch business services with their basic metadata
business_services = fetch_business_services(session)
# Fetch and populate dependencies
fetch_business_service_dependencies(session, business_services, technical_services)
return business_services
def _migrate_service_batch(
client: ServiceModelClient,
services: List[Any],
migrate_func: callable,
dry_run: bool = False,
) -> Dict[str, Any]:
"""
Migrate a batch of services using the provided migration function.
Args:
client: The ServiceModelClient to use
services: List of services to migrate
migrate_func: Function to use for migrating each service
dry_run: If True, only validate and log what would be done
Returns:
Dictionary mapping each service's ID to its created (or existing) component
"""
created_components = {}
for service in services:
component = migrate_func(client, service, dry_run)
if component:
created_components[service.id] = component
return created_components
def _update_service_dependencies(
client: ServiceModelClient,
services: List[Any],
created_components: Dict[str, Any],
dry_run: bool = False,
) -> None:
"""
Update dependencies for all services with proper refs.
Args:
client: The ServiceModelClient to use
services: List of services to update
created_components: Dictionary of created components by service ID
dry_run: If True, only validate and log what would be done
"""
for service in services:
if service.id in created_components and service.dependencies:
component_name = created_components[service.id]["metadata"]["name"]
depends_on_refs = [
{
"apiVersion": "servicemodel.ext.grafana.com/v1alpha1",
"kind": "Component",
"name": created_components[dep.id]["metadata"]["name"],
}
for dep in service.dependencies
if dep.id in created_components
]
if depends_on_refs:
# Create patch payload with only the dependsOnRefs field
patch_payload = {"spec": {"dependsOnRefs": depends_on_refs}}
if not dry_run:
try:
client.patch_component(component_name, patch_payload)
print(f"Updated dependencies for service: {service.name}")
except Exception as e:
print(
f"Failed to update dependencies for service {service.name}: {e}"
)
# Log the full error details for debugging
print(f"Patch payload: {json.dumps(patch_payload, indent=2)}")
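For reference, the patch sent above touches only `spec.dependsOnRefs`, so the component's other fields are left untouched. A sketch of the payload shape with a hypothetical dependency name:

```python
import json

# Only dependsOnRefs is included in the patch; "checkout-db" is a made-up
# component name standing in for a migrated dependency.
patch_payload = {
    "spec": {
        "dependsOnRefs": [
            {
                "apiVersion": "servicemodel.ext.grafana.com/v1alpha1",
                "kind": "Component",
                "name": "checkout-db",
            }
        ]
    }
}
print(json.dumps(patch_payload, indent=2))
```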
def _transform_service(
service: Union[TechnicalService, BusinessService]
) -> Dict[str, Any]:
"""
Transform a PagerDuty service (technical or business) into a Backstage Component.
Args:
service: The PagerDuty service to transform (either TechnicalService or BusinessService)
Returns:
A dictionary containing the transformed service in Backstage Component format
"""
# Determine service type and required fields
is_technical = isinstance(service, TechnicalService)
service_type = "service" if is_technical else "business_service"
# Create the base component structure
component = {
"apiVersion": "servicemodel.ext.grafana.com/v1alpha1",
"kind": "Component",
"metadata": {
"name": service.name.lower().replace(
" ", "-"
), # Convert to k8s-friendly name
"annotations": {"pagerduty.com/service-id": service.id},
},
"spec": {"type": service_type, "description": service.description},
}
# Add status annotation for technical services
if is_technical and hasattr(service, "status"):
component["metadata"]["annotations"]["pagerduty.com/status"] = service.status
# Add PagerDuty URLs to annotations
if service.html_url:
component["metadata"]["annotations"][
"pagerduty.com/html-url"
] = service.html_url
if service.self_url:
component["metadata"]["annotations"]["pagerduty.com/api-url"] = service.self_url
return component
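The `metadata.name` derivation above is just a lowercase-and-hyphenate conversion. Sketched standalone with a hypothetical helper name:

```python
def k8s_name(display_name: str) -> str:
    # Same conversion as _transform_service: lowercase, spaces to hyphens.
    return display_name.lower().replace(" ", "-")

assert k8s_name("Checkout API") == "checkout-api"
```

Note this mirrors the simple conversion in the diff; service names containing other characters that are invalid in Kubernetes names would need further sanitization.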
def _validate_component(component: Dict[str, Any]) -> List[str]:
"""
Validate a transformed Component resource.
Args:
component: The Component resource to validate
Returns:
List of validation errors. Empty list means valid.
"""
errors = []
# Check required fields
required_fields = [
("apiVersion", str),
("kind", str),
("metadata", dict),
("spec", dict),
]
for field, field_type in required_fields:
if field not in component:
errors.append(f"Missing required field: {field}")
elif not isinstance(component[field], field_type):
errors.append(f"Field {field} must be of type {field_type.__name__}")
# If we're missing required fields, don't continue with deeper validation
if errors:
return errors
# Check metadata requirements
metadata = component["metadata"]
if "name" not in metadata:
errors.append("metadata.name is required")
elif not isinstance(metadata["name"], str):
errors.append("metadata.name must be a string")
# Check required annotations
if "annotations" not in metadata:
errors.append("metadata.annotations is required")
else:
annotations = metadata["annotations"]
if "pagerduty.com/service-id" not in annotations:
errors.append("Required annotation missing: pagerduty.com/service-id")
if (
component["spec"]["type"] == "service"
and "pagerduty.com/status" not in annotations
):
errors.append("Required annotation missing: pagerduty.com/status")
# Check spec requirements
spec = component["spec"]
if "type" not in spec:
errors.append("spec.type is required")
elif not isinstance(spec["type"], str):
errors.append("spec.type must be a string")
elif spec["type"] not in ["service", "business_service"]:
errors.append("spec.type must be either 'service' or 'business_service'")
return errors
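A minimal component that would pass the checks above might look like this (a hypothetical example; the service ID and name are invented):

```python
# Satisfies the validator's invariants: required top-level fields, a
# metadata.name, the pagerduty.com/service-id annotation, and — because
# spec.type is "service" — the pagerduty.com/status annotation as well.
component = {
    "apiVersion": "servicemodel.ext.grafana.com/v1alpha1",
    "kind": "Component",
    "metadata": {
        "name": "checkout-api",
        "annotations": {
            "pagerduty.com/service-id": "PABC123",
            "pagerduty.com/status": "active",
        },
    },
    "spec": {"type": "service", "description": "Example service"},
}

assert component["spec"]["type"] in ("service", "business_service")
assert "pagerduty.com/service-id" in component["metadata"]["annotations"]
```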
def _migrate_technical_service(
client: ServiceModelClient, service: TechnicalService, dry_run: bool = False
) -> Optional[Dict[str, Any]]:
"""
Migrate a single technical service to Grafana's service model.
Args:
client: The ServiceModelClient to use
service: The technical service to migrate
dry_run: If True, only validate and log what would be done
Returns:
The created component if successful, None otherwise
"""
try:
# Transform the service
component = _transform_service(service)
# Check if component already exists
existing = client.get_component(component["metadata"]["name"])
if existing:
print(TAB + format_service(service, True) + " (preserved)")
service.preserved = True
service.migration_errors = None
return existing
# Validate the transformed component
errors = _validate_component(component)
if errors:
service.migration_errors = errors
service.preserved = False
print(TAB + format_service(service, False))
return None
if dry_run:
service.migration_errors = None
service.preserved = False
print(TAB + format_service(service, True) + " (would create)")
return component
# Create the component
created = client.create_component(component)
service.migration_errors = None
service.preserved = False
print(TAB + format_service(service, True) + " (created)")
return created
except Exception as e:
service.migration_errors = str(e)
service.preserved = False
print(TAB + format_service(service, False))
return None
def _migrate_business_service(
client: ServiceModelClient, service: BusinessService, dry_run: bool = False
) -> Optional[Dict[str, Any]]:
"""
Migrate a single business service to Grafana's service model.
Args:
client: The ServiceModelClient to use
service: The business service to migrate
dry_run: If True, only validate and log what would be done
Returns:
The created component if successful, None otherwise
"""
try:
# Transform the service
component = _transform_service(service)
# Check if component already exists
existing = client.get_component(component["metadata"]["name"])
if existing:
print(TAB + format_service(service, True) + " (preserved)")
service.preserved = True
service.migration_errors = None
return existing
# Validate the transformed component
errors = _validate_component(component)
if errors:
service.migration_errors = errors
service.preserved = False
print(TAB + format_service(service, False))
return None
if dry_run:
service.migration_errors = None
service.preserved = False
print(TAB + format_service(service, True) + " (would create)")
return component
# Create the component
created = client.create_component(component)
service.migration_errors = None
service.preserved = False
print(TAB + format_service(service, True) + " (created)")
return created
except Exception as e:
service.migration_errors = str(e)
service.preserved = False
print(TAB + format_service(service, False))
return None
def migrate_all_services(
client: ServiceModelClient,
technical_services: List[TechnicalService],
business_services: List[BusinessService],
dry_run: bool = False,
) -> None:
"""
Migrate all PagerDuty services to Grafana's service model.
Args:
client: The ServiceModelClient to use
technical_services: List of technical services to migrate
business_services: List of business services to migrate
dry_run: If True, only validate and log what would be done
"""
# Migrate technical services
tech_components = _migrate_service_batch(
client, technical_services, _migrate_technical_service, dry_run
)
# Migrate business services
bus_components = _migrate_service_batch(
client, business_services, _migrate_business_service, dry_run
)
# Update dependencies
created_components = {**tech_components, **bus_components}
_update_service_dependencies(
client, technical_services + business_services, created_components, dry_run
)


@@ -1,6 +1,43 @@
import typing
from lib.common.report import TAB
from lib.pagerduty.config import PAGERDUTY_FILTER_USERS, VERBOSE_LOGGING
from lib.utils import find_by_id
def filter_users(
users: typing.List[typing.Dict[str, typing.Any]]
) -> typing.List[typing.Dict[str, typing.Any]]:
"""
Filter users based on PAGERDUTY_FILTER_USERS.
When PAGERDUTY_FILTER_USERS is set, only users with IDs in that list will be included.
"""
if not PAGERDUTY_FILTER_USERS:
return users # No filtering, return all users
filtered_users = []
filtered_out = 0
for user in users:
if user["id"] in PAGERDUTY_FILTER_USERS:
filtered_users.append(user)
else:
filtered_out += 1
if filtered_out > 0:
summary = f"Filtered out {filtered_out} users (keeping only users specified in PAGERDUTY_FILTER_USERS)"
print(summary)
# Only print detailed info in verbose mode
if VERBOSE_LOGGING:
print(
f"{TAB}Keeping only users with IDs: {', '.join(PAGERDUTY_FILTER_USERS)}"
)
return filtered_users
def match_users_for_schedule(schedule: dict, users: list[dict]) -> None:
unmatched_users = []


@@ -0,0 +1,27 @@
import os
import uuid
from pathlib import Path
# Use environment variable for session file location, with fallback
SESSION_FILE = Path(
os.environ.get("SESSION_FILE", str(Path(__file__).parent.parent / ".session"))
)
def get_or_create_session_id() -> str:
"""Get an existing session ID or create a new one if it doesn't exist."""
if os.path.exists(SESSION_FILE):
with open(SESSION_FILE, "r") as f:
return f.read().strip()
# Create new session ID
session_id = str(uuid.uuid4())
# Ensure directory exists
SESSION_FILE.parent.mkdir(parents=True, exist_ok=True)
# Save session ID
with open(SESSION_FILE, "w") as f:
f.write(session_id)
return session_id
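Because the ID is persisted to `SESSION_FILE`, repeated calls return the same value. A self-contained sketch of the same logic, with the file path passed in so it can be exercised against a temporary directory:

```python
import tempfile
import uuid
from pathlib import Path


def get_or_create_session_id(session_file: Path) -> str:
    """Same logic as above, with the path as a parameter for testability."""
    if session_file.exists():
        return session_file.read_text().strip()
    session_id = str(uuid.uuid4())
    # Ensure the parent directory exists before writing
    session_file.parent.mkdir(parents=True, exist_ok=True)
    session_file.write_text(session_id)
    return session_id


with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / ".session"
    first = get_or_create_session_id(path)
    second = get_or_create_session_id(path)
    assert first == second  # the persisted ID is reused
```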


@@ -4,6 +4,7 @@ from dataclasses import dataclass
from typing import Optional
from uuid import uuid4
from lib.constants import ONCALL_SHIFT_WEB_SOURCE
from lib.oncall import types as oncall_types
from lib.oncall.api_client import OnCallAPIClient
from lib.splunk import types
@@ -14,7 +15,6 @@ TIME_ZONE = "UTC"
Note: The Splunk schedule rotations do return a `timezone` attribute, but we don't
need to worry about it, as all of the timestamps that we touch are in UTC.
"""
ONCALL_SHIFT_WEB_SOURCE = 0 # alias for "web"
def generate_splunk_schedule_name(


@@ -0,0 +1,9 @@
from lib.common.resources.users import match_user
def test_match_user_email_case_insensitive():
pd_user = {"email": "test@test.com"}
oncall_users = [{"email": "TEST@TEST.COM"}]
match_user(pd_user, oncall_users)
assert pd_user["oncall_user"] == oncall_users[0]
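The test above implies that `match_user` compares emails case-insensitively and attaches the match under `"oncall_user"`. A minimal sketch of such matching (an illustration consistent with the test, not the actual implementation):

```python
from typing import Any, Dict, List, Optional


def match_user(user: Dict[str, Any], oncall_users: List[Dict[str, Any]]) -> None:
    """Attach the OnCall user whose email matches case-insensitively, if any."""
    email = user["email"].lower()
    match: Optional[Dict[str, Any]] = None
    for candidate in oncall_users:
        if candidate["email"].lower() == email:
            match = candidate
            break
    user["oncall_user"] = match
```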


@@ -1,110 +0,0 @@
"""
Unit tests for the Grafana Service Model transformation logic.
"""
from unittest.mock import Mock
import pytest
from lib.grafana.transform import transform_service, validate_component
from lib.pagerduty.resources.business_service import BusinessService
from lib.pagerduty.resources.services import TechnicalService
@pytest.fixture
def technical_service():
"""Create a mock technical service for testing."""
service = Mock(spec=TechnicalService)
service.name = "Test Service"
service.description = "A test service"
service.id = "P123456"
service.status = "active"
service.html_url = "https://pagerduty.com/services/P123456"
service.self_url = "https://api.pagerduty.com/services/P123456"
return service
@pytest.fixture
def business_service():
"""Create a mock business service for testing."""
service = Mock(spec=BusinessService)
service.name = "Test Business Service"
service.description = "A test business service"
service.id = "P789012"
service.html_url = "https://pagerduty.com/services/P789012"
service.self_url = "https://api.pagerduty.com/services/P789012"
return service
def test_transform_technical_service(technical_service):
"""Test transforming a technical service."""
component = transform_service(technical_service)
# Verify the component structure
assert component["apiVersion"] == "servicemodel.ext.grafana.com/v1alpha1"
assert component["kind"] == "Component"
assert component["metadata"]["name"] == "test-service"
assert component["spec"]["type"] == "service"
assert component["spec"]["description"] == "A test service"
# Verify annotations
annotations = component["metadata"]["annotations"]
assert annotations["pagerduty.com/service-id"] == "P123456"
assert annotations["pagerduty.com/status"] == "active"
assert (
annotations["pagerduty.com/html-url"]
== "https://pagerduty.com/services/P123456"
)
assert (
annotations["pagerduty.com/api-url"]
== "https://api.pagerduty.com/services/P123456"
)
def test_transform_business_service(business_service):
"""Test transforming a business service."""
component = transform_service(business_service)
# Verify the component structure
assert component["apiVersion"] == "servicemodel.ext.grafana.com/v1alpha1"
assert component["kind"] == "Component"
assert component["metadata"]["name"] == "test-business-service"
assert component["spec"]["type"] == "business_service"
assert component["spec"]["description"] == "A test business service"
# Verify annotations
annotations = component["metadata"]["annotations"]
assert annotations["pagerduty.com/service-id"] == "P789012"
assert (
annotations["pagerduty.com/html-url"]
== "https://pagerduty.com/services/P789012"
)
assert (
annotations["pagerduty.com/api-url"]
== "https://api.pagerduty.com/services/P789012"
)
def test_validate_component():
"""Test component validation."""
# Test valid component
valid_component = {
"apiVersion": "servicemodel.ext.grafana.com/v1alpha1",
"kind": "Component",
"metadata": {
"name": "test-service",
"annotations": {
"pagerduty.com/service-id": "P123456",
"pagerduty.com/status": "active",
},
},
"spec": {"type": "service", "description": "A test service"},
}
errors = validate_component(valid_component)
assert errors == []
# Test missing required field
invalid_component = valid_component.copy()
del invalid_component["spec"]
errors = validate_component(invalid_component)
assert "Missing required field: spec" in errors


@@ -0,0 +1,167 @@
from unittest.mock import call, patch
from lib.opsgenie.resources.escalation_policies import (
match_escalation_policy,
match_users_and_schedules_for_escalation_policy,
migrate_escalation_policy,
)
def test_match_escalation_policy():
policy = {
"id": "ep1",
"name": "Critical Alerts",
"ownerTeam": {
"name": "Team A",
},
"rules": [],
}
oncall_chains = [
{"id": "oc1", "name": "Team A - Critical Alerts"},
{"id": "oc2", "name": "Team B - Non-Critical Alerts"},
]
match_escalation_policy(policy, oncall_chains)
assert policy["oncall_escalation_chain"]["id"] == "oc1"
def test_match_users_and_schedules_for_escalation_policy():
policy = {
"id": "ep1",
"name": "Critical Alerts",
"ownerTeam": {
"name": "Team A",
},
"rules": [
{
"recipient": {"type": "user", "id": "u1"},
},
{
"recipient": {"type": "schedule", "id": "s1"},
},
],
"matched_users": [],
"matched_schedules": [],
}
users = [
{"id": "u1", "oncall_user": {"id": "ou1"}},
{"id": "u2", "oncall_user": None},
]
schedules = [
{"id": "s1", "name": "Primary Schedule", "migration_errors": []},
{"id": "s2", "name": "Secondary Schedule", "migration_errors": ["error"]},
]
match_users_and_schedules_for_escalation_policy(policy, users, schedules)
assert len(policy["matched_users"]) == 1
assert policy["matched_users"][0]["id"] == "u1"
assert len(policy["matched_schedules"]) == 1
assert policy["matched_schedules"][0]["id"] == "s1"
@patch("lib.opsgenie.resources.escalation_policies.OnCallAPIClient")
def test_migrate_escalation_policy(mock_client):
mock_client.create.return_value = {"id": "oc1"}
policy = {
"id": "ep1",
"name": "Critical Alerts",
"ownerTeam": {
"name": "Team A",
},
"rules": [
{
"recipient": {
"type": "user",
"id": "u1",
},
"notifyType": "default",
"delay": {
"timeAmount": 5,
},
},
{
"recipient": {
"type": "schedule",
"id": "s1",
},
"notifyType": "default",
"delay": {
"timeAmount": 12,
},
},
{
"recipient": {
"type": "user",
"id": "u2",
},
"notifyType": "somethingElse",
},
],
"oncall_escalation_chain": {"id": "oc_old"},
"matched_users": [{"id": "u1", "oncall_user": {"id": "ou1"}}],
"matched_schedules": [{"id": "s1", "oncall_schedule": {"id": "os1"}}],
}
# Create test data
users = [{"id": "u1", "oncall_user": {"id": "ou1"}}]
schedules = [{"id": "s1", "oncall_schedule": {"id": "os1"}}]
migrate_escalation_policy(policy, users, schedules)
# Verify that existing chain is deleted
mock_client.delete.assert_called_once_with("escalation_chains/oc_old")
mock_client.create.assert_has_calls(
[
# Verify new escalation chain is created
call(
"escalation_chains",
{
"name": "Team A - Critical Alerts",
"team_id": None,
},
),
# Verify first wait and policy steps are created
call(
"escalation_policies",
{
"escalation_chain_id": "oc1",
"position": 0,
"type": "wait",
"duration": 300, # 5 minutes in seconds
},
),
call(
"escalation_policies",
{
"escalation_chain_id": "oc1",
"position": 1,
"type": "notify_persons",
"persons_to_notify": ["ou1"],
"important": False,
},
),
# Verify second policy and wait step
call(
"escalation_policies",
{
"escalation_chain_id": "oc1",
"position": 2,
"type": "wait",
"duration": 900,  # the 12-minute delay maps to 900 seconds (15 minutes)
},
),
call(
"escalation_policies",
{
"escalation_chain_id": "oc1",
"position": 3,
"type": "notify_on_call_from_schedule",
"notify_on_call_from_schedule": "os1",
"important": False,
},
),
],
any_order=False, # Order of calls is important
)
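The expected wait durations above (5 minutes becomes 300s, 12 minutes becomes 900s) suggest delays are converted to seconds and then snapped up to a supported OnCall wait duration. One way to express that conversion (an assumption, including the candidate duration set, not the confirmed implementation):

```python
# Hypothetical set of wait durations supported by the OnCall API (assumption).
SUPPORTED_WAIT_SECONDS = (60, 300, 600, 900, 1800, 3600)


def delay_to_duration(minutes: int) -> int:
    """Convert a delay in minutes to the next supported wait duration in seconds."""
    seconds = minutes * 60
    for supported in SUPPORTED_WAIT_SECONDS:
        if seconds <= supported:
            return supported
    return SUPPORTED_WAIT_SECONDS[-1]
```

Under this sketch, 5 minutes lands exactly on 300s, while 12 minutes (720s) rounds up to 900s, matching the expected calls.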


@@ -0,0 +1,198 @@
from unittest.mock import patch
from lib.opsgenie.resources.integrations import (
filter_integrations,
match_integration,
migrate_integration,
)
@patch("lib.opsgenie.resources.integrations.OPSGENIE_FILTER_TEAM", "team1")
def test_filter_integrations_by_team():
integrations = [
{
"id": "i1",
"name": "Integration 1",
"teamId": "team1",
},
{
"id": "i2",
"name": "Integration 2",
"teamId": "team2",
},
{
"id": "i3",
"name": "Integration 3",
"teamId": "team1",
},
]
filtered = filter_integrations(integrations)
assert len(filtered) == 2
assert filtered[0]["id"] == "i1"
assert filtered[1]["id"] == "i3"
@patch("lib.opsgenie.resources.integrations.OPSGENIE_FILTER_TEAM", None)
@patch(
"lib.opsgenie.resources.integrations.OPSGENIE_FILTER_INTEGRATION_REGEX", "^Prod.*"
)
def test_filter_integrations_by_regex():
integrations = [
{
"id": "i1",
"name": "Production Alert",
"teamId": "team1",
},
{
"id": "i2",
"name": "Staging Alert",
"teamId": "team2",
},
{
"id": "i3",
"name": "Prod DB Alert",
"teamId": "team1",
},
]
filtered = filter_integrations(integrations)
assert len(filtered) == 2
assert filtered[0]["id"] == "i1"
assert filtered[1]["id"] == "i3"
@patch("lib.opsgenie.resources.integrations.OPSGENIE_FILTER_TEAM", "team1")
@patch(
"lib.opsgenie.resources.integrations.OPSGENIE_FILTER_INTEGRATION_REGEX", "^Prod.*"
)
def test_filter_integrations_by_team_and_regex():
integrations = [
{
"id": "i1",
"name": "Production Alert",
"teamId": "team1",
},
{
"id": "i2",
"name": "Staging Alert",
"teamId": "team1",
},
{
"id": "i3",
"name": "Prod DB Alert",
"teamId": "team2",
},
{
"id": "i4",
"name": "Prod API Alert",
"teamId": "team1",
},
]
filtered = filter_integrations(integrations)
assert len(filtered) == 2
assert filtered[0]["id"] == "i1"
assert filtered[1]["id"] == "i4"
@patch("lib.opsgenie.resources.integrations.OPSGENIE_FILTER_TEAM", None)
@patch("lib.opsgenie.resources.integrations.OPSGENIE_FILTER_INTEGRATION_REGEX", None)
def test_filter_integrations_no_filters():
integrations = [
{
"id": "i1",
"name": "Integration 1",
"teamId": "team1",
},
{
"id": "i2",
"name": "Integration 2",
"teamId": "team2",
},
]
filtered = filter_integrations(integrations)
assert len(filtered) == 2
assert filtered == integrations
@patch("lib.opsgenie.resources.integrations.OPSGENIE_FILTER_TEAM", "team1")
def test_filter_integrations_missing_team_id():
integrations = [
{
"id": "i1",
"name": "Integration 1",
"teamId": "team1",
},
{
"id": "i2",
"name": "Integration 2",
},
{
"id": "i3",
"name": "Integration 3",
"teamId": "team1",
},
]
filtered = filter_integrations(integrations)
assert len(filtered) == 2
assert filtered[0]["id"] == "i1"
assert filtered[1]["id"] == "i3"
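Taken together, these cases pin down AND semantics: when both `OPSGENIE_FILTER_TEAM` and `OPSGENIE_FILTER_INTEGRATION_REGEX` are set, an integration must satisfy both to survive. A sketch consistent with that behavior (assumed shape, with the filter values as parameters rather than module globals):

```python
import re
from typing import Any, Dict, List, Optional


def filter_integrations(
    integrations: List[Dict[str, Any]],
    team: Optional[str] = None,
    name_regex: Optional[str] = None,
) -> List[Dict[str, Any]]:
    """Keep integrations matching every active filter (AND semantics)."""
    result = []
    for integration in integrations:
        if team and integration.get("teamId") != team:
            continue
        if name_regex and not re.match(name_regex, integration["name"]):
            continue
        result.append(integration)
    return result
```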
def test_match_integration():
# supported type
integration = {
"id": "i1",
"name": "Prometheus Alerts",
"type": "Prometheus",
}
oncall_integrations = [
{"id": "oi1", "name": "Prometheus Alerts"},
{"id": "oi2", "name": "Datadog Alerts"},
]
match_integration(integration, oncall_integrations)
assert integration["oncall_integration"]["id"] == "oi1"
assert integration["oncall_type"] == "alertmanager"
# unsupported type
integration = {
"id": "i1",
"name": "Custom Integration",
"type": "Custom",
}
match_integration(integration, oncall_integrations)
assert integration["oncall_integration"] is None
assert integration.get("oncall_type") is None
@patch("lib.opsgenie.resources.integrations.OnCallAPIClient")
def test_migrate_integration(mock_client):
mock_client.create.return_value = {"id": "oi1"}
integration = {
"id": "i1",
"name": "Prometheus Alerts",
"type": "Prometheus",
"oncall_type": "alertmanager",
"oncall_integration": {"id": "oi_old"},
"oncall_escalation_chain": {"id": "oc1"},
}
migrate_integration(integration)
# Verify integration creation
mock_client.delete.assert_called_once_with("integrations/oi_old")
mock_client.create.assert_called_once_with(
"integrations",
{
"name": "Prometheus Alerts",
"type": "alertmanager",
"team_id": None,
"escalation_chain_id": "oc1",
},
)
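This test relies on OpsGenie integration types mapping to OnCall integration types (here, Prometheus becomes `alertmanager`). A plausible sketch of such a lookup; only the Prometheus entry is confirmed by the tests, the other entries are assumptions:

```python
from typing import Optional

# Hypothetical OpsGenie -> OnCall integration type map; only the
# Prometheus entry is confirmed by the tests in this file.
INTEGRATION_TYPE_MAP = {
    "Prometheus": "alertmanager",
    "Datadog": "datadog",
    "Webhook": "webhook",
}


def to_oncall_type(opsgenie_type: str) -> Optional[str]:
    """Return the OnCall integration type, or None when unsupported."""
    return INTEGRATION_TYPE_MAP.get(opsgenie_type)
```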


@@ -0,0 +1,135 @@
from unittest.mock import call, patch
from lib.opsgenie.resources.notification_rules import migrate_notification_rules
@patch("lib.opsgenie.resources.notification_rules.OnCallAPIClient")
@patch(
"lib.opsgenie.resources.notification_rules.PRESERVE_EXISTING_USER_NOTIFICATION_RULES",
False,
)
def test_migrate_notification_rules(mock_client):
user = {
"id": "u1",
"username": "test.user@example.com",
"notification_rules": [
{
"enabled": True,
"contact": {"method": "sms"},
"sendAfter": {"timeAmount": 5, "timeUnit": "minutes"},
},
{
"enabled": True,
"contact": {"method": "voice"},
"sendAfter": {"timeAmount": 10, "timeUnit": "minutes"},
},
{
"enabled": True,
"contact": {"method": "mobile"},
"sendAfter": {"timeAmount": 0, "timeUnit": "minutes"},
},
],
"oncall_user": {
"id": "ou1",
"notification_rules": [{"id": "nr_old"}],
},
}
migrate_notification_rules(user)
# Verify old rules deletion
mock_client.delete.assert_called_once_with("personal_notification_rules/nr_old")
# Verify new rules creation
assert mock_client.create.call_count == 10
mock_client.create.assert_has_calls(
[
# Non-important notifications (sorted by sendAfter time)
call(
"personal_notification_rules",
{
"user_id": "ou1",
"type": "notify_by_mobile_app",
"important": False,
},
),
call(
"personal_notification_rules",
{
"user_id": "ou1",
"type": "wait",
"duration": 300, # 5 minutes in seconds
"important": False,
},
),
call(
"personal_notification_rules",
{
"user_id": "ou1",
"type": "notify_by_sms",
"important": False,
},
),
call(
"personal_notification_rules",
{
"user_id": "ou1",
"type": "wait",
"duration": 300, # 5 minutes in seconds
"important": False,
},
),
call(
"personal_notification_rules",
{
"user_id": "ou1",
"type": "notify_by_phone_call",
"important": False,
},
),
# Important notifications (sorted by sendAfter time)
call(
"personal_notification_rules",
{
"user_id": "ou1",
"type": "notify_by_mobile_app_critical",
"important": True,
},
),
call(
"personal_notification_rules",
{
"user_id": "ou1",
"type": "wait",
"duration": 300, # 5 minutes in seconds
"important": True,
},
),
call(
"personal_notification_rules",
{
"user_id": "ou1",
"type": "notify_by_sms",
"important": True,
},
),
call(
"personal_notification_rules",
{
"user_id": "ou1",
"type": "wait",
"duration": 300, # 5 minutes in seconds
"important": True,
},
),
call(
"personal_notification_rules",
{
"user_id": "ou1",
"type": "notify_by_phone_call",
"important": True,
},
),
],
any_order=False, # Order matters
)
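The expected calls imply a mapping from OpsGenie contact methods to OnCall personal notification rule types, with a `_critical` variant for the important mobile push. A sketch of that mapping (inferred from the expected calls above, not the confirmed implementation):

```python
# Method -> (non-important rule type, important rule type); inferred from
# the expected OnCall API calls in the test above.
CONTACT_METHOD_MAP = {
    "sms": ("notify_by_sms", "notify_by_sms"),
    "voice": ("notify_by_phone_call", "notify_by_phone_call"),
    "mobile": ("notify_by_mobile_app", "notify_by_mobile_app_critical"),
}


def rule_type(method: str, important: bool) -> str:
    """Return the OnCall rule type for a contact method and importance level."""
    non_important, critical = CONTACT_METHOD_MAP[method]
    return critical if important else non_important
```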


@@ -0,0 +1,149 @@
from unittest.mock import patch
from lib.opsgenie.resources.schedules import (
match_schedule,
match_users_for_schedule,
migrate_schedule,
)
def test_match_schedule():
schedule = {
"id": "s1",
"name": "Primary Schedule",
"timezone": "UTC",
"rotations": [],
}
oncall_schedules = [
{"id": "os1", "name": "Primary Schedule"},
{"id": "os2", "name": "Secondary Schedule"},
]
user_id_map = {}
match_schedule(schedule, oncall_schedules, user_id_map)
assert schedule["oncall_schedule"]["id"] == "os1"
assert not schedule["migration_errors"]
def test_match_schedule_case_insensitive():
schedule = {
"id": "s1",
"name": "Primary Schedule",
"timezone": "UTC",
"rotations": [],
}
oncall_schedules = [
{"id": "os1", "name": "primary SCHEDULE"},
{"id": "os2", "name": "Secondary Schedule"},
]
user_id_map = {}
match_schedule(schedule, oncall_schedules, user_id_map)
assert schedule["oncall_schedule"]["id"] == "os1"
assert not schedule["migration_errors"]
def test_match_users_for_schedule():
schedule = {
"id": "s1",
"name": "Primary Schedule",
"rotations": [
{
"participants": [
{"type": "user", "id": "u1"},
{"type": "user", "id": "u2"},
],
}
],
}
users = [
{"id": "u1", "oncall_user": {"id": "ou1"}},
{"id": "u2", "oncall_user": None},
{"id": "u3", "oncall_user": {"id": "ou3"}},
]
match_users_for_schedule(schedule, users)
assert len(schedule["matched_users"]) == 1
assert schedule["matched_users"][0]["id"] == "u1"
@patch("lib.opsgenie.resources.schedules.OnCallAPIClient")
def test_migrate_schedule(mock_client):
# Mock OnCall API responses
mock_client.create.side_effect = [
{"id": "or1"}, # First rotation
{"id": "or2"}, # Second rotation
{"id": "os1", "name": "Primary Schedule"}, # Schedule creation
]
schedule = {
"id": "s1",
"name": "Primary Schedule",
"timezone": "UTC",
"rotations": [
{
"name": "Daily Rotation",
"type": "daily",
"length": 1,
"participants": [{"type": "user", "id": "u1"}],
"startDate": "2024-01-01T00:00:00Z",
"enabled": True,
},
{
"name": "Weekly Rotation",
"type": "weekly",
"length": 1,
"participants": [{"type": "user", "id": "u2"}],
"startDate": "2024-01-01T00:00:00Z",
"enabled": True,
"timeRestriction": {
"type": "weekday-and-time-of-day",
"restrictions": [
{
"startDay": "MONDAY",
"endDay": "FRIDAY",
}
],
},
},
],
"oncall_schedule": {"id": "os_old"},
}
user_id_map = {"u1": "ou1", "u2": "ou2"}
migrate_schedule(schedule, user_id_map)
# Verify schedule creation
mock_client.delete.assert_called_once_with("schedules/os_old")
# Verify shift creation calls
mock_client.create.assert_any_call(
"on_call_shifts",
{
"name": "Daily Rotation",
"type": "rolling_users",
"time_zone": "UTC",
"team_id": None,
"level": 1,
"start": "2024-01-01T00:00:00",
"duration": 86400, # 1 day in seconds
"frequency": "daily",
"interval": 1,
"rolling_users": [["ou1"]],
"start_rotation_from_user_index": 0,
"week_start": "MO",
"source": 0,
},
)
# Verify schedule creation with shift IDs
mock_client.create.assert_called_with(
"schedules",
{
"name": "Primary Schedule",
"type": "web",
"team_id": None,
"time_zone": "UTC",
"shifts": ["or1", "or2"],
},
)
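The expected shift payload shows rotation type and length converted into a duration in seconds (daily, length 1 becomes 86400). A small helper expressing that conversion; the daily and weekly entries are implied by the test, the hourly entry is an assumption:

```python
# Seconds per rotation unit; "daily" is implied by the expected payload
# above, "hourly" and "weekly" are assumptions.
ROTATION_UNIT_SECONDS = {
    "hourly": 3600,
    "daily": 86400,
    "weekly": 604800,
}


def rotation_duration(rotation_type: str, length: int) -> int:
    """Duration of one shift in seconds for a rotation of the given type and length."""
    return ROTATION_UNIT_SECONDS[rotation_type] * length
```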


@@ -0,0 +1,105 @@
from unittest.mock import patch
from lib.opsgenie.resources.users import filter_users
@patch("lib.opsgenie.resources.users.OPSGENIE_FILTER_USERS", None)
@patch("lib.opsgenie.resources.users.OPSGENIE_FILTER_TEAM", None)
def test_filter_users_no_filters():
"""Test that filter_users returns all users when no filters are set."""
users = [
{"id": "u1", "teams": [{"id": "t1"}, {"id": "t2"}]},
{"id": "u2", "teams": [{"id": "t3"}]},
]
filtered = filter_users(users)
assert filtered == users
@patch("lib.opsgenie.resources.users.OPSGENIE_FILTER_USERS", None)
@patch("lib.opsgenie.resources.users.OPSGENIE_FILTER_TEAM", "t1")
def test_filter_users_by_team():
"""Test filtering users by team ID."""
users = [
{"id": "u1", "teams": [{"id": "t1"}, {"id": "t2"}]},
{"id": "u2", "teams": [{"id": "t3"}]},
{"id": "u3", "teams": [{"id": "t1"}, {"id": "t3"}]},
]
filtered = filter_users(users)
assert len(filtered) == 2
assert filtered[0]["id"] == "u1"
assert filtered[1]["id"] == "u3"
@patch("lib.opsgenie.resources.users.OPSGENIE_FILTER_USERS", ["u1", "u3"])
@patch("lib.opsgenie.resources.users.OPSGENIE_FILTER_TEAM", None)
def test_filter_users_by_user_ids():
"""Test filtering users by specific user IDs."""
users = [
{"id": "u1", "teams": [{"id": "t1"}]},
{"id": "u2", "teams": [{"id": "t2"}]},
{"id": "u3", "teams": [{"id": "t3"}]},
]
filtered = filter_users(users)
assert len(filtered) == 2
assert filtered[0]["id"] == "u1"
assert filtered[1]["id"] == "u3"
@patch("lib.opsgenie.resources.users.OPSGENIE_FILTER_USERS", ["u1", "u4"])
@patch("lib.opsgenie.resources.users.OPSGENIE_FILTER_TEAM", "t1")
def test_filter_users_by_team_and_user_ids():
"""Test filtering users by both team ID and user IDs."""
users = [
{"id": "u1", "teams": [{"id": "t1"}]}, # Matches both filters
{"id": "u2", "teams": [{"id": "t1"}]}, # Matches team only
{"id": "u3", "teams": [{"id": "t2"}]}, # Matches neither
{"id": "u4", "teams": [{"id": "t2"}]}, # Matches user ID only
]
filtered = filter_users(users)
assert len(filtered) == 1
assert filtered[0]["id"] == "u1" # Only user matching both filters
@patch("lib.opsgenie.resources.users.OPSGENIE_FILTER_USERS", ["u1"])
@patch("lib.opsgenie.resources.users.OPSGENIE_FILTER_TEAM", "t1")
def test_filter_users_empty_list():
"""Test filtering an empty user list."""
filtered = filter_users([])
assert filtered == []
@patch("lib.opsgenie.resources.users.OPSGENIE_FILTER_USERS", None)
@patch("lib.opsgenie.resources.users.OPSGENIE_FILTER_TEAM", "t3")
def test_filter_users_no_matching_team():
"""Test filtering when no users match the team filter."""
users = [
{"id": "u1", "teams": [{"id": "t1"}]},
{"id": "u2", "teams": [{"id": "t2"}]},
]
filtered = filter_users(users)
assert filtered == []
@patch("lib.opsgenie.resources.users.OPSGENIE_FILTER_USERS", ["u3", "u4"])
@patch("lib.opsgenie.resources.users.OPSGENIE_FILTER_TEAM", None)
def test_filter_users_no_matching_user_ids():
"""Test filtering when no users match the user ID filter."""
users = [
{"id": "u1", "teams": [{"id": "t1"}]},
{"id": "u2", "teams": [{"id": "t2"}]},
]
filtered = filter_users(users)
assert filtered == []
@patch("lib.opsgenie.resources.users.OPSGENIE_FILTER_USERS", None)
@patch("lib.opsgenie.resources.users.OPSGENIE_FILTER_TEAM", "t1")
def test_filter_users_with_empty_teams():
"""Test filtering users that have no teams."""
users = [
{"id": "u1", "teams": []},
{"id": "u2", "teams": [{"id": "t1"}]},
]
filtered = filter_users(users)
assert len(filtered) == 1
assert filtered[0]["id"] == "u2"
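These cases pin down AND semantics for OpsGenie user filtering: with both filters set, a user must appear in `OPSGENIE_FILTER_USERS` and belong to `OPSGENIE_FILTER_TEAM`. A sketch with the filters as parameters (assumed shape, not the actual implementation):

```python
from typing import Any, Dict, List, Optional


def filter_users(
    users: List[Dict[str, Any]],
    user_ids: Optional[List[str]] = None,
    team_id: Optional[str] = None,
) -> List[Dict[str, Any]]:
    """Keep users that satisfy every active filter (AND semantics)."""
    result = []
    for user in users:
        if user_ids and user["id"] not in user_ids:
            continue
        if team_id and team_id not in [t["id"] for t in user.get("teams", [])]:
            continue
        result.append(user)
    return result
```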


@@ -0,0 +1,158 @@
from lib.opsgenie.report import (
escalation_policy_report,
format_escalation_policy,
format_integration,
format_schedule,
format_user,
integration_report,
schedule_report,
user_report,
)
def test_format_user():
user = {
"fullName": "John Doe",
"username": "john.doe@example.com",
}
assert format_user(user) == "John Doe (john.doe@example.com)"
def test_format_schedule():
schedule = {
"name": "Primary Schedule",
}
assert format_schedule(schedule) == "Primary Schedule"
def test_format_escalation_policy():
policy = {
"name": "Critical Alerts",
"ownerTeam": {
"name": "Team A",
},
}
assert format_escalation_policy(policy) == "Team A - Critical Alerts"
def test_format_integration():
integration = {
"name": "Prometheus Alerts",
"type": "Prometheus",
}
assert format_integration(integration) == "Prometheus Alerts (Prometheus)"
def test_user_report():
users = [
{
"fullName": "John Doe",
"username": "john.doe@example.com",
"oncall_user": {
"notification_rules": [],
},
},
{
"fullName": "Jane Smith",
"username": "jane.smith@example.com",
"oncall_user": {
"notification_rules": [{"id": "nr1"}],
},
},
{
"fullName": "Bob Wilson",
"username": "bob.wilson@example.com",
"oncall_user": None,
},
]
report = user_report(users)
assert "✅ John Doe (john.doe@example.com)" in report
assert (
"⚠️ Jane Smith (jane.smith@example.com) (existing notification rules will be preserved)"
in report
)
assert (
"❌ Bob Wilson (bob.wilson@example.com) — no Grafana OnCall user found with this email"
in report
)
def test_schedule_report():
schedules = [
{
"name": "Primary Schedule",
"migration_errors": [],
"oncall_schedule": None,
},
{
"name": "Secondary Schedule",
"migration_errors": [],
"oncall_schedule": {"id": "os1"},
},
{
"name": "Broken Schedule",
"migration_errors": ["schedule references unmatched users"],
},
]
report = schedule_report(schedules)
assert "✅ Primary Schedule" in report
assert "⚠️ Secondary Schedule (existing schedule will be deleted)" in report
assert "❌ Broken Schedule — schedule references unmatched users" in report
def test_escalation_policy_report():
policies = [
{
"name": "Critical Alerts",
"oncall_escalation_chain": None,
"ownerTeam": {
"name": "Team A",
},
},
{
"name": "Non-Critical Alerts",
"oncall_escalation_chain": {"id": "oc1"},
"ownerTeam": {
"name": "Team B",
},
},
]
report = escalation_policy_report(policies)
assert "✅ Team A - Critical Alerts" in report
assert (
"⚠️ Team B - Non-Critical Alerts (existing escalation chain will be deleted)"
in report
)
def test_integration_report():
integrations = [
{
"name": "Prometheus Alerts",
"type": "Prometheus",
"oncall_integration": None,
"oncall_type": "alertmanager",
},
{
"name": "Datadog Alerts",
"type": "Datadog",
"oncall_integration": {"id": "oi1"},
"oncall_type": "datadog",
},
{
"name": "Custom Integration",
"type": "Custom",
"oncall_integration": None,
"oncall_type": None,
},
]
report = integration_report(integrations)
assert "✅ Prometheus Alerts (Prometheus)" in report
assert (
"⚠️ Datadog Alerts (Datadog) (existing integration will be deleted)" in report
)
assert "❌ Custom Integration (Custom) — unsupported integration type" in report


@@ -0,0 +1,130 @@
from unittest.mock import patch
import pytest
from lib.pagerduty.resources.escalation_policies import (
filter_escalation_policies,
match_escalation_policy,
)
@pytest.fixture
def mock_escalation_policy():
return {
"id": "POLICY1",
"name": "Test Policy",
"teams": [{"summary": "Team 1"}],
"escalation_rules": [
{
"targets": [
{"type": "user", "id": "USER1"},
{"type": "user", "id": "USER2"},
],
},
],
}
@patch("lib.pagerduty.resources.escalation_policies.PAGERDUTY_FILTER_TEAM", "Team 1")
def test_filter_escalation_policies_by_team(mock_escalation_policy):
policies = [
mock_escalation_policy,
{**mock_escalation_policy, "teams": [{"summary": "Team 2"}]},
]
filtered = filter_escalation_policies(policies)
assert len(filtered) == 1
assert filtered[0]["id"] == "POLICY1"
@patch("lib.pagerduty.resources.escalation_policies.PAGERDUTY_FILTER_USERS", ["USER1"])
def test_filter_escalation_policies_by_users(mock_escalation_policy):
policies = [
mock_escalation_policy,
{
**mock_escalation_policy,
"escalation_rules": [
{
"targets": [
{"type": "user", "id": "USER3"},
{"type": "user", "id": "USER4"},
]
}
],
},
]
filtered = filter_escalation_policies(policies)
assert len(filtered) == 1
assert filtered[0]["id"] == "POLICY1"
@patch(
"lib.pagerduty.resources.escalation_policies.PAGERDUTY_FILTER_ESCALATION_POLICY_REGEX",
"^Test",
)
def test_filter_escalation_policies_by_regex(mock_escalation_policy):
policies = [
mock_escalation_policy,
{**mock_escalation_policy, "name": "Another Policy"},
]
filtered = filter_escalation_policies(policies)
assert len(filtered) == 1
assert filtered[0]["id"] == "POLICY1"
@patch("lib.pagerduty.resources.escalation_policies.PAGERDUTY_FILTER_TEAM", "Team 1")
@patch("lib.pagerduty.resources.escalation_policies.PAGERDUTY_FILTER_USERS", ["USER3"])
def test_filter_escalation_policies_with_multiple_filters_or_logic(
mock_escalation_policy,
):
"""Test that OR logic is applied between filters - a policy matching any filter is included"""
policies = [
mock_escalation_policy, # Has Team 1 but not USER3
{
"id": "POLICY2",
"name": "Test Policy 2",
"teams": [{"summary": "Team 2"}], # Not Team 1
"escalation_rules": [
{
"targets": [
{"type": "user", "id": "USER3"}, # Has USER3
]
}
],
},
{
"id": "POLICY3",
"name": "Test Policy 3",
"teams": [{"summary": "Team 3"}], # Not Team 1
"escalation_rules": [
{
"targets": [
{"type": "user", "id": "USER4"}, # Not USER3
]
}
],
},
]
filtered = filter_escalation_policies(policies)
# POLICY1 matches team filter, POLICY2 matches user filter, POLICY3 matches neither
assert len(filtered) == 2
assert {p["id"] for p in filtered} == {"POLICY1", "POLICY2"}
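In contrast to the OpsGenie filters, the PagerDuty filters above use OR semantics: a policy matching any active filter is kept, and with no filters everything passes through. A sketch of that logic (filters as parameters; an assumed shape, not the actual implementation):

```python
import re
from typing import Any, Dict, List, Optional


def filter_escalation_policies(
    policies: List[Dict[str, Any]],
    team: Optional[str] = None,
    user_ids: Optional[List[str]] = None,
    name_regex: Optional[str] = None,
) -> List[Dict[str, Any]]:
    """Keep policies matching at least one active filter (OR semantics)."""
    if not (team or user_ids or name_regex):
        return policies  # no filters configured, keep everything

    def matches(policy: Dict[str, Any]) -> bool:
        if team and any(t["summary"] == team for t in policy.get("teams", [])):
            return True
        if user_ids:
            targets = [
                t["id"]
                for rule in policy.get("escalation_rules", [])
                for t in rule.get("targets", [])
                if t["type"] == "user"
            ]
            if any(u in targets for u in user_ids):
                return True
        if name_regex and re.match(name_regex, policy["name"]):
            return True
        return False

    return [p for p in policies if matches(p)]
```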
def test_match_escalation_policy_name_case_insensitive():
pd_escalation_policy = {"name": "Test"}
oncall_escalation_chains = [{"name": "test"}]
match_escalation_policy(pd_escalation_policy, oncall_escalation_chains)
assert (
pd_escalation_policy["oncall_escalation_chain"] == oncall_escalation_chains[0]
)
def test_match_escalation_policy_name_extra_spaces():
pd_escalation_policy = {"name": " test "}
oncall_escalation_chains = [{"name": "test"}]
match_escalation_policy(pd_escalation_policy, oncall_escalation_chains)
assert (
pd_escalation_policy["oncall_escalation_chain"] == oncall_escalation_chains[0]
)


@@ -0,0 +1,102 @@
from unittest.mock import patch
import pytest
from lib.pagerduty.resources.integrations import filter_integrations, match_integration
@pytest.fixture
def mock_integration():
return {
"id": "INTEGRATION1",
"name": "Test Integration",
"service": {
"name": "Service 1",
"teams": [{"summary": "Team 1"}],
},
}
@patch("lib.pagerduty.resources.integrations.PAGERDUTY_FILTER_TEAM", "Team 1")
def test_filter_integrations_by_team(mock_integration):
integrations = [
mock_integration,
{
**mock_integration,
"service": {
"name": "Service 1",
"teams": [{"summary": "Team 2"}],
},
},
]
filtered = filter_integrations(integrations)
assert len(filtered) == 1
assert filtered[0]["id"] == "INTEGRATION1"
@patch(
"lib.pagerduty.resources.integrations.PAGERDUTY_FILTER_INTEGRATION_REGEX",
"^Service 1 - Test",
)
def test_filter_integrations_by_regex(mock_integration):
integrations = [
mock_integration,
{
**mock_integration,
"service": {"name": "Service 2", "teams": [{"summary": "Team 1"}]},
},
]
filtered = filter_integrations(integrations)
assert len(filtered) == 1
assert filtered[0]["id"] == "INTEGRATION1"
@patch("lib.pagerduty.resources.integrations.PAGERDUTY_FILTER_TEAM", "Team 1")
@patch(
"lib.pagerduty.resources.integrations.PAGERDUTY_FILTER_INTEGRATION_REGEX",
"^Service 2 - Test",
)
def test_filter_integrations_with_multiple_filters_or_logic(mock_integration):
"""Test that OR logic is applied between filters - an integration matching any filter is included"""
integrations = [
mock_integration, # Has Team 1 but doesn't match regex
{
"id": "INTEGRATION2",
"name": "Test Integration",
"service": {
"name": "Service 2", # Matches regex
"teams": [{"summary": "Team 2"}], # Not Team 1
},
},
{
"id": "INTEGRATION3",
"name": "Test Integration",
"service": {
"name": "Service 3", # Doesn't match regex
"teams": [{"summary": "Team 3"}], # Not Team 1
},
},
]
filtered = filter_integrations(integrations)
# INTEGRATION1 matches team filter, INTEGRATION2 matches regex filter, INTEGRATION3 matches neither
assert len(filtered) == 2
assert {i["id"] for i in filtered} == {"INTEGRATION1", "INTEGRATION2"}
def test_match_integration_name_case_insensitive():
pd_integration = {"service": {"name": "Test service"}, "name": "test Integration"}
oncall_integrations = [{"name": "test Service - Test integration"}]
match_integration(pd_integration, oncall_integrations)
assert pd_integration["oncall_integration"] == oncall_integrations[0]
def test_match_integration_name_extra_spaces():
pd_integration = {
"service": {"name": " test service "},
"name": " test integration ",
}
oncall_integrations = [{"name": "test service - test integration"}]
match_integration(pd_integration, oncall_integrations)
assert pd_integration["oncall_integration"] == oncall_integrations[0]


@@ -1,6 +1,14 @@
import datetime
from unittest.mock import patch
from lib.pagerduty.resources.schedules import Restriction, Schedule
import pytest
from lib.pagerduty.resources.schedules import (
Restriction,
Schedule,
filter_schedules,
match_schedule,
)
user_id_map = {
"USER_ID_1": "USER_ID_1",
@@ -9,6 +17,100 @@ user_id_map = {
}
@pytest.fixture
def mock_schedule():
return {
"id": "SCHEDULE1",
"name": "Test Schedule",
"teams": [{"summary": "Team 1"}],
"schedule_layers": [
{
"users": [
{"user": {"id": "USER1"}},
{"user": {"id": "USER2"}},
],
},
],
}
@patch("lib.pagerduty.resources.schedules.PAGERDUTY_FILTER_TEAM", "Team 1")
def test_filter_schedules_by_team(mock_schedule):
schedules = [
mock_schedule,
{**mock_schedule, "teams": [{"summary": "Team 2"}]},
]
filtered = filter_schedules(schedules)
assert len(filtered) == 1
assert filtered[0]["id"] == "SCHEDULE1"
@patch("lib.pagerduty.resources.schedules.PAGERDUTY_FILTER_USERS", ["USER1"])
def test_filter_schedules_by_users(mock_schedule):
schedules = [
mock_schedule,
{
**mock_schedule,
"schedule_layers": [{"users": [{"user": {"id": "USER3"}}]}],
},
]
filtered = filter_schedules(schedules)
assert len(filtered) == 1
assert filtered[0]["id"] == "SCHEDULE1"
@patch("lib.pagerduty.resources.schedules.PAGERDUTY_FILTER_SCHEDULE_REGEX", "^Test")
def test_filter_schedules_by_regex(mock_schedule):
schedules = [
mock_schedule,
{**mock_schedule, "name": "Another Schedule"},
]
filtered = filter_schedules(schedules)
assert len(filtered) == 1
assert filtered[0]["id"] == "SCHEDULE1"
@patch("lib.pagerduty.resources.schedules.PAGERDUTY_FILTER_TEAM", "Team 1")
@patch("lib.pagerduty.resources.schedules.PAGERDUTY_FILTER_USERS", ["USER3"])
def test_filter_schedules_with_multiple_filters_or_logic(mock_schedule):
"""Test that OR logic is applied between filters - a schedule matching any filter is included"""
schedules = [
mock_schedule, # Has Team 1 but not USER3
{
"id": "SCHEDULE2",
"name": "Test Schedule 2",
"teams": [{"summary": "Team 2"}], # Not Team 1
"schedule_layers": [{"users": [{"user": {"id": "USER3"}}]}], # Has USER3
},
{
"id": "SCHEDULE3",
"name": "Test Schedule 3",
"teams": [{"summary": "Team 3"}], # Not Team 1
"schedule_layers": [{"users": [{"user": {"id": "USER4"}}]}], # Not USER3
},
]
filtered = filter_schedules(schedules)
# SCHEDULE1 matches team filter, SCHEDULE2 matches user filter, SCHEDULE3 matches neither
assert len(filtered) == 2
assert {s["id"] for s in filtered} == {"SCHEDULE1", "SCHEDULE2"}
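As with integrations, the schedule tests encode OR logic between the team filter and the user filter. A minimal sketch of what these assertions imply (the real `filter_schedules` in `lib.pagerduty.resources.schedules` also honors `PAGERDUTY_FILTER_SCHEDULE_REGEX`, omitted here for brevity):

```python
# Assumed module-level config, normally read from environment variables.
PAGERDUTY_FILTER_TEAM = "Team 1"
PAGERDUTY_FILTER_USERS = ["USER3"]

def filter_schedules(schedules):
    """Keep a schedule if it matches ANY active filter (OR logic)."""
    if not PAGERDUTY_FILTER_TEAM and not PAGERDUTY_FILTER_USERS:
        return schedules  # no filters configured: keep everything
    filtered = []
    for schedule in schedules:
        teams = [t["summary"] for t in schedule.get("teams", [])]
        layer_users = {
            u["user"]["id"]
            for layer in schedule.get("schedule_layers", [])
            for u in layer.get("users", [])
        }
        if PAGERDUTY_FILTER_TEAM and PAGERDUTY_FILTER_TEAM in teams:
            filtered.append(schedule)
        elif PAGERDUTY_FILTER_USERS and layer_users & set(PAGERDUTY_FILTER_USERS):
            filtered.append(schedule)
    return filtered
```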
def test_match_schedule_name_case_insensitive():
pd_schedule = {"name": "Test"}
oncall_schedules = [{"name": "test"}]
match_schedule(pd_schedule, oncall_schedules, user_id_map={})
assert pd_schedule["oncall_schedule"] == oncall_schedules[0]
def test_match_schedule_name_extra_spaces():
pd_schedule = {"name": " test "}
oncall_schedules = [{"name": "test"}]
match_schedule(pd_schedule, oncall_schedules, user_id_map={})
assert pd_schedule["oncall_schedule"] == oncall_schedules[0]
def test_merge_restrictions():
restrictions = [
Restriction(
@@ -2166,23 +2268,23 @@ def test_overrides():
"time_zone": "Europe/London",
"overrides": [
{
"start": "2023-03-02T11:00:00",
"end": "2023-03-02T12:00:00",
"start": "2023-03-02T11:00:00Z",
"end": "2023-03-02T12:00:00Z",
"user": {"id": "USER_ID_1"},
},
{
"start": "2023-03-02T11:00:00+00:00",
"end": "2023-03-02T12:00:00+00:00",
"start": "2023-03-02T11:00:00Z",
"end": "2023-03-02T12:00:00Z",
"user": {"id": "USER_ID_1"},
},
{
"start": "2023-03-02T12:00:00+01:00",
"end": "2023-03-02T13:00:00+01:00",
"start": "2023-03-02T11:00:00Z",
"end": "2023-03-02T12:00:00Z",
"user": {"id": "USER_ID_1"},
},
{
"start": "2023-03-02T10:00:00-01:00",
"end": "2023-03-02T11:00:00-01:00",
"start": "2023-03-02T11:00:00Z",
"end": "2023-03-02T12:00:00Z",
"user": {"id": "USER_ID_1"},
},
],
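The updated fixture expresses every override start/end as canonical Z-suffixed UTC; the offset forms in the old fixture (`+00:00`, `+01:00`, `-01:00`) all denote the same instants, `2023-03-02T11:00:00Z`/`2023-03-02T12:00:00Z`. A small helper illustrating that normalization (hypothetical; not part of the migrator itself):

```python
from datetime import datetime, timezone

def to_utc_z(iso_ts: str) -> str:
    """Normalize an ISO-8601 timestamp to canonical 'Z'-suffixed UTC.
    Naive timestamps are assumed to already be UTC."""
    # fromisoformat only accepts a trailing 'Z' on Python 3.11+;
    # replace it here so the sketch also runs on earlier versions.
    dt = datetime.fromisoformat(iso_ts.replace("Z", "+00:00"))
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
```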


@@ -0,0 +1,344 @@
from unittest.mock import MagicMock, Mock, patch
import pytest
from lib.pagerduty.resources.services import (
BusinessService,
TechnicalService,
_transform_service,
_validate_component,
fetch_service_dependencies,
fetch_services,
filter_services,
get_all_technical_services_with_metadata,
)
@pytest.fixture
def mock_session():
"""Create a mock API session."""
return MagicMock()
@pytest.fixture
def service_data():
"""Basic service data fixture."""
return {
"id": "SERVICE123",
"name": "Test Service",
"description": "A test service",
"status": "active",
"created_at": "2023-01-01T00:00:00Z",
"updated_at": "2023-01-02T00:00:00Z",
"html_url": "https://example.pagerduty.com/service/SERVICE123",
"self": "https://api.pagerduty.com/services/SERVICE123",
"escalation_policy": {"id": "EP123", "name": "Test Policy"},
"teams": [{"id": "TEAM1", "summary": "Team 1"}],
}
@pytest.fixture
def sample_services():
"""Sample service data for testing."""
return [
{
"id": "P123",
"name": "Production Service",
"type": "service",
"teams": [{"summary": "Platform Team"}],
"escalation_policy": {
"escalation_rules": [
{
"targets": [
{"type": "user", "id": "U123"},
{"type": "user", "id": "U456"},
]
}
]
},
},
{
"id": "P456",
"name": "Staging Service",
"type": "service",
"teams": [{"summary": "DevOps Team"}],
"escalation_policy": {
"escalation_rules": [{"targets": [{"type": "user", "id": "U789"}]}]
},
},
{
"id": "B123",
"name": "Business Service",
"type": "business_service",
"teams": [{"summary": "Platform Team"}],
},
]
@pytest.fixture
def mock_services():
"""Create mock services for dependency testing."""
service1 = TechnicalService({"id": "SERVICE1", "name": "Service 1"})
service2 = TechnicalService({"id": "SERVICE2", "name": "Service 2"})
return [service1, service2]
@pytest.fixture
def technical_service():
"""Create a mock technical service for testing."""
service = Mock(spec=TechnicalService)
service.name = "Test Service"
service.description = "A test service"
service.id = "P123456"
service.status = "active"
service.html_url = "https://pagerduty.com/services/P123456"
service.self_url = "https://api.pagerduty.com/services/P123456"
return service
@pytest.fixture
def business_service():
"""Create a mock business service for testing."""
service = Mock(spec=BusinessService)
service.name = "Test Business Service"
service.description = "A test business service"
service.id = "P789012"
service.html_url = "https://pagerduty.com/services/P789012"
service.self_url = "https://api.pagerduty.com/services/P789012"
return service
@patch("lib.pagerduty.resources.services.PAGERDUTY_FILTER_TEAM", "Platform Team")
def test_filter_services_by_team(sample_services):
"""Test filtering services by team."""
filtered = filter_services(sample_services)
assert len(filtered) == 2
assert all(
service["teams"][0]["summary"] == "Platform Team" for service in filtered
)
@patch("lib.pagerduty.resources.services.PAGERDUTY_FILTER_USERS", ["U123"])
def test_filter_services_by_users(sample_services):
"""Test filtering services by users in escalation policy."""
filtered = filter_services(sample_services)
# Should include both the matching technical service and the business service
assert len(filtered) == 2
# Verify the technical service with matching user is included
assert any(service["id"] == "P123" for service in filtered)
# Verify the business service is included (not filtered by users)
assert any(service["type"] == "business_service" for service in filtered)
@patch("lib.pagerduty.resources.services.PAGERDUTY_FILTER_SERVICE_REGEX", "Prod.*")
def test_filter_services_by_regex(sample_services):
"""Test filtering services by name regex pattern."""
filtered = filter_services(sample_services)
assert len(filtered) == 1
assert filtered[0]["name"] == "Production Service"
@patch("lib.pagerduty.resources.services.PAGERDUTY_FILTER_TEAM", "")
@patch("lib.pagerduty.resources.services.PAGERDUTY_FILTER_USERS", [])
def test_filter_services_no_filters(sample_services):
"""Test that no filters returns all services."""
filtered = filter_services(sample_services)
assert len(filtered) == len(sample_services)
@patch("lib.pagerduty.resources.services.PAGERDUTY_FILTER_TEAM", "Platform Team")
@patch("lib.pagerduty.resources.services.PAGERDUTY_FILTER_USERS", ["U123"])
@patch("lib.pagerduty.resources.services.PAGERDUTY_FILTER_SERVICE_REGEX", "Prod.*")
def test_filter_services_multiple_filters(sample_services):
"""Test applying multiple filters together."""
filtered = filter_services(sample_services)
assert len(filtered) == 1
assert filtered[0]["id"] == "P123"
assert filtered[0]["teams"][0]["summary"] == "Platform Team"
assert filtered[0]["name"] == "Production Service"
@patch("lib.pagerduty.resources.services.PAGERDUTY_FILTER_USERS", ["U123"])
def test_filter_services_business_services(sample_services):
"""Test that business services are not filtered by user assignments."""
filtered = filter_services(sample_services)
assert len(filtered) == 2
assert any(service["type"] == "business_service" for service in filtered)
def test_technical_service_init(service_data):
"""Test TechnicalService initialization with basic fields."""
service = TechnicalService(service_data)
assert service.id == "SERVICE123"
assert service.name == "Test Service"
assert service.description == "A test service"
assert service.status == "active"
assert service.created_at == "2023-01-01T00:00:00Z"
assert service.updated_at == "2023-01-02T00:00:00Z"
assert service.html_url == "https://example.pagerduty.com/service/SERVICE123"
assert service.self_url == "https://api.pagerduty.com/services/SERVICE123"
assert service.escalation_policy == {"id": "EP123", "name": "Test Policy"}
assert service.teams == [{"id": "TEAM1", "summary": "Team 1"}]
assert service.dependencies == []
assert service.raw_data == service_data
def test_technical_service_str():
"""Test string representation of the service."""
service = TechnicalService({"id": "SERVICE123", "name": "Test Service"})
assert str(service) == "TechnicalService(id=SERVICE123, name=Test Service)"
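Taken together, the two tests above fully specify the attribute mapping and string form of the model. A sketch of a `TechnicalService` consistent with them (the real class in `lib.pagerduty.resources.services` may carry extra behavior; note the `self` key maps to `self_url`, since `self` is reserved in Python):

```python
class TechnicalService:
    """Lightweight wrapper around a raw PagerDuty service payload."""

    def __init__(self, data):
        self.id = data.get("id")
        self.name = data.get("name")
        self.description = data.get("description")
        self.status = data.get("status")
        self.created_at = data.get("created_at")
        self.updated_at = data.get("updated_at")
        self.html_url = data.get("html_url")
        self.self_url = data.get("self")  # 'self' key from the API payload
        self.escalation_policy = data.get("escalation_policy")
        self.teams = data.get("teams", [])
        self.dependencies = []  # filled in later by fetch_service_dependencies
        self.raw_data = data

    def __str__(self):
        return f"TechnicalService(id={self.id}, name={self.name})"
```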
def test_fetch_services(mock_session):
"""Test fetching services from PagerDuty API."""
mock_session.list_all.return_value = [
{"id": "SERVICE1", "name": "Service 1"},
{"id": "SERVICE2", "name": "Service 2"},
]
services = fetch_services(mock_session)
# Verify API call
mock_session.list_all.assert_called_once_with(
"services", params={"include[]": ["integrations", "teams"]}
)
# Verify results
assert len(services) == 2
assert isinstance(services[0], TechnicalService)
assert services[0].id == "SERVICE1"
assert services[1].id == "SERVICE2"
def test_fetch_services_without_includes(mock_session):
"""Test fetching services without including integrations or teams."""
mock_session.list_all.return_value = [{"id": "SERVICE1"}]
services = fetch_services(
mock_session, include_integrations=False, include_teams=False
)
# Verify API call with no includes
mock_session.list_all.assert_called_once_with("services", params={})
# Verify results
assert len(services) == 1
assert isinstance(services[0], TechnicalService)
def test_fetch_service_dependencies(mock_session, mock_services):
"""Test fetching service dependencies."""
# Mock the dependencies API call - only mock for the first service to simplify
mock_session.get.side_effect = [
{
"relationships": [{"supporting_service": {"id": "SERVICE2"}}]
}, # First call returns SERVICE2 as a dependency
{"relationships": []}, # Second call returns no dependencies
]
fetch_service_dependencies(mock_session, mock_services)
# Verify API calls - should be called for each service
assert mock_session.get.call_count == 2
mock_session.get.assert_any_call("service_dependencies/technical_services/SERVICE1")
mock_session.get.assert_any_call("service_dependencies/technical_services/SERVICE2")
# Verify that service1 now has service2 as a dependency
assert len(mock_services[0].dependencies) == 1
assert mock_services[0].dependencies[0] == mock_services[1]
# Service2 should have no dependencies since the mock returned empty list
assert len(mock_services[1].dependencies) == 0
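The dependency test above implies a per-service lookup against PagerDuty's `service_dependencies/technical_services/<id>` endpoint, resolving `supporting_service` IDs back to the already-fetched objects. A sketch under those assumptions (the real function may add error handling or pagination):

```python
def fetch_service_dependencies(session, services):
    """Resolve supporting-service links between already-fetched services."""
    by_id = {service.id: service for service in services}
    for service in services:
        response = session.get(
            f"service_dependencies/technical_services/{service.id}"
        )
        for rel in response.get("relationships", []):
            dep = by_id.get(rel["supporting_service"]["id"])
            if dep is not None:
                service.dependencies.append(dep)
```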
@patch("lib.pagerduty.resources.services.fetch_service_dependencies")
@patch("lib.pagerduty.resources.services.fetch_services")
def test_get_all_technical_services_with_metadata(mock_fetch_services, mock_fetch_deps):
"""Test getting all services with their metadata."""
mock_session = MagicMock()
mock_services = [MagicMock(), MagicMock()]
mock_fetch_services.return_value = mock_services
result = get_all_technical_services_with_metadata(mock_session)
# Verify calls
mock_fetch_services.assert_called_once_with(mock_session)
mock_fetch_deps.assert_called_once_with(mock_session, mock_services)
# Verify result
assert result == mock_services
def test_transform_technical_service(technical_service):
"""Test transforming a technical service."""
component = _transform_service(technical_service)
# Verify the component structure
assert component["apiVersion"] == "servicemodel.ext.grafana.com/v1alpha1"
assert component["kind"] == "Component"
assert component["metadata"]["name"] == "test-service"
assert component["spec"]["type"] == "service"
assert component["spec"]["description"] == "A test service"
# Verify annotations
annotations = component["metadata"]["annotations"]
assert annotations["pagerduty.com/service-id"] == "P123456"
assert annotations["pagerduty.com/status"] == "active"
assert (
annotations["pagerduty.com/html-url"]
== "https://pagerduty.com/services/P123456"
)
assert (
annotations["pagerduty.com/api-url"]
== "https://api.pagerduty.com/services/P123456"
)
def test_transform_business_service(business_service):
"""Test transforming a business service."""
component = _transform_service(business_service)
# Verify the component structure
assert component["apiVersion"] == "servicemodel.ext.grafana.com/v1alpha1"
assert component["kind"] == "Component"
assert component["metadata"]["name"] == "test-business-service"
assert component["spec"]["type"] == "business_service"
assert component["spec"]["description"] == "A test business service"
# Verify annotations
annotations = component["metadata"]["annotations"]
assert annotations["pagerduty.com/service-id"] == "P789012"
assert (
annotations["pagerduty.com/html-url"]
== "https://pagerduty.com/services/P789012"
)
assert (
annotations["pagerduty.com/api-url"]
== "https://api.pagerduty.com/services/P789012"
)
def test_validate_component():
"""Test component validation."""
# Test valid component
valid_component = {
"apiVersion": "servicemodel.ext.grafana.com/v1alpha1",
"kind": "Component",
"metadata": {
"name": "test-service",
"annotations": {
"pagerduty.com/service-id": "P123456",
"pagerduty.com/status": "active",
},
},
"spec": {"type": "service", "description": "A test service"},
}
errors = _validate_component(valid_component)
assert errors == []
# Test missing required field
invalid_component = valid_component.copy()
del invalid_component["spec"]
errors = _validate_component(invalid_component)
assert "Missing required field: spec" in errors


@@ -0,0 +1,30 @@
from unittest.mock import patch
import pytest
from lib.pagerduty.resources.users import filter_users
@pytest.fixture
def users():
return [
{"id": "USER1", "name": "User 1"},
{"id": "USER2", "name": "User 2"},
{"id": "USER3", "name": "User 3"},
]
@patch("lib.pagerduty.resources.users.PAGERDUTY_FILTER_USERS", ["USER1", "USER3"])
def test_filter_users(users):
"""Test filtering users by ID when PAGERDUTY_FILTER_USERS is set."""
filtered = filter_users(users)
assert len(filtered) == 2
assert {u["id"] for u in filtered} == {"USER1", "USER3"}
@patch("lib.pagerduty.resources.users.PAGERDUTY_FILTER_USERS", [])
def test_filter_users_no_filter(users):
"""Test that all users are kept when PAGERDUTY_FILTER_USERS is empty."""
filtered = filter_users(users)
assert len(filtered) == 3
assert {u["id"] for u in filtered} == {"USER1", "USER2", "USER3"}
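These two tests pin the whole contract of user filtering: an explicit ID list keeps only those users, and an empty list disables the filter. The implied implementation is a one-liner (sketch; the real one lives in `lib.pagerduty.resources.users`):

```python
# Assumed module-level config, normally read from an environment variable.
PAGERDUTY_FILTER_USERS = ["USER1", "USER3"]

def filter_users(users):
    """Keep only users whose ID is listed; an empty list disables the filter."""
    if not PAGERDUTY_FILTER_USERS:
        return users
    return [u for u in users if u["id"] in PAGERDUTY_FILTER_USERS]
```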


@@ -1,38 +0,0 @@
from lib.common.resources.users import match_user
from lib.pagerduty.resources.escalation_policies import match_escalation_policy
from lib.pagerduty.resources.integrations import match_integration
from lib.pagerduty.resources.schedules import match_schedule
def test_match_user_email_case_insensitive():
pd_user = {"email": "test@test.com"}
oncall_users = [{"email": "TEST@TEST.COM"}]
match_user(pd_user, oncall_users)
assert pd_user["oncall_user"] == oncall_users[0]
def test_match_schedule_name_case_insensitive():
pd_schedule = {"name": "Test"}
oncall_schedules = [{"name": "test"}]
match_schedule(pd_schedule, oncall_schedules, user_id_map={})
assert pd_schedule["oncall_schedule"] == oncall_schedules[0]
def test_match_escalation_policy_name_case_insensitive():
pd_escalation_policy = {"name": "Test"}
oncall_escalation_chains = [{"name": "test"}]
match_escalation_policy(pd_escalation_policy, oncall_escalation_chains)
assert (
pd_escalation_policy["oncall_escalation_chain"] == oncall_escalation_chains[0]
)
def test_match_integration_name_case_insensitive():
pd_integration = {"service": {"name": "Test service"}, "name": "test Integration"}
oncall_integrations = [{"name": "test Service - Test integration"}]
match_integration(pd_integration, oncall_integrations)
assert pd_integration["oncall_integration"] == oncall_integrations[0]


@@ -1,32 +0,0 @@
from lib.pagerduty.resources.escalation_policies import match_escalation_policy
from lib.pagerduty.resources.integrations import match_integration
from lib.pagerduty.resources.schedules import match_schedule
def test_match_schedule_name_extra_spaces():
pd_schedule = {"name": " test "}
oncall_schedules = [{"name": "test"}]
match_schedule(pd_schedule, oncall_schedules, user_id_map={})
assert pd_schedule["oncall_schedule"] == oncall_schedules[0]
def test_match_escalation_policy_name_extra_spaces():
pd_escalation_policy = {"name": " test "}
oncall_escalation_chains = [{"name": "test"}]
match_escalation_policy(pd_escalation_policy, oncall_escalation_chains)
assert (
pd_escalation_policy["oncall_escalation_chain"] == oncall_escalation_chains[0]
)
def test_match_integration_name_extra_spaces():
pd_integration = {
"service": {"name": " test service "},
"name": " test integration ",
}
oncall_integrations = [{"name": "test service - test integration"}]
match_integration(pd_integration, oncall_integrations)
assert pd_integration["oncall_integration"] == oncall_integrations[0]


@@ -1,12 +1,6 @@
from unittest.mock import call, patch
from lib.pagerduty.migrate import (
filter_escalation_policies,
filter_integrations,
filter_schedules,
filter_users,
migrate,
)
from lib.pagerduty.migrate import migrate
@patch("lib.pagerduty.migrate.MIGRATE_USERS", False)
@@ -35,13 +29,19 @@ def test_users_are_skipped_when_migrate_users_is_false(
mock_oncall_client.list_users_with_notification_rules.assert_not_called()
@patch("lib.pagerduty.migrate.MIGRATE_USERS", True)
# Need to mock PAGERDUTY_FILTER_USERS in both spots because it's
# used in both migrate.py and users.py (and filter_users is imported from users.py)
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_USERS", ["USER1", "USER3"])
@patch("lib.pagerduty.resources.users.PAGERDUTY_FILTER_USERS", ["USER1", "USER3"])
@patch("lib.pagerduty.migrate.MIGRATE_USERS", True)
@patch("lib.pagerduty.migrate.MODE", "migrate") # Skip report generation
@patch("lib.pagerduty.migrate.APISession")
@patch("lib.pagerduty.migrate.OnCallAPIClient")
@patch("lib.pagerduty.migrate.match_user")
def test_only_specified_users_are_processed_when_filter_users_is_set(
MockOnCallAPIClient, MockAPISession
mock_match_user,
MockOnCallAPIClient,
MockAPISession,
):
mock_session = MockAPISession.return_value
@@ -83,282 +83,18 @@ def test_only_specified_users_are_processed_when_filter_users_is_set(
]
mock_session.jget.return_value = {"overrides": []}
# Mock the user matching function to set oncall_user
with patch("lib.pagerduty.migrate.match_user") as mock_match_user:
def set_oncall_user(user, _):
# Just leave oncall_user as it is (None)
pass
def set_oncall_user(user, _):
# Just leave oncall_user as it is (None)
pass
mock_match_user.side_effect = set_oncall_user
mock_match_user.side_effect = set_oncall_user
migrate()
# Run migrate
migrate()
# Check that match_user was only called for USER1 and USER3
assert mock_match_user.call_count == 2
user_ids = [
call_args[0][0]["id"] for call_args in mock_match_user.call_args_list
]
assert set(user_ids) == {"USER1", "USER3"}
class TestPagerDutyFiltering:
def setup_method(self):
self.mock_schedule = {
"id": "SCHEDULE1",
"name": "Test Schedule",
"teams": [{"summary": "Team 1"}],
"schedule_layers": [
{
"users": [
{"user": {"id": "USER1"}},
{"user": {"id": "USER2"}},
]
}
],
}
self.mock_policy = {
"id": "POLICY1",
"name": "Test Policy",
"teams": [{"summary": "Team 1"}],
"escalation_rules": [
{
"targets": [
{"type": "user", "id": "USER1"},
{"type": "user", "id": "USER2"},
]
}
],
}
self.mock_integration = {
"id": "INTEGRATION1",
"name": "Test Integration",
"service": {
"name": "Service 1",
"teams": [{"summary": "Team 1"}],
},
}
self.users = [
{"id": "USER1", "name": "User 1"},
{"id": "USER2", "name": "User 2"},
{"id": "USER3", "name": "User 3"},
]
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_USERS", ["USER1", "USER3"])
def test_filter_users(self):
"""Test filtering users by ID when PAGERDUTY_FILTER_USERS is set."""
filtered = filter_users(self.users)
assert len(filtered) == 2
assert {u["id"] for u in filtered} == {"USER1", "USER3"}
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_USERS", [])
def test_filter_users_no_filter(self):
"""Test that all users are kept when PAGERDUTY_FILTER_USERS is empty."""
filtered = filter_users(self.users)
assert len(filtered) == 3
assert {u["id"] for u in filtered} == {"USER1", "USER2", "USER3"}
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_TEAM", "Team 1")
def test_filter_schedules_by_team(self):
schedules = [
self.mock_schedule,
{**self.mock_schedule, "teams": [{"summary": "Team 2"}]},
]
filtered = filter_schedules(schedules)
assert len(filtered) == 1
assert filtered[0]["id"] == "SCHEDULE1"
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_USERS", ["USER1"])
def test_filter_schedules_by_users(self):
schedules = [
self.mock_schedule,
{
**self.mock_schedule,
"schedule_layers": [{"users": [{"user": {"id": "USER3"}}]}],
},
]
filtered = filter_schedules(schedules)
assert len(filtered) == 1
assert filtered[0]["id"] == "SCHEDULE1"
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_SCHEDULE_REGEX", "^Test")
def test_filter_schedules_by_regex(self):
schedules = [
self.mock_schedule,
{**self.mock_schedule, "name": "Another Schedule"},
]
filtered = filter_schedules(schedules)
assert len(filtered) == 1
assert filtered[0]["id"] == "SCHEDULE1"
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_TEAM", "Team 1")
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_USERS", ["USER3"])
def test_filter_schedules_with_multiple_filters_or_logic(self):
"""Test that OR logic is applied between filters - a schedule matching any filter is included"""
schedules = [
self.mock_schedule, # Has Team 1 but not USER3
{
"id": "SCHEDULE2",
"name": "Test Schedule 2",
"teams": [{"summary": "Team 2"}], # Not Team 1
"schedule_layers": [
{"users": [{"user": {"id": "USER3"}}]}
], # Has USER3
},
{
"id": "SCHEDULE3",
"name": "Test Schedule 3",
"teams": [{"summary": "Team 3"}], # Not Team 1
"schedule_layers": [
{"users": [{"user": {"id": "USER4"}}]}
], # Not USER3
},
]
filtered = filter_schedules(schedules)
# SCHEDULE1 matches team filter, SCHEDULE2 matches user filter, SCHEDULE3 matches neither
assert len(filtered) == 2
assert {s["id"] for s in filtered} == {"SCHEDULE1", "SCHEDULE2"}
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_TEAM", "Team 1")
def test_filter_escalation_policies_by_team(self):
policies = [
self.mock_policy,
{**self.mock_policy, "teams": [{"summary": "Team 2"}]},
]
filtered = filter_escalation_policies(policies)
assert len(filtered) == 1
assert filtered[0]["id"] == "POLICY1"
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_USERS", ["USER1"])
def test_filter_escalation_policies_by_users(self):
policies = [
self.mock_policy,
{
**self.mock_policy,
"escalation_rules": [
{
"targets": [
{"type": "user", "id": "USER3"},
{"type": "user", "id": "USER4"},
]
}
],
},
]
filtered = filter_escalation_policies(policies)
assert len(filtered) == 1
assert filtered[0]["id"] == "POLICY1"
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_ESCALATION_POLICY_REGEX", "^Test")
def test_filter_escalation_policies_by_regex(self):
policies = [
self.mock_policy,
{**self.mock_policy, "name": "Another Policy"},
]
filtered = filter_escalation_policies(policies)
assert len(filtered) == 1
assert filtered[0]["id"] == "POLICY1"
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_TEAM", "Team 1")
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_USERS", ["USER3"])
def test_filter_escalation_policies_with_multiple_filters_or_logic(self):
"""Test that OR logic is applied between filters - a policy matching any filter is included"""
policies = [
self.mock_policy, # Has Team 1 but not USER3
{
"id": "POLICY2",
"name": "Test Policy 2",
"teams": [{"summary": "Team 2"}], # Not Team 1
"escalation_rules": [
{
"targets": [
{"type": "user", "id": "USER3"}, # Has USER3
]
}
],
},
{
"id": "POLICY3",
"name": "Test Policy 3",
"teams": [{"summary": "Team 3"}], # Not Team 1
"escalation_rules": [
{
"targets": [
{"type": "user", "id": "USER4"}, # Not USER3
]
}
],
},
]
filtered = filter_escalation_policies(policies)
# POLICY1 matches team filter, POLICY2 matches user filter, POLICY3 matches neither
assert len(filtered) == 2
assert {p["id"] for p in filtered} == {"POLICY1", "POLICY2"}
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_TEAM", "Team 1")
def test_filter_integrations_by_team(self):
integrations = [
self.mock_integration,
{
**self.mock_integration,
"service": {
"name": "Service 1",
"teams": [{"summary": "Team 2"}],
},
},
]
filtered = filter_integrations(integrations)
assert len(filtered) == 1
assert filtered[0]["id"] == "INTEGRATION1"
@patch(
"lib.pagerduty.migrate.PAGERDUTY_FILTER_INTEGRATION_REGEX", "^Service 1 - Test"
)
def test_filter_integrations_by_regex(self):
integrations = [
self.mock_integration,
{
**self.mock_integration,
"service": {"name": "Service 2", "teams": [{"summary": "Team 1"}]},
},
]
filtered = filter_integrations(integrations)
assert len(filtered) == 1
assert filtered[0]["id"] == "INTEGRATION1"
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_TEAM", "Team 1")
@patch(
"lib.pagerduty.migrate.PAGERDUTY_FILTER_INTEGRATION_REGEX", "^Service 2 - Test"
)
def test_filter_integrations_with_multiple_filters_or_logic(self):
"""Test that OR logic is applied between filters - an integration matching any filter is included"""
integrations = [
self.mock_integration, # Has Team 1 but doesn't match regex
{
"id": "INTEGRATION2",
"name": "Test Integration",
"service": {
"name": "Service 2", # Matches regex
"teams": [{"summary": "Team 2"}], # Not Team 1
},
},
{
"id": "INTEGRATION3",
"name": "Test Integration",
"service": {
"name": "Service 3", # Doesn't match regex
"teams": [{"summary": "Team 3"}], # Not Team 1
},
},
]
filtered = filter_integrations(integrations)
# INTEGRATION1 matches team filter, INTEGRATION2 matches regex filter, INTEGRATION3 matches neither
assert len(filtered) == 2
assert {i["id"] for i in filtered} == {"INTEGRATION1", "INTEGRATION2"}
# Check that match_user was only called for USER1 and USER3
assert mock_match_user.call_count == 2
user_ids = [call_args[0][0]["id"] for call_args in mock_match_user.call_args_list]
assert set(user_ids) == {"USER1", "USER3"}
class TestPagerDutyMigrationFiltering:
@@ -401,7 +137,7 @@ class TestPagerDutyMigrationFiltering:
mock_filter_policies.assert_called_once()
mock_filter_integrations.assert_called_once()
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_TEAM", "Team 1")
@patch("lib.pagerduty.config.PAGERDUTY_FILTER_TEAM", "Team 1")
@patch("lib.pagerduty.migrate.filter_schedules")
@patch("lib.pagerduty.migrate.filter_escalation_policies")
@patch("lib.pagerduty.migrate.filter_integrations")
@@ -500,51 +236,3 @@ class TestPagerDutyMigrationFiltering:
mock_filter_schedules.assert_called_once()
mock_filter_policies.assert_called_once()
mock_filter_integrations.assert_called_once()
@patch("lib.pagerduty.migrate.VERBOSE_LOGGING", True)
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_TEAM", "Team 1")
def test_verbose_logging_for_schedules(capsys):
schedules = [
{
"id": "SCHEDULE1",
"name": "Test Schedule",
"teams": [{"summary": "Team 1"}],
},
{
"id": "SCHEDULE2",
"name": "Other Schedule",
"teams": [{"summary": "Team 2"}],
},
]
filter_schedules(schedules)
# Capture the output and verify verbose messages
captured = capsys.readouterr()
assert "Filtered out 1 schedules" in captured.out
assert "Schedule SCHEDULE2: No teams found for team filter: Team 1" in captured.out
@patch("lib.pagerduty.migrate.VERBOSE_LOGGING", False)
@patch("lib.pagerduty.migrate.PAGERDUTY_FILTER_TEAM", "Team 1")
def test_non_verbose_logging_for_schedules(capsys):
schedules = [
{
"id": "SCHEDULE1",
"name": "Test Schedule",
"teams": [{"summary": "Team 1"}],
},
{
"id": "SCHEDULE2",
"name": "Other Schedule",
"teams": [{"summary": "Team 2"}],
},
]
filter_schedules(schedules)
# Capture the output and verify no verbose messages
captured = capsys.readouterr()
assert "Filtered out 1 schedules" in captured.out
assert "Schedule SCHEDULE2" not in captured.out


@@ -1,110 +0,0 @@
"""
Tests for service filtering functionality.
"""
from unittest.mock import patch
import pytest
from lib.common.resources.services import filter_services
@pytest.fixture
def sample_services():
"""Sample service data for testing."""
return [
{
"id": "P123",
"name": "Production Service",
"type": "service",
"teams": [{"summary": "Platform Team"}],
"escalation_policy": {
"escalation_rules": [
{
"targets": [
{"type": "user", "id": "U123"},
{"type": "user", "id": "U456"},
]
}
]
},
},
{
"id": "P456",
"name": "Staging Service",
"type": "service",
"teams": [{"summary": "DevOps Team"}],
"escalation_policy": {
"escalation_rules": [{"targets": [{"type": "user", "id": "U789"}]}]
},
},
{
"id": "B123",
"name": "Business Service",
"type": "business_service",
"teams": [{"summary": "Platform Team"}],
},
]
def test_filter_services_by_team(sample_services):
"""Test filtering services by team."""
with patch("lib.common.resources.services.PAGERDUTY_FILTER_TEAM", "Platform Team"):
filtered = filter_services(sample_services)
assert len(filtered) == 2
assert all(
service["teams"][0]["summary"] == "Platform Team" for service in filtered
)
def test_filter_services_by_users(sample_services):
"""Test filtering services by users in escalation policy."""
with patch("lib.common.resources.services.PAGERDUTY_FILTER_USERS", ["U123"]):
filtered = filter_services(sample_services)
# Should include both the matching technical service and the business service
assert len(filtered) == 2
# Verify the technical service with matching user is included
assert any(service["id"] == "P123" for service in filtered)
# Verify the business service is included (not filtered by users)
assert any(service["type"] == "business_service" for service in filtered)
def test_filter_services_by_regex(sample_services):
"""Test filtering services by name regex pattern."""
with patch(
"lib.common.resources.services.PAGERDUTY_FILTER_SERVICE_REGEX", "Prod.*"
):
filtered = filter_services(sample_services)
assert len(filtered) == 1
assert filtered[0]["name"] == "Production Service"
def test_filter_services_no_filters(sample_services):
"""Test that no filters returns all services."""
with patch("lib.common.resources.services.PAGERDUTY_FILTER_TEAM", ""), patch(
"lib.common.resources.services.PAGERDUTY_FILTER_USERS", []
), patch("lib.common.resources.services.PAGERDUTY_FILTER_SERVICE_REGEX", ""):
filtered = filter_services(sample_services)
assert len(filtered) == len(sample_services)
def test_filter_services_multiple_filters(sample_services):
"""Test applying multiple filters together."""
with patch(
"lib.common.resources.services.PAGERDUTY_FILTER_TEAM", "Platform Team"
), patch("lib.common.resources.services.PAGERDUTY_FILTER_USERS", ["U123"]), patch(
"lib.common.resources.services.PAGERDUTY_FILTER_SERVICE_REGEX", "Prod.*"
):
filtered = filter_services(sample_services)
assert len(filtered) == 1
assert filtered[0]["id"] == "P123"
assert filtered[0]["teams"][0]["summary"] == "Platform Team"
assert filtered[0]["name"] == "Production Service"
def test_filter_services_business_services(sample_services):
"""Test that business services are not filtered by user assignments."""
with patch("lib.common.resources.services.PAGERDUTY_FILTER_USERS", ["U123"]):
filtered = filter_services(sample_services)
assert len(filtered) == 2
assert any(service["type"] == "business_service" for service in filtered)
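Taken together, the behavior these tests pin down (team match on `teams[].summary`, a name regex, and user filters that never exclude business services) can be sketched as follows. This is a hedged sketch only: the real `filter_services` reads the `PAGERDUTY_FILTER_*` settings from module configuration rather than taking arguments, and the `escalation_policy_users` field name here is an assumption for illustration, not the actual PagerDuty payload shape.

```python
import re

# Hedged sketch of the filtering the tests above exercise; field names
# and the argument-based interface are assumptions, not the repo's API.
def filter_services(services, team="", users=(), name_regex=""):
    kept = []
    for svc in services:
        # Team filter: match on any team summary.
        if team and not any(
            t.get("summary") == team for t in svc.get("teams", [])
        ):
            continue
        # Name filter: regex anchored at the start of the name.
        if name_regex and not re.match(name_regex, svc.get("name", "")):
            continue
        # User filter: business services are kept regardless of users.
        if users and svc.get("type") != "business_service":
            assigned = svc.get("escalation_policy_users", [])
            if not any(u in users for u in assigned):
                continue
        kept.append(svc)
    return kept

services = [
    {
        "id": "P123",
        "type": "service",
        "name": "Production Service",
        "teams": [{"summary": "Platform Team"}],
        "escalation_policy_users": ["U123"],
    },
    {"id": "B1", "type": "business_service", "name": "Billing", "teams": []},
]
print([s["id"] for s in filter_services(services, users=("U123",))])
# -> ['P123', 'B1']
```

With only a user filter active, both the matching technical service and the business service survive, which is exactly what `test_filter_services_by_users` asserts.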


@@ -1,153 +0,0 @@
"""
Tests for the PagerDuty services module.
"""
from unittest.mock import MagicMock, patch
import pytest
from lib.pagerduty.resources.services import (
TechnicalService,
fetch_service_dependencies,
fetch_services,
get_all_technical_services_with_metadata,
)
@pytest.fixture
def service_data():
"""Basic service data fixture."""
return {
"id": "SERVICE123",
"name": "Test Service",
"description": "A test service",
"status": "active",
"created_at": "2023-01-01T00:00:00Z",
"updated_at": "2023-01-02T00:00:00Z",
"html_url": "https://example.pagerduty.com/service/SERVICE123",
"self": "https://api.pagerduty.com/services/SERVICE123",
"escalation_policy": {"id": "EP123", "name": "Test Policy"},
"teams": [{"id": "TEAM1", "summary": "Team 1"}],
}
def test_technical_service_init(service_data):
"""Test TechnicalService initialization with basic fields."""
service = TechnicalService(service_data)
assert service.id == "SERVICE123"
assert service.name == "Test Service"
assert service.description == "A test service"
assert service.status == "active"
assert service.created_at == "2023-01-01T00:00:00Z"
assert service.updated_at == "2023-01-02T00:00:00Z"
assert service.html_url == "https://example.pagerduty.com/service/SERVICE123"
assert service.self_url == "https://api.pagerduty.com/services/SERVICE123"
assert service.escalation_policy == {"id": "EP123", "name": "Test Policy"}
assert service.teams == [{"id": "TEAM1", "summary": "Team 1"}]
assert service.dependencies == []
assert service.raw_data == service_data
def test_technical_service_str():
"""Test string representation of the service."""
service = TechnicalService({"id": "SERVICE123", "name": "Test Service"})
assert str(service) == "TechnicalService(id=SERVICE123, name=Test Service)"
@pytest.fixture
def mock_session():
"""Create a mock API session."""
return MagicMock()
def test_fetch_services(mock_session):
"""Test fetching services from PagerDuty API."""
mock_session.list_all.return_value = [
{"id": "SERVICE1", "name": "Service 1"},
{"id": "SERVICE2", "name": "Service 2"},
]
services = fetch_services(mock_session)
# Verify API call
mock_session.list_all.assert_called_once_with(
"services", params={"include[]": ["integrations", "teams"]}
)
# Verify results
assert len(services) == 2
assert isinstance(services[0], TechnicalService)
assert services[0].id == "SERVICE1"
assert services[1].id == "SERVICE2"
def test_fetch_services_without_includes(mock_session):
"""Test fetching services without including integrations or teams."""
mock_session.list_all.return_value = [{"id": "SERVICE1"}]
services = fetch_services(
mock_session, include_integrations=False, include_teams=False
)
# Verify API call with no includes
mock_session.list_all.assert_called_once_with("services", params={})
# Verify results
assert len(services) == 1
assert isinstance(services[0], TechnicalService)
@pytest.fixture
def mock_services():
"""Create mock services for dependency testing."""
service1 = TechnicalService({"id": "SERVICE1", "name": "Service 1"})
service2 = TechnicalService({"id": "SERVICE2", "name": "Service 2"})
return [service1, service2]
def test_fetch_service_dependencies(mock_session, mock_services):
"""Test fetching service dependencies."""
# Mock the dependencies API call - only mock for the first service to simplify
mock_session.get.side_effect = [
{
"relationships": [{"supporting_service": {"id": "SERVICE2"}}]
}, # First call returns SERVICE2 as a dependency
{"relationships": []}, # Second call returns no dependencies
]
fetch_service_dependencies(mock_session, mock_services)
# Verify API calls - should be called for each service
assert mock_session.get.call_count == 2
mock_session.get.assert_any_call("service_dependencies/technical_services/SERVICE1")
mock_session.get.assert_any_call("service_dependencies/technical_services/SERVICE2")
# Verify that service1 now has service2 as a dependency
assert len(mock_services[0].dependencies) == 1
assert mock_services[0].dependencies[0] == mock_services[1]
# Service2 should have no dependencies since the mock returned empty list
assert len(mock_services[1].dependencies) == 0
def test_get_all_technical_services_with_metadata():
"""Test getting all services with their metadata."""
mock_session = MagicMock()
mock_services = [MagicMock(), MagicMock()]
with patch(
"lib.pagerduty.resources.services.fetch_services"
) as mock_fetch_services:
with patch(
"lib.pagerduty.resources.services.fetch_service_dependencies"
) as mock_fetch_deps:
mock_fetch_services.return_value = mock_services
result = get_all_technical_services_with_metadata(mock_session)
# Verify calls
mock_fetch_services.assert_called_once_with(mock_session)
mock_fetch_deps.assert_called_once_with(mock_session, mock_services)
# Verify result
assert result == mock_services


@@ -353,3 +353,189 @@ def test_splunk_user_already_exists(
), 'Expected "already exists" message not found in print calls'
# Verify sys.exit was not called
mock_exit.assert_not_called()
@patch("lib.opsgenie.api_client.OpsGenieAPIClient")
@patch("lib.grafana.api_client.GrafanaAPIClient")
@patch("sys.exit")
@patch.dict(
"os.environ",
{
"MIGRATING_FROM": "opsgenie",
"OPSGENIE_API_KEY": "test_token",
"GRAFANA_URL": "http://test.com",
"GRAFANA_USERNAME": "test_user",
"GRAFANA_PASSWORD": "test_pass",
"OPSGENIE_FILTER_USERS": "",
},
)
def test_migrate_all_opsgenie_users(
mock_exit, mock_grafana_client_class, mock_opsgenie_client_class
):
mock_opsgenie_instance = mock_opsgenie_client_class.return_value
mock_opsgenie_instance.list_users.return_value = [
{"id": "USER1", "fullName": "User One", "username": "user1@example.com"},
{"id": "USER2", "fullName": "User Two", "username": "user2@example.com"},
{"id": "USER3", "fullName": "User Three", "username": "user3@example.com"},
]
mock_grafana_instance = mock_grafana_client_class.return_value
mock_grafana_instance.create_user_with_random_password.return_value = MockResponse(
200
)
import importlib
import add_users_to_grafana
importlib.reload(add_users_to_grafana)
add_users_to_grafana.migrate_opsgenie_users()
assert mock_opsgenie_instance.list_users.called
assert mock_grafana_instance.create_user_with_random_password.call_count == 3
mock_exit.assert_not_called()
calls = mock_grafana_instance.create_user_with_random_password.call_args_list
call_emails = [call[0][1] for call in calls]
assert "user1@example.com" in call_emails
assert "user2@example.com" in call_emails
assert "user3@example.com" in call_emails
@patch("lib.opsgenie.api_client.OpsGenieAPIClient")
@patch("lib.grafana.api_client.GrafanaAPIClient")
@patch("sys.exit")
@patch.dict(
"os.environ",
{
"MIGRATING_FROM": "opsgenie",
"OPSGENIE_API_KEY": "test_token",
"GRAFANA_URL": "http://test.com",
"GRAFANA_USERNAME": "test_user",
"GRAFANA_PASSWORD": "test_pass",
"OPSGENIE_FILTER_USERS": "USER1,USER3",
},
)
def test_migrate_filtered_opsgenie_users(
mock_exit, mock_grafana_client_class, mock_opsgenie_client_class
):
mock_opsgenie_instance = mock_opsgenie_client_class.return_value
mock_opsgenie_instance.list_users.return_value = [
{"id": "USER1", "fullName": "User One", "username": "user1@example.com"},
{"id": "USER2", "fullName": "User Two", "username": "user2@example.com"},
{"id": "USER3", "fullName": "User Three", "username": "user3@example.com"},
]
mock_grafana_instance = mock_grafana_client_class.return_value
mock_grafana_instance.create_user_with_random_password.return_value = MockResponse(
200
)
import importlib
import add_users_to_grafana
importlib.reload(add_users_to_grafana)
add_users_to_grafana.migrate_opsgenie_users()
assert mock_opsgenie_instance.list_users.called
assert mock_grafana_instance.create_user_with_random_password.call_count == 2
mock_exit.assert_not_called()
calls = mock_grafana_instance.create_user_with_random_password.call_args_list
call_emails = [call[0][1] for call in calls]
assert "user1@example.com" in call_emails
assert "user3@example.com" in call_emails
assert "user2@example.com" not in call_emails
@patch("lib.opsgenie.api_client.OpsGenieAPIClient")
@patch("lib.grafana.api_client.GrafanaAPIClient")
@patch("sys.exit")
@patch.dict(
"os.environ",
{
"MIGRATING_FROM": "opsgenie",
"OPSGENIE_API_KEY": "test_token",
"GRAFANA_URL": "http://test.com",
"GRAFANA_USERNAME": "test_user",
"GRAFANA_PASSWORD": "test_pass",
},
)
def test_opsgenie_error_handling(
mock_exit, mock_grafana_client_class, mock_opsgenie_client_class
):
mock_opsgenie_instance = mock_opsgenie_client_class.return_value
mock_opsgenie_instance.list_users.return_value = [
{"id": "USER1", "fullName": "User One", "username": "user1@example.com"}
]
mock_grafana_instance = mock_grafana_client_class.return_value
mock_grafana_instance.create_user_with_random_password.return_value = MockResponse(
401
)
import importlib
import add_users_to_grafana
importlib.reload(add_users_to_grafana)
add_users_to_grafana.migrate_opsgenie_users()
mock_exit.assert_called_once()
call_args = mock_exit.call_args[0][0]
assert "Invalid username or password" in call_args
@patch("lib.opsgenie.api_client.OpsGenieAPIClient")
@patch("lib.grafana.api_client.GrafanaAPIClient")
@patch("sys.exit")
@patch("builtins.print")
@patch.dict(
"os.environ",
{
"MIGRATING_FROM": "opsgenie",
"OPSGENIE_API_KEY": "test_token",
"GRAFANA_URL": "http://test.com",
"GRAFANA_USERNAME": "test_user",
"GRAFANA_PASSWORD": "test_pass",
},
)
def test_opsgenie_user_already_exists(
mock_print, mock_exit, mock_grafana_client_class, mock_opsgenie_client_class
):
mock_opsgenie_instance = mock_opsgenie_client_class.return_value
mock_opsgenie_instance.list_users.return_value = [
{"id": "USER1", "fullName": "User One", "username": "user1@example.com"}
]
mock_grafana_instance = mock_grafana_client_class.return_value
mock_grafana_instance.create_user_with_random_password.return_value = MockResponse(
412
)
import importlib
import add_users_to_grafana
importlib.reload(add_users_to_grafana)
add_users_to_grafana.migrate_opsgenie_users()
already_exists_message_found = False
for call_args in mock_print.call_args_list:
if (
len(call_args[0]) > 0
and isinstance(call_args[0][0], str)
and "already exists" in call_args[0][0]
):
already_exists_message_found = True
break
assert (
already_exists_message_found
), 'Expected "already exists" message not found in print calls'
mock_exit.assert_not_called()
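All four OpsGenie tests reload `add_users_to_grafana` with `importlib.reload` before calling into it. That is necessary because module-level configuration is read from `os.environ` at import time, so a `patch.dict` applied after the first import would otherwise go unseen. A minimal, self-contained illustration of the pattern (the `cfg_demo` module is a throwaway stand-in written to a temp directory, not part of the repo):

```python
import importlib
import os
import sys
import tempfile
import textwrap

# Throwaway module that captures configuration at import time, mimicking
# how add_users_to_grafana reads its environment when first imported.
src = textwrap.dedent(
    """
    import os
    MIGRATING_FROM = os.environ.get("MIGRATING_FROM", "pagerduty")
    """
)
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "cfg_demo.py"), "w") as f:
    f.write(src)
sys.path.insert(0, tmpdir)
os.environ.pop("MIGRATING_FROM", None)

import cfg_demo

print(cfg_demo.MIGRATING_FROM)  # pagerduty: default captured at first import

os.environ["MIGRATING_FROM"] = "opsgenie"
print(cfg_demo.MIGRATING_FROM)  # still pagerduty: the module is cached

importlib.reload(cfg_demo)      # re-executes the module's top level
print(cfg_demo.MIGRATING_FROM)  # opsgenie: reload picked up the new env
```

Without the reload, the cached module would keep the value captured on first import, and every env-driven test after the first would silently run against stale configuration.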


@@ -0,0 +1,54 @@
import os
import uuid
from unittest.mock import patch
import pytest
from lib.session import SESSION_FILE, get_or_create_session_id
@pytest.fixture
def cleanup_session_file():
# Clean up before test
if os.path.exists(SESSION_FILE):
os.remove(SESSION_FILE)
yield
# Clean up after test
if os.path.exists(SESSION_FILE):
os.remove(SESSION_FILE)
def test_get_or_create_session_id_creates_new(cleanup_session_file):
# First call should create a new session ID
session_id1 = get_or_create_session_id()
assert session_id1 is not None
assert len(session_id1) > 0
# Verify it's a valid UUID
uuid.UUID(session_id1)
# Second call should return the same ID
session_id2 = get_or_create_session_id()
assert session_id2 == session_id1
# Verify file exists and contains the ID
assert os.path.exists(SESSION_FILE)
with open(SESSION_FILE, "r") as f:
stored_id = f.read().strip()
assert stored_id == session_id1
@patch("uuid.uuid4")
def test_get_or_create_session_id_uses_existing(mock_uuid, cleanup_session_file):
# Create a session file with a known ID
test_id = "12345678-1234-5678-1234-567812345678"
os.makedirs(os.path.dirname(SESSION_FILE), exist_ok=True)
with open(SESSION_FILE, "w") as f:
f.write(test_id)
# Should return existing ID without generating new one
session_id = get_or_create_session_id()
assert session_id == test_id
mock_uuid.assert_not_called()


@@ -1,4 +1,4 @@
from lib.base_config import MIGRATING_FROM, PAGERDUTY, SPLUNK
from lib.base_config import MIGRATING_FROM, OPSGENIE, PAGERDUTY, SPLUNK
if __name__ == "__main__":
if MIGRATING_FROM == PAGERDUTY:
@@ -8,6 +8,10 @@ if __name__ == "__main__":
elif MIGRATING_FROM == SPLUNK:
from lib.splunk.migrate import migrate
migrate()
elif MIGRATING_FROM == OPSGENIE:
from lib.opsgenie.migrate import migrate
migrate()
else:
raise ValueError("Invalid MIGRATING_FROM value")


@@ -6,3 +6,5 @@ env =
D:MIGRATING_FROM=pagerduty
D:SPLUNK_API_ID=abcd
D:SPLUNK_API_KEY=abcd
D:OPSGENIE_API_KEY=abcd
D:OPSGENIE_API_URL=test
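The `D:` prefix on the pytest-env entries above marks each value as a default: pytest-env applies it only when the variable is not already set in the environment, which matches the semantics of `os.environ.setdefault`. A small sketch of that behavior (variable names reused from the config above purely for illustration):

```python
import os

# Mimic pytest-env's "D:" prefix: apply a default only when the variable
# is absent, so values exported by the developer's shell take precedence.
os.environ.pop("OPSGENIE_API_KEY", None)
os.environ.setdefault("OPSGENIE_API_KEY", "abcd")  # applied: var was unset

os.environ["GRAFANA_URL"] = "http://real.example.com"
os.environ.setdefault("GRAFANA_URL", "http://test.com")  # skipped: set

print(os.environ["OPSGENIE_API_KEY"])  # abcd
print(os.environ["GRAFANA_URL"])       # http://real.example.com
```

This is why the `D:` entries are safe to commit: running the suite with real credentials exported locally will not have them clobbered by the test defaults.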