From f68b954b7210761c42556431463f9d221a414e8c Mon Sep 17 00:00:00 2001
From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com>
Date: Fri, 9 Aug 2024 19:06:43 +0000
Subject: [PATCH 01/15] Bump djangorestframework from 3.14.0 to 3.15.2 in /engine (#4593)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Bumps [djangorestframework](https://github.com/encode/django-rest-framework) from 3.14.0 to 3.15.2.
Release notes

Sourced from djangorestframework's releases.

Version 3.15.1

What's Changed

New Contributors

Full Changelog: https://github.com/encode/django-rest-framework/compare/3.15.0...3.15.1

Commits

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=djangorestframework&package-manager=pip&previous-version=3.14.0&new-version=3.15.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

You can trigger a rebase of this PR by commenting `@dependabot rebase`.

---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/grafana/oncall/network/alerts).
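For context on the serializer change in this patch: DRF 3.15 removed the module-level `set_value` helper from `rest_framework.fields` and exposed it as a `Serializer` method instead, which is why the calls below become `self.set_value(...)`. A minimal sketch of the helper's behavior (illustrative only, not DRF's actual implementation):

```python
def set_value(dictionary: dict, keys: list, value) -> None:
    # Mirrors the DRF helper's behavior: place `value` into `dictionary`
    # at the nested path given by `keys`, creating intermediate dicts as
    # needed. An empty key path merges `value` into the top level.
    if not keys:
        dictionary.update(value)
        return
    for key in keys[:-1]:
        dictionary = dictionary.setdefault(key, {})
    dictionary[keys[-1]] = value

ret: dict = {}
set_value(ret, ["messaging_backends_templates", "SLACK"], {"title": "hi"})
# ret now nests the value under messaging_backends_templates -> SLACK
```

This is why the patch only needs to switch the call site rather than change any arguments: the method signature matches the old module-level function.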
> **Note**
> Automatic rebases have been disabled on this pull request as it has been open for over 30 days.

---------

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Joey Orlando
Co-authored-by: Joey Orlando
---
 engine/apps/api/serializers/alert_receive_channel.py | 6 +++---
 engine/requirements.in                               | 2 +-
 engine/requirements.txt                              | 3 +--
 3 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/engine/apps/api/serializers/alert_receive_channel.py b/engine/apps/api/serializers/alert_receive_channel.py
index eabf2ff6..563f4eb1 100644
--- a/engine/apps/api/serializers/alert_receive_channel.py
+++ b/engine/apps/api/serializers/alert_receive_channel.py
@@ -8,7 +8,7 @@ from drf_spectacular.utils import PolymorphicProxySerializer, extend_schema_fiel
 from jinja2 import TemplateSyntaxError
 from rest_framework import serializers
 from rest_framework.exceptions import ValidationError
-from rest_framework.fields import SerializerMethodField, set_value
+from rest_framework.fields import SerializerMethodField

 from apps.alerts.grafana_alerting_sync_manager.grafana_alerting_sync import GrafanaAlertingSyncManager
 from apps.alerts.models import AlertReceiveChannel
@@ -632,7 +632,7 @@ class AlertReceiveChannelTemplatesSerializer(EagerLoadingMixin, serializers.Mode
                 backend_updates[field] = value
             # update backend templates
             backend_templates.update(backend_updates)
-            set_value(ret, ["messaging_backends_templates", backend_id], backend_templates)
+            self.set_value(ret, ["messaging_backends_templates", backend_id], backend_templates)

         return errors

@@ -651,7 +651,7 @@ class AlertReceiveChannelTemplatesSerializer(EagerLoadingMixin, serializers.Mode
                 errors[field_name] = "invalid template"
             except DjangoValidationError:
                 errors[field_name] = "invalid URL"
-            set_value(ret, [field_name], value)
+            self.set_value(ret, [field_name], value)

         return errors

     def to_representation(self, obj: "AlertReceiveChannel"):
diff --git a/engine/requirements.in b/engine/requirements.in
index e6d30c75..fb9f4b65 100644
--- a/engine/requirements.in
+++ b/engine/requirements.in
@@ -23,7 +23,7 @@ django-redis==5.4.0
 django-rest-polymorphic==0.1.10
 django-silk==5.0.3
 django-sns-view==0.1.2
-djangorestframework==3.14.0
+djangorestframework==3.15.2
 factory-boy<3.0
 drf-spectacular==0.26.5
 emoji==2.4.0
diff --git a/engine/requirements.txt b/engine/requirements.txt
index 5ee51fd0..e3584de8 100644
--- a/engine/requirements.txt
+++ b/engine/requirements.txt
@@ -136,7 +136,7 @@ django-silk==5.0.3
   # via -r requirements.in
 django-sns-view==0.1.2
   # via -r requirements.in
-djangorestframework==3.14.0
+djangorestframework==3.15.2
   # via
   #   -r requirements.in
   #   django-rest-polymorphic
@@ -370,7 +370,6 @@ python3-openid==3.2.0
 pytz==2024.1
   # via
   #   apscheduler
-  #   djangorestframework
   #   icalendar
   #   python-telegram-bot
   #   recurring-ical-events

From 9b0c7933ae9e54741de4b5b7de8082bdf35a5db8 Mon Sep 17 00:00:00 2001
From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com>
Date: Fri, 9 Aug 2024 19:23:56 +0000
Subject: [PATCH 02/15] Bump aiohttp from 3.9.4 to 3.10.2 in /dev/scripts/generate-fake-data (#4800)

Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.9.4 to 3.10.2.
Release notes

Sourced from aiohttp's releases.

3.10.2

Bug fixes

  • Fixed server checks for circular symbolic links to be compatible with Python 3.13 -- by @steverep.

    Related issues and pull requests on GitHub: #8565.

  • Fixed request body not being read when ignoring an Upgrade request -- by @Dreamsorcerer.

    Related issues and pull requests on GitHub: #8597.

  • Fixed an edge case where shutdown would wait for timeout when the handler was already completed -- by @Dreamsorcerer.

    Related issues and pull requests on GitHub: #8611.

  • Fixed connecting to npipe://, tcp://, and unix:// urls -- by @bdraco.

    Related issues and pull requests on GitHub: #8632.

  • Fixed WebSocket ping tasks being prematurely garbage collected -- by @bdraco.

    There was a small risk that WebSocket ping tasks would be prematurely garbage collected because the event loop only holds a weak reference to the task. The garbage collection risk has been fixed by holding a strong reference to the task. Additionally, the task is now scheduled eagerly with Python 3.12+ to increase the chance it can be completed immediately and avoid having to hold any references to the task.

    Related issues and pull requests on GitHub: #8641.

  • Fixed incorrectly following symlinks for compressed file variants -- by @steverep.

    Related issues and pull requests on GitHub:

... (truncated)
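The WebSocket ping-task fix above follows the standard asyncio guidance: the event loop keeps only a weak reference to tasks created with `create_task`, so the caller must hold a strong reference until the task finishes. A hedged sketch of that general pattern (not aiohttp's actual code):

```python
import asyncio

# The event loop holds only a weak reference to tasks, so keep a strong
# reference in a container until each task finishes, then discard it.
background_tasks: set = set()

async def ping() -> str:
    # Stand-in for a real keepalive/ping coroutine.
    await asyncio.sleep(0)
    return "pong"

async def main() -> str:
    task = asyncio.create_task(ping())
    background_tasks.add(task)                        # strong reference
    task.add_done_callback(background_tasks.discard)  # cleanup when done
    return await task

result = asyncio.run(main())
```

Without the container, a fire-and-forget task can be garbage collected mid-flight; awaiting the task (as here) also keeps it alive, but ping tasks are typically not awaited, which is why the strong reference matters.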


Commits
  • 491106e Release 3.10.2 (#8655)
  • ce2e975 [PR #8652/b0536ae6 backport][3.10] Do not follow symlinks for compressed file...
  • 6a77806 [PR #8636/51d872e backport][3.10] Remove Request.wait_for_disconnection() met...
  • 1f92213 [PR #8642/e4942771 backport][3.10] Fix response to circular symlinks with Pyt...
  • 2ef14a6 [PR #8641/0a88bab backport][3.10] Fix WebSocket ping tasks being prematurely ...
  • 68e8496 [PR #8608/c4acabc backport][3.10] Fix timer handle churn in websocket heartbe...
  • 72f41aa [PR #8632/b2691f2 backport][3.10] Fix connecting to npipe://, tcp://, and uni...
  • bf83dbe [PR #8634/c7293e19 backport][3.10] Backport #8620 as improvements to various ...
  • 4815765 [PR #8597/c99a1e27 backport][3.10] Fix reading of body when ignoring an upgra...
  • 266608d [PR #8611/1fcef940 backport][3.10] Fix handler waiting on shutdown (#8627)
  • Additional commits viewable in compare view

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=aiohttp&package-manager=pip&previous-version=3.9.4&new-version=3.10.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

---
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
---
 dev/scripts/generate-fake-data/requirements.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dev/scripts/generate-fake-data/requirements.txt b/dev/scripts/generate-fake-data/requirements.txt
index 7ccb0127..74976b8c 100644
--- a/dev/scripts/generate-fake-data/requirements.txt
+++ b/dev/scripts/generate-fake-data/requirements.txt
@@ -1,3 +1,3 @@
-aiohttp==3.9.4
+aiohttp==3.10.2
 Faker==16.4.0
 tqdm==4.66.3

From 535baf7fc8bfd207c5df895dff3ce65017c907d7 Mon Sep 17 00:00:00 2001
From: Joey Orlando
Date: Fri, 9 Aug 2024 16:09:47 -0400
Subject: [PATCH 03/15] Fix missing `setuptools` dep (#4799)

# What this PR does

_tldr;_ I think we should install `setuptools` into our engine `Dockerfile` + in our CI env because Python 3.12 no longer installs `distutils` by default. This should unblock us from being able to merge #4656 and #4555.

**More details**

I would like to be able to merge #4656 and #4555. _However_, in both of these PRs `setuptools` is being removed from `requirements-dev.txt` ([here](https://github.com/grafana/oncall/pull/4555/files#diff-d8146d0816a943b0fa69a20399d7bbdb58e1c84c8b7933b2ba6dea7c10c410f5L113-L116) and [here](https://github.com/grafana/oncall/pull/4656/files#diff-d8146d0816a943b0fa69a20399d7bbdb58e1c84c8b7933b2ba6dea7c10c410f5L113-L116)).
This leads to things breaking because of:

```bash
File "/opt/hostedtoolcache/Python/3.12.3/x64/lib/python3.12/site-packages/polymorphic/__init__.py", line 9, in <module>
    import pkg_resources
ModuleNotFoundError: No module named 'pkg_resources'
```

- https://github.com/grafana/oncall/actions/runs/9865348392/job/27242117474?pr=4555#step:5:98
- https://github.com/grafana/oncall/actions/runs/10078898966/job/27864920455?pr=4656#step:5:100

Python 3.12 made a change to no longer pre-install `distutils` ([relevant release notes](https://docs.python.org/3/whatsnew/3.12.html#:~:text=The%20third%2Dparty%20Setuptools%20package%20continues%20to%20provide%20distutils%2C%20if%20you%20still%20require%20it%20in%20Python%203.12%20and%20beyond)):

> [PEP 632](https://peps.python.org/pep-0632/): Remove the distutils package. See [the migration guide](https://peps.python.org/pep-0632/#migration-advice) for advice replacing the APIs it provided. The third-party [Setuptools](https://setuptools.pypa.io/en/latest/deprecated/distutils-legacy.html) package continues to provide distutils, if you still require it in Python 3.12 and beyond.
>
> [gh-95299](https://github.com/python/cpython/issues/95299): Do not pre-install setuptools in virtual environments created with [venv](https://docs.python.org/3/library/venv.html#module-venv). This means that distutils, setuptools, pkg_resources, and easy_install will no longer be available by default; to access these run pip install setuptools in the [activated](https://docs.python.org/3/library/venv.html#venv-explanation) virtual environment.

Additionally, `setuptools` is in the `pip-tools` `UNSAFE_PACKAGES` list ([related GitHub issue](https://github.com/pypa/pipenv/issues/1417#issuecomment-364795745)), which is why I think Dependabot is removing it in #4656 and #4555.
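A quick way to diagnose this locally — a hedged sketch that checks whether `pkg_resources` (shipped by `setuptools`) is importable in the current environment, without actually importing it:

```python
import importlib.util

def has_pkg_resources() -> bool:
    """Return True when pkg_resources is importable.

    In bare Python 3.12+ virtual environments it is absent until
    `pip install setuptools` is run, which is exactly the failure
    mode polymorphic hits above.
    """
    return importlib.util.find_spec("pkg_resources") is not None

available = has_pkg_resources()
```

When this returns `False`, `pip install setuptools` (as added to the Dockerfile and CI action below) restores `pkg_resources`.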
## Checklist

- [x] Unit, integration, and e2e (if applicable) tests updated
- [x] Documentation added (or `pr:no public docs` PR label added if not required)
- [x] Added the relevant release notes label (see labels prefixed w/ `release:`). These labels dictate how your PR will show up in the autogenerated release notes.

---
 .github/actions/setup-python/action.yml |  2 +-
 engine/Dockerfile                       |  2 +-
 engine/requirements-dev.txt             |  4 ++--
 engine/requirements.in                  | 11 +----------
 engine/requirements.txt                 | 10 +++++++---
 5 files changed, 12 insertions(+), 17 deletions(-)

diff --git a/.github/actions/setup-python/action.yml b/.github/actions/setup-python/action.yml
index 1a9b2df6..6d46a6bd 100644
--- a/.github/actions/setup-python/action.yml
+++ b/.github/actions/setup-python/action.yml
@@ -23,5 +23,5 @@ runs:
     if: ${{ inputs.install-dependencies == 'true' }}
     shell: bash
     run: |
-      pip install uv
+      pip install uv setuptools
       uv pip sync --system ${{ inputs.python-requirements-paths }}
diff --git a/engine/Dockerfile b/engine/Dockerfile
index f3ab2e7b..f2f134cd 100644
--- a/engine/Dockerfile
+++ b/engine/Dockerfile
@@ -27,7 +27,7 @@ RUN if [ "$TARGETPLATFORM" = "linux/arm64" ]; then \
     && rm grpcio-1.64.1-cp312-cp312-linux_aarch64.whl; \
     fi

-RUN pip install uv
+RUN pip install uv setuptools

 # TODO: figure out how to get this to work.. see comment in .github/workflows/e2e-tests.yml
 # https://stackoverflow.com/a/71846527
diff --git a/engine/requirements-dev.txt b/engine/requirements-dev.txt
index 532c5d10..4608e48c 100644
--- a/engine/requirements-dev.txt
+++ b/engine/requirements-dev.txt
@@ -106,11 +106,11 @@ pyyaml==6.0.1
   # via
   #   -c requirements.txt
   #   pre-commit
-requests==2.32.0
+requests==2.32.3
   # via
   #   -c requirements.txt
   #   djangorestframework-stubs
-setuptools==70.0.0
+setuptools==72.1.0
   # via
   #   -c requirements.txt
   #   nodeenv
diff --git a/engine/requirements.in b/engine/requirements.in
index fb9f4b65..c57ca1ac 100644
--- a/engine/requirements.in
+++ b/engine/requirements.in
@@ -53,7 +53,7 @@ python-telegram-bot==13.13
 recurring-ical-events==2.1.0
 redis==5.0.1
 regex==2021.11.2
-requests==2.32.0
+requests==2.32.3
 slack-export-viewer==1.1.4
 slack_sdk==3.21.3
 social-auth-app-django==5.4.1
@@ -64,12 +64,3 @@ whitenoise==5.3.0
 google-api-python-client==2.122.0
 google-auth-httplib2==0.2.0
 google-auth-oauthlib==1.2.0
-# see the following resources as to why we need to install setuptools manually
-#
-# Python 3.12 release notes https://docs.python.org/3/whatsnew/3.12.html
-#
-# python/cpython#95299: Do not pre-install setuptools in virtual environments
-# created with venv. This means that distutils, setuptools, pkg_resources, and
-# easy_install will no longer available by default; to access these run pip
-# install setuptools in the activated virtual environment.
-setuptools==70.0.0
diff --git a/engine/requirements.txt b/engine/requirements.txt
index e3584de8..10488803 100644
--- a/engine/requirements.txt
+++ b/engine/requirements.txt
@@ -34,7 +34,7 @@ cachetools==4.2.2
   # via
   #   google-auth
   #   python-telegram-bot
-celery[redis]==5.3.1
+celery==5.3.1
   # via -r requirements.in
 certifi==2024.2.2
   # via
@@ -157,7 +157,7 @@ firebase-admin==5.4.0
   # via fcm-django
 flask==3.0.2
   # via slack-export-viewer
-google-api-core[grpc]==2.17.0
+google-api-core==2.17.0
   # via
   #   firebase-admin
   #   google-api-python-client
@@ -392,7 +392,7 @@ referencing==0.33.0
   #   jsonschema-specifications
 regex==2021.11.2
   # via -r requirements.in
-requests==2.32.0
+requests==2.32.3
   # via
   #   -r requirements.in
   #   cachecontrol
@@ -415,6 +415,10 @@ rsa==4.9
   # via google-auth
 s3transfer==0.10.0
   # via boto3
+setuptools==72.1.0
+  # via
+  #   apscheduler
+  #   opentelemetry-instrumentation
 six==1.16.0
   # via
   #   apscheduler

From 60f018417ac7b034dfecd267638daf16d5b1cba5 Mon Sep 17 00:00:00 2001
From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com>
Date: Fri, 9 Aug 2024 16:30:38 -0400
Subject: [PATCH 04/15] Bump urllib3 from 1.26.18 to 1.26.19 in /engine (#4555)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.18 to 1.26.19.
Release notes

Sourced from urllib3's releases.

1.26.19

🚀 urllib3 is fundraising for HTTP/2 support

urllib3 is raising ~$40,000 USD to release HTTP/2 support and ensure long-term sustainable maintenance of the project after a sharp decline in financial support for 2023. If your company or organization uses Python and would benefit from HTTP/2 support in Requests, pip, cloud SDKs, and thousands of other projects please consider contributing financially to ensure HTTP/2 support is developed sustainably and maintained for the long-haul.

Thank you for your support.

Changes

  • Added the Proxy-Authorization header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via Retry.remove_headers_on_redirect.

Full Changelog: https://github.com/urllib3/urllib3/compare/1.26.18...1.26.19

Note that due to an issue with our release automation, no multiple.intoto.jsonl file is available for this release.

Changelog

Sourced from urllib3's changelog.

1.26.19 (2024-06-17)

  • Added the Proxy-Authorization header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via Retry.remove_headers_on_redirect.
  • Fixed handling of OpenSSL 3.2.0 new error message for misconfiguring an HTTP proxy as HTTPS. ([#3405](https://github.com/urllib3/urllib3/issues/3405))
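The redirect-header change above can also be controlled explicitly: `Retry.remove_headers_on_redirect` takes the list of headers to strip when a redirect crosses hosts. A short sketch (assuming urllib3 is installed; as of 1.26.19 `Proxy-Authorization` is in the default strip list):

```python
from urllib3.util.retry import Retry

# Explicitly strip sensitive headers on cross-host redirects. urllib3
# normalizes the header names to a lowercase frozenset internally.
retry = Retry(remove_headers_on_redirect=["Authorization", "Proxy-Authorization"])
```

Passing such a `Retry` object to a `PoolManager` request applies the policy to every redirect it follows.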
Commits

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=urllib3&package-manager=pip&previous-version=1.26.18&new-version=1.26.19)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

You can trigger a rebase of this PR by commenting `@dependabot rebase`.

---
> **Note**
> Automatic rebases have been disabled on this pull request as it has been open for over 30 days.

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
---
 engine/requirements-dev.txt | 12 ++++--------
 engine/requirements.in      |  2 +-
 engine/requirements.txt     | 10 +++-------
 3 files changed, 8 insertions(+), 16 deletions(-)

diff --git a/engine/requirements-dev.txt b/engine/requirements-dev.txt
index 4608e48c..34ab2a5d 100644
--- a/engine/requirements-dev.txt
+++ b/engine/requirements-dev.txt
@@ -25,14 +25,14 @@ django==4.2.11
   #   django-stubs-ext
 django-filter-stubs==0.1.3
   # via -r requirements-dev.in
-django-stubs==4.2.2
+django-stubs[compatible-mypy]==4.2.2
   # via
   #   -r requirements-dev.in
   #   django-filter-stubs
   #   djangorestframework-stubs
 django-stubs-ext==4.2.7
   # via django-stubs
-djangorestframework-stubs==3.14.2
+djangorestframework-stubs[compatible-mypy]==3.14.2
   # via
   #   -r requirements-dev.in
   #   django-filter-stubs
@@ -96,7 +96,7 @@ pytest-django==4.8.0
   # via -r requirements-dev.in
 pytest-factoryboy==2.7.0
   # via -r requirements-dev.in
-pytest-xdist==3.6.1
+pytest-xdist[psutil]==3.6.1
   # via -r requirements-dev.in
 python-dateutil==2.8.2
   # via
@@ -110,10 +110,6 @@ requests==2.32.3
   # via
   #   -c requirements.txt
   #   djangorestframework-stubs
-setuptools==72.1.0
-  # via
-  #   -c requirements.txt
-  #   nodeenv
 six==1.16.0
   # via
   #   -c requirements.txt
@@ -156,7 +152,7 @@ typing-extensions==4.9.0
   #   djangorestframework-stubs
   #   mypy
   #   pytest-factoryboy
-urllib3==1.26.18
+urllib3==1.26.19
   # via
   #   -c requirements.txt
   #   requests
diff --git a/engine/requirements.in b/engine/requirements.in
index c57ca1ac..261696c3 100644
--- a/engine/requirements.in
+++ b/engine/requirements.in
@@ -58,7 +58,7 @@ slack-export-viewer==1.1.4
 slack_sdk==3.21.3
 social-auth-app-django==5.4.1
 twilio~=6.37.0
-urllib3==1.26.18
+urllib3==1.26.19
 uwsgi==2.0.26
 whitenoise==5.3.0
 google-api-python-client==2.122.0
diff --git a/engine/requirements.txt b/engine/requirements.txt
index 10488803..31958d72 100644
--- a/engine/requirements.txt
+++ b/engine/requirements.txt
@@ -34,7 +34,7 @@ cachetools==4.2.2
   # via
   #   google-auth
   #   python-telegram-bot
-celery==5.3.1
+celery[redis]==5.3.1
   # via -r requirements.in
 certifi==2024.2.2
   # via
@@ -157,7 +157,7 @@ firebase-admin==5.4.0
   # via fcm-django
 flask==3.0.2
   # via slack-export-viewer
-google-api-core==2.17.0
+google-api-core[grpc]==2.17.0
   # via
   #   firebase-admin
   #   google-api-python-client
@@ -415,10 +415,6 @@ rsa==4.9
   # via google-auth
 s3transfer==0.10.0
   # via boto3
-setuptools==72.1.0
-  # via
-  #   apscheduler
-  #   opentelemetry-instrumentation
 six==1.16.0
   # via
   #   apscheduler
@@ -458,7 +454,7 @@ uritemplate==4.1.1
   # via
   #   drf-spectacular
   #   google-api-python-client
-urllib3==1.26.18
+urllib3==1.26.19
   # via
   #   -r requirements.in
   #   botocore

From e2bc9d784b8d5e14eb963de78cbd31365389d515 Mon Sep 17 00:00:00 2001
From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com>
Date: Fri, 9 Aug 2024 16:30:50 -0400
Subject: [PATCH 05/15] Bump django from 4.2.11 to 4.2.15 in /engine (#4801)

Bumps [django](https://github.com/django/django) from 4.2.11 to 4.2.15.
Commits
  • 4d32ebc [4.2.x] Bumped version for 4.2.15 release.
  • f4af67b [4.2.x] Fixed CVE-2024-42005 -- Mitigated QuerySet.values() SQL injection att...
  • efea1ef [4.2.x] Fixed CVE-2024-41991 -- Prevented potential ReDoS in django.utils.htm...
  • d0a82e2 [4.2.x] Fixed CVE-2024-41990 -- Mitigated potential DoS in urlize and urlizet...
  • fc76660 [4.2.x] Fixed CVE-2024-41989 -- Prevented excessive memory consumption in flo...
  • 7b1a76f [4.2.x] Added stub release notes and release date for 4.2.15.
  • 96a3497 [4.2.x] Fixed #35627 -- Raised a LookupError rather than an unhandled ValueEr...
  • c5d196a [4.2.x] Fixed auth_tests and file_storage tests on Python 3.8.
  • 8e59e33 [4.2.x] Added CVE-2024-38875, CVE-2024-39329, CVE-2024-39330, and CVE-2024-39...
  • 72f6c7d [4.2.x] Post-release version bump.
  • Additional commits viewable in compare view

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=django&package-manager=pip&previous-version=4.2.11&new-version=4.2.15)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

---
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
---
 engine/requirements-dev.txt | 2 +-
 engine/requirements.in      | 2 +-
 engine/requirements.txt     | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/engine/requirements-dev.txt b/engine/requirements-dev.txt
index 34ab2a5d..ef9cfd69 100644
--- a/engine/requirements-dev.txt
+++ b/engine/requirements-dev.txt
@@ -18,7 +18,7 @@ charset-normalizer==3.3.2
   #   requests
 distlib==0.3.8
   # via virtualenv
-django==4.2.11
+django==4.2.15
   # via
   #   -c requirements.txt
   #   django-stubs
diff --git a/engine/requirements.in b/engine/requirements.in
index 261696c3..9a0d2177 100644
--- a/engine/requirements.in
+++ b/engine/requirements.in
@@ -2,7 +2,7 @@ babel==2.12.1
 beautifulsoup4==4.12.2
 celery[redis]==5.3.1
 cryptography==42.0.8
-django==4.2.11
+django==4.2.15
 django-add-default-value==0.10.0
 django-amazon-ses==4.0.1
 django-anymail==8.6
diff --git a/engine/requirements.txt b/engine/requirements.txt
index 31958d72..f39d4106 100644
--- a/engine/requirements.txt
+++ b/engine/requirements.txt
@@ -74,7 +74,7 @@ deprecated==1.2.14
   # via
   #   opentelemetry-api
   #   opentelemetry-exporter-otlp-proto-grpc
-django==4.2.11
+django==4.2.15
   # via
   #   -r requirements.in
   #   django-add-default-value

From 503939783fb70891bf16f30887a98197de3ca734 Mon Sep 17 00:00:00 2001
From: Yulya Artyukhina
Date: Mon, 12 Aug 2024 12:37:48 +0200
Subject: [PATCH 06/15] Add settings var to choose application metrics to collect (#4781)

# What this PR does

Adds a settings var, `METRICS_TO_COLLECT`, to choose which metrics should be collected by `ApplicationMetricsCollector`. It allows collecting different application metrics using different exporters.
## Which issue(s) this PR closes

## Checklist

- [x] Unit, integration, and e2e (if applicable) tests updated
- [x] Documentation added (or `pr:no public docs` PR label added if not required)
- [x] Added the relevant release notes label (see labels prefixed w/ `release:`). These labels dictate how your PR will show up in the autogenerated release notes.

---
 .../metrics_exporter/metrics_collectors.py | 189 +++++++++---------
 .../tests/test_metrics_collectors.py       |  48 ++++-
 engine/settings/base.py                    |  11 +
 3 files changed, 155 insertions(+), 93 deletions(-)

diff --git a/engine/apps/metrics_exporter/metrics_collectors.py b/engine/apps/metrics_exporter/metrics_collectors.py
index 720cdea9..db1ecc14 100644
--- a/engine/apps/metrics_exporter/metrics_collectors.py
+++ b/engine/apps/metrics_exporter/metrics_collectors.py
@@ -2,9 +2,10 @@ import logging
 import re
 import typing

+from django.conf import settings
 from django.core.cache import cache
 from prometheus_client import CollectorRegistry
-from prometheus_client.metrics_core import CounterMetricFamily, GaugeMetricFamily, HistogramMetricFamily
+from prometheus_client.metrics_core import CounterMetricFamily, GaugeMetricFamily, HistogramMetricFamily, Metric

 from apps.alerts.constants import AlertGroupState
 from apps.metrics_exporter.constants import (
@@ -26,6 +27,11 @@ from apps.metrics_exporter.helpers import (
     get_organization_ids,
 )
 from apps.metrics_exporter.tasks import start_calculate_and_cache_metrics, start_recalculation_for_new_metric
+from settings.base import (
+    METRIC_ALERT_GROUPS_RESPONSE_TIME_NAME,
+    METRIC_ALERT_GROUPS_TOTAL_NAME,
+    METRIC_USER_WAS_NOTIFIED_OF_ALERT_GROUPS_NAME,
+)

 application_metrics_registry = CollectorRegistry()

@@ -42,6 +48,8 @@ RE_USER_WAS_NOTIFIED_OF_ALERT_GROUPS = re.compile(_RE_BASE_PATTERN.format(USER_W

 # https://github.com/prometheus/client_python#custom-collectors
 class ApplicationMetricsCollector:
+    GetMetricFunc = typing.Callable[[set], typing.Tuple[Metric, set]]
+
     def __init__(self):
         self._buckets = (60, 300, 600, 3600, "+Inf")
         self._stack_labels = [
@@ -61,29 +69,33 @@ class ApplicationMetricsCollector:
         self._user_labels = ["username"] + self._stack_labels

     def collect(self):
+        """
+        Collects metrics listed in METRICS_TO_COLLECT settings var
+        """
+        metrics_map: typing.Dict[str, ApplicationMetricsCollector.GetMetricFunc] = {
+            METRIC_ALERT_GROUPS_TOTAL_NAME: self._get_alert_groups_total_metric,
+            METRIC_ALERT_GROUPS_RESPONSE_TIME_NAME: self._get_response_time_metric,
+            METRIC_USER_WAS_NOTIFIED_OF_ALERT_GROUPS_NAME: self._get_user_was_notified_of_alert_groups_metric,
+        }
         org_ids = set(get_organization_ids())
+        metrics: typing.List[Metric] = []
+        missing_org_ids: typing.Set[int] = set()

-        # alert groups total metric: gauge
-        alert_groups_total, missing_org_ids_1 = self._get_alert_groups_total_metric(org_ids)
-        # alert groups response time metrics: histogram
-        alert_groups_response_time_seconds, missing_org_ids_2 = self._get_response_time_metric(org_ids)
-        # user was notified of alert groups metrics: counter
-        user_was_notified, missing_org_ids_3 = self._get_user_was_notified_of_alert_groups_metric(org_ids)
-
-        # This part is used for releasing new metrics to avoid recalculation for every metric.
-        # Uncomment with metric name when needed.
-        # # update new metric gradually
-        # missing_org_ids_3 = self._update_new_metric(USER_WAS_NOTIFIED_OF_ALERT_GROUPS, org_ids, missing_org_ids_3)
+        for metric_name in settings.METRICS_TO_COLLECT:
+            if metric_name not in metrics_map:
+                logger.error(f"Invalid metric name {metric_name} in `METRICS_TO_COLLECT` var")
+                continue
+            metric, missing_org_ids_temp = metrics_map[metric_name](org_ids)
+            metrics.append(metric)
+            missing_org_ids |= missing_org_ids_temp

         # check for orgs missing any of the metrics or needing a refresh, start recalculation task for missing org ids
-        missing_org_ids = missing_org_ids_1 | missing_org_ids_2 | missing_org_ids_3
         self.recalculate_cache_for_missing_org_ids(org_ids, missing_org_ids)

-        yield alert_groups_total
-        yield alert_groups_response_time_seconds
-        yield user_was_notified
+        for metric in metrics:
+            yield metric

-    def _get_alert_groups_total_metric(self, org_ids):
+    def _get_alert_groups_total_metric(self, org_ids: set[int]) -> typing.Tuple[Metric, set[int]]:
         alert_groups_total = GaugeMetricFamily(
             ALERT_GROUPS_TOTAL, "All alert groups", labels=self._integration_labels_with_state
         )
@@ -98,15 +110,7 @@ class ApplicationMetricsCollector:
                     logger.warning(f"Deleting stale metrics cache for {org_key}")
                     cache.delete(org_key)
                     break
-                # Labels values should have the same order as _integration_labels_with_state
-                labels_values = [
-                    integration_data["integration_name"],  # integration
-                    integration_data["team_name"],  # team
-                    integration_data["org_id"],  # grafana org_id
-                    integration_data["slug"],  # grafana instance slug
-                    integration_data["id"],  # grafana instance id
-                ]
-                labels_values = list(map(str, labels_values))
+                labels_values: typing.List[str] = self._get_labels_from_integration_data(integration_data)
                 for service_name in integration_data["services"]:
                     for state in AlertGroupState:
                         alert_groups_total.add_metric(
@@ -118,7 +122,25 @@ class ApplicationMetricsCollector:
         missing_org_ids = org_ids - processed_org_ids
         return alert_groups_total,
missing_org_ids - def _get_response_time_metric(self, org_ids): + def _get_user_was_notified_of_alert_groups_metric(self, org_ids: set[int]) -> typing.Tuple[Metric, set[int]]: + user_was_notified = CounterMetricFamily( + USER_WAS_NOTIFIED_OF_ALERT_GROUPS, "Number of alert groups user was notified of", labels=self._user_labels + ) + processed_org_ids = set() + user_was_notified_keys = [get_metric_user_was_notified_of_alert_groups_key(org_id) for org_id in org_ids] + org_users: typing.Dict[str, typing.Dict[int, UserWasNotifiedOfAlertGroupsMetricsDict]] = cache.get_many( + user_was_notified_keys + ) + for org_key, users in org_users.items(): + for _, user_data in users.items(): + labels_values: typing.List[str] = self._get_labels_from_user_data(user_data) + user_was_notified.add_metric(labels_values, user_data["counter"]) + org_id_from_key = RE_USER_WAS_NOTIFIED_OF_ALERT_GROUPS.match(org_key).groups()[0] + processed_org_ids.add(int(org_id_from_key)) + missing_org_ids = org_ids - processed_org_ids + return user_was_notified, missing_org_ids + + def _get_response_time_metric(self, org_ids: set[int]) -> typing.Tuple[Metric, set[int]]: alert_groups_response_time_seconds = HistogramMetricFamily( ALERT_GROUPS_RESPONSE_TIME, "Users response time to alert groups in 7 days (seconds)", @@ -135,21 +157,12 @@ class ApplicationMetricsCollector: logger.warning(f"Deleting stale metrics cache for {org_key}") cache.delete(org_key) break - # Labels values should have the same order as _integration_labels - labels_values = [ - integration_data["integration_name"], # integration - integration_data["team_name"], # team - integration_data["org_id"], # grafana org_id - integration_data["slug"], # grafana instance slug - integration_data["id"], # grafana instance id - ] - labels_values = list(map(str, labels_values)) - + labels_values: typing.List[str] = self._get_labels_from_integration_data(integration_data) for service_name, response_time in integration_data["services"].items(): if not 
response_time: continue - buckets, sum_value = self.get_buckets_with_sum(response_time) - buckets = sorted(list(buckets.items()), key=lambda x: float(x[0])) + buckets_values, sum_value = self._get_buckets_with_sum(response_time) + buckets: list = sorted(list(buckets_values.items()), key=lambda x: float(x[0])) alert_groups_response_time_seconds.add_metric( labels_values + [service_name], buckets=buckets, @@ -160,55 +173,7 @@ class ApplicationMetricsCollector: missing_org_ids = org_ids - processed_org_ids return alert_groups_response_time_seconds, missing_org_ids - def _get_user_was_notified_of_alert_groups_metric(self, org_ids): - user_was_notified = CounterMetricFamily( - USER_WAS_NOTIFIED_OF_ALERT_GROUPS, "Number of alert groups user was notified of", labels=self._user_labels - ) - processed_org_ids = set() - user_was_notified_keys = [get_metric_user_was_notified_of_alert_groups_key(org_id) for org_id in org_ids] - org_users: typing.Dict[str, typing.Dict[int, UserWasNotifiedOfAlertGroupsMetricsDict]] = cache.get_many( - user_was_notified_keys - ) - for org_key, users in org_users.items(): - for _, user_data in users.items(): - # Labels values should have the same order as _user_labels - labels_values = [ - user_data["user_username"], # username - user_data["org_id"], # grafana org_id - user_data["slug"], # grafana instance slug - user_data["id"], # grafana instance id - ] - labels_values = list(map(str, labels_values)) - user_was_notified.add_metric(labels_values, user_data["counter"]) - org_id_from_key = RE_USER_WAS_NOTIFIED_OF_ALERT_GROUPS.match(org_key).groups()[0] - processed_org_ids.add(int(org_id_from_key)) - missing_org_ids = org_ids - processed_org_ids - return user_was_notified, missing_org_ids - - def _update_new_metric(self, metric_name, org_ids, missing_org_ids): - """ - This method is used for new metrics to calculate metrics gradually and avoid force recalculation for all orgs - """ - calculation_started_key = 
get_metric_calculation_started_key(metric_name) - is_calculation_started = cache.get(calculation_started_key) - if len(missing_org_ids) == len(org_ids) or is_calculation_started: - missing_org_ids = set() - if not is_calculation_started: - start_recalculation_for_new_metric.apply_async((metric_name,)) - return missing_org_ids - - def recalculate_cache_for_missing_org_ids(self, org_ids, missing_org_ids): - cache_timer_for_org_keys = [get_metrics_cache_timer_key(org_id) for org_id in org_ids] - cache_timers_for_org = cache.get_many(cache_timer_for_org_keys) - recalculate_orgs: typing.List[RecalculateOrgMetricsDict] = [] - for org_id in org_ids: - force_task = org_id in missing_org_ids - if force_task or not cache_timers_for_org.get(get_metrics_cache_timer_key(org_id)): - recalculate_orgs.append({"organization_id": org_id, "force": force_task}) - if recalculate_orgs: - start_calculate_and_cache_metrics.apply_async((recalculate_orgs,)) - - def get_buckets_with_sum(self, values): + def _get_buckets_with_sum(self, values: typing.List[int]) -> typing.Tuple[typing.Dict[str, float], int]: """Put values in correct buckets and count values sum""" buckets_values = {str(key): 0 for key in self._buckets} sum_value = 0 @@ -219,5 +184,51 @@ class ApplicationMetricsCollector: sum_value += value return buckets_values, sum_value + def _get_labels_from_integration_data( + self, integration_data: AlertGroupsTotalMetricsDict | AlertGroupsResponseTimeMetricsDict + ) -> typing.List[str]: + # Labels values should have the same order as _integration_labels_with_state + labels_values = [ + integration_data["integration_name"], # integration + integration_data["team_name"], # team + integration_data["org_id"], # grafana org_id + integration_data["slug"], # grafana instance slug + integration_data["id"], # grafana instance id + ] + return list(map(str, labels_values)) -application_metrics_registry.register(ApplicationMetricsCollector()) + def _get_labels_from_user_data(self, user_data: 
UserWasNotifiedOfAlertGroupsMetricsDict) -> typing.List[str]: + # Labels values should have the same order as _user_labels + labels_values = [ + user_data["user_username"], # username + user_data["org_id"], # grafana org_id + user_data["slug"], # grafana instance slug + user_data["id"], # grafana instance id + ] + return list(map(str, labels_values)) + + def _update_new_metric(self, metric_name: str, org_ids: set[int], missing_org_ids: set[int]) -> set[int]: + """ + This method is used for new metrics to calculate metrics gradually and avoid force recalculation for all orgs + Add to collect() method the following code with metric name when needed: + # update new metric gradually + missing_org_ids_X = self._update_new_metric(, org_ids, missing_org_ids_X) + """ + calculation_started_key = get_metric_calculation_started_key(metric_name) + is_calculation_started = cache.get(calculation_started_key) + if len(missing_org_ids) == len(org_ids) or is_calculation_started: + missing_org_ids = set() + if not is_calculation_started: + start_recalculation_for_new_metric.apply_async((metric_name,)) + return missing_org_ids + + def recalculate_cache_for_missing_org_ids(self, org_ids: set[int], missing_org_ids: set[int]) -> None: + cache_timer_for_org_keys = [get_metrics_cache_timer_key(org_id) for org_id in org_ids] + cache_timers_for_org = cache.get_many(cache_timer_for_org_keys) + recalculate_orgs: typing.List[RecalculateOrgMetricsDict] = [] + for org_id in org_ids: + force_task = org_id in missing_org_ids + if force_task or not cache_timers_for_org.get(get_metrics_cache_timer_key(org_id)): + recalculate_orgs.append({"organization_id": org_id, "force": force_task}) + if recalculate_orgs: + start_calculate_and_cache_metrics.apply_async((recalculate_orgs,)) diff --git a/engine/apps/metrics_exporter/tests/test_metrics_collectors.py b/engine/apps/metrics_exporter/tests/test_metrics_collectors.py index 640ac57b..4dcd1a32 100644 --- 
a/engine/apps/metrics_exporter/tests/test_metrics_collectors.py +++ b/engine/apps/metrics_exporter/tests/test_metrics_collectors.py @@ -15,16 +15,44 @@ from apps.metrics_exporter.constants import ( from apps.metrics_exporter.helpers import get_metric_alert_groups_response_time_key, get_metric_alert_groups_total_key from apps.metrics_exporter.metrics_collectors import ApplicationMetricsCollector from apps.metrics_exporter.tests.conftest import METRICS_TEST_SERVICE_NAME +from settings.base import ( + METRIC_ALERT_GROUPS_RESPONSE_TIME_NAME, + METRIC_ALERT_GROUPS_TOTAL_NAME, + METRIC_USER_WAS_NOTIFIED_OF_ALERT_GROUPS_NAME, +) # redis cluster usage modifies the cache keys for some operations, so we need to test both cases # see common.cache.ensure_cache_key_allocates_to_the_same_hash_slot for more details @pytest.mark.parametrize("use_redis_cluster", [True, False]) +@pytest.mark.parametrize( + "metric_base_names_and_metric_names", + [ + [ + [METRIC_ALERT_GROUPS_TOTAL_NAME, METRIC_USER_WAS_NOTIFIED_OF_ALERT_GROUPS_NAME], + [ALERT_GROUPS_TOTAL, USER_WAS_NOTIFIED_OF_ALERT_GROUPS], + ], + [[METRIC_ALERT_GROUPS_RESPONSE_TIME_NAME], [ALERT_GROUPS_RESPONSE_TIME]], + [ + [ + METRIC_ALERT_GROUPS_TOTAL_NAME, + METRIC_ALERT_GROUPS_RESPONSE_TIME_NAME, + METRIC_USER_WAS_NOTIFIED_OF_ALERT_GROUPS_NAME, + ], + [ALERT_GROUPS_TOTAL, USER_WAS_NOTIFIED_OF_ALERT_GROUPS, ALERT_GROUPS_RESPONSE_TIME], + ], + ], +) @patch("apps.metrics_exporter.metrics_collectors.get_organization_ids", return_value=[1]) @patch("apps.metrics_exporter.metrics_collectors.start_calculate_and_cache_metrics.apply_async") @pytest.mark.django_db -def test_application_metrics_collector( - mocked_org_ids, mocked_start_calculate_and_cache_metrics, mock_cache_get_metrics_for_collector, use_redis_cluster +def test_application_metrics_collectors( + mocked_org_ids, + mocked_start_calculate_and_cache_metrics, + mock_cache_get_metrics_for_collector, + use_redis_cluster, + metric_base_names_and_metric_names, + settings, ): 
"""Test that ApplicationMetricsCollector generates expected metrics from cache""" @@ -41,10 +69,16 @@ def test_application_metrics_collector( return labels with override_settings(USE_REDIS_CLUSTER=use_redis_cluster): + settings.METRICS_TO_COLLECT = metric_base_names_and_metric_names[0] collector = ApplicationMetricsCollector() test_metrics_registry = CollectorRegistry() test_metrics_registry.register(collector) - for metric in test_metrics_registry.collect(): + + metrics = [i for i in test_metrics_registry.collect()] + assert len(metrics) == len(metric_base_names_and_metric_names[1]) + + for metric in metrics: + assert metric.name in metric_base_names_and_metric_names[1] if metric.name == ALERT_GROUPS_TOTAL: # 2 integrations with labels for each alert group state per service assert len(metric.samples) == len(AlertGroupState) * 3 # 2 from 1st integration and 1 from 2nd @@ -71,6 +105,8 @@ def test_application_metrics_collector( elif metric.name == USER_WAS_NOTIFIED_OF_ALERT_GROUPS: # metric with labels for each notified user assert len(metric.samples) == 1 + else: + raise AssertionError result = generate_latest(test_metrics_registry).decode("utf-8") assert result is not None assert mocked_org_ids.called @@ -91,7 +127,9 @@ def test_application_metrics_collector_with_old_metrics_without_services( collector = ApplicationMetricsCollector() test_metrics_registry = CollectorRegistry() test_metrics_registry.register(collector) - for metric in test_metrics_registry.collect(): + metrics = [i for i in test_metrics_registry.collect()] + assert len(metrics) == 3 + for metric in metrics: if metric.name == ALERT_GROUPS_TOTAL: alert_groups_total_metrics_cache = cache.get(get_metric_alert_groups_total_key(org_id)) assert alert_groups_total_metrics_cache and "services" not in alert_groups_total_metrics_cache[1] @@ -106,6 +144,8 @@ def test_application_metrics_collector_with_old_metrics_without_services( elif metric.name == USER_WAS_NOTIFIED_OF_ALERT_GROUPS: # metric with labels for 
each notified user assert len(metric.samples) == 1 + else: + raise AssertionError result = generate_latest(test_metrics_registry).decode("utf-8") assert result is not None assert mocked_org_ids.called diff --git a/engine/settings/base.py b/engine/settings/base.py index 5c77c080..9aace2df 100644 --- a/engine/settings/base.py +++ b/engine/settings/base.py @@ -107,6 +107,17 @@ CHATOPS_SIGNING_SECRET = os.environ.get("CHATOPS_SIGNING_SECRET", None) # Prometheus exporter metrics endpoint auth PROMETHEUS_EXPORTER_SECRET = os.environ.get("PROMETHEUS_EXPORTER_SECRET") +# Application metric names without prefixes +METRIC_ALERT_GROUPS_TOTAL_NAME = "alert_groups_total" +METRIC_ALERT_GROUPS_RESPONSE_TIME_NAME = "alert_groups_response_time" +METRIC_USER_WAS_NOTIFIED_OF_ALERT_GROUPS_NAME = "user_was_notified_of_alert_groups" +METRICS_ALL = [ + METRIC_ALERT_GROUPS_TOTAL_NAME, + METRIC_ALERT_GROUPS_RESPONSE_TIME_NAME, + METRIC_USER_WAS_NOTIFIED_OF_ALERT_GROUPS_NAME, +] +# List of metrics to collect. Collect all available application metrics by default +METRICS_TO_COLLECT = os.environ.get("METRICS_TO_COLLECT", METRICS_ALL) # Database From b3119e526656a9e2fe15d09d33281bb314d60dda Mon Sep 17 00:00:00 2001 From: Innokentii Konstantinov Date: Tue, 13 Aug 2024 15:57:17 +0800 Subject: [PATCH 07/15] chore: return 422 on slack step not found (#4810) # What this PR does Return 422 instead of 500 when no handler is found for an incoming Slack event, gracefully skipping events OnCall is not subscribed to. 
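The change boils down to the following dispatch pattern (a hedged sketch with hypothetical names, not the actual `SlackEventApiEndpointView`): when no registered step matches the payload, log a warning and answer 422 instead of raising, which DRF would otherwise surface as a 500.

```python
import logging

logger = logging.getLogger(__name__)

def route_slack_event(payload, steps):
    """Dispatch a Slack event payload to the first matching step handler.

    Returns an HTTP status code: 200 when a step handled the event,
    422 when no step matched (previously this raised, yielding a 500).
    """
    for matches, handle in steps:
        if matches(payload):
            handle(payload)
            return 200
    logger.warning("Step is undefined: %s", payload)
    return 422
```

Returning a well-formed 4xx response keeps unhandled events from showing up as server errors in monitoring.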
--- engine/apps/slack/views.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/engine/apps/slack/views.py b/engine/apps/slack/views.py index 1812927a..2c7aff6b 100644 --- a/engine/apps/slack/views.py +++ b/engine/apps/slack/views.py @@ -433,8 +433,8 @@ class SlackEventApiEndpointView(APIView): step_was_found = True if not step_was_found: - raise Exception("Step is undefined" + str(payload)) - + logger.warning("SlackEventApiEndpointView: Step is undefined" + str(payload)) + return Response(status=422) return Response(status=200) @staticmethod From 18726432af7b5c4806438ec325c048fad83dd671 Mon Sep 17 00:00:00 2001 From: Yulya Artyukhina Date: Tue, 13 Aug 2024 11:24:30 +0200 Subject: [PATCH 08/15] Reduce the number of DB requests on the `alert_receive_channel` internal API endpoint (#4805) # What this PR does Reduces the number of DB requests on the `alert_receive_channel` internal API endpoint, from Screenshot 2024-08-12 at 14 55 05 to Screenshot 2024-08-12 at 14 55 13 ## Which issue(s) this PR closes Related to [issue link here] ## Checklist - [ ] Unit, integration, and e2e (if applicable) tests updated - [x] Documentation added (or `pr:no public docs` PR label added if not required) - [x] Added the relevant release notes label (see labels prefixed w/ `release:`). These labels dictate how your PR will show up in the autogenerated release notes. 
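The query-count reduction can be sketched independently of Django (with a hypothetical `ChannelFilter` stand-in): rather than issuing a `COUNT(DISTINCT escalation_chain)` query per integration, the serializer now dedupes escalation chain ids in Python over channel filters that are already prefetched.

```python
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class ChannelFilter:
    # Stand-in for the prefetched related object; only the FK id matters here
    escalation_chain_id: Optional[int]

def connected_escalation_chains_count(channel_filters: Iterable[ChannelFilter]) -> int:
    """Count distinct non-null escalation chains without extra DB queries."""
    return len({
        cf.escalation_chain_id
        for cf in channel_filters
        if cf.escalation_chain_id is not None
    })
```

This trades a small amount of in-process work for one fewer query per serialized integration, which adds up on list endpoints.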
--- .../apps/api/serializers/alert_receive_channel.py | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/engine/apps/api/serializers/alert_receive_channel.py b/engine/apps/api/serializers/alert_receive_channel.py index 563f4eb1..33bf240f 100644 --- a/engine/apps/api/serializers/alert_receive_channel.py +++ b/engine/apps/api/serializers/alert_receive_channel.py @@ -12,7 +12,6 @@ from rest_framework.fields import SerializerMethodField from apps.alerts.grafana_alerting_sync_manager.grafana_alerting_sync import GrafanaAlertingSyncManager from apps.alerts.models import AlertReceiveChannel -from apps.alerts.models.channel_filter import ChannelFilter from apps.base.messaging import get_messaging_backends from apps.integrations.legacy_prefix import has_legacy_prefix from apps.labels.models import LabelKeyCache, LabelValueCache @@ -277,7 +276,7 @@ class AlertReceiveChannelSerializer( # With using of select_related ORM builds strange join # which leads to incorrect heartbeat-alert_receive_channel binding in result PREFETCH_RELATED = ["channel_filters", "integration_heartbeat", "labels", "labels__key", "labels__value"] - SELECT_RELATED = ["organization", "author"] + SELECT_RELATED = ["organization", "author", "team"] class Meta: model = AlertReceiveChannel @@ -490,11 +489,12 @@ class AlertReceiveChannelSerializer( return has_legacy_prefix(obj.integration) def get_connected_escalations_chains_count(self, obj: "AlertReceiveChannel") -> int: - return ( - ChannelFilter.objects.filter(alert_receive_channel=obj, escalation_chain__isnull=False) - .values("escalation_chain") - .distinct() - .count() + return len( + set( + channel_filter.escalation_chain_id + for channel_filter in obj.channel_filters.all() + if channel_filter.escalation_chain_id is not None + ) ) From 66f2fafce9a24635505e18663fe0ad6d70c4cec9 Mon Sep 17 00:00:00 2001 From: Levente Balogh Date: Tue, 13 Aug 2024 12:18:20 +0200 Subject: [PATCH 09/15] Feature: Use ui extension hooks where 
available (#4765) **What this PR does / why we need it:** This PR updates usage of plugin extensions APIs to take advantage of the new hooks API where available. In older versions we fallback to the currently used hook. This prevents an issue where due to the reactive registry the older APIs don't receive the full list of extensions. It also paves the way for frontend performance improvements in Grafana core. **Which issue(s) this PR fixes:** Related: https://github.com/grafana/grafana-community-team/issues/174 **Special notes for your reviewer:** We would really appreciate some assistance in testing this PR in both the latest version of Grafana 11 and the minimum supported Grafana version. --------- Co-authored-by: Dominik --- grafana-plugin/.eslintrc.js | 2 +- grafana-plugin/package.json | 8 +- .../ExtensionLinkDropdown.tsx | 36 +- .../__snapshots__/AddResponders.test.tsx.snap | 94 +- .../AddRespondersPopup.test.tsx.snap | 25 +- .../NotificationPoliciesSelect.test.tsx.snap | 16 +- .../__snapshots__/TeamResponder.test.tsx.snap | 8 +- .../__snapshots__/UserResponder.test.tsx.snap | 16 +- .../MobileAppConnection.test.tsx.snap | 31 +- .../DisconnectButton.test.tsx.snap | 1 + .../LinkLoginButton.test.tsx.snap | 1 + .../PluginConfigPage.test.tsx.snap | 10 + .../ConfigurationForm.test.tsx.snap | 7 +- ...veCurrentConfigurationButton.test.tsx.snap | 4 +- grafana-plugin/yarn.lock | 1180 +++++++++++------ 15 files changed, 915 insertions(+), 524 deletions(-) diff --git a/grafana-plugin/.eslintrc.js b/grafana-plugin/.eslintrc.js index a4d297ec..7d13aa6b 100644 --- a/grafana-plugin/.eslintrc.js +++ b/grafana-plugin/.eslintrc.js @@ -12,7 +12,7 @@ module.exports = { { files: ['src/**/*.{ts,tsx}'], rules: { - 'deprecation/deprecation': 'warn', + 'deprecation/deprecation': 'off', }, parserOptions: { project: './tsconfig.json', diff --git a/grafana-plugin/package.json b/grafana-plugin/package.json index 5bc1b554..178cf392 100644 --- a/grafana-plugin/package.json +++ 
b/grafana-plugin/package.json @@ -135,14 +135,14 @@ "@dnd-kit/sortable": "^7.0.2", "@dnd-kit/utilities": "^3.2.1", "@emotion/css": "11.10.6", - "@grafana/data": "^10.2.3", + "@grafana/data": "^11.1.3", "@grafana/faro-web-sdk": "^1.4.2", "@grafana/faro-web-tracing": "^1.4.2", "@grafana/labels": "~1.5.1", - "@grafana/runtime": "^10.2.2", + "@grafana/runtime": "^11.1.3", "@grafana/scenes": "^1.28.0", - "@grafana/schema": "^10.2.2", - "@grafana/ui": "10.2.0", + "@grafana/schema": "^11.1.3", + "@grafana/ui": "^11.1.3", "@lifeomic/attempt": "^3.0.3", "array-move": "^4.0.0", "axios": "^1.6.7", diff --git a/grafana-plugin/src/components/ExtensionLinkMenu/ExtensionLinkDropdown.tsx b/grafana-plugin/src/components/ExtensionLinkMenu/ExtensionLinkDropdown.tsx index a53060a4..16d077f8 100644 --- a/grafana-plugin/src/components/ExtensionLinkMenu/ExtensionLinkDropdown.tsx +++ b/grafana-plugin/src/components/ExtensionLinkMenu/ExtensionLinkDropdown.tsx @@ -1,7 +1,11 @@ import React, { ReactElement, useMemo, useState } from 'react'; import { PluginExtensionLink } from '@grafana/data'; -import { getPluginLinkExtensions } from '@grafana/runtime'; +import { + type GetPluginExtensionsOptions, + getPluginLinkExtensions, + usePluginLinks as originalUsePluginLinks, +} from '@grafana/runtime'; import { Dropdown, ToolbarButton } from '@grafana/ui'; import { OnCallPluginExtensionPoints } from 'types'; @@ -16,6 +20,9 @@ interface Props { grafanaIncidentId: string | null; } +// `usePluginLinks()` is only available in Grafana>=11.1.0, so we have a fallback for older versions +const usePluginLinks = originalUsePluginLinks === undefined ? 
usePluginLinksFallback : originalUsePluginLinks; + export function ExtensionLinkDropdown({ incident, extensionPointId, @@ -24,15 +31,15 @@ export function ExtensionLinkDropdown({ }: Props): ReactElement | null { const [isOpen, setIsOpen] = useState(false); const context = useExtensionPointContext(incident); - const extensions = useExtensionLinks(context, extensionPointId); + const { links, isLoading } = usePluginLinks({ context, extensionPointId, limitPerPlugin: 3 }); - if (extensions.length === 0) { + if (links.length === 0 || isLoading) { return null; } const menu = ( @@ -51,24 +58,31 @@ function useExtensionPointContext(incident: ApiSchemas['AlertGroup']): PluginExt return { alertGroup: incident }; } -function useExtensionLinks( - context: T, - extensionPointId: OnCallPluginExtensionPoints -): PluginExtensionLink[] { +function usePluginLinksFallback({ context, extensionPointId, limitPerPlugin }: GetPluginExtensionsOptions): { + links: PluginExtensionLink[]; + isLoading: boolean; +} { return useMemo(() => { // getPluginLinkExtensions is available in Grafana>=10.0, // so will be undefined in earlier versions. Just return an // empty list of extensions in this case. 
if (getPluginLinkExtensions === undefined) { - return []; + return { + links: [], + isLoading: false, + }; } + const { extensions } = getPluginLinkExtensions({ extensionPointId, context, - limitPerPlugin: 3, + limitPerPlugin, }); - return extensions; + return { + links: extensions, + isLoading: false, + }; }, [context]); } diff --git a/grafana-plugin/src/containers/AddResponders/__snapshots__/AddResponders.test.tsx.snap b/grafana-plugin/src/containers/AddResponders/__snapshots__/AddResponders.test.tsx.snap index 55a46e04..be1e17ea 100644 --- a/grafana-plugin/src/containers/AddResponders/__snapshots__/AddResponders.test.tsx.snap +++ b/grafana-plugin/src/containers/AddResponders/__snapshots__/AddResponders.test.tsx.snap @@ -30,12 +30,10 @@ exports[`AddResponders should properly display the add responders button when hi >
+ />
@@ -352,6 +340,7 @@ exports[`AddResponders should render selected team and users properly 1`] = ` aria-live="polite" aria-relevant="additions text" class="css-1f43avz-a11yText-A11yText" + role="log" />
-
-
+ />
@@ -395,15 +381,11 @@ exports[`AddResponders should render selected team and users properly 1`] = ` > + /> @@ -467,6 +449,7 @@ exports[`AddResponders should render selected team and users properly 1`] = ` aria-live="polite" aria-relevant="additions text" class="css-1f43avz-a11yText-A11yText" + role="log" />
-
-
+ />
@@ -509,15 +489,11 @@ exports[`AddResponders should render selected team and users properly 1`] = ` > + /> @@ -581,6 +557,7 @@ exports[`AddResponders should render selected team and users properly 1`] = ` aria-live="polite" aria-relevant="additions text" class="css-1f43avz-a11yText-A11yText" + role="log" />
-
-
+ />
@@ -623,15 +597,11 @@ exports[`AddResponders should render selected team and users properly 1`] = ` > + /> @@ -640,28 +610,24 @@ exports[`AddResponders should render selected team and users properly 1`] = `
-
-
+ />
-
-
+ />
diff --git a/grafana-plugin/src/containers/AddResponders/parts/AddRespondersPopup/__snapshots__/AddRespondersPopup.test.tsx.snap b/grafana-plugin/src/containers/AddResponders/parts/AddRespondersPopup/__snapshots__/AddRespondersPopup.test.tsx.snap index d9a80841..e27f7058 100644 --- a/grafana-plugin/src/containers/AddResponders/parts/AddRespondersPopup/__snapshots__/AddRespondersPopup.test.tsx.snap +++ b/grafana-plugin/src/containers/AddResponders/parts/AddRespondersPopup/__snapshots__/AddRespondersPopup.test.tsx.snap @@ -22,11 +22,7 @@ exports[`AddRespondersPopup it shows a loading message initially 1`] = ` />
-
-
+ />
diff --git a/grafana-plugin/src/containers/AddResponders/parts/NotificationPoliciesSelect/__snapshots__/NotificationPoliciesSelect.test.tsx.snap b/grafana-plugin/src/containers/AddResponders/parts/NotificationPoliciesSelect/__snapshots__/NotificationPoliciesSelect.test.tsx.snap index a991d610..bea4a690 100644 --- a/grafana-plugin/src/containers/AddResponders/parts/NotificationPoliciesSelect/__snapshots__/NotificationPoliciesSelect.test.tsx.snap +++ b/grafana-plugin/src/containers/AddResponders/parts/NotificationPoliciesSelect/__snapshots__/NotificationPoliciesSelect.test.tsx.snap @@ -14,6 +14,7 @@ exports[`NotificationPoliciesSelect disabled state 1`] = ` aria-live="polite" aria-relevant="additions text" class="css-1f43avz-a11yText-A11yText" + role="log" />
-
-
+ />
@@ -66,6 +64,7 @@ exports[`NotificationPoliciesSelect it renders properly 1`] = ` aria-live="polite" aria-relevant="additions text" class="css-1f43avz-a11yText-A11yText" + role="log" />
-
-
+ />
diff --git a/grafana-plugin/src/containers/AddResponders/parts/TeamResponder/__snapshots__/TeamResponder.test.tsx.snap b/grafana-plugin/src/containers/AddResponders/parts/TeamResponder/__snapshots__/TeamResponder.test.tsx.snap index 1ae6010d..2e38c662 100644 --- a/grafana-plugin/src/containers/AddResponders/parts/TeamResponder/__snapshots__/TeamResponder.test.tsx.snap +++ b/grafana-plugin/src/containers/AddResponders/parts/TeamResponder/__snapshots__/TeamResponder.test.tsx.snap @@ -43,15 +43,11 @@ exports[`TeamResponder it renders data properly 1`] = ` > + /> diff --git a/grafana-plugin/src/containers/AddResponders/parts/UserResponder/__snapshots__/UserResponder.test.tsx.snap b/grafana-plugin/src/containers/AddResponders/parts/UserResponder/__snapshots__/UserResponder.test.tsx.snap index 9e79722e..2878310a 100644 --- a/grafana-plugin/src/containers/AddResponders/parts/UserResponder/__snapshots__/UserResponder.test.tsx.snap +++ b/grafana-plugin/src/containers/AddResponders/parts/UserResponder/__snapshots__/UserResponder.test.tsx.snap @@ -60,6 +60,7 @@ exports[`UserResponder it renders data properly 1`] = ` aria-live="polite" aria-relevant="additions text" class="css-1f43avz-a11yText-A11yText" + role="log" />
-
-
+ />
@@ -100,15 +98,11 @@ exports[`UserResponder it renders data properly 1`] = ` > + /> diff --git a/grafana-plugin/src/containers/MobileAppConnection/__snapshots__/MobileAppConnection.test.tsx.snap b/grafana-plugin/src/containers/MobileAppConnection/__snapshots__/MobileAppConnection.test.tsx.snap index af350393..0951e496 100644 --- a/grafana-plugin/src/containers/MobileAppConnection/__snapshots__/MobileAppConnection.test.tsx.snap +++ b/grafana-plugin/src/containers/MobileAppConnection/__snapshots__/MobileAppConnection.test.tsx.snap @@ -24,14 +24,9 @@ exports[`MobileAppConnection it shows a QR code if the app isn't already connect Loading...
- -
+ />
- -
+ />
- -
+ />