oncall-engine/engine/common/custom_celery_tasks/dedicated_queue_retry_task.py
Joey Orlando f85cc6d33b
add more logging on celery task retry (#3695)
# What this PR does

This is a follow up to https://github.com/grafana/oncall/pull/3677.

It appears that when a task uses the [`autoretry_for`
kwarg](https://docs.celeryq.dev/en/stable/userguide/tasks.html#automatic-retry-for-known-exceptions)
in the task decorator, the exception is not logged in `on_failure` as
would be expected. Now, when retrying, we log a message plus any
exception/stack-trace information.
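As a minimal, celery-free sketch of why passing `exc_info=exc` matters: the stdlib `logging` module renders the full traceback alongside the message when an exception object is supplied. The `StringIO` handler and `flaky` function below are illustrative stand-ins, not part of the PR.

```python
import io
import logging

# Capture log output in memory so it can be inspected; a real celery
# task logger would write to the worker's configured handlers instead.
buffer = io.StringIO()
logger = logging.getLogger("retry_demo")
logger.addHandler(logging.StreamHandler(buffer))
logger.setLevel(logging.WARNING)

def flaky():
    raise ValueError("boom")

try:
    flaky()
except ValueError as exc:
    # exc_info=exc makes logging emit the message AND the stack trace.
    logger.warning("Retrying celery task", exc_info=exc)

output = buffer.getvalue()
# the buffer now holds both the message and the full traceback
```

Without `exc_info`, only the bare "Retrying celery task" line would appear, which is exactly the gap this PR is closing.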

## Checklist

- [x] Unit, integration, and e2e (if applicable) tests updated
- [x] Documentation added (or `pr:no public docs` PR label added if not
required)
- [x] `CHANGELOG.md` updated (or `pr:no changelog` PR label added if not
required)
2024-01-16 07:13:16 -05:00

from celery import shared_task
from celery.utils.log import get_task_logger

from common.custom_celery_tasks.log_exception_on_failure_task import LogExceptionOnFailureTask

RETRY_QUEUE = "retry"

logger = get_task_logger(__name__)


class DedicatedQueueRetryTask(LogExceptionOnFailureTask):
    """
    Custom task that sends all retried tasks to a dedicated retry queue.
    This is needed to avoid overloading the regular (high, medium, low) queues with retried tasks.
    """

    def retry(
        self, args=None, kwargs=None, exc=None, throw=True, eta=None, countdown=None, max_retries=None, **options
    ):
        logger.warning("Retrying celery task", exc_info=exc)
        # Call retry with the dedicated retry queue
        return super().retry(
            args=args,
            kwargs=kwargs,
            exc=exc,
            throw=throw,
            eta=eta,
            countdown=countdown,
            max_retries=max_retries,
            queue=RETRY_QUEUE,
            **options,
        )


def shared_dedicated_queue_retry_task(*args, **kwargs):
    return shared_task(*args, base=DedicatedQueueRetryTask, **kwargs)
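The core trick here is overriding `retry` to force a `queue` kwarg before delegating to the base class. A minimal, celery-free sketch of that override pattern follows; `BaseTask` and its `retry` signature are hypothetical stand-ins for `celery.Task`, kept only to show the mechanics.

```python
RETRY_QUEUE = "retry"

class BaseTask:
    """Hypothetical stand-in for celery.Task."""

    def retry(self, args=None, kwargs=None, exc=None, queue=None, **options):
        # A real celery Task would re-publish the task message here;
        # the sketch just records which queue the retry was routed to.
        self.last_retry_queue = queue
        return queue

class DedicatedQueueRetryTask(BaseTask):
    def retry(self, args=None, kwargs=None, exc=None, **options):
        # Force every retry onto the dedicated queue, regardless of
        # which queue the original invocation ran on.
        return super().retry(args=args, kwargs=kwargs, exc=exc, queue=RETRY_QUEUE, **options)

task = DedicatedQueueRetryTask()
task.retry(exc=RuntimeError("transient failure"))
# task.last_retry_queue is now "retry"
```

Because callers invoke `retry` through the normal task interface, they never need to know about the dedicated queue; the routing decision lives in one place.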