One startup command to rule them all (#760)

* Modify `docker-compose-developer` configuration files and `Makefile`
to support running everything in containers for local development

- Make use of the `COMPOSE_PROFILES` env var supported by
docker-compose to allow swapping out or disabling certain docker-compose
services.
- Add a Makefile `cleanup` command that removes all Docker resources related
to running the project locally.
- Fix the "restart grafana container" issue, where users would need
to restart their Grafana container when setting up the project for the
first time (the make command now runs `yarn build:dev` before docker-compose startup;
this ensures `grafana-plugin/dist` is available for the Grafana container before it starts up).
- Update DEVELOPER.md to reflect these new changes and move it
to ./dev/README.md (references to the old file have been updated).
- Pin the redis image referenced in the docker-compose files
to v7.0.5 (the latest version as of this commit) to avoid
any surprises with future releases.
- Remove the root .dockerignore in favour of individual .dockerignore files
in ./engine and ./grafana-plugin.
Commit 78d01df864 (parent 88f736beaf) by Joey Orlando, 2022-11-07 16:34:43 +01:00, committed by GitHub.
No known key found for this signature in database. GPG key ID: 4AEE18F83AFDEB23.
25 changed files with 651 additions and 548 deletions.


@@ -1,9 +0,0 @@
venv/*
venv2.7/*
.DS_Store
frontend/node_modules
frontend/build
package-lock.json
./engine/extensions
.env
.env-hobby

12
.gitignore vendored

@@ -1,17 +1,12 @@
# Backend
*/db.sqlite3
engine/oncall_dev.db
engine/*.db
*.pyc
venv
.python-version
.env
.env_hobby
.env.dev
.vscode
dump.rdb
.idea
engine/celerybeat-schedule.db
engine/sqlite_data
jupiter_playbooks/*
engine/reports/*.csv
engine/jupiter_playbooks/*
@@ -29,11 +24,8 @@ node_modules
# misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local
.swp
.env
npm-debug.log*
yarn-debug.log*


@@ -1,325 +0,0 @@
- [Developer quickstart](#developer-quickstart)
- [Code style](#code-style)
- [Backend setup](#backend-setup)
- [Frontend setup](#frontend-setup)
- [Setup using Makefile](#setup-using-makefile)
- [Slack application setup](#slack-application-setup)
- [Update drone build](#update-drone-build)
- [Troubleshooting](#troubleshooting)
- [ld: library not found for -lssl](#ld-library-not-found-for--lssl)
- [Could not build wheels for cryptography which use PEP 517 and cannot be installed directly](#could-not-build-wheels-for-cryptography-which-use-pep-517-and-cannot-be-installed-directly)
- [django.db.utils.OperationalError: (1366, "Incorrect string value ...")](#djangodbutilsoperationalerror-1366-incorrect-string-value-)
- [Empty queryset when filtering against datetime field](#empty-queryset-when-filtering-against-datetime-field)
- [Hints](#hints)
- [Building the all-in-one docker container](#building-the-all-in-one-docker-container)
- [Running Grafana with plugin (frontend) folder mounted for dev purposes](#running-grafana-with-plugin-frontend-folder-mounted-for-dev-purposes)
- [How to recreate the local database](#recreating-the-local-database)
- [Running tests locally](#running-tests-locally)
- [IDE Specific Instructions](#ide-specific-instructions)
- [PyCharm](#pycharm)
## Developer quickstart
Related: [How to develop integrations](/engine/config_integrations/README.md)
### Code style
- [isort](https://github.com/PyCQA/isort), [black](https://github.com/psf/black) and [flake8](https://github.com/PyCQA/flake8) are used to format backend code
- [eslint](https://eslint.org) and [stylelint](https://stylelint.io) are used to format frontend code
- To run formatters and linters on all files: `pre-commit run --all-files`
- To install pre-commit hooks: `pre-commit install`
### Backend setup
1. Start stateful services (RabbitMQ, Redis, Grafana with mounted plugin folder)
```bash
docker-compose -f docker-compose-developer.yml up -d
```
NOTE: to use a PostgreSQL db backend, use the `docker-compose-developer-pg.yml` file instead.
2. `postgres` is a prerequisite for some of our Python dependencies (notably `psycopg2` ([docs](https://www.psycopg.org/docs/install.html#prerequisites))). To install it on macOS you can simply run:
```bash
brew install postgresql@14
```
For installation on other platforms, please see the [PostgreSQL downloads page](https://www.postgresql.org/download/).
3. Prepare a python environment:
```bash
# Create and activate the virtual environment
python3.9 -m venv venv && source venv/bin/activate
# Verify that python has version 3.9.x
python --version
# Make sure you have latest pip and wheel support
pip install -U pip wheel
# Copy and check .env.dev file.
cp .env.dev.example .env.dev
# NOTE: if you want to use the PostgreSQL db backend add DATABASE_TYPE=postgresql to your .env.dev file;
# currently allowed backend values are `mysql` (default), `postgresql` and `sqlite3`
# Apply .env.dev to current terminal.
# For PyCharm it's better to use https://plugins.jetbrains.com/plugin/7861-envfile/
export $(grep -v '^#' .env.dev | xargs -0)
# Install dependencies.
# Hint: there is a known issue with uwsgi. It's not used in the local dev environment, so feel free to comment it out in `engine/requirements.txt`.
cd engine && pip install -r requirements.txt
# Migrate the DB:
python manage.py migrate
# Create user for django admin panel (if you need it):
python manage.py createsuperuser
```
4. Launch the backend:
```bash
# Http server:
python manage.py runserver 0.0.0.0:8080
# Worker for background tasks (run it in a separate terminal; don't forget to export .env.dev there)
python manage.py start_celery
# Optionally, you can launch the worker with the periodic task scheduler (99% chance you don't need this):
celery -A engine beat -l info
```
5. All set! Check out the internal API endpoints at http://localhost:8080/.
### Frontend setup
1. Make sure you have [Node.js v14+ (< 17)](https://nodejs.org/) and [yarn](https://yarnpkg.com/) installed. **Note**: if you are using [`nvm`](https://github.com/nvm-sh/nvm), feel free to simply run `cd grafana-plugin && nvm install` to install the proper Node version.
2. Install the dependencies with `yarn` and launch the frontend server (on port `3000` by default)
```bash
cd grafana-plugin
yarn install
yarn watch
```
3. Ensure `grafana-plugin/provisioning` does not contain `grafana-plugin.yml`
4. Generate an invitation token:
```bash
cd engine;
python manage.py issue_invite_for_the_frontend --override
```
... or use the output of the all-in-one docker container described in the README.md.
5. Open Grafana in the browser at http://localhost:3000 (login: `oncall`, password: `oncall`). Notice that the OnCall plugin is not yet enabled; navigate to Configuration -> Plugins and click Grafana OnCall.
6. Some configuration fields will be available. Fill them out and click Initialize OnCall.
```
OnCall API URL:
http://host.docker.internal:8080
Invitation Token (Single use token to connect Grafana instance):
Response from the invite generator command (check above)
Grafana URL (URL OnCall will use to talk to Grafana instance):
http://localhost:3000
```
NOTE: you may not have `host.docker.internal` available, in that case you can get the
host IP from inside the container by running:
```bash
/sbin/ip route|awk '/default/ { print $3 }'
# Alternatively add host.docker.internal as an extra_host for grafana in docker-compose-developer.yml
extra_hosts:
- "host.docker.internal:host-gateway"
```
### Setup using Makefile
- Make sure you have `make` installed
- Backend setup:
- Run stateful services:
`$ make docker-services-start`
(you can change your preferred docker file by defining the `DOCKER_FILE` env variable)
- Setup environment:
`$ make bootstrap`
(you can change your preferred directory for your Python virtualenv by defining the `ENV_DIR` env variable)
- Start the server (this will run bootstrap if needed and apply db migrations):
`$ make run`
- Start the celery workers:
`$ make start-celery`
- Start celery beat:
`$ make start-celery-beat`
- Frontend:
- Build and watch plugin:
`$ make watch-plugin`
- Generate invitation token:
`$ make manage ARGS="issue_invite_for_the_frontend --override"`
- Follow instructions above to setup plugin (see steps 5 and 6)
- Other useful targets:
- `$ make shell` (open Django shell)
- `$ make dbshell` (open DB shell)
- `$ make test` (run tests)
- `$ make lint` (run lint checks)
### Slack application setup
For Slack app configuration check our docs: https://grafana.com/docs/grafana-cloud/oncall/open-source/#slack-setup
### Update drone build
The .drone.yml build file must be signed when changes are made to it. Follow these steps:
If you have not installed drone CLI follow [these instructions](https://docs.drone.io/cli/install/)
To sign the .drone.yml file:
```bash
export DRONE_SERVER=https://drone.grafana.net
# Get your drone token from https://drone.grafana.net/account
export DRONE_TOKEN=<Your DRONE_TOKEN>
drone sign --save grafana/oncall .drone.yml
```
## Troubleshooting
### ld: library not found for -lssl
**Problem:**
```
pip install -r requirements.txt
...
ld: library not found for -lssl
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command 'gcc' failed with exit status 1
...
```
**Solution:**
```
export LDFLAGS=-L/usr/local/opt/openssl/lib
pip install -r requirements.txt
```
### Could not build wheels for cryptography which use PEP 517 and cannot be installed directly
Happens on Apple Silicon
**Problem:**
```
build/temp.macosx-12-arm64-3.9/_openssl.c:575:10: fatal error: 'openssl/opensslv.h' file not found
#include <openssl/opensslv.h>
^~~~~~~~~~~~~~~~~~~~
1 error generated.
error: command '/usr/bin/clang' failed with exit code 1
----------------------------------------
ERROR: Failed building wheel for cryptography
```
**Solution:**
```
LDFLAGS="-L$(brew --prefix openssl@1.1)/lib" CFLAGS="-I$(brew --prefix openssl@1.1)/include" pip install `cat requirements.txt | grep cryptography`
```
### django.db.utils.OperationalError: (1366, "Incorrect string value ...")
**Problem:**
```
django.db.utils.OperationalError: (1366, "Incorrect string value: '\\xF0\\x9F\\x98\\x8A\\xF0\\x9F...' for column 'cached_name' at row 1")
```
**Solution:**
Recreate the database with the correct encoding.
### Grafana OnCall plugin does not show up in plugin list
**Problem:**
I've run `yarn watch` in `grafana-plugin` but I do not see Grafana OnCall in the list of plugins
**Solution:**
If this is the first time you have run `yarn watch`, and it was run after starting Grafana in docker-compose, Grafana will not have detected the plugin. To fix this, run: `docker-compose -f docker-compose-developer.yml restart grafana`
## Hints
### Building the all-in-one docker container
```bash
cd engine;
docker build -t grafana/oncall-all-in-one -f Dockerfile.all-in-one .
```
### Running Grafana with plugin (frontend) folder mounted for dev purposes
Do this only after you have built the frontend at least once! Note that `docker-compose-developer.yml` already includes a similar Grafana service.
```bash
docker run --rm -it -p 3000:3000 -v "$(pwd)"/grafana-plugin:/var/lib/grafana/plugins/grafana-plugin -e GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=grafana-oncall-app --name=grafana grafana/grafana:8.3.2
```
Credentials: admin/admin
### Running tests locally
In the `engine` directory, with the `.env.dev` vars exported and virtualenv activated
```bash
pytest
```
You can also install `pytest-xdist` in your env and run tests in parallel:
```bash
pip install pytest-xdist
pytest -n4
```
## IDE Specific Instructions
### PyCharm
1. Create venv and copy .env.dev file
```bash
python3.9 -m venv venv
cp .env.dev.example .env.dev
```
2. Open the project in PyCharm
3. Settings &rarr; Project OnCall
- In Python Interpreter click the gear and create a new Virtualenv from existing environment selecting the venv created in Step 1.
- In Project Structure make sure the project root is the content root and add /engine to Sources
4. Under Settings &rarr; Languages & Frameworks &rarr; Django
- Enable Django support
- Set Django project root to /engine
- Set Settings to settings/dev.py
5. Create a new Django Server run configuration to Run/Debug the engine
- Use a plugin such as EnvFile to load the .env.dev file
- Change port from 8000 to 8080

165
Makefile

@@ -1,73 +1,136 @@
include .env.dev
DOCKER_COMPOSE_FILE = docker-compose-developer.yml
DOCKER_COMPOSE_DEV_LABEL = com.grafana.oncall.env=dev
ENV_DIR ?= venv
ENV = $(CURDIR)/$(ENV_DIR)
CELERY = $(ENV)/bin/celery
PRECOMMIT = $(ENV)/bin/pre-commit
PIP = $(ENV)/bin/pip
PYTHON3 = $(ENV)/bin/python3
PYTEST = $(ENV)/bin/pytest
# compose profiles
MYSQL_PROFILE = mysql
POSTGRES_PROFILE = postgres
SQLITE_PROFILE = sqlite
ENGINE_PROFILE = engine
UI_PROFILE = oncall_ui
REDIS_PROFILE = redis
RABBITMQ_PROFILE = rabbitmq
GRAFANA_PROFILE = grafana
DOCKER_FILE ?= docker-compose-developer.yml
DEV_ENV_DIR = ./dev
DEV_ENV_FILE = $(DEV_ENV_DIR)/.env.dev
DEV_ENV_EXAMPLE_FILE = $(DEV_ENV_FILE).example
define setup_engine_env
export `grep -v '^#' .env.dev | xargs -0` && cd engine
ENGINE_DIR = ./engine
SQLITE_DB_FILE = $(ENGINE_DIR)/oncall.db
# -n flag only copies DEV_ENV_EXAMPLE_FILE -> DEV_ENV_FILE if it doesn't already exist
$(shell cp -n $(DEV_ENV_EXAMPLE_FILE) $(DEV_ENV_FILE))
include $(DEV_ENV_FILE)
# if COMPOSE_PROFILES is set in DEV_ENV_FILE use it
# otherwise use a default (or what is passed in as an arg)
ifeq ($(COMPOSE_PROFILES),)
COMPOSE_PROFILES=$(ENGINE_PROFILE),$(UI_PROFILE),$(REDIS_PROFILE),$(GRAFANA_PROFILE)
endif
# conditionally assign DB based on what is present in COMPOSE_PROFILES
ifeq ($(findstring $(MYSQL_PROFILE),$(COMPOSE_PROFILES)),$(MYSQL_PROFILE))
DB=$(MYSQL_PROFILE)
else ifeq ($(findstring $(POSTGRES_PROFILE),$(COMPOSE_PROFILES)),$(POSTGRES_PROFILE))
DB=$(POSTGRES_PROFILE)
else
DB=$(SQLITE_PROFILE)
endif
# conditionally assign BROKER_TYPE based on what is present in COMPOSE_PROFILES
# if the user specifies both rabbitmq and redis, we'll make the assumption that rabbitmq is the broker
ifeq ($(findstring $(RABBITMQ_PROFILE),$(COMPOSE_PROFILES)),$(RABBITMQ_PROFILE))
BROKER_TYPE=$(RABBITMQ_PROFILE)
else
BROKER_TYPE=$(REDIS_PROFILE)
endif
define run_engine_docker_command
DB=$(DB) BROKER_TYPE=$(BROKER_TYPE) docker-compose -f $(DOCKER_COMPOSE_FILE) run --rm oncall_engine_commands $(1)
endef
$(ENV):
python3.9 -m venv $(ENV_DIR)
define run_docker_compose_command
COMPOSE_PROFILES=$(COMPOSE_PROFILES) DB=$(DB) BROKER_TYPE=$(BROKER_TYPE) docker-compose -f $(DOCKER_COMPOSE_FILE) $(1)
endef
bootstrap: $(ENV)
$(PIP) install -U pip wheel
cp -n .env.dev.example .env.dev
cd engine && $(PIP) install -r requirements.txt
@touch $@
# touch SQLITE_DB_FILE if it does not exist and DB is equal to SQLITE_PROFILE
start:
ifeq ($(DB),$(SQLITE_PROFILE))
@if [ ! -f $(SQLITE_DB_FILE) ]; then \
touch $(SQLITE_DB_FILE); \
fi
endif
migrate: bootstrap
$(setup_engine_env) && $(PYTHON3) manage.py migrate
# if the oncall UI is to be run in docker we should do an initial build of the frontend code
# this makes sure that it will be available when the grafana container starts up without the need to
# restart the grafana container initially
ifeq ($(findstring $(UI_PROFILE),$(COMPOSE_PROFILES)),$(UI_PROFILE))
cd grafana-plugin && yarn install && yarn build:dev
endif
clean:
rm -rf $(ENV)
$(call run_docker_compose_command,up --remove-orphans -d)
lint: bootstrap
cd engine && $(PRECOMMIT) run --all-files
stop:
$(call run_docker_compose_command,down)
dbshell: bootstrap
$(setup_engine_env) && $(PYTHON3) manage.py dbshell $(ARGS)
restart:
$(call run_docker_compose_command,restart)
shell: bootstrap
$(setup_engine_env) && $(PYTHON3) manage.py shell $(ARGS)
cleanup: stop
docker system prune --filter label="$(DOCKER_COMPOSE_DEV_LABEL)" --all --volumes
test: bootstrap
$(setup_engine_env) && $(PYTEST) --ds=settings.dev $(ARGS)
install-pre-commit:
@if [ ! -x "$$(command -v pre-commit)" ]; then \
echo "installing pre-commit"; \
pip install $$(grep "pre-commit" $(ENGINE_DIR)/requirements.txt); \
else \
echo "pre-commit already installed"; \
fi
manage: bootstrap
$(setup_engine_env) && $(PYTHON3) manage.py $(ARGS)
lint: install-pre-commit
pre-commit run --all-files
run: bootstrap migrate
$(setup_engine_env) && $(PYTHON3) manage.py runserver
install-precommit-hook: install-pre-commit
pre-commit install
start-celery: bootstrap
. $(ENV)/bin/activate && $(setup_engine_env) && $(PYTHON3) manage.py start_celery
get-invite-token:
$(call run_engine_docker_command,python manage.py issue_invite_for_the_frontend --override)
start-celery-beat: bootstrap
$(setup_engine_env) && $(CELERY) -A engine beat -l info
test:
$(call run_engine_docker_command,pytest)
purge-queues: bootstrap
$(setup_engine_env) && $(CELERY) -A engine purge
start-celery-beat:
$(call run_engine_docker_command,celery -A engine beat -l info)
docker-services-start:
docker-compose -f $(DOCKER_FILE) up -d
@echo "Waiting for database connection..."
until $$(nc -z -v -w30 localhost 3306); do sleep 1; done;
purge-queues:
$(call run_engine_docker_command,celery -A engine purge -f)
docker-services-restart:
docker-compose -f $(DOCKER_FILE) restart
shell:
$(call run_engine_docker_command,python manage.py shell)
docker-services-stop:
docker-compose -f $(DOCKER_FILE) stop
dbshell:
$(call run_engine_docker_command,python manage.py dbshell)
watch-plugin:
cd grafana-plugin && yarn install && yarn && yarn watch
# The below commands are useful for running backend services outside of docker
define backend_command
export `grep -v '^#' $(DEV_ENV_FILE) | xargs -0` && \
export BROKER_TYPE=$(BROKER_TYPE) && \
cd engine && \
$(1)
endef
.PHONY: grafana-plugin
backend-bootstrap:
pip install -U pip wheel
cd engine && pip install -r requirements.txt
backend-migrate:
$(call backend_command,python manage.py migrate)
run-backend-server:
$(call backend_command,python manage.py runserver)
run-backend-celery:
$(call backend_command,python manage.py start_celery)
backend-command:
$(call backend_command,$(CMD))


@@ -18,7 +18,11 @@ Developer-friendly incident response with brilliant Slack integration.
## Getting Started
We prepared multiple environments: [production](https://grafana.com/docs/grafana-cloud/oncall/open-source/#production-environment), [developer](DEVELOPER.md) and hobby:
We prepared multiple environments:
- [production](https://grafana.com/docs/grafana-cloud/oncall/open-source/#production-environment)
- [developer](./dev/README.md)
- hobby (described in the following steps)
1. Download [`docker-compose.yml`](docker-compose.yml):


@@ -15,7 +15,7 @@ TWILIO_AUTH_TOKEN=
TWILIO_NUMBER=
DJANGO_SETTINGS_MODULE=settings.dev
SECRET_KEY=jkashdkjashdkjh
SECRET_KEY=jyRnfRIeMjYfKdoFa9dKXcNaEGGc8GH1TChmYoWW
BASE_URL=http://localhost:8080
FEATURE_TELEGRAM_INTEGRATION_ENABLED=True
@@ -26,3 +26,17 @@ SLACK_INSTALL_RETURN_REDIRECT_HOST=http://localhost:8080
SOCIAL_AUTH_REDIRECT_IS_HTTPS=False
GRAFANA_INCIDENT_STATIC_API_KEY=
CELERY_WORKER_QUEUE="default,critical,long,slack,telegram,webhook,retry,celery"
CELERY_WORKER_CONCURRENCY=1
CELERY_WORKER_MAX_TASKS_PER_CHILD=100
CELERY_WORKER_SHUTDOWN_INTERVAL=65m
CELERY_WORKER_BEAT_ENABLED=True
RABBITMQ_USERNAME=rabbitmq
RABBITMQ_PASSWORD=rabbitmq
RABBITMQ_HOST=rabbitmq
RABBITMQ_PORT=5672
RABBITMQ_DEFAULT_VHOST="/"
REDIS_URI=redis://redis:6379/0

12
dev/.env.mysql.dev Normal file

@@ -0,0 +1,12 @@
DATABASE_USER=root
DATABASE_NAME=oncall_local_dev
DATABASE_PASSWORD=empty
DATABASE_HOST=mysql
DATABASE_PORT=3306
# specific for the grafana container
GF_DATABASE_TYPE=mysql
GF_DATABASE_HOST=mysql:3306
GF_DATABASE_USER=root
GF_DATABASE_PASSWORD=empty
GF_DATABASE_SSL_MODE=disable

13
dev/.env.postgres.dev Normal file

@@ -0,0 +1,13 @@
DATABASE_TYPE=postgresql
DATABASE_NAME=oncall_local_dev
DATABASE_USER=postgres
DATABASE_PASSWORD=empty
DATABASE_HOST=postgres
DATABASE_PORT=5432
# specific for the grafana container
GF_DATABASE_TYPE=postgres
GF_DATABASE_HOST=postgres:5432
GF_DATABASE_NAME=grafana
GF_DATABASE_USER=postgres
GF_DATABASE_PASSWORD=empty

2
dev/.env.sqlite.dev Normal file

@@ -0,0 +1,2 @@
DATABASE_TYPE=sqlite3
DATABASE_NAME=/var/lib/oncall/oncall.db

1
dev/.gitignore vendored Normal file

@@ -0,0 +1 @@
.env.dev

203
dev/README.md Normal file

@@ -0,0 +1,203 @@
# Developer quickstart
- [Running the project](#running-the-project)
- [`COMPOSE_PROFILES`](#compose_profiles)
- [`GRAFANA_VERSION`](#grafana_version)
- [Running backend services outside Docker](#running-backend-services-outside-docker)
- [Useful `make` commands](#useful-make-commands)
- [Setting environment variables](#setting-environment-variables)
- [Slack application setup](#slack-application-setup)
- [Update drone build](#update-drone-build)
- [Troubleshooting](#troubleshooting)
- [ld: library not found for -lssl](#ld-library-not-found-for--lssl)
- [Could not build wheels for cryptography which use PEP 517 and cannot be installed directly](#could-not-build-wheels-for-cryptography-which-use-pep-517-and-cannot-be-installed-directly)
- [django.db.utils.OperationalError: (1366, "Incorrect string value ...")](#djangodbutilsoperationalerror-1366-incorrect-string-value)
- [IDE Specific Instructions](#ide-specific-instructions)
- [PyCharm](#pycharm-professional-edition)
Related: [How to develop integrations](/engine/config_integrations/README.md)
## Running the project
By default everything runs inside Docker. Which components run in Docker (and which are disabled) can be modified via the [`COMPOSE_PROFILES`](#compose_profiles) environment variable.
1. Firstly, ensure that you have `docker` [installed](https://docs.docker.com/get-docker/) and running on your machine. **NOTE**: the `docker-compose-developer.yml` file uses some syntax/features that are only supported by Docker Compose v2. For instructions on how to enable this (if you haven't already done so), see [here](https://www.docker.com/blog/announcing-compose-v2-general-availability/).
2. Run `make start`. By default this will run everything in Docker, using SQLite as the database and Redis as the message broker/cache. See [`COMPOSE_PROFILES`](#compose_profiles) below for details on how to swap out, or disable, which components are run in Docker.
3. Open Grafana in a browser [here](http://localhost:3000/plugins/grafana-oncall-app) (login: `oncall`, password: `oncall`).
4. You should now see the OnCall plugin configuration page. Fill out the configuration options as follows:
- Invite token: run `make get-invite-token` and copy/paste the token that gets printed out
- OnCall backend URL: http://host.docker.internal:8080 (this is the URL that is running the OnCall API; it should be accessible from Grafana)
- Grafana URL: http://grafana:3000 (this is the URL OnCall will use to talk to the Grafana Instance)
5. Enjoy! Check our [OSS docs](https://grafana.com/docs/grafana-cloud/oncall/open-source/) if you want to set up Slack, Telegram, Twilio or SMS/calls through Grafana Cloud.
6. (Optional) Install `pre-commit` hooks by running `make install-precommit-hook`
### `COMPOSE_PROFILES`
This configuration option represents a comma-separated list of [`docker-compose` profiles](https://docs.docker.com/compose/profiles/). It allows you to swap out or disable certain components in Docker.
This option can be configured in two ways:
1. Setting a `COMPOSE_PROFILES` environment variable in `.env.dev`. This avoids having to set `COMPOSE_PROFILES` for each `make` command you execute afterwards.
2. Passing in a `COMPOSE_PROFILES` argument when running `make` commands. For example:
```bash
make start COMPOSE_PROFILES=postgres,engine,grafana,rabbitmq
```
The possible profiles values are:
- `grafana`
- `engine`
- `oncall_ui`
- `redis`
- `rabbitmq`
- `postgres`
- `mysql`
The default is `engine,oncall_ui,redis,grafana`. This runs:
- all OnCall components (using SQLite as the database)
- Redis as the Celery message broker/cache
- a Grafana container
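The database and broker are inferred from whichever profiles you list: mysql takes precedence over postgres (falling back to sqlite), and rabbitmq over redis. A minimal shell sketch of that selection logic (the helper function names here are illustrative, not part of the repo's Makefile):

```shell
#!/usr/bin/env sh
# Mirror the Makefile's findstring logic: derive DB and broker from a
# comma-separated COMPOSE_PROFILES list.
select_db() {
  case "$1" in
    *mysql*)    echo mysql ;;
    *postgres*) echo postgres ;;
    *)          echo sqlite ;;   # no DB profile given -> sqlite default
  esac
}

select_broker() {
  case "$1" in
    *rabbitmq*) echo rabbitmq ;; # rabbitmq wins if both are listed
    *)          echo redis ;;
  esac
}

PROFILES="engine,oncall_ui,redis,grafana"  # the default profile set
echo "DB=$(select_db "$PROFILES") BROKER_TYPE=$(select_broker "$PROFILES")"
# prints: DB=sqlite BROKER_TYPE=redis
```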
### `GRAFANA_VERSION`
If you would like to change the version of Grafana being run, simply pass in a `GRAFANA_VERSION` environment variable to `make start` (or alternatively set it in your `.env.dev` file). The value of this environment variable should be a valid `grafana/grafana` published Docker [image tag](https://hub.docker.com/r/grafana/grafana/tags).
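For example (the tag `9.2.6` below is purely illustrative; use any published `grafana/grafana` tag):

```shell
# Run the dev stack against a specific Grafana release
make start GRAFANA_VERSION=9.2.6

# ...or persist the choice in dev/.env.dev instead
echo "GRAFANA_VERSION=9.2.6" >> dev/.env.dev
```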
### Running backend services outside Docker
By default everything runs inside Docker. If you would like to run the backend services outside of Docker (for integrating with PyCharm, for example), follow these instructions:
1. Create a Python 3.9 virtual environment using a method of your choosing (ex. [venv](https://docs.python.org/3.9/library/venv.html) or [pyenv-virtualenv](https://github.com/pyenv/pyenv-virtualenv)). Make sure the virtualenv is "activated".
2. `postgres` is a prerequisite for some of our Python dependencies (notably `psycopg2` ([docs](https://www.psycopg.org/docs/install.html#prerequisites))). See the [PostgreSQL downloads page](https://www.postgresql.org/download/) for installation instructions.
3. `make backend-bootstrap` - installs all backend dependencies
4. Modify your `.env.dev` by copying the contents of one of `.env.mysql.dev`, `.env.postgres.dev`, or `.env.sqlite.dev` into it (exclude the `GF_`-prefixed environment variables). In most cases, where you run stateful services via `docker-compose` and backend services outside of Docker, you will simply need to change the database host to `localhost` (or, in the case of `sqlite`, update the file path to your `sqlite` database file).
5. `make backend-migrate` - runs necessary database migrations
6. Open two separate shells and then run the following:
- `make run-backend-server` - runs the HTTP server
- `make run-backend-celery` - runs Celery workers
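Step 4 above can be sketched as a small pipeline: strip the `GF_`-prefixed (grafana-container-only) variables and rewrite the database host. The snippet recreates a trimmed sample env file under `/tmp` purely so the example is self-contained; the real files live in `dev/`.

```shell
#!/usr/bin/env sh
# Trimmed stand-in for dev/.env.postgres.dev, for illustration only.
cat > /tmp/.env.postgres.dev <<'EOF'
DATABASE_TYPE=postgresql
DATABASE_HOST=postgres
DATABASE_PORT=5432
GF_DATABASE_TYPE=postgres
EOF

# Drop GF_-prefixed vars and point the DB host at localhost,
# since the backend now runs outside the Docker network.
grep -v '^GF_' /tmp/.env.postgres.dev \
  | sed 's/^DATABASE_HOST=.*/DATABASE_HOST=localhost/' \
  > /tmp/.env.dev

cat /tmp/.env.dev
```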
## Useful `make` commands
See [`COMPOSE_PROFILES`](#compose_profiles) for more information on what this option is and how to configure it.
```bash
make stop # stop all of the docker containers
make restart # restart all docker containers
# this will remove all of the images, containers, volumes, and networks
# associated with your local OnCall developer setup
make cleanup
make get-invite-token # generate an invitation token
make start-celery-beat # start celery beat
make purge-queues # purge celery queues
make shell # starts an OnCall engine Django shell
make dbshell # opens a DB shell
make test # run backend tests
# run both frontend and backend linters
# may need to run `yarn install` from within `grafana-plugin` to install several `pre-commit` dependencies
make lint
```
## Setting environment variables
If you need to override any additional environment variables, set them in a root `.env.dev` file. This file is automatically picked up by the OnCall engine Docker containers, is ignored by source control, and overrides any defaults set in the other `.env*` files.
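The Makefile loads env files with a `grep`/`xargs` pipeline that skips `#` comments. A self-contained sketch of that loading pattern, using a throwaway file under `/tmp` (real overrides go in your root `.env.dev`; values containing spaces would need extra quoting):

```shell
#!/usr/bin/env sh
# Write a sample override file, then export its non-comment lines.
cat > /tmp/.env.dev.sample <<'EOF'
# comments are skipped
BASE_URL=http://localhost:8080
FEATURE_TELEGRAM_INTEGRATION_ENABLED=True
EOF

export $(grep -v '^#' /tmp/.env.dev.sample | xargs)
echo "$BASE_URL"
# prints: http://localhost:8080
```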
## Slack application setup
For Slack app configuration check our docs: https://grafana.com/docs/grafana-cloud/oncall/open-source/#slack-setup
## Update drone build
The `.drone.yml` build file must be signed when changes are made to it. Follow these steps:
If you have not installed drone CLI follow [these instructions](https://docs.drone.io/cli/install/)
To sign the `.drone.yml` file:
```bash
export DRONE_SERVER=https://drone.grafana.net
# Get your drone token from https://drone.grafana.net/account
export DRONE_TOKEN=<Your DRONE_TOKEN>
drone sign --save grafana/oncall .drone.yml
```
## Troubleshooting
### ld: library not found for -lssl
**Problem:**
```
make backend-bootstrap
...
ld: library not found for -lssl
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command 'gcc' failed with exit status 1
...
```
**Solution:**
```
export LDFLAGS=-L/usr/local/opt/openssl/lib
make backend-bootstrap
```
### Could not build wheels for cryptography which use PEP 517 and cannot be installed directly
Happens on Apple Silicon
**Problem:**
```
build/temp.macosx-12-arm64-3.9/_openssl.c:575:10: fatal error: 'openssl/opensslv.h' file not found
#include <openssl/opensslv.h>
^~~~~~~~~~~~~~~~~~~~
1 error generated.
error: command '/usr/bin/clang' failed with exit code 1
----------------------------------------
ERROR: Failed building wheel for cryptography
```
**Solution:**
```
LDFLAGS="-L$(brew --prefix openssl@1.1)/lib" CFLAGS="-I$(brew --prefix openssl@1.1)/include" pip install `cat engine/requirements.txt | grep cryptography`
```
### django.db.utils.OperationalError: (1366, "Incorrect string value ...")
**Problem:**
```
django.db.utils.OperationalError: (1366, "Incorrect string value: '\\xF0\\x9F\\x98\\x8A\\xF0\\x9F...' for column 'cached_name' at row 1")
```
**Solution:**
Recreate the database with the correct encoding.
## IDE Specific Instructions
### PyCharm
1. Follow the instructions listed in ["Running backend services outside Docker"](#running-backend-services-outside-docker).
2. Open the project in PyCharm
3. Settings &rarr; Project OnCall
- In Python Interpreter click the gear and create a new Virtualenv from existing environment selecting the venv created in Step 1.
- In Project Structure make sure the project root is the content root and add /engine to Sources
4. Under Settings &rarr; Languages & Frameworks &rarr; Django
- Enable Django support
- Set Django project root to /engine
- Set Settings to settings/dev.py
5. Create a new Django Server run configuration to Run/Debug the engine
- Use a plugin such as EnvFile to load the .env.dev file
- Change port from 8000 to 8080


@@ -1,84 +0,0 @@
version: "3.8"
services:
postgres:
image: postgres:14.4
restart: always
ports:
- "5432:5432"
environment:
POSTGRES_DB: oncall_local_dev
POSTGRES_PASSWORD: empty
POSTGRES_INITDB_ARGS: --encoding=UTF-8
deploy:
resources:
limits:
memory: 500m
cpus: '0.5'
healthcheck:
test: ["CMD", "pg_isready", "-U", "postgres"]
interval: 10s
timeout: 5s
retries: 5
redis:
image: redis
restart: always
ports:
- "6379:6379"
deploy:
resources:
limits:
memory: 100m
cpus: '0.1'
rabbit:
image: "rabbitmq:3.7.15-management"
environment:
RABBITMQ_DEFAULT_USER: "rabbitmq"
RABBITMQ_DEFAULT_PASS: "rabbitmq"
RABBITMQ_DEFAULT_VHOST: "/"
deploy:
resources:
limits:
memory: 1000m
cpus: '0.5'
ports:
- "15672:15672"
- "5672:5672"
postgres_to_create_grafana_db:
image: postgres:14.4
command: bash -c "PGPASSWORD=empty psql -U postgres -h postgres -tc \"SELECT 1 FROM pg_database WHERE datname = 'grafana'\" | grep -q 1 || PGPASSWORD=empty psql -U postgres -h postgres -c \"CREATE DATABASE grafana\""
depends_on:
postgres:
condition: service_healthy
grafana:
image: "grafana/grafana:main"
restart: always
environment:
GF_DATABASE_TYPE: postgres
GF_DATABASE_HOST: postgres:5432
GF_DATABASE_NAME: grafana
GF_DATABASE_USER: postgres
GF_DATABASE_PASSWORD: empty
GF_DATABASE_SSL_MODE: disable
GF_SECURITY_ADMIN_USER: ${GRAFANA_USER:-admin}
GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_PASSWORD:-admin}
GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS: grafana-oncall-app
GF_INSTALL_PLUGINS: grafana-oncall-app
deploy:
resources:
limits:
memory: 500m
cpus: '0.5'
volumes:
- ./grafana-plugin:/var/lib/grafana/plugins/grafana-plugin
ports:
- "3000:3000"
depends_on:
postgres_to_create_grafana_db:
condition: service_completed_successfully
postgres:
condition: service_healthy


@@ -1,80 +1,273 @@
version: "3.8"
x-labels: &oncall-labels
- "com.grafana.oncall.env=dev"
x-oncall-build: &oncall-build-args
context: ./engine
target: dev
labels: *oncall-labels
x-oncall-volumes: &oncall-volumes
- ./engine:/etc/app
- ./engine/oncall.db:/var/lib/oncall/oncall.db
x-env-files: &oncall-env-files
- ./dev/.env.dev
- ./dev/.env.${DB}.dev
x-env-vars: &oncall-env-vars
BROKER_TYPE: ${BROKER_TYPE}
services:
mysql:
image: mysql:5.7
platform: linux/x86_64
command: --default-authentication-plugin=mysql_native_password --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
oncall_ui:
container_name: oncall_ui
labels: *oncall-labels
build:
context: ./grafana-plugin
dockerfile: Dockerfile.dev
labels: *oncall-labels
volumes:
- ./grafana-plugin:/etc/app
- /etc/app/node_modules
profiles:
- oncall_ui
oncall_engine:
container_name: oncall_engine
labels: *oncall-labels
build: *oncall-build-args
restart: always
command: "python manage.py runserver 0.0.0.0:8080"
env_file: *oncall-env-files
environment: *oncall-env-vars
volumes: *oncall-volumes
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: empty
MYSQL_DATABASE: oncall_local_dev
deploy:
resources:
limits:
memory: 500m
cpus: '0.5'
healthcheck:
test: [ "CMD", "mysqladmin" ,"ping", "-h", "localhost" ]
timeout: 20s
retries: 10
- "8080:8080"
depends_on:
oncall_db_migration:
condition: service_completed_successfully
profiles:
- engine
# used to invoke one-off commands, primarily from the Makefile.
# oncall_engine couldn't (easily) be used due to its depends_on property.
# We could alternatively just use `docker run`, however that would require
# duplicating the env files, volume mounts, etc. in the Makefile
oncall_engine_commands:
container_name: oncall_engine_commands
labels: *oncall-labels
build: *oncall-build-args
env_file: *oncall-env-files
environment: *oncall-env-vars
volumes: *oncall-volumes
profiles:
# no need to start this except from within the Makefile
- _engine_commands
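As a sketch (not part of the commit; the compose file name and exact flags are assumptions that depend on your docker-compose version and how the Makefile wires things up), a one-off command against this service could look like:

```shell
# Run a single Django management command in a throwaway container.
# `run` can target a profile-gated service by name even though the
# `_engine_commands` profile is never enabled on `up` in recent
# docker-compose versions; --rm discards the container afterwards.
docker-compose -f docker-compose-developer.yml run --rm \
  oncall_engine_commands python manage.py shell
```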
oncall_celery:
container_name: oncall_celery
labels: *oncall-labels
build: *oncall-build-args
restart: always
command: "python manage.py start_celery"
env_file: *oncall-env-files
environment: *oncall-env-vars
volumes: *oncall-volumes
depends_on:
oncall_db_migration:
condition: service_completed_successfully
profiles:
- engine
oncall_db_migration:
container_name: oncall_db_migration
labels: *oncall-labels
build: *oncall-build-args
command: "python manage.py migrate --noinput"
env_file: *oncall-env-files
environment: *oncall-env-vars
volumes: *oncall-volumes
depends_on:
postgres:
condition: service_healthy
mysql:
condition: service_healthy
rabbitmq:
condition: service_healthy
redis:
condition: service_healthy
profiles:
- engine
redis:
image: redis
container_name: redis
labels: *oncall-labels
image: redis:7.0.5
restart: always
ports:
- "6379:6379"
deploy:
labels: *oncall-labels
resources:
limits:
memory: 100m
cpus: '0.1'
memory: 500m
cpus: "0.5"
healthcheck:
test: ["CMD", "redis-cli", "ping"]
timeout: 5s
interval: 5s
retries: 10
volumes:
- redisdata_dev:/data
profiles:
- redis
rabbit:
rabbitmq:
container_name: rabbitmq
labels: *oncall-labels
image: "rabbitmq:3.7.15-management"
restart: always
environment:
RABBITMQ_DEFAULT_USER: "rabbitmq"
RABBITMQ_DEFAULT_PASS: "rabbitmq"
RABBITMQ_DEFAULT_VHOST: "/"
deploy:
resources:
limits:
memory: 1000m
cpus: '0.5'
ports:
- "15672:15672"
- "5672:5672"
deploy:
labels: *oncall-labels
resources:
limits:
memory: 1000m
cpus: "0.5"
healthcheck:
test: rabbitmq-diagnostics -q ping
interval: 30s
timeout: 30s
retries: 3
volumes:
- rabbitmqdata_dev:/var/lib/rabbitmq
profiles:
- rabbitmq
mysql-to-create-grafana-db:
mysql:
container_name: mysql
labels: *oncall-labels
image: mysql:5.7
platform: linux/x86_64
command: --default-authentication-plugin=mysql_native_password --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
restart: always
environment:
MYSQL_ROOT_PASSWORD: empty
MYSQL_DATABASE: oncall_local_dev
ports:
- "3306:3306"
deploy:
labels: *oncall-labels
resources:
limits:
memory: 500m
cpus: "0.5"
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
timeout: 20s
retries: 10
volumes:
- mysqldata_dev:/var/lib/mysql
profiles:
- mysql
mysql_to_create_grafana_db:
container_name: mysql_to_create_grafana_db
labels: *oncall-labels
image: mysql:5.7
platform: linux/x86_64
command: bash -c "mysql -h mysql -uroot -pempty -e 'CREATE DATABASE IF NOT EXISTS grafana CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;'"
depends_on:
mysql:
condition: service_healthy
profiles:
- mysql
grafana:
image: "grafana/grafana:main"
postgres:
container_name: postgres
labels: *oncall-labels
image: postgres:14.4
restart: always
environment:
GF_DATABASE_TYPE: mysql
GF_DATABASE_HOST: mysql
GF_DATABASE_USER: root
GF_DATABASE_PASSWORD: empty
GF_SECURITY_ADMIN_USER: oncall
GF_SECURITY_ADMIN_PASSWORD: oncall
GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS: grafana-oncall-app
POSTGRES_DB: oncall_local_dev
POSTGRES_PASSWORD: empty
POSTGRES_INITDB_ARGS: --encoding=UTF-8
ports:
- "5432:5432"
deploy:
labels: *oncall-labels
resources:
limits:
memory: 500m
cpus: '0.5'
cpus: "0.5"
healthcheck:
test: ["CMD", "pg_isready", "-U", "postgres"]
interval: 10s
timeout: 5s
retries: 5
volumes:
- ./grafana-plugin:/var/lib/grafana/plugins/grafana-plugin
- postgresdata_dev:/var/lib/postgresql/data
profiles:
- postgres
postgres_to_create_grafana_db:
container_name: postgres_to_create_grafana_db
labels: *oncall-labels
image: postgres:14.4
command: bash -c "PGPASSWORD=empty psql -U postgres -h postgres -tc \"SELECT 1 FROM pg_database WHERE datname = 'grafana'\" | grep -q 1 || PGPASSWORD=empty psql -U postgres -h postgres -c \"CREATE DATABASE grafana\""
depends_on:
postgres:
condition: service_healthy
profiles:
- postgres
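The same idempotent create-database check can be run by hand from the host (a sketch assuming the postgres service above is up and published on localhost:5432):

```shell
# Create the `grafana` database only if it does not already exist:
# the SELECT emits "1" when the row is found, so grep short-circuits
# the CREATE DATABASE on subsequent runs.
PGPASSWORD=empty psql -U postgres -h localhost -tc \
  "SELECT 1 FROM pg_database WHERE datname = 'grafana'" | grep -q 1 \
  || PGPASSWORD=empty psql -U postgres -h localhost -c "CREATE DATABASE grafana"
```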
grafana:
container_name: grafana
labels: *oncall-labels
image: "grafana/grafana:${GRAFANA_VERSION:-main}"
restart: always
environment:
GF_SECURITY_ADMIN_USER: oncall
GF_SECURITY_ADMIN_PASSWORD: oncall
GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS: grafana-oncall-app
env_file:
- ./dev/.env.${DB}.dev
ports:
- "3000:3000"
depends_on:
mysql:
condition: service_healthy
deploy:
labels: *oncall-labels
resources:
limits:
memory: 500m
cpus: "0.5"
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
- grafanadata_dev:/var/lib/grafana
- ./grafana-plugin:/var/lib/grafana/plugins/grafana-plugin
profiles:
- grafana
volumes:
redisdata_dev:
labels: *oncall-labels
grafanadata_dev:
labels: *oncall-labels
rabbitmqdata_dev:
labels: *oncall-labels
postgresdata_dev:
labels: *oncall-labels
mysqldata_dev:
labels: *oncall-labels
networks:
default:
name: oncall_dev
labels: *oncall-labels
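Tying the profiles together, a developer startup might look like the following sketch (profile and env-var names taken from the compose file above; the compose file name and the exact combination are assumptions, since the Makefile normally drives this):

```shell
# Pick a database and broker, then enable only the matching profiles;
# services in the disabled profiles (e.g. rabbitmq, postgres here)
# are simply never created.
export DB=mysql BROKER_TYPE=redis
export COMPOSE_PROFILES=engine,oncall_ui,grafana,redis,mysql
docker-compose -f docker-compose-developer.yml up -d
```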


@@ -1,7 +1,6 @@
version: "3.8"
x-environment:
&oncall-environment
x-environment: &oncall-environment
BASE_URL: $DOMAIN
SECRET_KEY: $SECRET_KEY
RABBITMQ_USERNAME: "rabbitmq"
@@ -82,14 +81,14 @@ services:
resources:
limits:
memory: 500m
cpus: '0.5'
cpus: "0.5"
healthcheck:
test: "mysql -uroot -p$MYSQL_PASSWORD oncall_hobby -e 'select 1'"
timeout: 20s
retries: 10
redis:
image: redis
image: redis:7.0.5
restart: always
expose:
- 6379
@@ -97,7 +96,7 @@ services:
resources:
limits:
memory: 100m
cpus: '0.1'
cpus: "0.1"
rabbitmq:
image: "rabbitmq:3.7.15-management"
@@ -113,7 +112,7 @@ services:
resources:
limits:
memory: 1000m
cpus: '0.5'
cpus: "0.5"
healthcheck:
test: rabbitmq-diagnostics -q ping
interval: 30s
@@ -148,7 +147,7 @@ services:
resources:
limits:
memory: 500m
cpus: '0.5'
cpus: "0.5"
depends_on:
mysql_to_create_grafana_db:
condition: service_completed_successfully


@@ -1,7 +1,6 @@
version: "3.8"
x-environment:
&oncall-environment
x-environment: &oncall-environment
DATABASE_TYPE: sqlite3
BROKER_TYPE: redis
BASE_URL: $DOMAIN
@@ -55,7 +54,7 @@ services:
condition: service_healthy
redis:
image: redis
image: redis:7.0.5
restart: always
expose:
- 6379
@@ -65,7 +64,7 @@ services:
resources:
limits:
memory: 500m
cpus: '0.5'
cpus: "0.5"
healthcheck:
test: ["CMD", "redis-cli", "ping"]
timeout: 5s
@@ -88,7 +87,7 @@ services:
resources:
limits:
memory: 500m
cpus: '0.5'
cpus: "0.5"
profiles:
- with_grafana


@@ -18,7 +18,7 @@ This guide describes the necessary installation and configuration steps needed t
There are three Grafana OnCall OSS environments available:
- **Hobby** playground environment for local usage: [README.md](https://github.com/grafana/oncall#getting-started)
- **Development** environment for contributors: [DEVELOPER.md](https://github.com/grafana/oncall/blob/dev/DEVELOPER.md)
- **Development** environment for contributors: [development README.md](https://github.com/grafana/oncall/blob/dev/dev/README.md)
- **Production** environment for reliable cloud installation using Helm: [Production Environment](#production-environment)
## Production Environment

engine/.dockerignore (new file)

@@ -0,0 +1,8 @@
__pycache__
.pytest_cache
*.pyc
celerybeat-schedule
*.db
./extensions
.DS_Store


@@ -1,12 +1,13 @@
FROM python:3.9-alpine3.16
FROM python:3.9-alpine3.16 AS base
RUN apk add bash python3-dev build-base linux-headers pcre-dev mariadb-connector-c-dev openssl-dev libffi-dev git
RUN pip install uwsgi
WORKDIR /etc/app
COPY ./requirements.txt ./
RUN pip install regex==2021.11.2
RUN pip install -r requirements.txt
# we intentionally have two COPY commands; this keeps requirements.txt in a separate build step
# which is only invalidated when requirements.txt actually changes. This avoids unnecessarily reinstalling deps (which is time-consuming)
# https://stackoverflow.com/questions/34398632/docker-how-to-run-pip-requirements-txt-only-if-there-was-a-change/34399661#34399661
COPY ./ ./
# Collect static files and create an SQLite database
@ -14,6 +15,12 @@ RUN mkdir -p /var/lib/oncall
RUN DJANGO_SETTINGS_MODULE=settings.prod_without_db DATABASE_TYPE=sqlite3 DATABASE_NAME=/var/lib/oncall/oncall.db SECRET_KEY="ThEmUsTSecretKEYforBUILDstage123" python manage.py collectstatic --no-input
RUN chown -R 1000:2000 /var/lib/oncall
FROM base AS dev
# these DB clients are needed for the Django `dbshell` command
RUN apk add sqlite mysql-client postgresql-client
FROM base AS prod
# This is required for prometheus_client to sync between uwsgi workers
RUN mkdir -p /tmp/prometheus_django_metrics;
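With the Dockerfile split into `base`, `dev`, and `prod` stages, either variant can be built by selecting a target (a sketch; the image tags are made up here, and in practice docker-compose selects `target: dev` via the build args shown earlier):

```shell
# Build the dev image (adds sqlite/mysql/postgres clients for dbshell)
docker build --target dev -t oncall-engine:dev ./engine

# Build the prod image (adds the prometheus_client sync directory)
docker build --target prod -t oncall-engine:prod ./engine
```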


@@ -40,4 +40,4 @@ PyMySQL==1.0.2
psycopg2-binary==2.9.3
emoji==1.7.0
apns2==0.7.2
regex==2021.11.2


@@ -17,15 +17,7 @@ else:
"PORT": DATABASE_PORT or DATABASE_DEFAULTS[DATABASE_TYPE]["PORT"],
}
if BROKER_TYPE == BrokerTypes.RABBITMQ:
CELERY_BROKER_URL = "pyamqp://rabbitmq:rabbitmq@localhost:5672"
elif BROKER_TYPE == BrokerTypes.REDIS:
CELERY_BROKER_URL = "redis://localhost:6379"
CACHES["default"]["LOCATION"] = ["localhost:6379"]
SECRET_KEY = os.environ.get("SECRET_KEY", "osMsNM0PqlRHBlUvqmeJ7+ldU3IUETCrY9TrmiViaSmInBHolr1WUlS0OFS4AHrnnkp1vp9S9z1")
MIRAGE_SECRET_KEY = os.environ.get(
"MIRAGE_SECRET_KEY", "sIrmyTvh+Go+h/2E46SnYGwgkKyH6IF6MXZb65I40HVCbj0+dD3JvpAqppEwFb7Vxnxlvtey+EL"
)


@@ -0,0 +1,4 @@
node_modules
frontend_enterprise
dist
.DS_Store


@@ -0,0 +1,14 @@
FROM node:14.17.0-alpine
WORKDIR /etc/app
ENV PATH /etc/app/node_modules/.bin:$PATH
# this allows hot reloading of the container
# https://stackoverflow.com/a/72478714
ENV WATCHPACK_POLLING true
COPY ./package.json ./
COPY ./yarn.lock ./
RUN yarn install
CMD ["yarn", "start"]


@@ -8,6 +8,7 @@
"stylelint": "stylelint ./src/**/*.{css,scss,module.css,module.scss}",
"stylelint:fix": "stylelint --fix ./src/**/*.{css,scss,module.css,module.scss}",
"build": "grafana-toolkit plugin:build",
"build:dev": "grafana-toolkit plugin:build --skipTest --skipLint",
"test": "jest --verbose",
"dev": "grafana-toolkit plugin:dev",
"watch": "grafana-toolkit plugin:dev --watch",
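Per the commit description, the Makefile invokes this new script before bringing the containers up so that `grafana-plugin/dist` exists when the grafana container mounts the plugin directory. Run by hand, that step would look roughly like (a sketch; the `--cwd` form assumes you are at the repo root):

```shell
# Build the plugin bundle without tests/linting so dist/ is populated
# before `docker-compose up` starts the grafana container.
yarn --cwd grafana-plugin build:dev
```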


@@ -299,7 +299,7 @@ Seek for such a line: “Your invite token: <<LONG TOKEN>> , use it in the Graf
<>
<Input id="onCallInvitationToken" onChange={handleInvitationTokenChange} />
<a
href="https://github.com/grafana/oncall/blob/dev/DEVELOPER.md#frontend-setup"
href="https://github.com/grafana/oncall/blob/dev/dev/README.md#frontend-setup"
target="_blank"
rel="noreferrer"
>