Observability and alerting SaaS for Supabase, based on supabase-grafana
To build and push the supabase-grafana container image to the Azure Container Registry (supafanacr):

az login
az acr login --name supafanacr
docker build supabase-grafana --tag supabase-grafana
docker tag supabase-grafana supafanacr.azurecr.io/supabase-grafana:<VERSION>
docker push supafanacr.azurecr.io/supabase-grafana:<VERSION>

- Update the VM grafana image with the new image version in nix/hosts/grafana/grafana-container.nix (see the sketch below).
- Upload the new VM grafana image version (see next section).
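A minimal sketch of that version bump, assuming grafana-container.nix references the image as supafanacr.azurecr.io/supabase-grafana:<VERSION> (the exact string in the file is an assumption; check it before running this):

```sh
# Hypothetical helper: point the NixOS container module at the newly pushed tag.
NEW_VERSION=0.0.2
sed -i \
  "s|supafanacr\.azurecr\.io/supabase-grafana:[^\"]*|supafanacr.azurecr.io/supabase-grafana:${NEW_VERSION}|" \
  nix/hosts/grafana/grafana-container.nix

# Verify the change before committing:
grep -n 'supabase-grafana:' nix/hosts/grafana/grafana-container.nix
```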
To test the Azure image build:
nix build '.#supafana-image'
This should create a result directory containing a VHD image.
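To sanity-check the build output (the exact file name under result/ depends on the image builder and is an assumption here):

```sh
# The VHD should appear under the result symlink created by nix build.
ls -lh result/
```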
To create and upload an image to Azure:
az login
# Upload supafana image
./scripts/azure-upload-image-gallery.sh \
-g supafana-common-rg \
-r supafanasig \
-n supafana \
-v '0.0.1' \
-l eastus \
-i '.#supafana-image'
# Upload grafana image
./scripts/azure-upload-image-gallery.sh \
-g supafana-common-rg \
-r supafanasig \
-n grafana \
-v '0.0.1' \
-l eastus \
-i '.#grafana-image'

Run the ./infra/modules/grafana-template.bicep template with the following parameters:
- supabaseProjectRef - supabase project reference id
- supabaseServiceRoleKey - service role key
- supafanaDomain - Supafana domain (supafana-test.com for test, supafana.com for prod)
- grafanaPassword - optional: admin user password (the default password is admin)
Use supafana-test-rg resource group for test env deployment.
See examples in `./infra/resources/grafana-mk-{1,2}.bicepparam`.
az deployment group create -c --debug --name supafana-test-grafana-mk-1-deploy --resource-group supafana-test-rg --parameters infra/resources/grafana-mk-1.bicepparam

After provisioning, the host should be accessible via https://<supafanaDomain>/dashboard/<supabaseProjectRef>
- Example: https://supafana-test.com/dashboard/kczjrdfrkmmofxkbjxex/
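For illustration, the same deployment can also be run with parameters passed inline instead of via a .bicepparam file; this is a sketch with placeholder values, using the template path and parameter names listed above:

```sh
az deployment group create \
  --name supafana-test-grafana-example-deploy \
  --resource-group supafana-test-rg \
  --template-file infra/modules/grafana-template.bicep \
  --parameters \
    supabaseProjectRef=<supabaseProjectRef> \
    supabaseServiceRoleKey=<supabaseServiceRoleKey> \
    supafanaDomain=supafana-test.com \
    grafanaPassword=<grafanaPassword>
```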
Internally, the new instance is accessible via supafana-<env>-grafana-<supabaseProjectRef>.supafana-<env>.local
- Example: supafana-prod-grafana-xjzrrbkmeubsmkgdwgfq.supafana-prod.local
Grafana instances don't have public IPs and can be accessed only via our main servers (supafana-test.com and supafana.com).
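For a one-off connection, the main server can be used as an SSH jump host explicitly (a sketch; the project ref is a placeholder):

```sh
ssh -J admin@supafana-test.com admin@supafana-test-grafana-<supabaseProjectRef>.supafana-test.local
```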
To simplify access to these hosts, add the following lines to your ~/.ssh/config file:
#test access
Host *.supafana-test.local
ProxyJump admin@supafana-test.com
#prod access
Host *.supafana-prod.local
ProxyJump admin@supafana.com
With this, Grafana instances can be accessed directly, e.g., ssh admin@supafana-prod-grafana-xjzrrbkmeubsmkgdwgfq.supafana-prod.local
While there, to access the Elixir shell of the running app:
supafana remote
Internally, each Grafana instance runs on a NixOS VM, which, in turn, runs the supafana-grafana container as a podman-grafana systemd service.
To examine the service, use systemd commands:
systemctl status podman-grafana
journalctl -u podman-grafana
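For example, to follow the logs live or to restart the service after an update (standard systemd usage):

```sh
# Tail the service logs as they arrive
journalctl -u podman-grafana -f

# Restart the container service (e.g., after a configuration or image change)
sudo systemctl restart podman-grafana
```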
To get into a Grafana container, run podman exec:
sudo podman exec -ti grafana bash
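Once inside, a quick health check can be run against Grafana's HTTP API; this assumes curl is available in the image and that Grafana listens on its default port 3000:

```sh
curl -s http://localhost:3000/api/health
```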
More info about working with containers:
sudo podman ps
sudo podman inspect grafana
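Container stdout/stderr can also be read directly through podman, for example:

```sh
sudo podman logs --tail 100 grafana
```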
All resources created for a particular Grafana instance are tagged with vm:supafana-<env>-grafana-<supabaseProjectRef>. To delete them, filter all resources by tag:
az resource list --tag vm=supafana-<env>-grafana-<supabaseProjectRef> --query "[].id" -o tsv

Then, delete:
az resource list --tag vm='supafana-prod-grafana-kczjrdfrkmmofxkbjxex' --query "[].id" -o tsv | xargs -I {} az resource delete --ids {}

To regenerate the TypeScript types for the Supabase API schema used by the storefront:

cd storefront && npx openapi-typescript https://api.supabase.com/api/v1-json -o src/types/supabase-api-schema.d.ts
