> Note: This repository was archived by the owner on Nov 25, 2025 and is now read-only.

🫛 PEAS

PATH External Auth Server

Introduction

PEAS (PATH External Auth Server) is an external authorization server that can be used to authorize requests to the PATH Gateway.

It is part of the GUARD authorization system for PATH and runs in the PATH Kubernetes cluster.

PEAS Responsibilities

PEAS has the following two responsibilities:

Authenticating Requests

Is the request to GUARD authorized?

  • If authorized, forward the request upstream
  • If not authorized, return an error

Assigning Rate Limiting Headers

Is the request to GUARD rate limited?

  • If rate limited, forward the request upstream with rate limiting headers
  • If not rate limited, forward the request upstream without rate limiting headers
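The two responsibilities above can be sketched as a single decision function. This is a minimal illustration, not PEAS's actual implementation: the `PortalApp` and `Decision` types here are hypothetical stand-ins (the real structure lives in the repo's store package), and the `Rate-Limit-Exceeded` header name is an assumption for illustration only.

```go
package main

import "fmt"

// PortalApp is a hypothetical stand-in for the data PEAS loads
// from the Grove Portal Database.
type PortalApp struct {
	ID          string
	AccountID   string
	RateLimited bool
}

// Decision captures the two outcomes PEAS produces for Envoy.
type Decision struct {
	Authorized bool
	Headers    map[string]string
}

// authorize sketches the two responsibilities: deny unknown apps,
// and attach rate-limiting headers for known-but-limited accounts.
func authorize(apps map[string]*PortalApp, appID string) Decision {
	app, ok := apps[appID]
	if !ok {
		// Not authorized: an error is returned to the caller.
		return Decision{Authorized: false}
	}
	headers := map[string]string{
		"Portal-Application-ID": app.ID,
		"Portal-Account-ID":     app.AccountID,
	}
	if app.RateLimited {
		// Hypothetical header name, used here only to illustrate
		// "forward upstream with rate limiting headers".
		headers["Rate-Limit-Exceeded"] = "true"
	}
	return Decision{Authorized: true, Headers: headers}
}

func main() {
	apps := map[string]*PortalApp{
		"1a2b3c4d": {ID: "1a2b3c4d", AccountID: "d4c3b2a1"},
	}
	fmt.Println(authorize(apps, "1a2b3c4d").Authorized) // true
	fmt.Println(authorize(apps, "missing").Authorized)  // false
}
```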

Data for authentication and rate limiting is sourced from the Grove Portal Database. For more information about the Grove Portal Database, see the Grove Portal Database README.

Docker Image

PEAS GHCR Package

```bash
docker pull ghcr.io/buildwithgrove/path-external-auth-server:latest
```

Architecture Diagram

```mermaid
graph TD
    User[/"<big>PATH<br>User</big>"\]
    Envoy[<big>Envoy Proxy</big>]

    AUTH["PEAS<br/>(PATH External Auth Server)"]
    AUTH_DECISION{Did<br>Authorize<br>Request?}
    PATH[<big>PATH</big>]

    Error[[Error Returned to User]]
    Result[[Result Returned to User]]

    GroveDB[("Grove Portal Database<br>(Postgres)")]

    User -->|1. Send Request| Envoy
    Envoy -.->|2. Authorization Check<br>gRPC| AUTH
    AUTH -.->|3. Authorization Result<br>gRPC| Envoy
    Envoy --> AUTH_DECISION
    AUTH_DECISION -->|4. No<br>Return Error| Error
    AUTH_DECISION -->|4. Yes<br>Forward Request| PATH
    PATH -->|5. Response| Result

    GroveDB <-->|Postgres Connection| AUTH
```

PortalApp Structure

The PortalApp structure is defined in the store package and contains all data required from the Grove Portal Database for authorization and rate limiting.

See PortalApp structure here.

Request Headers

PEAS adds the following headers to authorized requests before forwarding them to the upstream service:

| Header | Contents | Included For All Requests | Example Value |
|---|---|---|---|
| `Portal-Application-ID` | The portal app ID of the authorized portal app | ✅ | `"a12b3c4d"` |
| `Portal-Account-ID` | The account ID associated with the portal app | ✅ | `"3f4g2js2"` |

Rate Limiting Implementation

PEAS provides rate limiting capabilities through an in-memory rate limit store that tracks account usage and enforces monthly limits:

How does Rate Limiting Work?

  1. Rate Limit Store: Maintains an in-memory map of rate limited accounts, refreshed periodically from BigQuery data warehouse
  2. Monthly Usage Tracking: Monitors account usage against their monthly relay limits based on plan type
  3. Plan-Based Limits:
    • Free Plan (PLAN_FREE): 1,000,000 relays per month
    • Unlimited Plan (PLAN_UNLIMITED): Custom limits set per account, or unlimited if no limit specified
  4. Real-time Enforcement: Blocks requests from accounts that exceed their monthly limits
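The plan-based limit logic above can be sketched as a pure function. The plan names (`PLAN_FREE`, `PLAN_UNLIMITED`) and the 1,000,000-relay free limit come from this README; the function names and the exact comparison at the boundary (`usage >= limit`) are assumptions for illustration.

```go
package main

import "fmt"

const freePlanMonthlyLimit = 1_000_000 // PLAN_FREE: 1,000,000 relays per month

// monthlyLimit returns the relay limit for an account, or ok=false
// when the account has no limit at all. customLimit <= 0 means
// "no custom limit configured".
func monthlyLimit(planType string, customLimit int64) (limit int64, ok bool) {
	switch planType {
	case "PLAN_FREE":
		return freePlanMonthlyLimit, true
	case "PLAN_UNLIMITED":
		if customLimit > 0 {
			return customLimit, true // custom limit set per account
		}
		return 0, false // truly unlimited
	default:
		return 0, false
	}
}

// isRateLimited applies real-time enforcement against monthly usage.
func isRateLimited(planType string, customLimit, monthlyUsage int64) bool {
	limit, ok := monthlyLimit(planType, customLimit)
	return ok && monthlyUsage >= limit
}

func main() {
	fmt.Println(isRateLimited("PLAN_FREE", 0, 1_200_000))          // true
	fmt.Println(isRateLimited("PLAN_UNLIMITED", 0, 99_000_000))    // false
	fmt.Println(isRateLimited("PLAN_UNLIMITED", 500_000, 600_000)) // true
}
```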

Rate Limit Store Refresh

The rate limit store automatically refreshes from the data warehouse to update account usage:

  • Default Refresh Interval: 5 minutes
  • Data Source: BigQuery data warehouse for monthly usage statistics
  • Configuration: RATE_LIMIT_STORE_REFRESH_INTERVAL environment variable
  • Monitoring: Refresh operations are logged and metrics are available via Prometheus

Portal App Store Refresh

PEAS maintains an in-memory store of portal app data for fast authorization lookups. This store is automatically refreshed from the Grove Portal Database on a configurable interval.

How does Portal App Store Refresh Work?

  1. Initial Load: On startup, PEAS fetches all portal app data from the database to populate the in-memory store
  2. Background Refresh: A background goroutine periodically refreshes the store by fetching the latest data from the database
  3. Thread-Safe Updates: The store uses read-write locks to ensure thread-safe access during refresh operations
  4. Performance Monitoring: Each refresh operation is timed and logged with metrics for monitoring
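The thread-safe update pattern in step 3 is typically a map guarded by `sync.RWMutex`. The sketch below is illustrative only and radically simplifies the stored data (the real `PortalApp` structure lives in the store package):

```go
package main

import (
	"fmt"
	"sync"
)

// portalAppStore is a minimal in-memory store allowing concurrent
// reads while a background goroutine refreshes the data.
type portalAppStore struct {
	mu   sync.RWMutex
	apps map[string]string // portal app ID -> account ID (simplified)
}

// Get takes a read lock, so many lookups can proceed concurrently.
func (s *portalAppStore) Get(appID string) (string, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	account, ok := s.apps[appID]
	return account, ok
}

// Refresh swaps in a freshly fetched map under the write lock, so
// readers never observe a half-updated store.
func (s *portalAppStore) Refresh(fresh map[string]string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.apps = fresh
}

func main() {
	store := &portalAppStore{apps: map[string]string{}}
	store.Refresh(map[string]string{"1a2b3c4d": "d4c3b2a1"}) // stand-in for a DB fetch
	account, ok := store.Get("1a2b3c4d")
	fmt.Println(account, ok) // d4c3b2a1 true
}
```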

Configuration

The refresh interval is configurable via the PORTAL_APP_STORE_REFRESH_INTERVAL environment variable:

  • Default: 30 seconds
  • Format: Duration string (e.g., 30s, 1m, 2m30s)
  • Purpose: Balance between data freshness and database load

Envoy Gateway Integration

PEAS exposes a gRPC service that adheres to the spec expected by Envoy Proxy's ext_authz HTTP Filter.

For more information, see Envoy's External Authorization (ext_authz) filter documentation.

Prometheus Metrics

PEAS exposes Prometheus metrics on the /metrics endpoint for monitoring authorization performance, rate limiting, and system health.

Key Metrics

  • Authorization Metrics: Request counts, success rates, and response times
  • Rate Limiting Metrics: Account usage, rate limit decisions, and store sizes
  • System Health: Data source refresh errors and store performance

Endpoints

  • /metrics - Prometheus metrics endpoint (port 9090 by default)
  • /healthz - Health check endpoint
  • /debug/pprof/ - Runtime profiling (port 6060 by default)

A comprehensive Grafana dashboard is available at grafana/dashboard.json for visualizing all metrics.

Getting Portal App Auth & Rate Limit Status

PEAS includes a convenient Makefile target for testing authorization and rate limit status for Portal Apps during development.

Prerequisites

Usage

```bash
# Test without API key (for apps that don't require authentication)
make get_portal_app_auth_status PORTAL_APP_ID=1a2b3c4d

# Test with API key (for apps that require authentication)
make get_portal_app_auth_status PORTAL_APP_ID=1a2b3c4d API_KEY=4c352139ec5ca9288126300271d08867
```

Example Output

Successful Authorization:

```json
{
  "status": {
    "message": "ok"
  },
  "okResponse": {
    "headers": [
      {
        "header": {
          "key": "Portal-Application-ID",
          "value": "1a2b3c4d"
        }
      },
      {
        "header": {
          "key": "Portal-Account-ID",
          "value": "d4c3b2a1"
        }
      }
    ]
  }
}
```

Failed Authorization:

```json
{
  "status": {
    "code": 7,
    "message": "portal app not found"
  },
  "deniedResponse": {
    "status": {
      "code": "NotFound"
    },
    "body": "{\"code\": 404, \"message\": \"portal app not found\"}"
  }
}
```

Failed Rate Limit Check:

```json
{
  "status": {
    "code": 7,
    "message": "This account is rate limited. To upgrade your plan or modify your account settings, log in to your account at https://portal.grove.city/"
  },
  "deniedResponse": {
    "status": {
      "code": "TooManyRequests"
    },
    "body": "{\"code\": 429, \"message\": \"This account is rate limited. To upgrade your plan or modify your account settings, log in to your account at https://portal.grove.city/\"}"
  }
}
```

This tool uses gRPC reflection to communicate with PEAS, testing the same authorization flow that Envoy Gateway uses in production.

PEAS Environment Variables

PEAS is configured via environment variables.

| Variable | Required | Type | Description | Example | Default Value |
|---|---|---|---|---|---|
| `POSTGRES_CONNECTION_STRING` | ✅ | string | PostgreSQL connection string for the PortalApp database | `postgresql://username:password@localhost:5432/dbname` | - |
| `GCP_PROJECT_ID` | ✅ | string | GCP project ID for the data warehouse used by rate limiting | `your-project-id` | - |
| `PORT` | - | int | Port to run the external auth server on | `10001` | `10001` |
| `METRICS_PORT` | - | int | Port to run the Prometheus metrics server on | `9090` | `9090` |
| `PPROF_PORT` | - | int | Port to run the pprof server on | `6060` | `6060` |
| `LOGGER_LEVEL` | - | string | Log level for the external auth server | `info`, `debug`, `warn`, `error` | `info` |
| `IMAGE_TAG` | - | string | Image tag/version for the application | `v1.0.0` | `development` |
| `PORTAL_APP_STORE_REFRESH_INTERVAL` | - | duration | Refresh interval for portal app data from the database | `30s`, `1m`, `2m30s` | `30s` |
| `RATE_LIMIT_STORE_REFRESH_INTERVAL` | - | duration | Refresh interval for rate limit data from the data warehouse | `30s`, `1m`, `2m30s` | `5m` |

Developing Metrics Dashboard Locally

This section describes how to run and test the PEAS metrics dashboard locally using Docker Compose, Prometheus, and Grafana.

Stack Components

  • Prometheus: Metrics collection from locally running PEAS
  • Grafana: Dashboard visualization using the PEAS dashboard

Prerequisites

  • PEAS running locally: Run PEAS directly on your machine (not in Docker)
  • Create a .env file in the parent directory (../) with your database credentials and configuration
  • Ensure you have access to your remote PostgreSQL and GCP BigQuery instances
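A possible shape for that `.env` file, using the variables from the table above (all values here are placeholders, not working credentials):

```shell
# ../.env — example values only; substitute your own credentials
POSTGRES_CONNECTION_STRING=postgresql://username:password@localhost:5432/dbname
GCP_PROJECT_ID=your-project-id
LOGGER_LEVEL=debug
PORTAL_APP_STORE_REFRESH_INTERVAL=30s
RATE_LIMIT_STORE_REFRESH_INTERVAL=5m
```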

Quick Start

  1. Start the monitoring stack:

     ```bash
     cd grafana/local
     docker compose up -d
     ```

  2. Start PEAS locally (in another terminal, from the repo root):

     ```bash
     go run .
     ```
  3. Access the services:
    • PEAS gRPC Server: localhost:10001
    • PEAS Metrics: http://localhost:9090/metrics
    • PEAS Health: http://localhost:9090/healthz
    • PEAS pprof: http://localhost:6060/debug/pprof/
    • Prometheus: http://localhost:9091
    • Grafana: http://localhost:3000 (admin/admin)
  4. View the dashboard:
    • Go to Grafana at http://localhost:3000
    • Login with admin/admin
    • The PEAS dashboard should be automatically loaded

Local Testing

Test the health endpoint:

```bash
curl http://localhost:9090/healthz | jq
```

Test the metrics endpoint:

```bash
curl http://localhost:9090/metrics | grep peas_
```

Generate some test traffic:

Since PEAS is a gRPC server, you can use grpcurl to send test requests:

```bash
# Install grpcurl if needed
go install github.com/fullstorydev/grpcurl/cmd/grpcurl@latest

# Test authorization request (will likely fail but generate metrics)
grpcurl -plaintext localhost:10001 envoy.service.auth.v3.Authorization/Check
```

Load Testing

You can run a load test using the provided script:

```bash
make load-test
```

Or with custom parameters:

```bash
make load-test-custom TOTAL_REQUESTS=5000 SUCCESS_RATE=80
```

Cleanup

```bash
docker compose down -v  # Removes containers and volumes
```

Dashboard

The PEAS dashboard is automatically provisioned in Grafana when running the observability stack locally for development purposes.

For production deployments, you can import the dashboard manually.

Dashboard Screenshot


Importing Dashboard to Production Grafana

To import the PEAS dashboard into your production Grafana instance, follow the Grafana documentation on importing dashboards.
