Convert natural‑language questions into SQL for the InteLIS database. The app prompts an LLM to generate SQL, validates the generated query, executes it, and returns the results (with privacy rules enforced).
- Install deps:

  ```bash
  composer install
  ```

- Export the DB schema (required):

  ```bash
  php ./bin/export-schema.php
  ```

  This generates `var/schema.json`. Re‑run it whenever the database schema changes.
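A quick, optional sanity check (the exact contents of `var/schema.json` depend on your database):

```bash
# Confirm the export produced a non-empty file.
test -s var/schema.json && echo "schema exported"

# Peek at the beginning of the exported schema.
head -c 400 var/schema.json
```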
- Copy `.env.example` to `.env` and fill in DB creds + LLM API keys (a sketch follows this list).
- Optional HTTPS + domain: set `ENABLE_TRAEFIK=true`, set `APP_DOMAIN` to your domain, set `TRAEFIK_ACME_EMAIL`, and point DNS A/AAAA records to this host (ports 80/443 must be reachable for Let's Encrypt).
- Run `./deploy.sh` (it auto-installs Docker on Ubuntu if needed, then builds and starts the stack).
- Export the schema once the DB is reachable:

  ```bash
  docker compose exec app php bin/export-schema.php
  ```

- Visit the app: `http://localhost:${APP_PORT:-8080}/chat` (API: `POST /ask`).
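A minimal `.env` sketch. Only `APP_PORT`, `ENABLE_TRAEFIK`, `APP_DOMAIN`, and `TRAEFIK_ACME_EMAIL` are named above; the DB and LLM key names below are placeholders, so copy the real names from `.env.example`:

```bash
APP_PORT=8080

# Database credentials (placeholder variable names)
DB_HOST=db
DB_NAME=intelis
DB_USER=intelis
DB_PASS=change-me

# LLM API keys (placeholder variable names)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

# Optional HTTPS via Traefik
ENABLE_TRAEFIK=true
APP_DOMAIN=intelis.example.org
TRAEFIK_ACME_EMAIL=admin@example.org
```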
Services: `app` (PHP/Slim), `qdrant` (vector store on port 6333), `rag-api` (RAG helper on port 8089).
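If the ports are published to the host, you can spot-check the stack like this (the Qdrant call uses its standard REST API; adjust if your compose file maps ports differently):

```bash
# List the running services in the stack.
docker compose ps

# Qdrant answers on 6333; /collections lists the vector collections.
curl -s http://localhost:6333/collections
```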
- Web UI: open `/chat` and ask a question.
- API (see the `curl` equivalents after this list):
  - Ask a question:

    ```http
    POST /ask
    Content-Type: application/json

    {
      "q": "How many VL tests in the last 6 months?",
      "provider": "ollama|openai|anthropic",
      "model": "optional-model-id"
    }
    ```

    Response (minimal shape):

    ```json
    {
      "sql": "SELECT …",
      "rows": [ { "col": "val" } ],
      "timing": { "provider": "…", "model_used": "…", "total_ms": 0 }
    }
    ```

  - Clear conversation context (reset the server-side context window):

    ```http
    POST /ask
    Content-Type: application/json

    { "clear_context": true }
    ```

    Response:

    ```json
    { "message": "Conversation context cleared", "context_reset": true }
    ```

    Note: when `clear_context` is `true`, it is handled immediately and any `q` value (if present) is ignored for that request.
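Equivalent `curl` calls, assuming the default `APP_PORT` of 8080:

```bash
# Ask a question (model is optional; provider selects the LLM backend).
curl -s http://localhost:8080/ask \
  -H 'Content-Type: application/json' \
  -d '{"q": "How many VL tests in the last 6 months?", "provider": "ollama"}'

# Reset the server-side conversation context.
curl -s http://localhost:8080/ask \
  -H 'Content-Type: application/json' \
  -d '{"clear_context": true}'
```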
## High-level flow

- User asks a question (UI `/chat` or `POST /ask`).
- Context is built: user query + conversation history + `var/schema.json` + business rules + field guide → prompt to the selected LLM.
- The LLM generates SQL (QueryService validates it and enforces privacy rules).
- The SQL is executed against MySQL (DatabaseService).
- Charts are generated from the query output.
- Results are returned to the caller (rows, counts, timing, debug info, charts) and the conversation context is updated.
```mermaid
flowchart TD
    U["User Query in Natural Language (via UI or API)"] --> QS["<strong>QueryService generates Prompt for LLM.</strong><br><br>User Query, Schema, Business Rules, and Field Guides are attached to the Prompt"]
    SCH["Database Schema - without any actual data"] --> QS
    BR["Predefined Business Rules"] --> QS
    FG["Predefined Field Guide"] --> QS
    CTX["Conversation context/history"] --> QS
    QS --> LLM["LLM generates SQL based on the Prompt"]
    LLM --> VAL["Validate SQL & enforce privacy"]
    VAL --> DB[("MySQL executes SQL")]
    DB --> CS["Chart generation based on Query Output"]
    CS --> RESP["Response JSON with Query Output + Charts"]
    RESP --> CTX
```
Works with Ollama, OpenAI, and Anthropic. Pick a provider/model in the `/chat` settings or send `provider`/`model` in the `/ask` payload.
- Privacy rules prevent disallowed columns from being returned.
- If you see "model not found", use an explicit model id (e.g., for Anthropic use a dated id; see the example after this list).
- If SQL generation looks off after schema changes, re‑export the schema (`export-schema.php`).
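For example, a request pinning an explicit, dated Anthropic model (the id below is illustrative; use one your account actually has access to):

```bash
curl -s http://localhost:8080/ask \
  -H 'Content-Type: application/json' \
  -d '{"q": "How many VL tests in the last 6 months?", "provider": "anthropic", "model": "claude-3-5-sonnet-20241022"}'
```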