Multi-Agent Linked Open Data Pipeline for Smart Traffic Management
Features • One Command • Quick Start • Architecture • Team • Community • Contributing
This project is developed and maintained by:
| Member | Role | Focus |
|---|---|---|
| Nguyễn Nhật Quang | Lead Developer | 🏗️ Architecture, 🔧 Backend, ⚙️ DevOps |
| Nguyễn Việt Hoàng | Backend Developer | 🤖 Agents, 📊 Data, 🧪 Testing |
| Nguyễn Đình Anh Tuấn | Full Stack Developer | 🎨 Frontend, 🔌 API, 📚 Docs |
📄 See AUTHORS.md for detailed information, CONTRIBUTORS.md for all contributors, and CODEOWNERS for code ownership rules.
```powershell
# Windows PowerShell - Just run everything with ONE command!
.\justrun.ps1 dev
```

```bash
# Linux/macOS
./justrun.sh dev
```

That's it! This single command will:
- ✅ Auto-detect prerequisites (Python, Node.js, Docker)
- ✅ Auto-install all dependencies if needed
- ✅ Copy environment files (`.env.example` → `.env`)
- ✅ Create required directories (`logs/`, `data/`, etc.)
- ✅ Start Docker infrastructure (12 services)
- ✅ Wait for databases to be healthy
- ✅ Launch Python Orchestrator + Citizen API (port 8001)
- ✅ Start TypeScript Backend API (port 5000)
- ✅ Start React Frontend (port 5173)
First time setup? The script handles everything automatically!
| Command | Description |
|---|---|
| `.\justrun.ps1 dev` | 🚀 Start everything (auto-setup if needed) |
| `.\justrun.ps1 setup` | 📦 Install all dependencies only |
| `.\justrun.ps1 prod` | 🐳 Start with Docker (production) |
| `.\justrun.ps1 stop` | ⏹️ Stop all services |
| `.\justrun.ps1 status` | 📊 Check status of all services |
| `.\justrun.ps1 test` | 🧪 Run all tests |
| `.\justrun.ps1 clean` | 🧹 Clean and reset |
| Service | URL | Credentials |
|---|---|---|
| Frontend (React) | http://localhost:5173 | - |
| Backend (Express) | http://localhost:5000 | - |
| Citizen API (FastAPI) | http://localhost:8001/docs | - |
| Stellio Context Broker | http://localhost:8080 | - |
| Neo4j Browser | http://localhost:7474 | neo4j / test12345 |
| Apache Jena Fuseki | http://localhost:3030 | admin / test_admin |
UIP - Urban Intelligence Platform is a production-ready multi-agent system designed to process real-time traffic data in Ho Chi Minh City and publish it as Linked Open Data (LOD). The system integrates computer vision (YOLOX + DETR), semantic web technologies (RDF, NGSI-LD, SOSA/SSN), and a modern microservices architecture.
- 🚀 Production-Ready: Battle-tested with comprehensive error handling, retry logic, and graceful shutdown mechanisms.
- 🔧 YAML-Configurable: 100% configured via YAML — no code changes required for new domains.
- 🌐 Standards-Compliant: Full support for ETSI NGSI-LD, W3C SOSA/SSN, and Smart Data Models.
- 📊 Full Stack Solution: Python Backend + React/TypeScript Frontend + Docusaurus Documentation.
- 🐳 Cloud-Native: Docker Compose orchestration with 12 integrated services.
| Category | Count | Agents |
|---|---|---|
| Data Collection | 2 | image_refresh, external_data_collector |
| Ingestion | 1 | citizen_ingestion |
| Analytics | 4 | cv_analysis, congestion_detection, accident_detection, pattern_recognition |
| Transformation | 2 | ngsi_ld_transformer, sosa_ssn_mapper |
| Context Management | 4 | entity_publisher, state_updater, temporal_data_manager, stellio_state_query |
| RDF & Linked Data | 5 | ngsi_ld_to_rdf, triplestore_loader, lod_linkset_enrichment, content_negotiation, smart_data_models_validation |
| State Management | 4 | state_manager, accident_state_manager, congestion_state_manager, temporal_state_tracker |
| Monitoring | 3 | health_check, data_quality_validator, performance_monitor |
| Notification | 5 | alert_dispatcher, incident_report_generator, subscription_manager, email_notification, webhook_notification |
| Graph Database | 2 | neo4j_query, neo4j_sync |
| Cache | 2 | cache_manager, cache_invalidator |
| Integration | 3 | api_gateway, cache_manager, neo4j_sync |
| Kafka | 1 | kafka_entity_publisher |
| TypeScript AI | 3 | TrafficMaestroAgent, GraphInvestigatorAgent, EcoTwinAgent |
| Layer | Technologies |
|---|---|
| Backend | Python 3.9+, FastAPI, AsyncIO, APScheduler, YOLOX, DETR |
| Frontend | React 18, TypeScript, Vite, TailwindCSS, Zustand |
| Databases | PostgreSQL/TimescaleDB, Neo4j 5.12, MongoDB 7.0, Redis 7 |
| Semantic Web | Apache Jena Fuseki, Stellio Context Broker, RDF/SPARQL |
| Messaging | Apache Kafka (KRaft), WebSocket, Socket.IO |
| DevOps | Docker Compose (12 services), GitHub Actions, Prometheus, Grafana |
| Documentation | Docusaurus 3.0, OpenAPI/Swagger |
- NGSI-LD: ETSI CIM standard for context information management
- SOSA/SSN: W3C ontologies for sensor observations
- Smart Data Models: TM Forum/FIWARE standardized data models
- LOD Cloud: Integration with GeoNames, DBpedia, Wikidata
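As a concrete example, publishing an entity to an NGSI-LD broker such as Stellio is a single HTTP call. Below is a minimal sketch (the entity ID, type, and attributes are illustrative, not the project's actual data models):

```python
# Minimal sketch: publish an NGSI-LD entity to a context broker.
# Entity ID/type and attribute names are illustrative only.
import requests

entity = {
    "id": "urn:ngsi-ld:TrafficFlowObserved:cam-001",
    "type": "TrafficFlowObserved",
    "vehicleCount": {"type": "Property", "value": 42},
    "location": {
        "type": "GeoProperty",
        "value": {"type": "Point", "coordinates": [106.7, 10.77]},
    },
    "@context": "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld",
}

resp = requests.post(
    "http://localhost:8080/ngsi-ld/v1/entities",  # Stellio endpoint from the table above
    json=entity,
    headers={"Content-Type": "application/ld+json"},
    timeout=30,
)
resp.raise_for_status()  # 201 Created on success
```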
UIP uses GitHub Actions with 10 automated workflows:
| Stage | Workflows | Description |
|---|---|---|
| CI | `test.yml`, `lint.yml`, `integration-tests.yml` | Unit tests (Python 3.9-3.11), code quality (Ruff, Black, mypy), integration tests |
| Security | `codeql.yml`, `dependency-review.yml` | SAST scanning, dependency vulnerability audit |
| CD | `deploy.yml`, `deploy-docs.yml` | Auto-deploy to Ubuntu VPS, Docusaurus to GitHub Pages |
| Release | `release.yml` | Semantic versioning, Docker images, PyPI publishing |
| Automation | `auto-label.yml`, `stale.yml` | PR labeling, stale issue management |
📖 See CI/CD Pipeline Wiki for detailed workflow documentation.
Minimum Requirements:

| Component | Requirement | Note |
|---|---|---|
| RAM | 16 GB | 12 Docker services + AI models require significant memory |
| CPU | 4 cores | Recommended: 6+ cores for better performance |
| Storage | 20 GB free space | Docker images, databases, and AI models |
| OS | Windows 10/11, Ubuntu 20.04+, macOS 11+ | Docker Desktop required for Windows/macOS |
Recommended Configuration:

| Component | Requirement | Note |
|---|---|---|
| RAM | 32 GB | Optimal for concurrent AI processing |
| CPU | 8+ cores | Intel i7/Ryzen 7 or equivalent |
| GPU | NVIDIA GPU with 6GB+ VRAM | For YOLOX/DETR acceleration (optional but recommended) |
| Storage | 50 GB SSD | Fast I/O for databases (Neo4j, PostgreSQL, MongoDB) |
| Network | 100 Mbps+ | For camera feeds and external API calls |
Windows/macOS Docker Desktop Settings (Resources → Advanced):
- CPUs: Minimum 4, Recommended 6-8
- Memory: Minimum 12 GB, Recommended 20-24 GB
- Swap: 2 GB
- Disk Image Size: 60 GB
Linux Docker:
- No memory limits by default, but monitor usage with `docker stats`
- Ensure sufficient swap space (8-16 GB recommended)
WARNING: Running this system on machines with < 16GB RAM may cause:
- System freezes or crashes
- Out of Memory (OOM) errors
- Docker container failures
- Extremely slow performance
For Low-Spec Machines (8-12 GB RAM):
- Use `docker-compose` with selective services only
- Disable AI services (YOLOX/DETR) if not needed
- Run backend and frontend separately without full Docker stack
- Consider cloud deployment (AWS, GCP, Azure) instead
- Python 3.9 or higher
- Node.js 18 or higher
- Docker & Docker Compose (Docker Desktop 4.0+ for Windows/macOS)
- Git
- NVIDIA Docker (optional, for GPU support)
```bash
# Clone repository
git clone https://github.com/UIP-Urban-Intelligence-Platform/UIP-Urban_Intelligence_Platform.git
cd UIP-Urban_Intelligence_Platform
# Copy environment configuration
cp .env.example .env
# Start all services
docker-compose up -d
# Check services status
docker-compose ps
```

Access Points:
| Service | URL | Credentials |
|---|---|---|
| Frontend (React) | http://localhost:5173 | - |
| Backend (Express) | http://localhost:5000 | - |
| Citizen API (FastAPI) | http://localhost:8001 | - |
| API Docs (Swagger) | http://localhost:8001/docs | - |
| Stellio Context Broker | http://localhost:8080 | - |
| Neo4j Browser | http://localhost:7474 | neo4j / test12345 |
| Fuseki SPARQL | http://localhost:3030 | admin / test_admin |
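Once the stack is up, the triplestore can be queried directly. A minimal sketch against the Fuseki endpoint above, assuming the default `traffic` dataset and credentials:

```python
# Minimal sketch: run a SPARQL query against Apache Jena Fuseki.
import requests

query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"
resp = requests.get(
    "http://localhost:3030/traffic/sparql",   # dataset endpoint (FUSEKI_DATASET=traffic)
    params={"query": query},
    headers={"Accept": "application/sparql-results+json"},
    auth=("admin", "test_admin"),             # default credentials from the table above
    timeout=60,
)
for row in resp.json()["results"]["bindings"]:
    print(row["s"]["value"], row["p"]["value"], row["o"]["value"])
```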
```bash
# Clone repository
git clone https://github.com/UIP-Urban-Intelligence-Platform/UIP-Urban_Intelligence_Platform.git
cd UIP-Urban_Intelligence_Platform
# Create Python virtual environment
python -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
# Install dependencies
pip install -r requirements/base.txt
pip install -r requirements/dev.txt
# Copy environment configuration
cp .env.example .env
# Run the unified system
python main.py
```

This project supports standard GNU Make targets for building and installing:
```bash
# Clone repository
git clone https://github.com/UIP-Urban-Intelligence-Platform/UIP-Urban_Intelligence_Platform.git
cd UIP-Urban_Intelligence_Platform
# View all available targets
make help
# Build the Python package
make # or: make all
# Build and install the package
make install
# Run tests
make check # or: make test
# Clean build artifacts
make clean
# Clean everything (including venv, node_modules)
make distclean
# Uninstall the package
make uninstall
```

GNU Make Standard Targets:
| Target | Description |
|---|---|
| `make` / `make all` | Build the Python package (creates `dist/`) |
| `make install` | Build and install the package |
| `make uninstall` | Uninstall the package |
| `make check` | Run all tests |
| `make clean` | Remove build artifacts |
| `make distclean` | Remove all generated files |
Project-Specific Targets:
| Target | Description |
|---|---|
| `make setup` | Install all dependencies (Python + Node.js) |
| `make dev` | Run all services in development mode |
| `make prod` | Run all services with Docker Compose |
| `make stop` | Stop all running services |
```bash
# Full system (API + Orchestrator every 60 minutes)
python main.py
# Custom orchestrator interval (30 minutes)
python main.py --orchestrator-interval 30
# Run orchestrator immediately on startup
python main.py --run-orchestrator-now
# API only (no orchestrator)
python main.py --no-orchestrator
# Orchestrator only (no API)
python main.py --no-api
# Run specific workflow phase
python orchestrator.py --phase transformation
# Dry run (validate without execution)
python orchestrator.py --dry-run
```
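For reference, a hypothetical `argparse` sketch of the CLI surface shown above (the actual `main.py` may wire these flags differently):

```python
# Hypothetical sketch of the CLI flags documented above.
import argparse

parser = argparse.ArgumentParser(description="UIP unified entry point")
parser.add_argument("--orchestrator-interval", type=int, default=60,
                    help="Minutes between orchestration runs")
parser.add_argument("--run-orchestrator-now", action="store_true",
                    help="Run the orchestrator immediately on startup")
parser.add_argument("--no-orchestrator", action="store_true",
                    help="Start the API only")
parser.add_argument("--no-api", action="store_true",
                    help="Start the orchestrator only")
args = parser.parse_args()
```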
38 Python Agents | 3 TypeScript AI Agents | 12 Docker Services | 7 Backend Services | 12 API Routes

```mermaid
flowchart TD
    %% --- Layer definitions ---
    subgraph ClientLayer ["🌐 CLIENT LAYER"]
        Frontend["🖥️ React Frontend"]
        ExtClients["📱 External Clients"]
    end
    subgraph GatewayLayer ["🔀 API GATEWAY LAYER"]
        APIGW["Express.js Backend & TS AI Agents"]
    end
    subgraph PythonLayer ["🐍 PYTHON ORCHESTRATION & AGENTS"]
        Orchestrator["📡 Ingestion API & Scheduler"]
        MAS["🤖 Multi-Agent System"]
    end
    subgraph MQLayer ["📨 MESSAGE QUEUE LAYER"]
        Kafka["Apache Kafka"]
    end
    subgraph StorageLayer ["🗄️ DATA STORAGE LAYER"]
        Stellio["🌐 Stellio Context Broker"]
        Databases["Neo4j, Fuseki, Postgres, Mongo, Redis"]
    end
    subgraph ExternalLayer ["🌍 EXTERNAL INTEGRATIONS"]
        ExternalSources["Cameras, APIs, LOD Cloud, AI Models"]
    end

    %% --- Flow definitions ---
    %% Clients connect to the gateway
    Frontend -->|HTTP/WebSocket| APIGW
    ExtClients -->|REST/WS| APIGW
    %% Gateway talks to the Python layer
    APIGW -->|API Calls| Orchestrator
    %% External sources feed data into the orchestrator
    ExternalSources -->|Data Ingestion| Orchestrator
    %% The orchestrator coordinates the multi-agent system
    Orchestrator -->|Triggers/Coordinates| MAS
    %% The multi-agent system processes data and publishes it to Kafka
    MAS -->|Publishes Processed Data| Kafka
    %% Kafka streams data down to the storage layer
    Kafka -->|Streams Data| StorageLayer
    %% Direct connections to storage (simplified)
    APIGW -.->|Queries| StorageLayer
    MAS -.->|Reads/Writes| StorageLayer

    %% Styling (optional colors)
    style ClientLayer fill:#f9f,stroke:#333,stroke-width:2px,color:#000
    style GatewayLayer fill:#ccf,stroke:#333,stroke-width:2px,color:#000
    style PythonLayer fill:#cfc,stroke:#333,stroke-width:2px,color:#000
    style MQLayer fill:#fcf,stroke:#333,stroke-width:2px,color:#000
    style StorageLayer fill:#ff9,stroke:#333,stroke-width:2px,color:#000
    style ExternalLayer fill:#eee,stroke:#333,stroke-width:2px,color:#000
```
```mermaid
flowchart LR
    %% --- STYLE DEFINITIONS ---
    %% Base node style: rounded corners, dark text
    classDef baseNode fill:#fff,stroke:#666,stroke-width:1px,rx:5,ry:5,color:#333;
    %% Title style: no border, bold
    classDef titleNode fill:none,stroke:none,font-size:14px,font-weight:bold,color:#000;
    %% Per-phase color styles
    classDef p1 fill:#e3f2fd,stroke:#2196f3,stroke-width:2px;
    classDef p2 fill:#fff3e0,stroke:#ff9800,stroke-width:2px;
    classDef p3 fill:#e8f5e9,stroke:#4caf50,stroke-width:2px;
    classDef p4 fill:#f3e5f5,stroke:#9c27b0,stroke-width:2px;
    classDef p5 fill:#ffebee,stroke:#ef5350,stroke-width:2px;

    %% --- PHASE 1 (COLUMN 1) ---
    subgraph Phase1 [ ]
        direction TB
        T1["📥 DATA COLLECTION"]:::titleNode
        P1_Cam["📷 Cameras"]:::baseNode
        P1_Wea["🌤️ Weather"]:::baseNode
        P1_Cit["👤 Citizen"]:::baseNode
        %% Stack nodes vertically inside the column
        T1 ~~~ P1_Cam ~~~ P1_Wea ~~~ P1_Cit
    end

    %% --- PHASE 2 (COLUMN 2) ---
    subgraph Phase2 [ ]
        direction TB
        T2["👁️ ANALYTICS & CV"]:::titleNode
        P2_Yolo["🚗 YOLOX/DETR"]:::baseNode
        P2_Cong["🚦 Congestion"]:::baseNode
        P2_Acc["💥 Accident"]:::baseNode
        T2 ~~~ P2_Yolo ~~~ P2_Cong ~~~ P2_Acc
    end

    %% --- PHASE 3 (COLUMN 3) ---
    subgraph Phase3 [ ]
        direction TB
        T3["🔄 TRANSFORMATION"]:::titleNode
        P3_NGSI["📦 NGSI-LD"]:::baseNode
        P3_SOSA["📚 SOSA/SSN"]:::baseNode
        T3 ~~~ P3_NGSI --> P3_SOSA
    end

    %% --- PHASE 4 (COLUMN 4) ---
    subgraph Phase4 [ ]
        direction TB
        T4["🌐 CONTEXT"]:::titleNode
        P4_Stellio["💠 Stellio Broker"]:::baseNode
        P4_Kafka[("🔥 Kafka")]:::baseNode
        P4_Mongo[("🍃 Mongo")]:::baseNode
        T4 ~~~ P4_Stellio --> P4_Kafka ~~~ P4_Mongo
    end

    %% --- PHASE 5 (COLUMN 5) ---
    subgraph Phase5 [ ]
        direction TB
        T5["🔗 LINKED DATA"]:::titleNode
        P5_Conv["⚙️ RDF Convert"]:::baseNode
        P5_Fuseki[("🔺 Fuseki")]:::baseNode
        P5_Neo4j[("🔷 Neo4j")]:::baseNode
        T5 ~~~ P5_Conv --> P5_Fuseki ~~~ P5_Neo4j
    end

    %% --- ARROWS CONNECTING COLUMNS (HORIZONTAL) ---
    %% Connect from the middle of each column to the next for a clean layout
    P1_Wea ==> P2_Cong
    P2_Cong ==> P3_NGSI
    P3_SOSA ==> P4_Stellio
    P4_Kafka ==> P5_Conv

    %% --- APPLY COLORS ---
    class Phase1 p1
    class Phase2 p2
    class Phase3 p3
    class Phase4 p4
    class Phase5 p5
```
The application uses environment variables for configuration management. Follow these steps to set up:
```bash
# Copy the example environment file
cp .env.example .env
# Edit the file with your preferred editor
nano .env  # or vim, code, notepad++, etc.
```

| Variable | Required | Default | Description |
|---|---|---|---|
| `ENVIRONMENT` | ✅ | `development` | Application environment: `development`, `staging`, `production` |
| `LOG_LEVEL` | ✅ | `INFO` | Logging level: `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL` |
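In Python code, these can be loaded with `python-dotenv` (a minimal sketch; assumes the package is installed):

```python
# Minimal sketch: read the settings above from a .env file.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

ENVIRONMENT = os.getenv("ENVIRONMENT", "development")
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")
DEBUG = os.getenv("DEBUG", "false").lower() == "true"
```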
🔧 Core Application Configuration
```bash
# ============================================================================
# APPLICATION SETTINGS
# ============================================================================
# Environment mode (affects logging, debugging, and performance optimizations)
ENVIRONMENT=development # Options: development | staging | production
# Logging configuration
LOG_LEVEL=INFO # Options: DEBUG | INFO | WARNING | ERROR | CRITICAL
LOG_FORMAT=json # Options: json | text
LOG_FILE=logs/uip.log # Path to log file (relative to project root)
# Application metadata
APP_NAME="UIP - Urban Intelligence Platform"
APP_VERSION=2.0.0
APP_HOST=0.0.0.0 # Host to bind the application
APP_PORT=8000 # Port for the main application
# Debug mode (disable in production)
DEBUG=false                 # Options: true | false
```

⏱️ Workflow Orchestrator Settings
```bash
# ============================================================================
# ORCHESTRATOR CONFIGURATION
# ============================================================================
# Orchestration scheduling
ORCHESTRATOR_INTERVAL=60 # Minutes between orchestration runs
ORCHESTRATOR_AUTO_START=true # Auto-start orchestrator on application launch
ORCHESTRATOR_MAX_RETRIES=3 # Maximum retry attempts for failed agents
# Workflow configuration file
WORKFLOW_CONFIG=config/workflow.yaml # Path to workflow definition
# Execution settings
ORCHESTRATOR_PARALLEL_AGENTS=4 # Number of agents to run in parallel
ORCHESTRATOR_TIMEOUT=3600 # Maximum execution time per phase (seconds)
# Run immediately on startup
RUN_ORCHESTRATOR_NOW=false      # Options: true | false
```

🗄️ Data Storage & Databases
```bash
# ============================================================================
# DATA STORES
# ============================================================================
# Stellio Context Broker (NGSI-LD)
STELLIO_URL=http://localhost:8080
STELLIO_TENANT=urn:ngsi-ld:tenant:default
STELLIO_CONTEXT=https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld
STELLIO_TIMEOUT=30 # Request timeout in seconds
STELLIO_MAX_RETRIES=3 # Maximum retry attempts
# Neo4j Graph Database
NEO4J_URL=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=test12345 # ⚠️ CHANGE IN PRODUCTION
NEO4J_DATABASE=neo4j # Database name
NEO4J_MAX_CONNECTION_LIFETIME=3600 # Connection lifetime in seconds
NEO4J_MAX_CONNECTION_POOL_SIZE=50 # Maximum connection pool size
NEO4J_CONNECTION_TIMEOUT=30 # Connection timeout in seconds
# Apache Jena Fuseki (RDF Triplestore)
FUSEKI_URL=http://localhost:3030
FUSEKI_DATASET=traffic # Primary dataset name
FUSEKI_USER=admin
FUSEKI_PASSWORD=test_admin # ⚠️ CHANGE IN PRODUCTION
FUSEKI_TIMEOUT=60 # Query timeout in seconds
# MongoDB (Document Store)
MONGODB_URI=mongodb://localhost:27017
MONGODB_DATABASE=uip_traffic # Database name
MONGODB_USER= # Leave empty for no authentication
MONGODB_PASSWORD= # Leave empty for no authentication
MONGODB_AUTH_SOURCE=admin # Authentication database
MONGODB_MAX_POOL_SIZE=10 # Maximum connection pool size
MONGODB_MIN_POOL_SIZE=1 # Minimum connection pool size
MONGODB_SERVER_SELECTION_TIMEOUT=5000 # Server selection timeout (ms)
# PostgreSQL + TimescaleDB (for Stellio backend)
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_DB=stellio
POSTGRES_USER=stellio
POSTGRES_PASSWORD=stellio_password # ⚠️ CHANGE IN PRODUCTION
POSTGRES_SCHEMA=public
POSTGRES_MAX_CONNECTIONS=20
# Redis (Cache & Session Store)
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD= # Leave empty if no password
REDIS_DB=0 # Redis database number
REDIS_MAX_CONNECTIONS=10
REDIS_SOCKET_TIMEOUT=5 # Socket timeout in seconds
REDIS_SOCKET_CONNECT_TIMEOUT=5     # Connection timeout in seconds
```
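These connection settings plug straight into the standard Python clients. A minimal sketch for Neo4j and Redis, assuming the `neo4j` and `redis` packages and the default credentials above:

```python
# Sketch: open connections using the defaults above (change passwords in production).
from neo4j import GraphDatabase
import redis

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "test12345"))
with driver.session(database="neo4j") as session:
    count = session.run("MATCH (n) RETURN count(n) AS c").single()["c"]
    print("Neo4j nodes:", count)

cache = redis.Redis(host="localhost", port=6379, db=0, socket_timeout=5)
cache.set("uip:healthcheck", "ok", ex=3600)  # CACHE_TTL-style expiry
```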
📨 Apache Kafka Configuration

```bash
# ============================================================================
# MESSAGE QUEUE (Apache Kafka)
# ============================================================================
# Kafka broker connection
KAFKA_BOOTSTRAP_SERVERS=localhost:9092
KAFKA_CLIENT_ID=uip-producer
KAFKA_GROUP_ID=uip-consumer-group
# Producer settings
KAFKA_PRODUCER_ACKS=all # Options: 0 | 1 | all
KAFKA_PRODUCER_RETRIES=3
KAFKA_PRODUCER_MAX_IN_FLIGHT_REQUESTS=5
KAFKA_PRODUCER_COMPRESSION_TYPE=snappy # Options: none | gzip | snappy | lz4 | zstd
# Consumer settings
KAFKA_CONSUMER_AUTO_OFFSET_RESET=earliest # Options: earliest | latest
KAFKA_CONSUMER_ENABLE_AUTO_COMMIT=true
KAFKA_CONSUMER_AUTO_COMMIT_INTERVAL_MS=5000
KAFKA_CONSUMER_SESSION_TIMEOUT_MS=30000
KAFKA_CONSUMER_MAX_POLL_RECORDS=500
# Topic configuration
KAFKA_TOPIC_PREFIX=cim.entity # Prefix for all Kafka topics
KAFKA_TOPIC_PARTITIONS=3 # Number of partitions per topic
KAFKA_TOPIC_REPLICATION_FACTOR=1 # Replication factor (set to 3 in production)
# Security (optional - for production)
KAFKA_SECURITY_PROTOCOL=PLAINTEXT # Options: PLAINTEXT | SSL | SASL_PLAINTEXT | SASL_SSL
KAFKA_SASL_MECHANISM= # Options: PLAIN | SCRAM-SHA-256 | SCRAM-SHA-512
KAFKA_SASL_USERNAME=
KAFKA_SASL_PASSWORD=
```
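The producer settings map one-to-one onto client options. A sketch using `kafka-python` (the topic name is illustrative; `snappy` compression requires the `python-snappy` package):

```python
# Sketch: a producer configured like the settings above (kafka-python).
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    client_id="uip-producer",
    acks="all",                      # KAFKA_PRODUCER_ACKS
    retries=3,                       # KAFKA_PRODUCER_RETRIES
    compression_type="snappy",       # KAFKA_PRODUCER_COMPRESSION_TYPE
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
# Topic uses the KAFKA_TOPIC_PREFIX; the suffix here is illustrative.
producer.send("cim.entity.traffic", {"id": "urn:ngsi-ld:TrafficFlowObserved:cam-001"})
producer.flush()
```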
🤖 YOLOX & DETR Configuration

```bash
# ============================================================================
# COMPUTER VISION (YOLOX + DETR)
# ============================================================================
# YOLOX Object Detection (Apache-2.0 License)
YOLOX_MODEL=assets/models/yolox_s.pth # Model weights file
YOLOX_MODEL_TYPE=yolox-s # Options: yolox-nano | yolox-tiny | yolox-s | yolox-m | yolox-l | yolox-x
YOLOX_DEVICE=cpu # Options: cpu | cuda | cuda:0 | cuda:1 | mps
YOLOX_CONFIDENCE=0.25 # Confidence threshold (0.0 - 1.0)
YOLOX_NMS_THRESHOLD=0.45 # Non-Maximum Suppression threshold
YOLOX_INPUT_SIZE=640 # Input image size (pixels)
YOLOX_FP16=false # Use FP16 half-precision (GPU only)
YOLOX_BATCH_SIZE=1 # Batch size for inference
# DETR Accident Detection (via HuggingFace Transformers)
DETR_MODEL=hilmantm/detr-traffic-accident-detection # HuggingFace model ID
DETR_DEVICE=cpu # Options: cpu | cuda
DETR_CONFIDENCE=0.7 # Confidence threshold for accident detection
DETR_CACHE_DIR=assets/models/detr # Model cache directory
DETR_MAX_SIZE=800 # Maximum image dimension
# Model download settings
MODEL_DOWNLOAD_TIMEOUT=300 # Timeout for model downloads (seconds)
MODEL_CACHE_ENABLED=true # Cache downloaded models locally
HF_HOME=assets/models/huggingface # HuggingFace cache directory
# GPU Memory Management (if using CUDA)
CUDA_VISIBLE_DEVICES=0 # GPU device ID(s), comma-separated
CUDA_LAUNCH_BLOCKING=0          # Set to 1 for debugging
```
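Loading the referenced DETR model takes only a few lines with HuggingFace Transformers. A sketch, assuming `transformers` and `torch` are installed and using an illustrative image path:

```python
# Sketch: run the DETR accident-detection model from the settings above.
from transformers import pipeline

detector = pipeline(
    "object-detection",
    model="hilmantm/detr-traffic-accident-detection",  # DETR_MODEL
    device=-1,  # CPU; use device=0 for the first CUDA GPU
)
detections = detector("camera_frame.jpg")  # illustrative image path
for d in detections:
    if d["score"] >= 0.7:  # DETR_CONFIDENCE
        print(d["label"], round(d["score"], 3), d["box"])
```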
🌍 Third-Party API Keys

```bash
# ============================================================================
# EXTERNAL APIs
# ============================================================================
# OpenWeatherMap API
OPENWEATHERMAP_API_KEY=your_api_key_here # Get from: https://openweathermap.org/api
OPENWEATHERMAP_UNITS=metric # Options: metric | imperial | standard
OPENWEATHERMAP_LANG=en # Language code
# GeoNames API
GEONAMES_USERNAME=your_username_here # Register at: http://www.geonames.org/login
GEONAMES_MAX_ROWS=10 # Maximum results per query
# OpenAQ Air Quality API
OPENAQ_API_URL=https://api.openaq.org/v2
OPENAQ_LIMIT=100 # Results per page
OPENAQ_PAGE=1 # Page number
# Google Maps API (Optional - for geocoding/routing)
GOOGLE_MAPS_API_KEY= # Optional: Enhanced geocoding
GOOGLE_MAPS_LANGUAGE=en
# GeoServer (Optional - for WMS/WFS services)
GEOSERVER_URL=http://localhost:8080/geoserver
GEOSERVER_WORKSPACE=traffic
GEOSERVER_USER=admin
GEOSERVER_PASSWORD=geoserver
# LOD Cloud Integration
DBPEDIA_SPARQL_ENDPOINT=https://dbpedia.org/sparql
WIKIDATA_SPARQL_ENDPOINT=https://query.wikidata.org/sparql
GEONAMES_RDF_ENDPOINT=http://sws.geonames.org
```

🔐 Security Configuration
```bash
# ============================================================================
# SECURITY & AUTHENTICATION
# ============================================================================
# API Security
API_KEY_HEADER=X-API-Key
API_KEYS=key1,key2,key3 # Comma-separated API keys
JWT_SECRET=your-super-secret-jwt-key-change-this # ⚠️ CHANGE IN PRODUCTION
JWT_ALGORITHM=HS256
JWT_EXPIRATION=3600 # Token expiration (seconds)
# CORS Configuration
CORS_ORIGINS=http://localhost:3000,http://localhost:5173 # Allowed origins
CORS_ALLOW_CREDENTIALS=true
CORS_ALLOW_METHODS=GET,POST,PUT,DELETE,OPTIONS
CORS_ALLOW_HEADERS=*
# Rate Limiting
RATE_LIMIT_ENABLED=true
RATE_LIMIT_PER_MINUTE=60 # Requests per minute per IP
RATE_LIMIT_STORAGE=redis # Options: memory | redis
# SSL/TLS (for production)
SSL_ENABLED=false
SSL_CERT_PATH=/path/to/cert.pem
SSL_KEY_PATH=/path/to/key.pem
```
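The `JWT_*` settings correspond to a standard encode/decode flow, for example with PyJWT (a sketch; the secret shown is the placeholder from above and must be replaced):

```python
# Sketch: issue and verify a token using the JWT_* settings above (PyJWT).
import time
import jwt  # pip install PyJWT

SECRET = "your-super-secret-jwt-key-change-this"  # placeholder, never hardcode in production

token = jwt.encode(
    {"sub": "user-123", "exp": int(time.time()) + 3600},  # JWT_EXPIRATION
    SECRET,
    algorithm="HS256",  # JWT_ALGORITHM
)
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["sub"])
```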
📊 Monitoring Configuration

```bash
# ============================================================================
# MONITORING & OBSERVABILITY
# ============================================================================
# Prometheus Metrics
PROMETHEUS_ENABLED=true
PROMETHEUS_PORT=9090
PROMETHEUS_PATH=/metrics
# Health Check
HEALTH_CHECK_ENABLED=true
HEALTH_CHECK_INTERVAL=30 # Seconds between checks
HEALTH_CHECK_TIMEOUT=10 # Timeout per check
# Performance Monitoring
PERFORMANCE_MONITORING=true
SLOW_QUERY_THRESHOLD=1000 # Log queries slower than N milliseconds
# Sentry Error Tracking (Optional)
SENTRY_DSN= # Sentry project DSN
SENTRY_ENVIRONMENT=development
SENTRY_TRACES_SAMPLE_RATE=0.1 # Sample 10% of transactions
# Grafana Integration
GRAFANA_URL=http://localhost:3001
GRAFANA_API_KEY=
```
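Metrics matching the Prometheus settings above can be exposed with `prometheus_client` (a sketch; the metric name is illustrative):

```python
# Sketch: expose a /metrics endpoint on PROMETHEUS_PORT (prometheus_client).
from prometheus_client import Counter, start_http_server

ENTITIES_PUBLISHED = Counter(
    "uip_entities_published_total",   # illustrative metric name
    "NGSI-LD entities published to the broker",
)

start_http_server(9090)  # PROMETHEUS_PORT; serves /metrics
ENTITIES_PUBLISHED.inc()
```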
🧪 Development Settings

```bash
# ============================================================================
# DEVELOPMENT & TESTING
# ============================================================================
# Testing
TEST_DATABASE_URL=mongodb://localhost:27017/uip_test
TEST_NEO4J_URL=bolt://localhost:7687
TEST_REDIS_DB=15 # Separate Redis DB for testing
# Mock Services
MOCK_EXTERNAL_APIS=false # Use mock responses for external APIs
MOCK_CV_MODELS=false # Use mock CV model responses
# Code Quality
ENABLE_PROFILING=false # Enable code profiling
PROFILE_OUTPUT_DIR=profiles/
# Hot Reload
AUTO_RELOAD=true # Auto-reload on code changes (dev only)
RELOAD_DIRS=src,config          # Directories to watch for changes
```

🚀 Production-Specific Settings
```bash
# ============================================================================
# PRODUCTION DEPLOYMENT
# ============================================================================
# High Availability
HA_ENABLED=false
HA_REDIS_SENTINEL= # Redis Sentinel URLs
HA_KAFKA_BROKERS=kafka1:9092,kafka2:9092,kafka3:9092
# Resource Limits
MAX_WORKERS=4 # Number of worker processes
WORKER_CONNECTIONS=1000 # Max connections per worker
WORKER_TIMEOUT=30 # Worker timeout (seconds)
# Database Connection Pooling
DB_POOL_SIZE=20 # Connection pool size
DB_MAX_OVERFLOW=10 # Maximum overflow connections
# Caching
CACHE_TTL=3600 # Cache TTL in seconds
CACHE_PREFIX=uip: # Cache key prefix
# Backup & Recovery
BACKUP_ENABLED=false
BACKUP_SCHEDULE=0 2 * * * # Cron expression (2 AM daily)
BACKUP_RETENTION_DAYS=30
BACKUP_S3_BUCKET= # S3 bucket for backups
# Feature Flags
FEATURE_CITIZEN_REPORTS=true
FEATURE_PATTERN_RECOGNITION=true
FEATURE_LOD_LINKSETS=true
```

| Category | Variables | Documentation |
|---|---|---|
| Application | `ENVIRONMENT`, `LOG_LEVEL`, `DEBUG` | See Application Settings |
| Orchestrator | `ORCHESTRATOR_INTERVAL`, `WORKFLOW_CONFIG` | See Orchestrator Config |
| Databases | `STELLIO_URL`, `NEO4J_URL`, `FUSEKI_URL`, `MONGODB_URI` | See Database Connections |
| Kafka | `KAFKA_BOOTSTRAP_SERVERS`, `KAFKA_TOPIC_PREFIX` | See Message Queue |
| AI Models | `YOLOX_MODEL`, `DETR_MODEL`, `YOLOX_DEVICE` | See CV Configuration |
| External APIs | `OPENWEATHERMAP_API_KEY`, `GEONAMES_USERNAME` | See External APIs |
| Security | `JWT_SECRET`, `API_KEYS`, `CORS_ORIGINS` | See Security |
IMPORTANT: Never commit `.env` files to version control!

- Change Default Passwords: All default passwords must be changed in production
- Use Strong Secrets: Generate cryptographically secure random strings for `JWT_SECRET`
- Restrict CORS: Only allow trusted origins in production
- Enable SSL/TLS: Always use HTTPS in production environments
- Rotate Keys: Regularly rotate API keys and credentials
- Use Environment-Specific Configs: Separate `.env.development`, `.env.staging`, `.env.production` files
```bash
# Generate a secure JWT secret (Linux/macOS)
openssl rand -base64 32
# Generate a secure API key
openssl rand -hex 32
# Windows PowerShell
[Convert]::ToBase64String([System.Security.Cryptography.RandomNumberGenerator]::GetBytes(32))
```
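Python's standard library offers an equivalent if OpenSSL isn't available:

```python
# Sketch: generate secrets with the standard library (no OpenSSL needed).
import secrets

print(secrets.token_urlsafe(32))  # e.g. for JWT_SECRET
print(secrets.token_hex(32))      # e.g. for an API key
```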
Download the complete `.env.example` template:

```bash
# Download from repository
curl -O https://raw.githubusercontent.com/UIP-Urban-Intelligence-Platform/UIP-Urban_Intelligence_Platform/main/.env.example
# Or copy from the repository root
cp .env.example .env
```

- 📖 GitHub Pages: https://uip-urban-intelligence-platform.github.io/UIP-Urban_Intelligence_Platform/
- Configuration Guide: docs/data-access/
- Deployment Guide: docs/deployment/
- Security Policy: .github/SECURITY.md
- Code Owners: .github/CODEOWNERS
- Docker Configuration: docker-compose.yml
Define orchestration phases in `config/workflow.yaml`:

```yaml
workflow:
  name: "UIP - Urban Intelligence Platform"
  version: "2.0.0"
  phases:
    - name: data_collection
      parallel: true
      agents:
        - module: src.agents.data_collection.image_refresh_agent
          enabled: true
        - module: src.agents.data_collection.external_data_collector_agent
          enabled: true
    - name: analytics
      parallel: false
      agents:
        - module: src.agents.analytics.cv_analysis_agent
          enabled: true
          config:
            model: yolox_s.pth  # YOLOX model
            confidence: 0.25
            accident_model: hilmantm/detr-traffic-accident-detection  # DETR from HuggingFace
    - name: transformation
      agents:
        - module: src.agents.transformation.ngsi_ld_transformer_agent
        - module: src.agents.transformation.sosa_ssn_mapper_agent
```
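Because phases are plain YAML, an orchestrator can discover and run agents generically. A conceptual sketch of such a loader (assumes `PyYAML`; this is illustrative, not the project's actual implementation):

```python
# Conceptual sketch: load workflow.yaml and import the enabled agent modules.
import importlib
import yaml  # pip install pyyaml

with open("config/workflow.yaml") as f:
    workflow = yaml.safe_load(f)["workflow"]

for phase in workflow["phases"]:
    print(f"Phase: {phase['name']} (parallel={phase.get('parallel', False)})")
    for spec in phase["agents"]:
        if not spec.get("enabled", True):
            continue
        module = importlib.import_module(spec["module"])
        # A real orchestrator would instantiate and run the agent here,
        # passing spec.get("config", {}) to it.
        print("  loaded:", module.__name__)
```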
```bash
# Run all tests
pytest
# Run with coverage
pytest --cov=src --cov-report=html --cov-report=term-missing
# Run specific test category
pytest tests/unit/ -v
pytest tests/integration/ -v
# Run tests in parallel
pytest -n auto
# Run with specific markers
pytest -m "not slow"
```

| Component | Target | Current |
|---|---|---|
| Core modules | 90% | ✅ |
| Agents | 80% | ✅ |
| Integration | 70% | ✅ |
| Resource | Description | Link |
|---|---|---|
| API Reference | OpenAPI/Swagger documentation | docs/api/ |
| Architecture Guide | System design & data flow | docs/architecture/ |
| Configuration Guide | YAML configuration reference | docs/data-access/ |
| Contributing Guide | Development workflow | CONTRIBUTING.md |
| Security Policy | Vulnerability reporting | SECURITY.md |
| Changelog | Version history | docs/CHANGELOG.md |
```bash
cd docs
npm install
npm run start # Development server
npm run build   # Production build
```

Join our community to get help, share ideas, and connect with other users:
| Channel | Description | Link |
|---|---|---|
| 💬 GitHub Discussions | Q&A, ideas, and general discussion | Join Discussions |
| 📧 Mailing List | Announcements & release notifications | Google Groups |
| 📖 Wiki | Comprehensive documentation | GitHub Wiki |
| 🐛 Issues | Bug reports & feature requests | GitHub Issues |
- 📣 Announcements: Official project announcements (maintainers only)
- ❓ Q&A: Ask questions and get help from the community
- 💡 Ideas: Share and discuss new feature ideas
- 🎉 Show and Tell: Share your projects built with UIP
- 📋 RFCs: Request for Comments on major changes
Subscribe to receive:
- 📢 Release announcements
- 🔔 Important updates
- 📝 Monthly newsletters
Subscribe: Send email to uip-platform+subscribe@googlegroups.com
We welcome contributions! Please see our Contributing Guide for details.
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Install dev dependencies: `pip install -r requirements/dev.txt`
- Install pre-commit hooks: `pre-commit install`
- Make your changes
- Test your changes: `pytest --cov=src`
- Lint your code: `pre-commit run --all-files`
- Commit your changes: `git commit -m "feat: add amazing feature"`
- Push to the branch: `git push origin feature/amazing-feature`
- Open a Pull Request
We follow Conventional Commits:
```
feat: add new feature
fix: bug fix
docs: documentation changes
style: formatting changes
refactor: code refactoring
test: adding tests
chore: maintenance tasks
```
- Python: Black formatter, 100 character line length
- TypeScript: ESLint + Prettier
- Commits: Conventional Commits
- Docs: Google-style docstrings
Please see our Security Policy for reporting vulnerabilities.
- 🔐 API key authentication
- 🔒 CORS configuration
- 🛡️ Input validation
- 📝 Audit logging
- 🔑 Secret management via environment variables
This project is licensed under the MIT License.
This project uses MIT-compatible computer vision libraries:
- YOLOX (Apache-2.0 by Megvii) — Object detection for vehicles and pedestrians
- DETR (Apache-2.0) — Accident detection via HuggingFace Transformers
- PyTorch (BSD-style) — Deep learning framework
All dependencies use permissive licenses (MIT, Apache-2.0, BSD) that are compatible with MIT licensing.
| License | File | Description |
|---|---|---|
| MIT | LICENSE | Main project license with third-party attribution |
| N/A | COPYING | Licensing information and third-party notices |
| Document | Description |
|---|---|
| LICENSE | MIT license with third-party attribution |
| COPYING | Detailed licensing information |
| THIRD_PARTY_LICENSES.md | Complete list of 120+ third-party dependencies and their licenses |
The MIT License covers all code in this project. You are free to:
- ✅ Use commercially
- ✅ Modify and distribute
- ✅ Sublicense
- ✅ Use in proprietary software
| Component | License | Use Case |
|---|---|---|
| YOLOX | Apache-2.0 | Vehicle/pedestrian detection |
| DETR (HuggingFace) | Apache-2.0 | Accident detection |
| PyTorch | BSD-style | Deep learning framework |
| FastAPI | MIT | Web framework |
| RDFLib | BSD-3-Clause | RDF/Linked Data processing |
```
MIT License
Copyright (c) 2025 UIP - Urban Intelligence Platform Contributors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
- ETSI NGSI-LD — Context Information Management
- W3C SOSA/SSN — Sensor Observation Ontologies
- Smart Data Models — TM Forum & FIWARE
- Stellio Context Broker — NGSI-LD implementation
- Apache Jena — Semantic Web framework
- YOLOX — Object Detection (Apache-2.0)
- HuggingFace Transformers — DETR Accident Detection
- FastAPI — Modern Python web framework
| Channel | Description | Link |
|---|---|---|
| 📧 Mailing List | Announcements & discussions | Google Groups |
| 💬 Discord | Real-time chat & support | Join Discord |
| 💬 Discussions | Q&A & community | GitHub Discussions |
- 📖 Documentation: docs/
- 🐛 Bug Reports: GitHub Issues
- 📧 Email: nguyennhatquang522004@gmail.com
📦 Legacy Roadmap
- Multi-agent system architecture (37 agents)
- NGSI-LD entity management
- YOLOX computer vision integration (Apache-2.0)
- RDF triple store publishing
- Docker Compose deployment
- CI/CD pipelines (9 workflows)
- Docusaurus documentation site
- Multi-agent system architecture (38 agents)
- YOLOX object detection (Apache-2.0 by Megvii)
- DETR accident detection via HuggingFace (Apache-2.0)
- Full MIT license compatibility
- All dependencies use permissive licenses
- NGSI-LD entity management
- RDF triple store publishing
- Docker Compose deployment
- Real-time streaming analytics
- Advanced traffic prediction (ML)
- Mobile application
- Public API with rate limiting
- Federated learning
- Edge computing support
- Knowledge graph reasoning
- Multi-city deployment
Built with ❤️ for the Semantic Web and Linked Open Data community



