AI-powered weather visualizations – aka when AI meets HTML visualization.
See it in action at https://ai-weather.codetopic.eu! ^^
Yapping section—skip to the next one for the real info.
So, one day, a friend of mine sent me a link to a web page. It's a simple page: all it does is show you nine visualizations of analog clocks, each made by a different AI. Cool, right? So wacky! ^^
I thought, "Why not make something similar?" And here we are. With a web server that generates these visualizations, but for weather.
It's a server that asks various AI models to create HTML/CSS visualizations of current weather data, every hour. The results are displayed in real-time on a web page.
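At a high level, each hourly cycle fetches the current weather and asks every configured model for an HTML page. Below is a minimal sketch of that fan-out with hypothetical names and a stubbed generator; the real server talks to Ollama/other providers and the OpenWeather API, which this sketch leaves out:

```python
import asyncio

# Hypothetical model roster; the real list comes from config.yaml.
MODELS = ["llama3.2", "qwen3", "mistral"]

async def generate_visualization(model: str, weather: dict) -> str:
    # Stub: real code would prompt the model for a full HTML/CSS page.
    return f"<html><body>{model}: {weather['temp']}°C</body></html>"

async def run_cycle(weather: dict) -> dict[str, str]:
    """One hourly cycle: ask every model for a visualization concurrently."""
    pages = await asyncio.gather(
        *(generate_visualization(m, weather) for m in MODELS)
    )
    return dict(zip(MODELS, pages))

results = asyncio.run(run_cycle({"temp": 21}))
print(sorted(results))  # ['llama3.2', 'mistral', 'qwen3']
```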
1. Set your OpenWeather API key

   ```bash
   cp .env.example .env  # Edit .env with your API key
   ```

2. Start the Ollama service

   ```bash
   docker-compose -f docker/docker-compose.yml up ollama -d
   ```

3. Pull Ollama models

   ```bash
   docker exec -it docker-ollama-1 ollama pull llama3.2
   docker exec -it docker-ollama-1 ollama pull qwen3
   docker exec -it docker-ollama-1 ollama pull mistral
   ```

4. Start the rest of the services

   ```bash
   docker-compose -f docker/docker-compose.yml up -d
   ```

5. Access the application at http://localhost:8000
```bash
# Create a shared network
docker network create ai-weather-network

# Start Ollama container
docker run -d \
  --name ollama \
  --network ai-weather-network \
  -v ollama_data:/root/.ollama \
  ollama/ollama:latest

# Pull models
docker exec ollama ollama pull llama3.2
docker exec ollama ollama pull qwen3
docker exec ollama ollama pull mistral

# Start ai-weather container
docker run -d \
  -p 8000:8000 \
  -e WEATHER__API_KEY=your_key \
  -e OLLAMA__BASE_URL=http://ollama:11434 \
  --network ai-weather-network \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/config/config.yaml:/app/config/config.yaml:ro \
  --name ai-weather \
  docker.io/anty0/ai-weather:latest
```

If you already have Ollama running on your host machine:
Linux:

```bash
docker run -d \
  -p 8000:8000 \
  -e WEATHER__API_KEY=your_key \
  -e OLLAMA__BASE_URL=http://host.docker.internal:11434 \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/config/config.yaml:/app/config/config.yaml:ro \
  --name ai-weather \
  --add-host=host.docker.internal:host-gateway \
  docker.io/anty0/ai-weather:latest
```

macOS/Windows:
```bash
docker run -d \
  -p 8000:8000 \
  -e WEATHER__API_KEY=your_key \
  -e OLLAMA__BASE_URL=http://host.docker.internal:11434 \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/config/config.yaml:/app/config/config.yaml:ro \
  --name ai-weather \
  docker.io/anty0/ai-weather:latest
```

Note:
- On macOS/Windows, `host.docker.internal` resolves to the host automatically.
- On Linux, the `--add-host` flag is required.
- For available environment variables, see `.env.example`.
- For configuration options, see `config/config.yaml.example`.
- The `/app/config/config.yaml` mount can be omitted if you configure everything via environment variables.
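The double-underscore variable names (e.g. `WEATHER__API_KEY`, `OLLAMA__BASE_URL`) follow the common convention of mapping nested settings onto flat environment variables. As an illustration only (the project itself presumably uses a settings library for this), here is a stdlib-only sketch of that grouping:

```python
import os

def nested_env(delimiter: str = "__") -> dict:
    """Group FOO__BAR=baz environment variables into {"foo": {"bar": "baz"}}."""
    tree: dict = {}
    for key, value in os.environ.items():
        if delimiter not in key:
            continue
        parts = [p.lower() for p in key.split(delimiter)]
        node = tree
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return tree

os.environ["WEATHER__API_KEY"] = "demo-key"
os.environ["OLLAMA__BASE_URL"] = "http://ollama:11434"
config = nested_env()
print(config["weather"]["api_key"])  # demo-key
```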
- Python 3.14+
- Ollama installed and running
- OpenWeather One Call API 3.0 key (a free tier is available at openweathermap.org)
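For reference, the key is used against OpenWeather's One Call API 3.0 endpoint at `api.openweathermap.org/data/3.0/onecall`. A small sketch of building such a request URL (parameter choices here are illustrative, and the actual HTTP fetch is left out):

```python
from urllib.parse import urlencode

ONECALL_URL = "https://api.openweathermap.org/data/3.0/onecall"

def onecall_request_url(lat: float, lon: float, api_key: str,
                        units: str = "metric") -> str:
    """Build a One Call API 3.0 request URL for the given coordinates."""
    params = {"lat": lat, "lon": lon, "appid": api_key, "units": units}
    return f"{ONECALL_URL}?{urlencode(params)}"

url = onecall_request_url(50.08, 14.42, "your_key")
print(url)
```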
1. Clone the repository

   ```bash
   git clone https://github.com/Anty0/ai-weather.git
   cd ai-weather
   ```

2. Set up a Python environment

   ```bash
   python3 -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   pip install -r requirements.txt
   pip install -r requirements-dev.txt
   ```

3. Configure the application

   ```bash
   cp .env.example .env
   cp config/config.example.yaml config/config.yaml
   ```

4. Edit `.env` with your OpenWeather API key

   ```bash
   WEATHER__API_KEY=your_actual_api_key_here
   ```

5. Customize `config/config.yaml` (optional)
   - Set your location (latitude/longitude)
   - Configure AI models
   - Adjust the prompt template

6. Pull Ollama models (if using Ollama)

   ```bash
   ollama pull llama3.2
   ollama pull qwen3
   ollama pull mistral
   ```

7. Run the application

   ```bash
   uvicorn aiweather:app --reload
   ```

8. Open your browser to http://localhost:8000
```bash
pip install -r requirements-dev.txt
pytest --cov=aiweather
black aiweather/ tests/
ruff check aiweather/ tests/
mypy aiweather/
```

To add a new AI provider:

- Create a new provider class in `aiweather/ai/`
- Inherit from the `AIProvider` base class
- Implement the `generate_html()` and `is_available()` methods
- Register the provider in `AIManager.__init__()`
- Add its configuration to `config.yaml`
Example:

```python
from .base import AIProvider


class OpenAIProvider(AIProvider):
    async def generate_html(self, prompt: str, model_id: str, **kwargs) -> str:
        # Implementation here
        pass

    async def is_available(self) -> bool:
        # Check API availability
        pass
```

All generated visualizations are saved to the `data/` directory:
```
data/
  2025-11/
    28-14/
      weather.json     # OpenWeather API response
      metadata.json    # Timestamp + models
      Llama_2_7B.html  # Generated HTML
      CodeLlama.html
      Mistral.html
    28-15/
      ...
  2025-12/
    ...
```
Archives are kept forever by default. You can manually delete old directories if needed.
The project is licensed under the MIT License; see the LICENSE file for details.
- Inspired by AI World Clocks
- Weather Data: OpenWeather One Call API 3.0
Contributions are welcome! Please feel free to submit a Pull Request.
If you encounter any issues or have questions, please open an issue on GitHub.