# Local AI Chat Assistant for Windows

A privacy-first, GPU-accelerated chat application that runs large language models entirely on your machine. No cloud required.

## Features
- Private by default - Your conversations never leave your computer
- RTX-optimized - Built for NVIDIA GPUs with CUDA acceleration
- Native Windows experience - WinUI 3 with Fluent Design
- Multiple backends - Ollama, llama.cpp, or bring your own
- Markdown rendering - Rich text, code blocks, and syntax highlighting
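Backends like Ollama expose a simple HTTP API on localhost. As a minimal sketch (not InControl's actual client code), this is roughly the request body a chat turn sends to Ollama's `/api/chat` endpoint; field names follow Ollama's documented API, but verify against your installed version:

```python
import json

def build_chat_request(model: str, user_message: str) -> dict:
    """Build a request body for Ollama's /api/chat endpoint (sketch)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,  # request a single JSON response instead of a stream
    }

body = build_chat_request("llama3.2", "Hello!")
payload = json.dumps(body)  # POST this to http://localhost:11434/api/chat
```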
## System Requirements

| Component | Minimum | Recommended |
|---|---|---|
| GPU | RTX 3060 (8GB) | RTX 4080/5080 (16GB) |
| RAM | 16GB | 32GB |
| OS | Windows 10 1809+ | Windows 11 |
| .NET | 9.0 | 9.0 |
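The GPU rows in the table follow from a rough rule of thumb: quantized model weights take roughly `parameters × bits-per-weight / 8` bytes, plus overhead for the KV cache and activations. A ballpark-only sketch (the 20% overhead factor is our assumption, not a measured figure):

```python
def estimate_vram_gb(params_billions: float, bits_per_weight: int = 4) -> float:
    """Ballpark VRAM needed for a quantized model, in GB (sketch only)."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return round(weight_bytes * 1.2 / 1e9, 1)  # +20% for KV cache/activations

print(estimate_vram_gb(8))  # an 8B model at 4-bit: about 4.8 GB
```

By this estimate an 8B model at 4-bit quantization fits comfortably in an RTX 3060's 8 GB, while larger models or higher-precision quantizations push toward the 16 GB recommendation.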
## Installation

1. Download the latest MSIX package from Releases
2. Double-click to install
3. Launch from Start Menu
## Building from Source

```sh
# Clone and build
git clone https://github.com/mcp-tool-shop-org/InControl-Desktop.git
cd InControl-Desktop
dotnet restore
dotnet build

# Run (requires Ollama running locally)
dotnet run --project src/InControl.App
```

InControl requires a local LLM backend. We recommend Ollama:
```sh
# Install Ollama from https://ollama.ai/download

# Pull a model
ollama pull llama3.2

# Start the server (runs on http://localhost:11434)
ollama serve
```

## Development

```sh
# Run tests
dotnet test

# Run verification script
./scripts/verify.ps1

# Creates release artifacts in artifacts/
dotnet build
./scripts/release.ps1
```

## Architecture

InControl follows a clean, layered architecture:

```text
+-------------------------------------------+
| InControl.App (WinUI 3)                   |  UI Layer
+-------------------------------------------+
| InControl.ViewModels                      |  Presentation
+-------------------------------------------+
| InControl.Services                        |  Business Logic
+-------------------------------------------+
| InControl.Inference                       |  LLM Backends
+-------------------------------------------+
| InControl.Core                            |  Shared Types
+-------------------------------------------+
```
See ARCHITECTURE.md for detailed design documentation.
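The key property of the layering is that each layer depends only on the one directly below it, with dependencies injected at a composition root. A minimal Python sketch of that shape (the real app is C# with CommunityToolkit.Mvvm; the class names here are illustrative, not the actual InControl types):

```python
from dataclasses import dataclass
from typing import Protocol

class InferenceBackend(Protocol):      # role of InControl.Inference
    def complete(self, prompt: str) -> str: ...

class EchoBackend:                     # stand-in for an Ollama/llama.cpp client
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

@dataclass
class ChatService:                     # role of InControl.Services
    backend: InferenceBackend
    def send(self, message: str) -> str:
        return self.backend.complete(message)

@dataclass
class ChatViewModel:                   # role of InControl.ViewModels
    service: ChatService
    def on_user_message(self, text: str) -> str:
        return self.service.send(text)

# Composition root: wire the layers top-down, so swapping the backend
# never touches the view model or service code.
vm = ChatViewModel(ChatService(EchoBackend()))
```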
## Privacy & Data Storage

All data is stored locally:
| Data | Location |
|---|---|
| Sessions | %LOCALAPPDATA%\InControl\sessions\ |
| Logs | %LOCALAPPDATA%\InControl\logs\ |
| Cache | %LOCALAPPDATA%\InControl\cache\ |
| Exports | %USERPROFILE%\Documents\InControl\exports\ |
See PRIVACY.md for complete data handling documentation.
## Troubleshooting

Common issues and solutions are documented in TROUBLESHOOTING.md.

**App won't start:**
- Check that the .NET 9.0 Runtime is installed
- Run `dotnet --list-runtimes` to verify

**No models available:**
- Ensure Ollama is running: `ollama serve`
- Pull a model: `ollama pull llama3.2`

**GPU not detected:**
- Update NVIDIA drivers to the latest version
- Check the CUDA toolkit installation
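The "is Ollama running?" check above can be scripted. A minimal sketch that probes the local endpoint (`/api/tags` is Ollama's model-list route; the helper name is ours):

```python
import urllib.request
import urllib.error

def is_backend_running(base_url: str = "http://localhost:11434",
                       timeout: float = 2.0) -> bool:
    """Return True if an Ollama-style HTTP backend answers at base_url."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        # Connection refused / timeout: the server is not reachable.
        return False
```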
## Contributing

Contributions are welcome! Please:

1. Fork the repository
2. Create a feature branch
3. Write tests for new functionality
4. Submit a pull request
## Support

- Check TROUBLESHOOTING.md first
- Use the "Copy Diagnostics" feature in the app
- Open an issue with the diagnostics info attached
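A diagnostics report typically bundles environment facts like those checked in the troubleshooting steps. A sketch of the idea (InControl's actual "Copy Diagnostics" format is not documented here, so the fields are illustrative):

```python
import platform
import shutil
import subprocess

def collect_diagnostics() -> dict:
    """Gather basic environment info for a bug report (sketch)."""
    info = {
        "os": platform.platform(),
        "machine": platform.machine(),
    }
    dotnet = shutil.which("dotnet")
    if dotnet:
        # Include installed runtimes when the dotnet CLI is on PATH.
        out = subprocess.run([dotnet, "--list-runtimes"],
                             capture_output=True, text=True)
        info["dotnet_runtimes"] = out.stdout.strip()
    return info
```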
## Tech Stack

| Layer | Technology |
|---|---|
| UI Framework | WinUI 3 (Windows App SDK 1.6) |
| Architecture | MVVM with CommunityToolkit.Mvvm |
| LLM Integration | OllamaSharp, Microsoft.Extensions.AI |
| DI Container | Microsoft.Extensions.DependencyInjection |
| Configuration | Microsoft.Extensions.Configuration |
| Logging | Microsoft.Extensions.Logging + Serilog |
Current version: **0.4.0-alpha**. See CHANGELOG.md for release history.

## License

MIT
Built for Windows. Powered by local AI.