This repository provides a practical, NDA-safe workflow for using locally hosted LLMs to assist in writing smart contract audit reports. It is intended as a step-by-step educational guide for security researchers who want to use AI productively without violating confidentiality or legal obligations.
Using cloud-based LLMs (like ChatGPT or Claude) can expose sensitive client code to third parties, violating NDAs and IP agreements. This guide shows how to:
- Maintain full control over data and models
- Draft audit issues quickly and securely
- Customize your workflow with a local chat interface
- Cloud LLMs = NDA Risk: Sending code to third-party servers can lead to data leaks.
- Local LLMs = Control: Run models like Gemma or LLaMA on your machine with no outbound traffic.
- Prompt Engineering: Customize system prompts to output clean, structured vulnerability descriptions.
This guide uses Ollama as the LLM runtime and Open WebUI as the interface.
Install Ollama and pull a model:

```shell
curl -fsSL https://ollama.com/install.sh | sh
ollama run gemma3:latest
```

Other supported models include `llama3` and `deepseek-r1`.
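Besides the interactive CLI, Ollama also serves a local HTTP API on port 11434, which is useful for scripting. A minimal sketch of building a `/api/generate` request (the model name and prompt are placeholders; the actual POST is left commented out so it only runs once `ollama` is serving):

```python
import json
import urllib.request  # used by the optional request below

# Build a payload for Ollama's local /api/generate endpoint.
def build_request(model: str, prompt: str) -> dict:
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one complete response instead of chunks
    }

payload = build_request("gemma3:latest", "Summarize this reentrancy note in two sentences: ...")

# Uncomment to send the request once Ollama is running locally:
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(json.loads(urllib.request.urlopen(req).read())["response"])
```

Because everything stays on `localhost`, no client code ever leaves your machine.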
```shell
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data --name open-webui \
  --restart always ghcr.io/open-webui/open-webui:v0.6.5
```

Then open http://localhost:3000 in your browser.
To ensure consistent and professional output tailored for audit reporting:
- Open Open WebUI in your browser at http://localhost:3000.
- Go to Settings → Custom Instructions.
- Copy the contents of `system_prompt.md` into the System Prompt field.
- Scroll to the Model Settings section.
- Set Temperature to `0.15`.
Customize your assistant’s behavior using `system_prompt.md` in this repo. It helps structure raw inputs (e.g., GitHub notes or findings) into ready-to-paste audit report issues.
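If you want to drive the same system prompt from a script instead of Open WebUI, the combination of system prompt plus raw note maps directly onto Ollama's `/api/chat` payload. A sketch, where the system-prompt text and the note are placeholders:

```python
# Combine a system prompt (e.g., the contents of system_prompt.md)
# with a raw vulnerability note into an Ollama /api/chat payload.
def build_chat_payload(system_prompt: str, raw_note: str,
                       model: str = "gemma3:latest") -> dict:
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": raw_note},
        ],
        # Low temperature keeps report prose consistent, matching the
        # 0.15 setting recommended above.
        "options": {"temperature": 0.15},
        "stream": False,
    }

system_prompt = "You are an audit-report assistant..."  # placeholder text
note = "transfer() called before balance update in withdraw()"
payload = build_chat_payload(system_prompt, note)
```

POSTing this payload to `http://localhost:11434/api/chat` returns the formatted issue, again without any outbound traffic.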
Example workflow:
- Copy a vulnerability note from a GitHub PR or your note app.
- Paste it into Open WebUI.
- Let the LLM generate a formatted issue (title, description, recommendation).
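If you later want to move generated issues into a tracker, the formatted output can be split back into fields. A hypothetical sketch, assuming the system prompt makes the model emit `##` markdown headings named Title, Description, and Recommendation (the sample text below is invented for illustration):

```python
import re

# Split a generated issue into sections keyed by its ## headings.
def parse_issue(text: str) -> dict:
    sections = {}
    current = None
    for line in text.splitlines():
        m = re.match(r"^##\s+(.*)", line)
        if m:
            current = m.group(1).strip().lower()
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return {k: "\n".join(v).strip() for k, v in sections.items()}

sample = """## Title
Reentrancy in withdraw()

## Description
External call precedes state update.

## Recommendation
Apply checks-effects-interactions."""

issue = parse_issue(sample)
# issue["title"] is "Reentrancy in withdraw()"
```

Adjust the regex to whatever heading convention your `system_prompt.md` actually enforces.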