“Hello Chris. What are we doing today?”
- Web Scraper: Ingests clean, readable content from any URL.
- Local Knowledge Base: Embeds documents using LangChain + OllamaEmbeddings + ChromaDB.
- Smart Query Engine: RAG & agent-based querying with tool invocation.
- Integrated Tools:
  - `nmap` scanner
  - `ping` connectivity check
  - `find` file locator
  - semantic knowledge search
- Memory-Enhanced Agent: React-based LangChain agent with conversational memory.
- Interactive Mode: Stay in the terminal and interact in real-time.
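At its core, the knowledge-base and query features above come down to ranking embedded chunks by similarity to the query. A minimal stdlib sketch of that ranking step (the toy vectors stand in for real `OllamaEmbeddings` output, and `top_k` is a hypothetical helper; Chroma does this at scale with indexing):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, store, k=2):
    """store: list of (text, vector) pairs. Returns the k chunks most
    similar to the query vector, like a vector DB lookup."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]
```

The retrieved chunks are then stuffed into the prompt as context for the LLM, which is the whole trick behind RAG.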
- Python 3.10+
- LangChain
- Ollama (local LLM runner)
- `llama3.1:8b` (currently running this model; great for technical assistance)
- Chroma (vector DB)
- BeautifulSoup (scraping)
- `.env` for config
- Modular tool handler using LangChain’s `Tool` wrapper
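A rough stand-in for how a modular tool handler can be organized, independent of LangChain (the `Tool` dataclass and `invoke` dispatcher here are illustrative sketches, not the project’s actual code; LangChain’s `Tool` wrapper fills the same role):

```python
import subprocess
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Tool:
    """Name + description let the agent pick a tool; func does the work."""
    name: str
    description: str
    func: Callable[[str], str]

def ping(host: str) -> str:
    """Connectivity check via the system ping binary (one packet)."""
    result = subprocess.run(["ping", "-c", "1", host],
                            capture_output=True, text=True, timeout=10)
    return result.stdout or result.stderr

TOOLS: Dict[str, Tool] = {
    "ping": Tool("ping", "Check connectivity to a host", ping),
}

def invoke(tool_name: str, arg: str) -> str:
    """Dispatch by name; the agent maps natural language to these names."""
    tool = TOOLS.get(tool_name)
    if tool is None:
        return f"Unknown tool: {tool_name}"
    return tool.func(arg)
```

Keeping tools in a flat registry like this makes it trivial to drop in new ones, which is exactly what the LangChain `Tool` wrapper buys you.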
    pip install -r requirements.txt
    ollama run llama3.1:8b
`name=` (this is where the LLM will reference you)
Take advantage of RAG by ingesting documentation via URL:
    python main.py --ingest "https://example.com"

Ask it a direct question:

    python main.py --ask "example question"

Interact with the LLM:

    python main.py

Then type away:
    > scan my network
    > ping google.com
    > find /etc/passwd
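The three entry points above can be dispatched with a small argparse setup. The flag names come from the commands shown; the help strings and structure are assumptions:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """CLI mirroring the README commands: --ingest, --ask, or neither."""
    parser = argparse.ArgumentParser(prog="main.py")
    parser.add_argument("--ingest", metavar="URL",
                        help="scrape a page and embed it into the knowledge base")
    parser.add_argument("--ask", metavar="QUESTION",
                        help="one-shot question answered via RAG")
    return parser
```

With neither flag set, the script falls through to the interactive loop.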
- RAG Pipeline for standard Q&A
- ReAct Agent invokes tools when needed
- Tool Usage triggered via natural language
- Memory System stores context (last 5 messages)
- Streaming Handler gives live feedback from the LLM
- Embeddings generated via `OllamaEmbeddings(model="llama3.1:8b")`
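The 5-message window can be sketched with a bounded deque. This toy version only shows the behavior; in the app, LangChain’s `ConversationBufferWindowMemory(k=5)` plays this role:

```python
from collections import deque

class WindowMemory:
    """Keeps only the last k messages; older ones fall off automatically."""

    def __init__(self, k: int = 5):
        self.messages = deque(maxlen=k)

    def add(self, role: str, content: str) -> None:
        self.messages.append((role, content))

    def as_context(self) -> str:
        """Render the window as prompt context for the next LLM call."""
        return "\n".join(f"{role}: {content}" for role, content in self.messages)
```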
- [ ] PDF / Markdown / TXT ingestion
- [ ] GUI wrapper (Gradio / TUI) for desktop companion option
- [ ] Voice command support
- [ ] Docker packaging
Personalize your assistant:
- Swap `llama3.1:8b` for any local Ollama-compatible LLM
- Drop in new `Tool` functions via LangChain's API
