A full-stack conversational AI application built with LangChain, Nuxt.js, FastAPI, and MongoDB.
```
Frontend (Nuxt.js) → API (FastAPI) → LangChain Agent → AI Models
                          ↓
                 MongoDB (Chat History)
```
- Real-time Conversations: Event streaming for dynamic AI responses
- Persistent Memory: MongoDB-backed chat history storage
- Token Management: Intelligent token generation and queue handling
- Async Processing: Non-blocking conversation flow with asyncio tasks
- Agent Orchestration: LangChain agents with custom prompt templates
- Multi-format Responses: Handles AIMessage, STEP_END, String, and Unknown tokens
| Component | Technology |
|---|---|
| Frontend | Nuxt.js (Vue.js framework) |
| Backend | FastAPI (Python) |
| Database | MongoDB |
| AI Framework | LangChain |
| LLM | Gemini 2.0 Flash |
- uv installed
- Python 3.8+
- Node.js 20+
- MongoDB (local install, or the bundled Docker image)
- Gemini API key
- SerpAPI API key
- Clone the repository:

```bash
git clone https://github.com/afadel151/langchain.git
cd langchain/chat-system
```

- Backend setup:

```bash
cd api
uv sync   # or: uv pip install -r requirements.txt
```

- Frontend setup:

```bash
cd ../frontend
bun install
```

Backend (`.env`):

```env
MONGODB_URI=mongodb://localhost:27017/conversational_ai
GOOGLE_API_KEY=your_gemini_api_key
PORT=8000
SERPAPI_API_KEY=<your_serpapi_api_key>
```

Frontend (`.env`):

```env
GOOGLE_API_KEY=<your_google_api_key>
```
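For reference, the backend could read these variables along the following lines (a minimal sketch using `python-dotenv` and `os.getenv`; the actual loading code in `api/` may differ):

```python
import os

from dotenv import load_dotenv  # assumes the python-dotenv package

load_dotenv()  # pull variables from the .env file into the process environment

MONGODB_URI = os.getenv("MONGODB_URI", "mongodb://localhost:27017/conversational_ai")
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")    # Gemini access
SERPAPI_API_KEY = os.getenv("SERPAPI_API_KEY")  # search tool access
PORT = int(os.getenv("PORT", "8000"))
```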
- Start MongoDB (locally, or with `docker compose up -d` from `api/` if you use the bundled compose file)
- Start the backend:

```bash
cd api
uvicorn main:app --reload
```

- Start the frontend:

```bash
cd frontend
bun dev
```

Access the app at http://localhost:3000.
```
project/
├── api/
│   ├── main.py              # FastAPI application
│   ├── db.py                # Database logic
│   ├── agent.py             # CustomAgentExecutor logic
│   ├── agent_tools.py       # Agent tools
│   ├── custom_requests.py   # Custom request classes
│   ├── callback.py          # QueueCallbackHandler class
│   ├── stream.py            # Token generator logic
│   ├── requirements.txt     # Python dependencies
│   ├── pyproject.toml       # Project configuration
│   └── docker-compose.yml   # Docker Compose for MongoDB
├── frontend/
│   ├── pages/               # Nuxt.js pages
│   ├── components/          # Vue components
│   ├── shared/types/        # TypeScript interfaces
│   └── package.json         # Node dependencies
└── README.md
```
- User Input: Frontend captures user message
- API Call: Nuxt.js invokes FastAPI endpoint
- Token Generation: System generates conversation tokens
- Agent Processing: LangChain agent processes with custom prompts
- Database Storage: Conversation history saved to MongoDB
- Response Streaming: Real-time response via event streaming
- Queue Handling: Async callback management for smooth UX
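Steps 2–7 suggest an endpoint that pairs an `asyncio.Queue` with FastAPI's `StreamingResponse`. The sketch below is a minimal, self-contained approximation of that flow; the `fake_agent` stand-in and the payload field name are assumptions, not the actual code in `main.py` and `callback.py`:

```python
import asyncio

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

async def fake_agent(question: str, queue: asyncio.Queue) -> None:
    # Stand-in for the LangChain agent: push tokens onto the queue
    # as they are produced, then signal completion with a sentinel.
    for token in ["Echoing: ", question]:
        await queue.put(token)
    await queue.put(None)  # sentinel: no more tokens

async def token_stream(queue: asyncio.Queue):
    # Drain the queue until the sentinel arrives, yielding each token
    # to the HTTP response as soon as it is available.
    while (token := await queue.get()) is not None:
        yield token

@app.post("/invoke")
async def invoke(payload: dict):
    queue: asyncio.Queue = asyncio.Queue()
    # Run the agent as a background task so tokens stream immediately.
    asyncio.create_task(fake_agent(payload.get("content", ""), queue))
    return StreamingResponse(token_stream(queue), media_type="text/plain")
```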
Handles multiple response types:
- `AIMessage`: AI-generated responses
- `STEP_END`: Process completion markers
- `String`: Text responses
- `Unknown`: Fallback handling
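A minimal sketch of how the generator might normalize these token types before streaming them to the client (the checks and labels below are assumptions inferred from the list above, not the exact logic in `stream.py`):

```python
from langchain_core.messages import AIMessage

def serialize_token(token) -> str:
    # Normalize the objects the agent can emit into plain text
    # suitable for streaming to the frontend.
    if isinstance(token, AIMessage):   # AI-generated response chunk
        return str(token.content)
    if token == "STEP_END":            # process completion marker
        return "\n"
    if isinstance(token, str):         # plain text response
        return token
    return ""                          # Unknown type: drop it silently
```

Note that the `STEP_END` check runs before the generic string check, since the marker is itself a string.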
- Custom prompt templates
- Conversation context management
- Integration with Gemini 2.0 Flash
- Async invocation with task queuing
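A minimal sketch of this setup, assuming the `langchain-google-genai` integration package; the system prompt and history wiring are placeholders, not the actual templates in `agent.py`:

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_google_genai import ChatGoogleGenerativeAI

# Gemini 2.0 Flash via the official LangChain integration;
# reads GOOGLE_API_KEY from the environment.
llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash")

# Custom prompt template with a slot for prior conversation context.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),         # placeholder system prompt
    MessagesPlaceholder(variable_name="chat_history"),  # history loaded from MongoDB
    ("human", "{input}"),
])

chain = prompt | llm

async def ask(question: str, chat_history: list):
    # Async invocation, matching the app's non-blocking conversation flow.
    return await chain.ainvoke({"input": question, "chat_history": chat_history})
```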
MongoDB collections for:
- Conversation history
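For illustration, a conversation document could take a shape like the following (a hypothetical schema sketched with `pymongo`; the real one lives in `db.py`):

```python
from datetime import datetime, timezone

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
conversations = client["conversational_ai"]["conversations"]

# Hypothetical document shape: one document per conversation,
# with the full message history embedded as an array.
doc = {
    "title": "First chat",
    "created_at": datetime.now(timezone.utc),
    "messages": [
        {"role": "human", "content": "Hello!"},
        {"role": "ai", "content": "Hi, how can I help?"},
    ],
}
conversation_id = conversations.insert_one(doc).inserted_id
```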
- LangGraph Integration: Visual workflow management
- Multi-agent Orchestration: Complex reasoning chains
- Enhanced Memory: Long-term conversation context
- Enhanced Streaming: More robust token streaming to address known streaming issues
| Endpoint | Method | Description |
|---|---|---|
| `/` | GET | Health check for the API |
| `/invoke` | POST | Invoke the agent and stream the output |
| `/create_conversation` | POST | Create a new conversation with a new title |
| `/conversations` | GET | Get a list of existing conversations (titles and IDs) |
| `/get_conversation` | GET | Retrieve a conversation by its ID |
| `/get_messages/{conversation_id}` | GET | Retrieve just the messages array for a conversation |
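As a usage example, streaming a response from `/invoke` could look like this from a Python client (a sketch using `httpx`; the payload fields `content` and `conversation_id` are assumed, check `custom_requests.py` for the real request shape):

```python
import httpx

payload = {
    "content": "What is LangChain?",  # assumed field name
    "conversation_id": "abc123",      # assumed field name
}

# Stream tokens from the agent as they arrive instead of
# waiting for the full response.
with httpx.stream("POST", "http://localhost:8000/invoke",
                  json=payload, timeout=None) as response:
    for chunk in response.iter_text():
        print(chunk, end="", flush=True)
```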
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License.
- LangChain team for the amazing framework
- Google for Gemini API
- FastAPI and Nuxt.js communities