A batch-optimized, LLM-based engine for automated customer review classification and sentiment detection, built with Ollama + Llama3.
E-commerce platforms receive thousands of customer reviews daily.
Manually reading and categorizing them into:
- Delivery Issues
- Payment Issues
- Product Quality
- Refund Problems
is inefficient and time-consuming.
This project implements a batch-optimized LLM pipeline that:
- Loads customer reviews from CSV
- Processes reviews in batches
- Classifies review category
- Detects sentiment (Positive / Neutral / Negative)
- Returns structured JSON output
- Handles parsing safely
- Includes error handling
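The loading and batching steps above can be sketched as follows; the `reviews.csv` filename and the `review` column name are assumptions for illustration:

```python
import pandas as pd

def load_batches(csv_path: str, batch_size: int = 10):
    """Load reviews from a CSV file and yield them in fixed-size batches.

    Assumes the CSV has a 'review' column; the last batch may be smaller
    than batch_size.
    """
    df = pd.read_csv(csv_path)
    reviews = df["review"].tolist()
    for i in range(0, len(reviews), batch_size):
        yield reviews[i:i + batch_size]
```

With the default batch size of 10, a file of 25 reviews would yield three batches (10, 10, and 5 reviews).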
Tech stack:
- Python
- Pandas
- Ollama
- Llama3
- Batch Prompt Engineering
- JSON Parsing
How it works:
- Reviews are loaded from a CSV file
- Reviews are grouped into batches (default: 10 per call)
- Llama3 processes multiple reviews in a single API call
- Output is strictly enforced in JSON format
- Safe JSON extraction prevents crashes
- Errors are handled gracefully
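The batch prompt and the safe JSON extraction can be sketched like this. The exact prompt wording and the regex fallback are assumptions about one reasonable way to enforce and recover JSON output, not the project's verbatim implementation:

```python
import json
import re

def build_prompt(batch: list[str]) -> str:
    """Number each review and ask the model for a strict JSON array back."""
    numbered = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(batch))
    return (
        "Classify each review into one category "
        "(Delivery Issue, Payment Issue, Product Quality, Refund Problem) "
        "and one sentiment (Positive, Neutral, Negative).\n"
        "Respond ONLY with a JSON array of objects with keys "
        '"review_number", "category", "sentiment".\n\n'
        f"Reviews:\n{numbered}"
    )

def extract_json(raw: str):
    """Pull the first JSON array out of an LLM response.

    Models sometimes wrap JSON in prose or code fences, so fall back to
    a regex search for a bracketed span before giving up.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        match = re.search(r"\[.*\]", raw, re.DOTALL)
        if match:
            try:
                return json.loads(match.group(0))
            except json.JSONDecodeError:
                pass
    return None  # caller logs the failure and skips the batch instead of crashing
```

Returning `None` on unparseable output (rather than raising) is what keeps a single bad model response from taking down the whole batch run.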
Install dependencies:

```bash
pip install -r requirements.txt
```

Make sure Ollama is installed and running:

```bash
ollama run llama3
```

Run the analyzer:

```bash
python llm_review_batch_analyzer.py
```

Example output:

```json
[
  {"review_number": 1, "category": "Delivery Issue", "sentiment": "Negative"},
  {"review_number": 2, "category": "Payment Issue", "sentiment": "Negative"}
]
```

Instead of making one API call per review:
- Old approach: 30 reviews = 30 API calls ❌
- New batch approach: 30 reviews = 3 API calls ✅
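The saving above is just ceiling division of the review count by the batch size:

```python
import math

def api_calls_needed(n_reviews: int, batch_size: int = 10) -> int:
    """Batching collapses n per-review calls into ceil(n / batch_size) calls."""
    return math.ceil(n_reviews / batch_size)
```

So 30 reviews at the default batch size of 10 need only 3 calls, versus 30 with one call per review.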
Benefits:
- Reduced latency
- Reduced token usage
- Scalable architecture
Future improvements:
- Negative sentiment alert threshold
- Save output to CSV
- Real-time dashboard integration
- Deploy as REST API