A flexible Flask-based API boilerplate for interacting with Large Language Models (LLMs) like Google Gemini and OpenAI. This project provides ready-to-use endpoints and infrastructure, allowing you to focus on crafting prompts rather than writing boilerplate code.
This project simplifies the process of making API calls to various LLM providers. Simply change the prompt in your request, and get responses from your chosen LLM - all the heavy lifting is already done for you.
## Features

- **Ready-to-use API endpoints** - Pre-configured Flask application
- **Multiple LLM support** - Currently supports Google Gemini (OpenAI support coming soon)
- **Response persistence** - Automatically saves all prompts and responses to `gemini_response.json`
- **Environment-based configuration** - Secure API key management via `.env` files
## Project Structure

```
.
├── gemini/
│   ├── app.py                  # Main Flask application
│   ├── .env                    # Environment variables (API keys)
│   ├── requirements.txt        # Python dependencies
│   └── gemini_response.json    # Stored prompts and responses
├── .gitignore
├── .python-version
├── LICENSE
└── README.md
```
## Prerequisites

- Python 3.x
- pip (Python package manager)
## Installation

1. Clone the repository:

   ```
   git clone <repository-url>
   cd Genral_LLM_API
   ```

2. Install dependencies:

   ```
   cd gemini
   pip install -r requirements.txt
   ```

3. Configure environment variables - create or update the `.env` file with your API key:

   ```
   GEMINI_API_KEY=your_api_key_here
   ```
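Inside the application, the key can then be read back from the environment. A minimal sketch, assuming python-dotenv is used as in this project (the `load_api_key` helper is hypothetical, not part of `app.py`):

```python
import os

def load_api_key() -> str:
    """Return the Gemini API key, loading .env first when python-dotenv is available."""
    try:
        from dotenv import load_dotenv  # provided by the python-dotenv package
        load_dotenv()  # merges key=value pairs from .env into os.environ
    except ImportError:
        pass  # python-dotenv not installed; fall back to the plain environment
    key = os.getenv("GEMINI_API_KEY")
    if not key:
        raise RuntimeError("GEMINI_API_KEY is not set - check your .env file")
    return key
```

Failing fast with a clear error here beats letting the Gemini client fail later with a cryptic authentication message.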
## Running the Application

```
cd gemini
python app.py
```

The main function in `app.py` demonstrates usage:
```python
if __name__ == '__main__':
    prompt = """
    How to make a perfect cup of tea?
    """
    response = call_gemini_api(prompt)
```

### Using the `call_gemini_api` Function
The `call_gemini_api` function is the core of this boilerplate:

```python
response = call_gemini_api("Your custom prompt here")
```
This function:

- Takes your prompt as input
- Calls the Gemini API using the `gemini-2.0-flash-lite` model
- Returns the text response
- Automatically saves both prompt and response to `gemini_response.json`
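The persistence step can be sketched as follows; `save_exchange` is a hypothetical helper illustrating the parallel-list file format, not the exact code in `app.py`:

```python
import json
import os

def save_exchange(prompt: str, response: str,
                  path: str = "gemini_response.json") -> None:
    """Append one prompt/response pair to the JSON log."""
    data = {"prompt": [], "response": []}
    if os.path.exists(path):
        with open(path, "r", encoding="utf-8") as f:
            data = json.load(f)  # keep earlier exchanges
    data["prompt"].append(prompt)
    data["response"].append(response)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, indent=2, ensure_ascii=False)
```

Because the two lists grow in lockstep, the i-th prompt always pairs with the i-th response.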
## Dependencies

From `requirements.txt`:

- `google-generativeai` - Google Gemini API client
- `python-dotenv` - Environment variable management
## Security

- API keys are stored in `.env` files (excluded from version control via `.gitignore`)
- Never commit your `.env` file to version control
## Response Storage

All prompts and responses are automatically saved to `gemini_response.json` in the following format:

```json
{
  "prompt": [
    "First prompt...",
    "Second prompt..."
  ],
  "response": [
    "First response...",
    "Second response..."
  ]
}
```

## License

This project is licensed under the MIT License - see the LICENSE file for details.
Copyright (c) 2025 Vatsal Patel
## Contributing

Feel free to submit issues and enhancement requests!
## Roadmap

- OpenAI API integration
- Additional LLM provider support
## Author

Vatsal Patel
Note: This is a boilerplate project designed to accelerate LLM integration. Simply modify the prompts in `app.py` to suit your use case!