General LLM API

A flexible Flask-based API boilerplate for interacting with Large Language Models (LLMs) like Google Gemini and OpenAI. This project provides ready-to-use endpoints and infrastructure, allowing you to focus on crafting prompts rather than writing boilerplate code.

📋 Overview

This project simplifies making API calls to various LLM providers. Simply change the prompt in your request and get a response from your chosen LLM; all the heavy lifting is already done for you.

🚀 Features

  • Ready-to-use API endpoints - Pre-configured Flask application

  • Multiple LLM support - Currently supports Google Gemini (OpenAI support coming soon)

  • Response persistence - Automatically saves all prompts and responses to gemini_response.json

  • Environment-based configuration - Secure API key management via .env files

📁 Project Structure

```
.
├── gemini/
│   ├── app.py                    # Main Flask application
│   ├── .env                      # Environment variables (API keys)
│   ├── requirements.txt          # Python dependencies
│   └── gemini_response.json      # Stored prompts and responses
├── .gitignore
├── .python-version
├── LICENSE
└── README.md
```

🛠️ Setup

Prerequisites

  • Python 3.x
  • pip (Python package manager)

Installation

  1. Clone the repository

     ```shell
     git clone <repository-url>
     cd Genral_LLM_API
     ```

  2. Install dependencies

     ```shell
     cd gemini
     pip install -r requirements.txt
     ```

  3. Configure environment variables

Create or update the .env file with your API key:

```
GEMINI_API_KEY=your_api_key_here
```
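Since the project lists python-dotenv as a dependency, the key is presumably loaded from `.env` at startup. A minimal sketch of how that might look (the `get_api_key` helper is illustrative, not part of the project):

```python
import os


def get_api_key():
    """Read GEMINI_API_KEY, loading a .env file first when python-dotenv is installed."""
    try:
        from dotenv import load_dotenv  # third-party; listed in requirements.txt
        load_dotenv()  # no-op if no .env file is present
    except ImportError:
        pass  # fall back to whatever is already set in the environment
    return os.environ.get("GEMINI_API_KEY")
```

Note that `load_dotenv()` does not override variables already set in the environment, so an exported shell variable always wins over the `.env` file.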

🎯 Usage

Running the Application

```shell
cd gemini
python app.py
```

Making API Calls

The main function in app.py demonstrates usage:

```python
if __name__ == '__main__':
    prompt = """
    How to make a perfect cup of tea?
    """
    response = call_gemini_api(prompt)
```

Using the call_gemini_api Function

The call_gemini_api function is the core of this boilerplate:

```python
response = call_gemini_api("Your custom prompt here")
```

This function:

  1. Takes your prompt as input
  2. Calls the Gemini API using the gemini-2.0-flash-lite model
  3. Returns the text response
  4. Automatically saves both prompt and response to gemini_response.json
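The actual implementation lives in `app.py`; the steps above could be sketched roughly as follows using the `google-generativeai` client (the `save_exchange` helper name is hypothetical, and the real code may differ):

```python
import json
import os


def save_exchange(prompt, response_text, path="gemini_response.json"):
    """Append a prompt/response pair to the JSON history file, creating it if needed."""
    if os.path.exists(path):
        with open(path) as f:
            data = json.load(f)
    else:
        data = {"prompt": [], "response": []}
    data["prompt"].append(prompt)
    data["response"].append(response_text)
    with open(path, "w") as f:
        json.dump(data, f, indent=4)


def call_gemini_api(prompt):
    """Send a prompt to Gemini, persist the exchange, and return the text response."""
    import google.generativeai as genai  # third-party; see requirements.txt

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-2.0-flash-lite")
    response = model.generate_content(prompt)
    save_exchange(prompt, response.text)
    return response.text
```

Deferring the `google.generativeai` import keeps the persistence helper usable even when the SDK is not installed.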

📦 Dependencies

  • google-generativeai - Google Gemini API client
  • python-dotenv - Environment variable management

🔒 Security

  • API keys are stored in .env files (excluded from version control via .gitignore)
  • Never commit your .env file to version control

📝 Response Storage

All prompts and responses are automatically saved to gemini_response.json in the following format:

```json
{
    "prompt": [
        "First prompt...",
        "Second prompt..."
    ],
    "response": [
        "First response...",
        "Second response..."
    ]
}
```
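Because prompts and responses are stored as two parallel, append-only lists, reading the history back just means zipping them together. A small standard-library helper (the `load_history` name is hypothetical) might look like:

```python
import json


def load_history(path="gemini_response.json"):
    """Return the stored history as a list of (prompt, response) pairs, in order."""
    with open(path) as f:
        data = json.load(f)
    return list(zip(data["prompt"], data["response"]))
```

Each pair preserves insertion order, since new entries are always appended to the ends of both lists.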

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

Copyright (c) 2025 Vatsal Patel

🤝 Contributing

Feel free to submit issues and enhancement requests!

🔮 Future Enhancements

  • OpenAI API integration
  • Additional LLM provider support

👨‍💻 Author

Vatsal Patel

Note: This is a boilerplate project designed to accelerate LLM integration. Simply modify the prompts in app.py to suit your use case!
