This project empowers developers to engage in an interactive conversation with their codebase using Google's Gemini AI model. By ingesting your project's source code, this tool acts as an intelligent assistant, helping you understand, debug, refactor, and explore your code more efficiently through natural language queries. It's designed to provide immediate, context-aware insights, streamlining the development process.
The primary purpose of "Chat with Code" is to bridge the gap between human understanding and vast codebases. Instead of manually sifting through files, developers can ask the Gemini AI questions like:
- "What does this function do?"
- "How are these modules connected?"
- "Explain the core logic of this feature."
- "Identify potential areas for optimization in `utils.py`."
This transforms code exploration into a dynamic dialogue, making it easier to onboard new team members, understand legacy code, or quickly get up to speed on different parts of a project without extensive manual analysis.
- Intelligent Code Q&A: Leverage Gemini AI to ask questions about your project's code and receive informative answers.
- Contextual Conversations: Maintains chat history, allowing for ongoing, natural dialogues about the codebase, building on previous questions and answers.
- Dynamic Code Loading:
- New Project: Easily load and initiate a chat session with code from any specified local directory.
- Persistent Session: Continue a chat with the previously loaded project code without re-scanning.
- Refresh Codebase: Re-scan the last used project path to incorporate recent code changes and updates into your AI conversation.
- Smart File Handling:
  - Automatically scans common code file extensions (`.py`, `.js`, `.ts`, `.go`, `.java`, `.txt`) to ensure relevant content is included.
  - Intelligently skips irrelevant directories such as `.venv` and `__pycache__` to focus the AI on actual source code.
- Local Storage: Saves scanned project code and the last used project path locally for convenience and quick access across sessions.
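The file-scanning behavior described above can be sketched roughly as follows. This is an illustrative sketch, not the actual implementation in `utils.py`; the function name `scan_project` and its return shape are assumptions.

```python
import os

# Extensions and skipped directories mirroring the feature list above
CODE_EXTENSIONS = {".py", ".js", ".ts", ".go", ".java", ".txt"}
SKIP_DIRS = {".venv", "__pycache__"}

def scan_project(path):
    """Collect (relative_path, content) pairs for code files under `path`."""
    code_files = []
    for root, dirs, files in os.walk(path):
        # Prune skipped directories in place so os.walk never descends into them
        dirs[:] = [d for d in dirs if d not in SKIP_DIRS]
        for name in files:
            if os.path.splitext(name)[1] in CODE_EXTENSIONS:
                full = os.path.join(root, name)
                with open(full, "r", encoding="utf-8", errors="ignore") as f:
                    code_files.append((os.path.relpath(full, path), f.read()))
    return code_files
```

Pruning `dirs` in place is the idiomatic way to stop `os.walk` from descending into virtual environments and bytecode caches.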
- Python 3.x
- Google Gemini API key (the `google-genai` library is a dependency)
- Clone the repository (if hosted on GitHub):

  ```
  git clone <repository_url>
  cd <project_directory>
  ```
- Install dependencies from `requirements.txt`:

  ```
  pip install -r requirements.txt
  ```
- Set up your Gemini API key:
  - Obtain your API key from Google AI Studio.
  - Create a file named `keys.py` in the project's root directory.
  - Add your API key to `keys.py` as follows:

    ```python
    # keys.py
    GEMINI_API_KEY = "YOUR_GEMINI_API_KEY"
    ```

    Replace `"YOUR_GEMINI_API_KEY"` with your actual API key. (It is highly recommended to add `keys.py` to your `.gitignore` file to prevent exposing your API key in version control.)
- Run the main script:

  ```
  python main.py
  ```
- Interact with the command-line menu:

  ```
  ======================== Chat with code ===========================
  Enter any 1 option from below:
  1. Chat with new project code (input = your project path)
  2. Chat with current project code (input = nothing)
  3. Chat with updated project code (input = nothing: but your previous path will be used)
  4. Exit (Type 'exit' to quit)
  ->
  ```

  - Option 1 (New Project): Enter the local path to the project directory you wish to analyze. The tool will scan relevant files, store their content, and initiate a new AI chat session. This path will also be saved for future use.
- Option 2 (Current Project): Continue chatting with the project code that was last loaded and saved (e.g., from a previous Option 1 or 3 session).
- Option 3 (Updated Project): Re-scan the project code from the last saved path. Use this option to include any recent code changes in your AI conversation without specifying the path again.
- Option 4 (Exit): Quit the application.
- Chat with the AI: Once a project is loaded, type your questions or prompts about the codebase, and the AI will respond based on the provided code context. To end the current chat session and exit the application, type `exit` at the `User prompt:` prompt.
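The chat flow above (contextual turns that build on previous answers, ended by `exit`) can be sketched as a minimal loop. Everything here is illustrative: the function names, the prompt layout, and the injected `ask_model`/`read_input`/`write_output` callables are assumptions, not the actual `main.py` code.

```python
def chat_loop(project_code, ask_model, read_input, write_output):
    """Run a contextual chat: each turn's prompt carries the code and history."""
    history = ""
    while True:
        user_prompt = read_input("User prompt: ")
        if user_prompt.strip().lower() == "exit":
            write_output("Exiting...")
            break
        # Send the project code plus the accumulated dialogue so the model
        # can build on earlier questions and answers.
        full_prompt = f"Project code:\n{project_code}\n\n{history}User: {user_prompt}"
        reply = ask_model(full_prompt)
        write_output(f"Model Response: {reply}")
        history += f"User: {user_prompt}\nModel: {reply}\n"
```

Injecting the I/O and model callables keeps the loop testable; in the real application they would wrap `input`, `print`, and the Gemini API call.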
- `main.py`: The application's entry point, managing the user interface, interaction flow, and orchestrating calls to utility functions.
- `utils.py`: Contains core utility functions, including:
  - `get_project_code_from_path()`: Scans a given directory for code files.
  - `save_project_code_list()`: Persists scanned code content to a JSON file.
  - `load_project_code_list()`: Retrieves previously saved code content.
  - `invoke_gemini()`: Handles communication with the Google Gemini API.
  - Initializes the Gemini client.
- `keys.py`: Stores your `GEMINI_API_KEY`. (Crucial for API authentication, but should be kept out of version control.)
- `requirements.txt`: Defines all Python package dependencies.
- `last_project_path.txt`: A plain text file storing the path to the last scanned project directory, enabling quick access for options 2 and 3.
- `project_code/project_code_list.json`: A directory and JSON file used to cache the content of the scanned project files, preventing redundant disk I/O.
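The JSON caching described for `project_code/project_code_list.json` could look roughly like this. The signatures, the `cache_file` parameter, and the `None`-on-miss convention are assumptions for illustration; the real `utils.py` may differ.

```python
import json
import os

# Cache location taken from the project layout above
CACHE_FILE = os.path.join("project_code", "project_code_list.json")

def save_project_code_list(code_list, cache_file=CACHE_FILE):
    """Persist the scanned code list as JSON, creating the cache dir if needed."""
    os.makedirs(os.path.dirname(cache_file), exist_ok=True)
    with open(cache_file, "w", encoding="utf-8") as f:
        json.dump(code_list, f)

def load_project_code_list(cache_file=CACHE_FILE):
    """Load the cached list, or return None if no cache exists yet."""
    if not os.path.exists(cache_file):
        return None
    with open(cache_file, "r", encoding="utf-8") as f:
        return json.load(f)
```

Caching to JSON lets option 2 reuse the previous scan without touching the project directory again.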
```
python main.py

======================== Chat with code ===========================
Enter any 1 option from below:
1. Chat with new project code (input = your project path)
2. Chat with current project code (input = nothing)
3. Chat with updated project code (input = nothing: but your previous path will be used)
4. Exit (Type 'exit' to quit)
-> 1
Enter your project path: ./my_awesome_project
```
User prompt: What is the main purpose of the `User` class in `models.py`?
Model Response: The `User` class in `models.py` represents a user entity in the application. It typically includes attributes such as `username`, `email`, `password_hash`, and methods for user authentication (e.g., `set_password`, `check_password`) and potentially relationships to other data models. Its main purpose is to manage user data and provide core authentication functionalities.
User prompt: Can you show me the `invoke_gemini` function from `utils.py`?
Model Response:
```python
def invoke_gemini(gemini_model, model_prompt):
    response = client.models.generate_content(
        model = gemini_model,
        contents = f'{model_prompt}'
    ).text
    return response
```

This function takes the Gemini model name and a prompt (which includes the chat history and context), sends it to the Gemini API, and returns the text response from the model.
User prompt: exit
Exiting...