# AI Resume Analyzer Backend

FastAPI backend for analyzing resumes using OpenAI's GPT models. This API provides endpoints for resume analysis, comparison, and rewriting functionality.

## Features
- Resume Analysis: Analyze a resume PDF against a job description and get ATS score, matched/missing keywords, and suggestions
- Resume Comparison: Compare two resumes against the same job description
- Resume Rewriting: AI-powered rewriting of resume sections to better match job descriptions
- Result Storage: Store and retrieve analysis results using SQLite database
- CORS Enabled: Configured for frontend integration
- Optional API Keys: Users can provide their own OpenAI API key in requests
- Health Checks: Check OpenAI API health and quota status before making requests
- Error Handling: Comprehensive error handling for API key, quota, and other OpenAI errors
## Prerequisites

- Python 3.8+
- OpenAI API key
## Installation

1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd ai-resume-analyzer-backend
   ```

2. Create a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Create a `.env` file in the root directory:

   ```bash
   cp .env.example .env
   ```

5. Edit `.env` and add your OpenAI API key:

   ```
   OPENAI_API_KEY=your-openai-api-key-here
   ```

The `.env` file is gitignored and will not be committed to the repository.
## Running the Server

Start the FastAPI server using uvicorn:

```bash
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```

The API will be available at `http://localhost:8000`.

Interactive API documentation will be available at:

- Swagger UI: `http://localhost:8000/docs`
- ReDoc: `http://localhost:8000/redoc`
## API Endpoints

### Analyze Resume

Analyze a resume PDF against a job description.

**Request:** `multipart/form-data`

- `resume` (File): PDF file (max 10MB)
- `job_description` (string): Job description text
- `api_key` (string, optional): OpenAI API key. If not provided, uses the key from the `.env` file.

**Response:**

```json
{
  "result_id": "uuid-string",
  "score": 82,
  "matched_keywords": ["React", "Node.js"],
  "missing_keywords": ["Docker", "CI/CD"],
  "suggestions": "Include backend deployment..."
}
```

### Get Analysis Result

Retrieve a previously stored analysis result.
**Response:** Same as the `/analyze` response.
### Compare Resumes

Compare two resumes against the same job description.

**Request:** `multipart/form-data`

- `resume_1` (File): First PDF resume
- `resume_2` (File): Second PDF resume
- `job_description` (string): Job description text
- `api_key` (string, optional): OpenAI API key. If not provided, uses the key from the `.env` file.
**Response:**

```json
{
  "resume_1_score": 78,
  "resume_2_score": 85,
  "better_resume": "resume_2",
  "analysis_summary": "Resume 2 aligns better..."
}
```

### Rewrite Resume Section

Rewrite a resume section to better match a job description.
**Request:** `application/json` (`api_key` is optional)

```json
{
  "section_text": "Led team of developers...",
  "job_description": "Looking for a senior developer...",
  "api_key": "sk-..."
}
```

**Response:**
```json
{
  "improved_text": "Led cross-functional teams...",
  "reasoning": "Reworded to highlight..."
}
```

### Health Check

Health check endpoint.
**Response:**

```json
{
  "status": "ok"
}
```

### OpenAI Health Check

Check OpenAI API health and quota status.
**Request:** `application/x-www-form-urlencoded`

- `api_key` (string, optional): OpenAI API key to test. If not provided, uses the key from the `.env` file.
**Response (200 OK):**

```json
{
  "status": "healthy",
  "message": "OpenAI API is accessible and working",
  "model": "gpt-4o-mini"
}
```

**Response (503 Service Unavailable):**
```json
{
  "status": "error",
  "message": "API quota exceeded or rate limit reached...",
  "error_type": "rate_limit_error",
  "error_detail": "..."
}
```

**Error Types:**

- `authentication_error`: Invalid API key
- `rate_limit_error`: Quota exceeded or rate limit reached
- `api_error`: General OpenAI API error
- `unknown_error`: Unexpected error
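A client can branch on `error_type` when the health check fails. A minimal sketch (the user-facing message strings are illustrative, not part of the API):

```python
def describe_health(payload: dict) -> str:
    """Turn an OpenAI health-check response body into a user-facing message.

    The error_type values come from this README; the wording is illustrative.
    """
    if payload.get("status") == "healthy":
        return f"OpenAI reachable (model: {payload.get('model', 'unknown')})"
    messages = {
        "authentication_error": "Invalid API key, check OPENAI_API_KEY or supply your own key",
        "rate_limit_error": "Quota exceeded or rate limit reached, try again later",
        "api_error": "OpenAI API error, see error_detail",
        "unknown_error": "Unexpected error, see error_detail",
    }
    return messages.get(payload.get("error_type"), payload.get("message", "Unknown state"))
```

Usage: `describe_health({"status": "healthy", "model": "gpt-4o-mini"})` returns a short status string suitable for a banner or log line.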
## Project Structure

```
ai-resume-analyzer-backend/
├── app/
│   ├── __init__.py
│   ├── main.py               # FastAPI application entry point
│   ├── config.py             # Configuration settings
│   ├── database.py           # Database models and setup
│   ├── schemas.py            # Pydantic schemas
│   └── services/
│       ├── __init__.py
│       ├── openai_service.py # OpenAI API integration
│       └── pdf_service.py    # PDF parsing utilities
├── internal_docs/
│   ├── BACKEND_PROMPT.md
│   ├── BACKEND_REQUIREMENTS.md
│   └── backend_types.py
├── requirements.txt
├── README.md
├── .env.example              # Environment variables template
├── .env                      # Environment variables (not in git)
└── .gitignore
```
## Configuration

Configuration is managed through environment variables in the `.env` file. See `.env.example` for all available options.

- `OPENAI_API_KEY`: Your OpenAI API key (required)
- `CORS_ORIGINS`: Comma-separated list of allowed origins (default: `http://localhost:3000`)
- `MAX_FILE_SIZE`: Maximum file upload size in bytes (default: `10485760` = 10MB)
- `DATABASE_URL`: Database connection string (default: `sqlite:///./resume_analyzer.db`)
- OpenAI model: `gpt-4o-mini` (configurable in `app/services/openai_service.py`)
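How the backend reads these variables lives in `app/config.py`; a minimal sketch of the likely pattern (names and defaults taken from the list above, but the actual implementation may differ):

```python
import os

# Sketch of environment-driven settings using the documented defaults.
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")  # required; no default
CORS_ORIGINS = [
    origin.strip()
    for origin in os.getenv("CORS_ORIGINS", "http://localhost:3000").split(",")
]
MAX_FILE_SIZE = int(os.getenv("MAX_FILE_SIZE", "10485760"))  # bytes (10 MB)
DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///./resume_analyzer.db")
```

Splitting on commas and stripping whitespace means `CORS_ORIGINS="http://a.com, http://b.com"` yields two clean origins.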
## Error Handling

All errors follow a consistent format:

```json
{
  "detail": "Error message"
}
```

Common HTTP status codes:

- `400`: Bad Request (invalid input, file validation errors)
- `404`: Not Found (the `result_id` doesn't exist)
- `500`: Internal Server Error
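Client code can map these status codes plus the `detail` field onto actionable messages; a small illustrative helper (the hint wording is mine, not the API's):

```python
def explain_http_error(status_code: int, detail: str) -> str:
    """Map the API's common error status codes to actionable messages."""
    hints = {
        400: "Bad request, check the uploaded file and form fields",
        404: "Not found, the result_id does not exist",
        500: "Internal server error, try again or check backend logs",
    }
    hint = hints.get(status_code, "Unexpected status")
    return f"{hint}: {detail}"
```
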
## Development

Install test dependencies:

```bash
pip install pytest pytest-asyncio httpx
```

Run tests (when available):

```bash
pytest
```

The project follows PEP 8 style guidelines. Consider using:

- `black` for code formatting
- `flake8` for linting
- `mypy` for type checking
## Notes

- The API uses OpenAI's `gpt-4o-mini` model for cost efficiency
- Results are stored in a SQLite database for persistence
- PDF files are validated for type and size before processing
- All responses are JSON formatted
- CORS is enabled for frontend integration
- API key management: users can provide their own OpenAI API key in requests; if not provided, the backend uses the key from the `.env` file
- Health checks: use `POST /health/openai` to check API health and quota before making requests
- Error handling: the API returns specific error types for authentication, rate limits, and other OpenAI errors
## Frontend Integration

For detailed frontend integration instructions, see FRONTEND_INTEGRATION.md.

Key points for frontend developers:

- Check OpenAI health on app load using `POST /health/openai`
- Allow users to provide their own API key if the backend key is missing or invalid
- Pass the API key in all requests that require OpenAI
- Handle quota and authentication errors gracefully
- Store user-provided API keys with care (localStorage is convenient but readable by any script on the page, so warn users of the risk)
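The key-fallback flow in the points above can be sketched as a small pure function (Python for illustration; `check` stands in for a call to `POST /health/openai` and is injected so the ordering logic is testable without a server):

```python
def choose_api_key(check, user_key=None):
    """Pick which OpenAI API key (if any) to attach to requests.

    `check(api_key)` must return the parsed /health/openai response body;
    passing None means "test the backend's .env key".
    """
    if check(None).get("status") == "healthy":
        return None  # backend key works; omit api_key from requests
    if user_key is not None and check(user_key).get("status") == "healthy":
        return user_key  # fall back to the user-provided key
    raise RuntimeError("No working OpenAI API key available")
```

In the browser the same flow would call the endpoint directly; the point is the ordering: try the backend's key first, fall back to the user's key second.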