A powerful AI agent that transforms blog posts into engaging social media content. Automatically generates Twitter threads and comprehensive LinkedIn posts from any blog URL.
- Twitter Thread Generation: Creates multi-tweet threads (3-5 tweets) with proper numbering and narrative flow
- LinkedIn Post Generation: Produces comprehensive 300-800 word professional posts with detailed insights
- Automatic Platform Detection: Always generates for both LinkedIn and Twitter regardless of input
- AI-Powered: Uses Google Gemini AI for intelligent content generation
- FastAPI Integration: Built with FastAPI for high-performance API endpoints
- JSON-RPC 2.0 Protocol: Standardized communication protocol
- Python 3.8+
- uv package manager
- Google Gemini API key
1. Clone the repository

   ```bash
   git clone <repository-url>
   cd postcraft-agent
   ```

2. Install dependencies

   ```bash
   uv sync
   ```

3. Set up environment variables

   ```bash
   cp .env.sample .env
   # Edit .env with your API keys
   ```

4. Run the application

   ```bash
   uv run python main.py
   ```

The agent will be available at `http://localhost:8000`.
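Once the server is up, you can exercise the JSON-RPC endpoint with nothing but the standard library. The sketch below is illustrative, not project code: helper names such as `build_send_request` are made up here, and the payload shape mirrors the request documented in the API section.

```python
import json
import urllib.request
import uuid

AGENT_URL = "http://localhost:8000"  # assumed default, see the configuration section


def build_send_request(blog_url: str, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 message/send payload for a blog URL."""
    return {
        "jsonrpc": "2.0",
        "method": "message/send",
        "params": {
            "id": str(uuid.uuid4()),  # unique task id
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": blog_url}],
            },
        },
        "id": request_id,
    }


def send_request(payload: dict) -> dict:
    """POST the payload to the agent root and decode the JSON response."""
    req = urllib.request.Request(
        AGENT_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


# With the agent running:
# result = send_request(build_send_request("https://example.com/blog/my-awesome-post"))
```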
Create a `.env` file with the following variables:

```bash
# Required: Google Gemini API Key
GEMINI_API_KEY=your_gemini_api_key_here

# Optional: Groq API Key (fallback)
GROQ_API_KEY=your_groq_api_key_here

# Optional: Server Configuration
AGENT_URL=http://localhost:8000
PORT=8000
```

You need at least one AI API key:
- Google Gemini (recommended): Get your API key from Google AI Studio
- Groq (fallback): Get your API key from Groq Console
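The fallback behavior can be illustrated with a small sketch. The actual selection logic lives in `src/config.py`; `pick_provider` here is a hypothetical helper, not the project's API.

```python
import os


def pick_provider() -> str:
    """Choose an AI provider based on which API keys are set.

    Gemini is preferred; Groq is used as a fallback.
    """
    if os.environ.get("GEMINI_API_KEY"):
        return "gemini"
    if os.environ.get("GROQ_API_KEY"):
        return "groq"
    raise RuntimeError("No AI API key configured: set GEMINI_API_KEY or GROQ_API_KEY")
```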
1. Build the Docker image

   ```bash
   docker build -t postcraft-agent .
   ```

2. Run the container

   ```bash
   docker run -p 8000:8000 --env-file .env postcraft-agent
   ```

3. Or use Docker Compose

   ```bash
   docker-compose up -d
   ```
Create a `docker-compose.yml` file:

```yaml
version: '3.8'
services:
  postcraft-agent:
    build: .
    ports:
      - "8000:8000"
    env_file:
      - .env
    environment:
      # Override any environment variables here if needed
      - PORT=8000
      - AGENT_URL=http://localhost:8000
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    container_name: postcraft-agent
```

Then run:

```bash
docker-compose up -d
```

The Docker setup properly handles environment variables in multiple ways:
1. Via `.env` file (recommended):

   ```bash
   # Create .env file with your variables
   GEMINI_API_KEY=your_key_here
   GROQ_API_KEY=your_key_here
   PORT=8000
   ```

2. Via `docker-compose.yml`:

   ```yaml
   environment:
     - GEMINI_API_KEY=your_key_here
     - PORT=8000
   ```

3. Via `docker run` command:

   ```bash
   docker run -p 8000:8000 \
     -e GEMINI_API_KEY=your_key_here \
     -e PORT=8000 \
     postcraft-agent
   ```
`GET /.well-known/agent.json`

Returns agent metadata and capabilities.

`GET /health`

Returns service health status and API key availability.

`POST /`

Content-Type: `application/json`

Request Body:
```json
{
  "jsonrpc": "2.0",
  "method": "message/send",
  "params": {
    "id": "unique-task-id",
    "message": {
      "role": "user",
      "parts": [
        {
          "kind": "text",
          "text": "https://example.com/blog/my-awesome-post"
        }
      ]
    }
  },
  "id": 1
}
```

Response:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "id": "unique-task-id",
    "status": {
      "state": "completed",
      "timestamp": "2024-01-01T12:00:00.000Z"
    },
    "artifacts": [
      {
        "artifactId": "artifact-id",
        "name": "social_media_posts",
        "parts": [
          {
            "kind": "text",
            "text": "# 🚀 Social Media Posts Generated\n\n## 🐦 Twitter\nTweet 1/3: [First tweet content]\nTweet 2/3: [Second tweet content]\nTweet 3/3: [Third tweet content]\n\n---\n\n## 💼 LinkedIn\n[Comprehensive LinkedIn post content]\n\n---"
          }
        ]
      }
    ]
  }
}
```

Run the development server with auto-reload:

```bash
uv run uvicorn main:app --host 0.0.0.0 --port 8000 --reload
```

Run the tests:

```bash
uv run python test_generator.py
uv run python test_message_parser.py
```

Generated Twitter threads feature:

- 3-5 tweets per thread
- Proper numbering (1/n, 2/n, etc.)
- Hook → Development → Call-to-action structure
- Relevant hashtags in the final tweet
- Smooth narrative flow
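The numbering convention can be sketched with a small helper (illustrative, not the generator's actual code):

```python
from typing import List


def format_thread(tweets: List[str]) -> List[str]:
    """Prefix each tweet with its position, e.g. 'Tweet 1/3: ...'."""
    total = len(tweets)
    return [f"Tweet {i}/{total}: {text}" for i, text in enumerate(tweets, start=1)]
```

For a three-tweet thread this produces the `Tweet 1/3:` ... `Tweet 3/3:` labels shown in the response example above.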
Generated LinkedIn posts feature:

- Comprehensive 300-800 word posts
- Professional tone with actionable insights
- Structured content with bullet points
- 3-5 relevant hashtags
- Engagement-focused questions
- Original blog URL included
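These constraints are straightforward to verify. Below is a hedged validation sketch; `check_linkedin_post` is a hypothetical helper, not part of the project.

```python
import re
from typing import List


def check_linkedin_post(post: str, blog_url: str) -> List[str]:
    """Return a list of violated constraints for a generated LinkedIn post."""
    problems = []
    word_count = len(post.split())
    if not 300 <= word_count <= 800:
        problems.append(f"word count {word_count} outside 300-800")
    hashtags = re.findall(r"#\w+", post)
    if not 3 <= len(hashtags) <= 5:
        problems.append(f"{len(hashtags)} hashtags, expected 3-5")
    if blog_url not in post:
        problems.append("original blog URL missing")
    return problems
```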
```
src/
├── api.py            # FastAPI endpoints and agent info
├── config.py         # Configuration and settings
├── extractor.py      # Blog content extraction
├── generator.py      # AI-powered content generation
├── message_parser.py # Message parsing and URL extraction
├── models.py         # Pydantic data models
└── processor.py      # Main processing logic
```
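As an illustration of the `message_parser.py` role, URL extraction from message parts might look like this (a simplified sketch, not the module's actual implementation):

```python
import re
from typing import Optional

URL_PATTERN = re.compile(r"https?://[^\s]+")


def extract_url(message: dict) -> Optional[str]:
    """Return the first URL found in a message's text parts, if any."""
    for part in message.get("parts", []):
        if part.get("kind") == "text":
            match = URL_PATTERN.search(part.get("text", ""))
            if match:
                return match.group(0)
    return None
```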
The agent gracefully handles:
- Invalid blog URLs
- Network connectivity issues
- AI API failures
- Malformed requests
- Missing API keys
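Failures like these are typically reported as JSON-RPC 2.0 error objects. A minimal sketch of what such a response might look like (the `-32602` code comes from the JSON-RPC 2.0 spec; the helper itself is illustrative):

```python
def build_error_response(request_id, code: int, message: str) -> dict:
    """Build a JSON-RPC 2.0 error response body."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "error": {"code": code, "message": message},
    }


# -32602 is the standard JSON-RPC "Invalid params" code,
# a plausible fit for an invalid blog URL
invalid_url_error = build_error_response(1, -32602, "Invalid blog URL")
```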
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
[Add your license information here]
For issues and questions:
- Create an issue in the repository
- Check the health endpoint for service status
- Verify API key configuration in logs