Production-ready Rails engine for building, managing, and monitoring LLM-powered AI agents
Build intelligent AI agents in Ruby with a clean DSL, automatic execution tracking, cost analytics, budget controls, and a beautiful real-time dashboard. Supports OpenAI GPT-4, Anthropic Claude, Google Gemini, and more through RubyLLM.
- Rails-Native - Seamlessly integrates with your Rails app: models, jobs, caching, and Hotwire
- Production-Ready - Built-in retries, model fallbacks, circuit breakers, and budget limits
- Full Observability - Track every execution with costs, tokens, duration, and errors
- Workflow Orchestration - Compose agents into pipelines, parallel tasks, and conditional routers
- Zero Lock-in - Works with any LLM provider supported by RubyLLM
| Feature | Description | Docs |
|---|---|---|
| Agent DSL | Declarative configuration with model, temperature, parameters | Agent DSL |
| Execution Tracking | Automatic logging with token usage and cost analytics | Tracking |
| Cost Analytics | Track spending by agent, model, and time period | Analytics |
| Reliability | Automatic retries, model fallbacks, circuit breakers | Reliability |
| Budget Controls | Daily/monthly limits with hard and soft enforcement | Budgets |
| Workflows | Pipelines, parallel execution, conditional routers | Workflows |
| Dashboard | Real-time Turbo-powered monitoring UI | Dashboard |
| Streaming | Real-time response streaming with TTFT tracking | Streaming |
| Conversation History | Multi-turn conversations with message history | Conversation History |
| Attachments | Images, PDFs, and multimodal support | Attachments |
| PII Redaction | Automatic sensitive data protection | Security |
| Alerts | Slack, webhook, and custom notifications | Alerts |
```ruby
# Gemfile
gem "ruby_llm-agents"
```

```bash
bundle install
rails generate ruby_llm_agents:install
rails db:migrate
```

```bash
# .env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
```

```bash
rails generate ruby_llm_agents:agent SearchIntent query:required
```

```ruby
# app/agents/search_intent_agent.rb
class SearchIntentAgent < ApplicationAgent
  model "gpt-4o"
  temperature 0.0

  param :query, required: true

  def user_prompt
    "Extract search intent from: #{query}"
  end

  def schema
    @schema ||= RubyLLM::Schema.create do
      string :refined_query, description: "Cleaned search query"
      array :filters, of: :string, description: "Extracted filters"
    end
  end
end
```

```ruby
result = SearchIntentAgent.call(query: "red summer dress under $50")
result.content      # => { refined_query: "red dress", filters: ["color:red", "price:<50"] }
result.total_cost   # => 0.00025
result.total_tokens # => 150
result.duration_ms  # => 850
```

Build chatbots and conversational agents with message history:
```ruby
result = ChatAgent.call(
  query: "What's my name?",
  messages: [
    { role: :user, content: "My name is Alice" },
    { role: :assistant, content: "Nice to meet you, Alice!" }
  ]
)
# => "Your name is Alice!"
```

See Conversation History for more patterns.
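The `messages:` array is just role/content hashes. A tiny plain-Ruby accumulator (hypothetical, not part of the gem) shows the shape and how a multi-turn history grows:

```ruby
# Illustrative only: a minimal helper for building the role/content
# message arrays that agents accept via the messages: keyword.
class ConversationHistory
  attr_reader :messages

  def initialize
    @messages = []
  end

  # Append one turn. role is :user or :assistant.
  def add(role, content)
    @messages << { role: role, content: content }
    self
  end
end

history = ConversationHistory.new
  .add(:user, "My name is Alice")
  .add(:assistant, "Nice to meet you, Alice!")

# history.messages can now be passed as messages: on the next call
```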
| Guide | Description |
|---|---|
| Getting Started | Installation, configuration, first agent |
| Agent DSL | All DSL options: model, temperature, params, caching |
| Reliability | Retries, fallbacks, circuit breakers, timeouts |
| Workflows | Pipelines, parallel execution, routers |
| Budget Controls | Spending limits, alerts, enforcement |
| Dashboard | Setup, authentication, analytics |
| Production | Deployment best practices, background jobs |
| API Reference | Complete class documentation |
| Examples | Real-world use cases and patterns |
Build resilient agents with built-in fault tolerance:
```ruby
class ReliableAgent < ApplicationAgent
  model "gpt-4o"

  # Retry on failures with exponential backoff
  retries max: 3, backoff: :exponential

  # Fall back to alternative models
  fallback_models "gpt-4o-mini", "claude-3-5-sonnet"

  # Prevent cascading failures
  circuit_breaker errors: 10, within: 60, cooldown: 300

  # Maximum time for all attempts
  total_timeout 30

  param :query, required: true

  def user_prompt
    query
  end
end
```

Compose agents into complex workflows:
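Conceptually, a sequential pipeline is function composition: each step consumes the previous step's output. A minimal plain-Ruby sketch of that idea (illustrative, not the gem's implementation):

```ruby
# Illustrative sketch: a pipeline reduces its input through an ordered
# list of callables, each step consuming the previous step's output.
class MiniPipeline
  def initialize(*steps)
    @steps = steps
  end

  def call(input)
    @steps.reduce(input) { |data, step| step.call(data) }
  end
end

classify = ->(text) { { text: text, label: text.include?("?") ? :question : :statement } }
enrich   = ->(data) { data.merge(length: data[:text].length) }

pipeline = MiniPipeline.new(classify, enrich)
pipeline.call("Is it in stock?")
# => { text: "Is it in stock?", label: :question, length: 15 }
```

The Pipeline class below layers timeouts, cost caps, optional steps, and per-step results on top of the same reduction.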
```ruby
# Sequential pipeline - each step's output feeds the next
class ContentPipeline < RubyLLM::Agents::Workflow::Pipeline
  timeout 60.seconds
  max_cost 1.00

  step :classify, agent: ClassifierAgent
  step :enrich, agent: EnricherAgent
  step :format, agent: FormatterAgent, optional: true
end

result = ContentPipeline.call(text: data)
result.steps[:classify].content # Individual step result
result.total_cost               # Sum of all steps
```
```ruby
# Parallel execution - run agents concurrently
class AnalysisPipeline < RubyLLM::Agents::Workflow::Parallel
  fail_fast false # Continue even if a branch fails

  branch :sentiment, agent: SentimentAgent
  branch :entities, agent: EntityAgent
  branch :summary, agent: SummaryAgent
end

result = AnalysisPipeline.call(text: content)
result.branches[:sentiment].content # Individual branch result
```
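With `fail_fast` disabled, a failing branch is captured rather than raised while the other branches finish. One way to picture those semantics in plain Ruby (illustrative, not the gem's implementation; `run_branches` is a made-up helper):

```ruby
# Illustrative sketch: run each named branch on its own thread and
# collect { value:, error: } per branch instead of raising.
def run_branches(branches, input)
  threads = branches.map do |name, callable|
    thread = Thread.new do
      { value: callable.call(input), error: nil }
    rescue StandardError => e
      { value: nil, error: e }
    end
    [name, thread]
  end
  threads.to_h { |name, thread| [name, thread.value] }
end

results = run_branches(
  { upcase: ->(t) { t.upcase }, boom: ->(_) { raise "branch failed" } },
  "hello"
)
results[:upcase][:value]      # => "HELLO"
results[:boom][:error].message # => "branch failed"
```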
```ruby
# Conditional routing - dispatch based on classification
class SupportRouter < RubyLLM::Agents::Workflow::Router
  classifier_model "gpt-4o-mini"

  route :support, to: SupportAgent, description: "Technical support issues"
  route :sales, to: SalesAgent, description: "Sales and pricing questions"
  route :default, to: GeneralAgent
end

result = SupportRouter.call(message: user_input)
result.routed_to # :support, :sales, or :default
```

Track and limit LLM spending:
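At its core, enforcement compares accumulated spend against a cap before each call. A minimal plain-Ruby sketch (illustrative; `DailyBudget` is a made-up class, not the gem's API):

```ruby
# Illustrative sketch: a hard daily cap blocks calls once recorded
# spend reaches the limit; a soft cap would only warn.
class DailyBudget
  def initialize(limit:)
    @limit = limit
    @spent = 0.0
  end

  def record(cost)
    @spent += cost
  end

  def allow?(estimated_cost = 0.0)
    @spent + estimated_cost <= @limit
  end
end

budget = DailyBudget.new(limit: 100.0)
budget.record(99.5)
budget.allow?(0.25) # => true
budget.record(0.25)
budget.allow?(1.0)  # => false
```

The initializer below configures the gem's actual limits and alerts: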
```ruby
# config/initializers/ruby_llm_agents.rb
RubyLLM::Agents.configure do |config|
  config.budgets = {
    global_daily: 100.0,    # $100/day limit
    global_monthly: 2000.0, # $2000/month limit
    per_agent_daily: {
      "ExpensiveAgent" => 50.0
    },
    enforcement: :hard      # Block when exceeded
  }

  config.alerts = {
    on_events: [:budget_soft_cap, :budget_hard_cap, :breaker_open],
    slack_webhook_url: ENV['SLACK_WEBHOOK_URL']
  }
end
```

Mount the real-time monitoring dashboard:

```ruby
# config/routes.rb
mount RubyLLM::Agents::Engine => "/agents"
```

Features:
- Execution history with filtering and search
- Cost analytics by agent, model, and time period
- Performance trends and charts
- Token usage breakdowns
- Error tracking and debugging
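Breakdowns like these are aggregations over execution records. For instance, spend per agent can be computed like this (plain Ruby; the record shape here is hypothetical, not the gem's schema):

```ruby
# Illustrative sketch: group execution records by agent and sum cost.
executions = [
  { agent: "SearchIntentAgent", cost: 0.00025 },
  { agent: "SearchIntentAgent", cost: 0.00030 },
  { agent: "SummaryAgent",      cost: 0.00120 }
]

cost_by_agent = executions
  .group_by { |e| e[:agent] }
  .transform_values { |rows| rows.sum { |e| e[:cost] } }

cost_by_agent["SearchIntentAgent"].round(5) # => 0.00055
```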
- Ruby >= 3.1.0
- Rails >= 7.0
- RubyLLM >= 1.0
Bug reports and pull requests are welcome on GitHub.

- Fork the repository
- Create your feature branch (`git checkout -b my-feature`)
- Commit your changes (`git commit -am 'Add feature'`)
- Push to the branch (`git push origin my-feature`)
- Create a Pull Request
The gem is available as open source under the MIT License.
Built with love by Adham Eldeeb
Powered by RubyLLM
