Autonomous Code Optimization System
An intelligent coding agent that iteratively analyzes, fixes, tests, optimizes, journals, and pushes code until it achieves flawless execution. Supports multiple AI providers, local AI on Apple Silicon, multi-agent orchestration, and web-based code search.
- Overview
- Features
- Installation
- Quick Start
- Configuration
- CLI Reference
- Modes of Operation
- AI Providers
- Local AI Setup
- Multi-Agent Orchestration
- Code Search
- Local Code Search
- Experimental Features
- Self-Optimization
- Version Control
- Timed Sessions
- User Interaction
- Journaling System
- Git Integration
- API Reference
- Architecture
- Troubleshooting
- Contributing
- License
Coding Agent automates the software development cycle through an intelligent loop:
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ ANALYZE │────▶│ FIX │────▶│ TEST │
└─────────────┘ └─────────────┘ └─────────────┘
▲ │
│ ▼
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ PUSH │◀────│ JOURNAL │◀────│ OPTIMIZE │
└─────────────┘ └─────────────┘ └─────────────┘
│
└─────────────── REPEAT ──────────────▶
The agent continues iterating until:
- All tests pass
- Build succeeds
- No issues remain
- Or maximum iterations reached
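The loop and its exit conditions can be sketched in plain Python. This is an illustration only; the step functions are injected stand-ins, not the agent's real `AgentEngine` API:

```python
def run_agent(analyze, fix, test, optimize, journal, push, max_iterations=10):
    """Simplified sketch of the ANALYZE -> FIX -> TEST -> OPTIMIZE ->
    JOURNAL -> PUSH loop; each step is a caller-supplied callable."""
    for iteration in range(1, max_iterations + 1):
        issues = analyze()                  # ANALYZE
        if issues:
            fix(issues)                     # FIX
        passed = test()                     # TEST (tests + build)
        if passed:
            optimize()                      # OPTIMIZE
        journal(iteration, issues, passed)  # JOURNAL
        push()                              # PUSH
        if passed and not issues:
            return iteration                # flawless run: stop early
    return max_iterations                   # hit the iteration cap

# Toy demo: a "codebase" with two issues, one fixed per iteration.
issues_left = [2]
log = []
result = run_agent(
    analyze=lambda: list(range(issues_left[0])),
    fix=lambda issues: issues_left.__setitem__(0, issues_left[0] - 1),
    test=lambda: issues_left[0] == 0,
    optimize=lambda: None,
    journal=lambda i, issues, ok: log.append((i, len(issues), ok)),
    push=lambda: None,
)
# result == 3: the third iteration finds no issues and exits cleanly
```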
- Iterative Optimization: Autonomous fix → test → optimize → journal → push loop
- Multi-AI Support: Claude, ChatGPT, Gemini, Perplexity, Junie (JetBrains)
- Local AI: Optimized for Apple Silicon M3/M4 using MLX, llama.cpp, or Ollama
- Standalone Mode: Works without AI using rule-based analysis and fixes
- Code Search: Find similar code, solutions, and examples across the web
- Multi-Agent Orchestration: Run multiple AI agents in parallel
- Consensus Voting: Multiple agents vote on best solutions
- Pipeline Processing: Chain agents for specialized tasks
- Swarm Intelligence: Dynamic task allocation
- User Interaction: Asks for input, resources, and clarifications when needed
- Rich Terminal UI: Beautiful progress tracking and status display
- Comprehensive Journaling: Full audit trail of all activities
- Git Integration: Automatic commits and multi-account GitHub support
- Python 3.10 or higher
- Git
- gh CLI (for GitHub operations):
brew install gh (macOS) or see cli.github.com
# Clone the repository
git clone https://github.com/qwitch13/coding-agent.git
cd coding-agent
# Create virtual environment (recommended)
python3 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt

# For Apple Silicon MLX support
pip install mlx mlx-lm
# For llama.cpp support
pip install llama-cpp-python
# For Ollama (easiest)
brew install ollama
ollama serve # Start server
ollama pull codellama:7b-instruct

python main.py

This launches the interactive shell with commands like:
- run <target> - Run full optimization
- fix <file> - Fix issues
- test - Run tests
- status - Show status
- help - Show all commands
# Run agent on a directory
python main.py run src/
# Fix a specific file
python main.py fix main.py
# Optimize code
python main.py optimize utils.py
# Run tests
python main.py test
# Search for similar code
python main.py search "binary search tree implementation"

# Set API keys
export ANTHROPIC_API_KEY=your_claude_key
export OPENAI_API_KEY=your_chatgpt_key
export GOOGLE_API_KEY=your_gemini_key
export PERPLEXITY_API_KEY=your_perplexity_key
# Run with multi-agent mode
python main.py --mode multi_agent run src/

Create agent_config.json in the project root:
{
  "agent": {
    "mode": "single_ai",
    "max_iterations": 10,
    "test_command": "pytest",
    "build_command": "python -m py_compile",
    "lint_command": "ruff check",
    "max_parallel_agents": 4,
    "test_timeout": 300,
    "build_timeout": 120,
    "ai_timeout": 60,
    "journal_enabled": true,
    "verbose": false
  },
  "local_ai": {
    "enabled": true,
    "backend": "mlx",
    "model_name": "codellama-7b",
    "context_size": 4096,
    "gpu_layers": -1,
    "threads": 8
  },
  "git": {
    "auto_commit": false,
    "auto_push": false,
    "default_branch": "main",
    "commit_message_template": "[Agent] {action}: {description}",
    "github_accounts": ["nebulai13", "qwitch13"]
  }
}

| Variable | Description | Required |
|---|---|---|
| ANTHROPIC_API_KEY | Claude API key | No |
| OPENAI_API_KEY | ChatGPT API key | No |
| GOOGLE_API_KEY | Gemini API key | No |
| PERPLEXITY_API_KEY | Perplexity API key | No |
| GITHUB_TOKEN | GitHub personal access token | No |
- Command-line arguments (highest)
- Environment variables
- agent_config.json
- Default values (lowest)
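The precedence rules above amount to a layered lookup. A minimal sketch with hypothetical key names; the real `Config` class may resolve settings differently:

```python
def resolve_setting(key, cli_args, env, file_config, defaults):
    """Layered lookup: CLI > environment > agent_config.json > defaults."""
    if key in cli_args:
        return cli_args[key]
    env_key = key.upper()          # e.g. "verbose" -> "VERBOSE"
    if env_key in env:
        return env[env_key]
    if key in file_config:
        return file_config[key]
    return defaults[key]

# Hypothetical example: verbose is set on the CLI, mode only in the file.
cli = {"verbose": True}
env = {}
file_cfg = {"mode": "single_ai", "verbose": False}
defaults = {"mode": "standalone", "verbose": False, "max_iterations": 10}

assert resolve_setting("verbose", cli, env, file_cfg, defaults) is True
assert resolve_setting("mode", cli, env, file_cfg, defaults) == "single_ai"
assert resolve_setting("max_iterations", cli, env, file_cfg, defaults) == 10
```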
python main.py [OPTIONS] COMMAND [ARGS]
Options:
-V, --version Show version
-v, --verbose Verbose output
-m, --mode MODE   Operation mode (standalone, single_ai, multi_agent)

python main.py run <target> [OPTIONS]
Arguments:
target File or directory to process
Examples:
python main.py run src/
python main.py run main.py
python main.py --mode multi_agent run .

python main.py fix <target>
Examples:
python main.py fix utils.py
python main.py fix src/modules/

python main.py optimize <target>
Examples:
python main.py optimize slow_function.py

python main.py test
# Uses test_command from config (default: pytest)

python main.py search "<query>" [OPTIONS]
Options:
--language LANG Filter by language (python, javascript, etc.)
--source SOURCE Search source (github, stackoverflow, all)
--limit N Maximum results (default: 20)
Examples:
python main.py search "quick sort python"
python main.py search "async http client" --language python
python main.py search "React hooks" --source github

python main.py status

python main.py providers

python main.py push

python main.py create-repos [OPTIONS]
Options:
-n, --name NAME Repository name (default: coding-agent)
-d, --description DESC Repository description
-p, --private Create private repositories
Examples:
python main.py create-repos --name my-project
python main.py create-repos --name my-project --private

No AI required. Uses rule-based analysis and fixes.

python main.py --mode standalone run src/

Capabilities:
- Syntax error detection
- Linting with Ruff
- Import organization
- Basic code formatting
- Test execution
Uses one AI provider for all operations.
python main.py --mode single_ai run src/

Provider Selection Order:
- Claude (if available)
- ChatGPT (if available)
- Gemini (if available)
- Local AI (if configured)
- Perplexity (if available)
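This fallback order is a simple priority scan. A sketch, with provider names assumed from the list above (the real `AIProviderManager` may differ):

```python
def pick_provider(available):
    """Return the first configured provider in the documented priority order."""
    priority = ["claude", "chatgpt", "gemini", "local", "perplexity"]
    for name in priority:
        if name in available:
            return name
    return None  # nothing configured; caller falls back to standalone mode

assert pick_provider({"gemini", "perplexity"}) == "gemini"
assert pick_provider({"perplexity"}) == "perplexity"
assert pick_provider(set()) is None
```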
Uses multiple AI providers working together.
python main.py --mode multi_agent run src/

Strategies:
- parallel - Different agents work on different tasks
- consensus - Multiple agents vote on solutions
- pipeline - Chain of specialized agents
- swarm - Dynamic task allocation
export ANTHROPIC_API_KEY=your_key

Models:
- claude-sonnet-4-20250514 (default)
- claude-opus-4-5-20251101
Best for: Complex reasoning, code understanding, refactoring
export OPENAI_API_KEY=your_key

Models:
- gpt-4o (default)
- gpt-4-turbo
Best for: General coding, explanations, diverse languages
export GOOGLE_API_KEY=your_key

Models:
- gemini-2.0-flash (default)
- gemini-pro
Best for: Fast responses, multi-modal understanding
export PERPLEXITY_API_KEY=your_key

Models:
- llama-3.1-sonar-large-128k-online (default)
Best for: Research-backed solutions, up-to-date information
Requires JetBrains IDE with Junie plugin running.
Best for: IDE-integrated fixes, project-aware suggestions
pip install mlx mlx-lm
# Models are auto-downloaded from HuggingFace

Configuration:
{
  "local_ai": {
    "enabled": true,
    "backend": "mlx",
    "model_name": "codellama-7b"
  }
}

# Install
brew install ollama # macOS
# or see https://ollama.ai for other platforms
# Start server
ollama serve
# Pull a model
ollama pull codellama:7b-instruct

Configuration:
{
  "local_ai": {
    "enabled": true,
    "backend": "ollama",
    "model_name": "codellama:7b-instruct"
  }
}

pip install llama-cpp-python
# Download a GGUF model to models/ directory

Configuration:
{
  "local_ai": {
    "enabled": true,
    "backend": "llama.cpp",
    "model_path": "models/codellama-7b-instruct.Q4_K_M.gguf"
  }
}

| Model | Size | Use Case | Performance |
|---|---|---|---|
| CodeLlama 7B | 4.1 GB | General code fixes | Fast |
| CodeLlama 13B | 7.9 GB | Complex reasoning | Medium |
| DeepSeek Coder 6.7B | 4.0 GB | Code generation | Fast |
| Qwen2.5 Coder 7B | 4.5 GB | Multi-language | Fast |
| StarCoder2 7B | 4.2 GB | Code completion | Fast |
Multiple agents work on different tasks simultaneously.
from orchestrator import run_multi_agent
results = await run_multi_agent("src/", strategy="parallel")

Use when:
- Tasks are independent
- Speed is priority
- Multiple files need processing
Multiple agents solve the same problem and vote on the best solution.
from orchestrator import AgentOrchestrator
orchestrator = AgentOrchestrator()
result = await orchestrator.run_consensus(code, issue, min_agents=3)
print(f"Confidence: {result.confidence}")
print(f"Winning solution: {result.winning_response}")

Use when:
- Critical fixes needed
- Want high confidence
- Ambiguous problems
Chain agents for specialized processing.
Analyze → Fix → Review → Optimize
result = await orchestrator.run_pipeline("main.py")
print(f"Approved: {result['approved']}")

Use when:
- Need code review
- Quality assurance required
- Complex refactoring
Dynamic task allocation with continuous improvement.
results = await orchestrator.run_swarm("src/", max_iterations=10)

Use when:
- Large codebases
- Ongoing optimization
- Unknown scope of issues
The agent can search the web for similar code, solutions, and examples.
# Search for code examples
python main.py search "binary search tree python"
# Search specific source
python main.py search "React useState" --source stackoverflow
# Filter by language
python main.py search "sort algorithm" --language rust

from code_search import CodeSearchManager
search = CodeSearchManager()
# Search GitHub
results = await search.search_github("async http client", language="python")
# Search StackOverflow
results = await search.search_stackoverflow("how to parse JSON")
# Search all sources
results = await search.search_all("binary tree implementation")
# Find similar code
similar = await search.find_similar_code(my_code_snippet)

| Source | Description | Best For |
|---|---|---|
| GitHub | Code repositories | Libraries, implementations |
| StackOverflow | Q&A | How-to, debugging |
| Web | General search | Documentation, tutorials |
| Semantic Scholar | Academic papers | Algorithms, research |
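Results from every source are normalized to a common dictionary shape (shown below), so a consumer can rank and filter them uniformly. An illustrative sketch, not the library's actual API:

```python
def top_results(results, min_relevance=0.5, limit=20):
    """Keep sufficiently relevant hits, best first."""
    keep = [r for r in results if r["relevance"] >= min_relevance]
    keep.sort(key=lambda r: r["relevance"], reverse=True)
    return keep[:limit]

# Hypothetical results in the normalized shape
results = [
    {"source": "web", "title": "Trees tutorial", "relevance": 0.40},
    {"source": "github", "title": "fast-bst", "relevance": 0.95},
    {"source": "stackoverflow", "title": "BST insert", "relevance": 0.80},
]
best = top_results(results)
# best keeps the two hits at or above 0.5, highest relevance first
```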
{
  "source": "github",
  "title": "Repository name or answer title",
  "url": "https://...",
  "code": "# Code snippet...",
  "language": "python",
  "relevance": 0.95,
  "metadata": {
    "stars": 1234,
    "forks": 56,
    "author": "username"
  }
}

Search your own local directories for code patterns and extract matches to your current project.
Add directories that the agent should search when looking for code:
# Add a directory
python main.py local-add ~/projects
python main.py local-add /path/to/libraries --name "My Libraries"
# List configured directories
python main.py local-list
# Remove a directory
python main.py local-remove ~/projects

# Basic search
python main.py local-search "class MyClass"
# Search with regex
python main.py local-search "def \w+_handler" --regex
# Filter by language
python main.py local-search "async function" --language javascript
# Search and extract matches
python main.py local-search "authentication" --extract

from local_code_search import LocalSearchManager, get_local_search_manager
manager = get_local_search_manager()
# Add search directories
manager.add_directory("~/projects")
manager.add_directory("/path/to/libs", name="Libraries")
# Search
results = manager.search("def calculate")
# Display results
results = manager.search_and_display("class Handler", show_context=True)
# Extract matches to current folder
summary = manager.extract_results(results)
print(f"Extracted {summary['success']} files to ./extracted/")

| Feature | Description |
|---|---|
| Pattern Matching | Search by string or regex |
| Language Filtering | Filter by programming language |
| Context Display | Show lines around matches |
| Relevance Scoring | Results ranked by relevance |
| File Extraction | Copy matches to current project |
| Directory Config | Save directories for future sessions |
Local search configuration is saved in .local_search_config.json:
{
  "directories": [
    {
      "path": "/Users/me/projects",
      "name": "My Projects",
      "enabled": true,
      "languages": [],
      "added_at": "2024-01-15T10:30:00"
    }
  ],
  "default_languages": [],
  "max_file_size": 1048576,
  "max_results": 100
}

Supported languages: Python, JavaScript, TypeScript, Java, Kotlin, Go, Rust, C, C++, Ruby, PHP, Swift, Shell, SQL, HTML, CSS, YAML, JSON, Markdown
The following are automatically ignored:
- .git, .svn, .hg
- node_modules, __pycache__
- venv, .venv, .env
- build, dist, target, out
- .idea, .vscode
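Pruning these directories during traversal can be done with `os.walk`, trimming `dirnames` in place so ignored trees are never entered. A sketch; the agent's real searcher may implement this differently:

```python
import os

# Directories pruned during traversal (from the list above)
IGNORED = {".git", ".svn", ".hg", "node_modules", "__pycache__",
           "venv", ".venv", ".env", "build", "dist", "target", "out",
           ".idea", ".vscode"}

def iter_source_files(root):
    """Yield file paths under root, skipping ignored directory trees.
    Mutating dirnames in place stops os.walk from descending into them."""
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d not in IGNORED]
        for name in filenames:
            yield os.path.join(dirpath, name)
```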
Experimental features can be toggled on/off. These features may be unstable.
# List all experimental features
python main.py experimental
# Enable a feature
python main.py experimental-enable self_optimization
# Enable dangerous features (requires --force)
python main.py experimental-enable self_update --force
# Disable a feature
python main.py experimental-disable self_optimization

| Feature | Description | Dangerous |
|---|---|---|
| self_optimization | Learn from interactions, track trial/error | No |
| self_update | Agent can modify its own code | Yes |
| contextualization | Build context from web, user, code | No |
| web_context | Gather context from GitHub, StackOverflow | No |
| user_learning | Learn user preferences and style | No |
| auto_strategy | Auto-create fix strategies | No |
When enabled, the agent learns from every fix attempt.
The agent tracks:
- Error signatures and categories
- Fix strategies that worked
- Success rates and fix times
- Long-term stability of fixes
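A minimal sketch of this kind of trial/error bookkeeping. The class and its methods are invented for illustration; the agent's internal store is not part of the public API:

```python
from collections import defaultdict

class FixHistory:
    """Track which fix strategies worked for which error signatures."""
    def __init__(self):
        self.records = defaultdict(lambda: {"attempts": 0, "successes": 0})

    def record(self, error_signature, strategy, success):
        entry = self.records[(error_signature, strategy)]
        entry["attempts"] += 1
        entry["successes"] += int(success)

    def success_rate(self, error_signature, strategy):
        entry = self.records[(error_signature, strategy)]
        if entry["attempts"] == 0:
            return 0.0
        return entry["successes"] / entry["attempts"]

history = FixHistory()
history.record("TypeError:unsupported_operand", "cast-to-float", True)
history.record("TypeError:unsupported_operand", "cast-to-float", True)
history.record("TypeError:unsupported_operand", "cast-to-float", False)
rate = history.success_rate("TypeError:unsupported_operand", "cast-to-float")
# rate == 2/3: two of three attempts with this strategy succeeded
```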
# Enable self-optimization
python main.py experimental-enable self_optimization
# View learning statistics
python main.py learn-stats
# Run autonomous web learning (6 hours)
python main.py learn 6h python error-handling
# Run self-optimization task
python main.py optimize-self

The agent can learn from web sources for extended periods:
# Learn for 6 hours about Python
python main.py learn 6h python
# Learn about specific topics
python main.py learn 2h async await error-handling
# Learn based on project context (auto-detects)
python main.py learn 1h

Track code versions with ratings and smart rollback.
# Show version history for a file
python main.py version src/main.py
# Show overall statistics
python main.py version
# Rollback to previous version
python main.py version-rollback src/main.py
# Rollback to best-rated version
python main.py version-rollback src/main.py --best
# Rate a version (1-5)
python main.py version-rate abc123 5 "Fixed the issue perfectly"

| Rating | Description |
|---|---|
| 5 - Excellent | Quick fix, still working after weeks |
| 4 - Good | Worked well, no issues |
| 3 - Acceptable | Works but has minor issues |
| 2 - Poor | Caused problems, needed revision |
| 1 - Broken | Completely broken, reverted |
- Versions stable for 7+ days get upgraded
- Versions rolled back from get downgraded
- Effectiveness score combines rating, stability, rollback history
Run work sessions with deadlines and auto-save.
# Work on src/ for 6 hours
python main.py work 6h src/
# Work with specific mode
python main.py work 2h main.py --mode optimize
# Work for 2 days
python main.py work 2d ./
# Resume a paused session
python main.py resume

- Automatic checkpoints: State saved every 5 minutes
- Pause/Resume: Press Ctrl+C to pause, resume later
- Progress tracking: Shows iterations, time remaining
- Comprehensive journaling: All actions recorded
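The 5-minute checkpointing can be sketched as an atomic JSON write guarded by a timer. `save_checkpoint` and `maybe_checkpoint` are hypothetical names, not the agent's actual functions:

```python
import json
import os
import tempfile
import time

def save_checkpoint(state, path):
    """Write session state atomically: dump to a temp file, then rename,
    so a crash never leaves a half-written checkpoint behind."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f, indent=2)
    os.replace(tmp, path)  # atomic rename on POSIX

def maybe_checkpoint(state, path, last_save, interval=300):
    """Save at most once per `interval` seconds (default: 5 minutes).
    Returns the timestamp of the most recent save."""
    now = time.monotonic()
    if now - last_save >= interval:
        save_checkpoint(state, path)
        return now
    return last_save
```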
Saved to .cache/session_state.json:
{
  "start_time": "2024-01-15T10:00:00",
  "deadline": "2024-01-15 16:00",
  "target": "src/",
  "mode": "fix",
  "iterations_completed": 15,
  "issues_fixed": 8,
  "status": "paused"
}

The agent asks for user input when needed.
╔══════════════════════════════════════════════════════════════╗
║ API Key Required ║
╠══════════════════════════════════════════════════════════════╣
║ Provider: Claude ║
║ Environment Variable: ANTHROPIC_API_KEY ║
║ ║
║ You can also set this in your shell: ║
║ export ANTHROPIC_API_KEY=your_key ║
╚══════════════════════════════════════════════════════════════╝
? How would you like to provide the API key?
[1] Enter it now (will be used for this session only)
[2] Skip this provider
[3] Exit and set environment variable
╔══════════════════════════════════════════════════════════════╗
║ Clarification Needed ║
╠══════════════════════════════════════════════════════════════╣
║ Error in utils.py: ║
║ TypeError: unsupported operand type(s) ║
║ ║
║ Can you provide more context about this error? ║
╚══════════════════════════════════════════════════════════════╝
? Select an option:
[1] This is a known issue, proceed with standard fix
[2] Skip this error for now
[3] Let me explain the expected behavior
[4] Other (specify)
╔══════════════════════════════════════════════════════════════╗
║ Code Change Review ║
╠══════════════════════════════════════════════════════════════╣
║ File: utils.py ║
║ Change: Fixed type error in calculate_total function ║
╚══════════════════════════════════════════════════════════════╝
--- original
+++ modified
@@ -10,7 +10,7 @@
def calculate_total(items):
- return sum(items)
+ return sum(float(item) for item in items)
? Apply this change? [y/N]:
For CI/CD or automation:
from user_interaction import set_auto_mode
set_auto_mode(True)  # Use defaults for all prompts

All agent activities are logged for audit and debugging.
journal/
├── agent_20240115_143022_a1b2c3.json # Session journal
└── ...
| Type | Description |
|---|---|
| iteration_start | Beginning of optimization iteration |
| iteration_end | End of iteration with results |
| analysis | Code analysis results |
| fix_attempt | Fix attempt details |
| test_run | Test execution results |
| build | Build/compile results |
| optimization | Code optimization applied |
| ai_interaction | AI provider interaction |
| agent_task | Multi-agent task execution |
| consensus | Consensus voting result |
| git_action | Git operation |
| error | Error encountered |
| session_end | Session summary |
python main.py journal

Or programmatically:
from journal import get_journal
journal = get_journal()
summary = journal.get_session_summary()
print(f"Total iterations: {summary['total_iterations']}")
print(f"Fixes applied: {summary['successful_fixes']}")

journal = get_journal()
# Export as Markdown
report_path = journal.export_report("md")
# Export as JSON
report_path = journal.export_report("json")
# Export as plain text
report_path = journal.export_report("txt")

from git_integration import GitManager
git = GitManager()
# Create agent-style commit
success, commit_hash = git.create_agent_commit(
action="fix",
description="Resolved type error in utils.py"
)

from git_integration import create_and_push_repos
# Create repos on all configured accounts and push
results = await create_and_push_repos(
repo_name="my-project",
description="Project description",
private=False
)
for account, (success, url) in results["repo_creation"].items():
print(f"{account}: {url}")

from git_integration import GitHubManager
gh = GitHubManager()
# Create PR
success, url = gh.create_pr(
title="Fix: Resolved type errors",
body="## Summary\n- Fixed type errors in utils.py",
base="main"
)
# List repos
repos = gh.list_repos("username")

Main agent engine class.
from agent_engine import AgentEngine
engine = AgentEngine(progress_callback=my_callback)
# Run optimization loop
iterations = await engine.run(
target="src/",
mode="fix", # fix, optimize, full
ai_provider="claude"
)
# Get summary
summary = engine.get_summary()

Manages AI providers.
from ai_providers import provider_manager
# Get available providers
providers = provider_manager.get_available_providers()
# Generate with fallback
response = await provider_manager.generate_with_fallback(
prompt="Fix this code...",
preferred_providers=["claude", "chatgpt"]
)
# Generate in parallel
responses = await provider_manager.generate_parallel(
prompt="Analyze this code...",
providers=["claude", "gemini"]
)

Manages local AI models.
from local_ai import local_ai_manager
# Load model
await local_ai_manager.load_model("codellama-7b")
# Generate
response = await local_ai_manager.generate(prompt)
# Generate code fix
fixed = await local_ai_manager.generate_code_fix(code, error)
# Get status
status = local_ai_manager.get_status()

Web code search functionality.
from code_search import CodeSearchManager
search = CodeSearchManager()
# Search GitHub
results = await search.search_github(query, language="python")
# Search StackOverflow
results = await search.search_stackoverflow(query)
# Find similar code
similar = await search.find_similar_code(code_snippet)

Local directory code search.
from local_code_search import get_local_search_manager
manager = get_local_search_manager()
# Configure directories
manager.add_directory("~/projects")
manager.list_directories()
# Search local files
results = manager.search("class Handler")
# Search with display
results = manager.search_and_display("async def", show_context=True)
# Extract to current folder
summary = manager.extract_results(results)
# Files extracted to ./extracted/

User interaction handling.
from user_interaction import get_interaction_manager
interaction = get_interaction_manager()
# Ask confirmation
if interaction.ask_confirmation("Apply fix?"):
apply_fix()
# Ask for input
value = interaction.ask_input("Enter test command:")
# Request resource
response = interaction.request_resource(
resource_type="api_key",
description="Claude API key needed"
)

coding-agent/
├── main.py # Entry point and CLI
├── config.py # Configuration management
│ ├── Config # Central config class
│ ├── AIProviderConfig # Provider settings
│ ├── LocalAIConfig # Local AI settings
│ ├── GitConfig # Git settings
│ └── AgentConfig # Agent settings
│
├── ai_providers.py # AI provider integrations
│ ├── BaseAIProvider # Abstract base class
│ ├── ClaudeProvider # Anthropic Claude
│ ├── ChatGPTProvider # OpenAI GPT-4
│ ├── GeminiProvider # Google Gemini
│ ├── PerplexityProvider # Perplexity AI
│ ├── JunieProvider # JetBrains Junie
│ └── AIProviderManager # Provider coordination
│
├── local_ai.py # Local AI support
│ ├── MLXBackend # Apple MLX
│ ├── LlamaCppBackend # llama.cpp
│ ├── OllamaBackend # Ollama
│ └── LocalAIManager # Backend management
│
├── agent_engine.py # Core agent logic
│ ├── AgentEngine # Main engine
│ ├── CodeAnalyzer # Code analysis
│ ├── TestRunner # Test execution
│ ├── BuildRunner # Build execution
│ ├── CodeFixer # Issue fixing
│ └── CodeOptimizer # Code optimization
│
├── orchestrator.py # Multi-agent coordination
│ ├── AgentOrchestrator # Main orchestrator
│ ├── TaskQueue # Task management
│ ├── Agent # Agent representation
│ └── ConsensusResult # Voting results
│
├── code_search.py # Web code search
│ ├── CodeSearchManager # Search coordination
│ ├── GitHubSearch # GitHub search
│ ├── StackOverflowSearch # SO search
│ └── WebSearch # General web search
│
├── local_code_search.py # Local directory search
│ ├── LocalSearchManager # Search management
│ ├── LocalCodeSearcher # File searching
│ ├── LocalSearchConfig # Directory config
│ └── SearchDirectory # Directory settings
│
├── journal.py # Activity logging
│ └── AgentJournal # Journal management
│
├── git_integration.py # Git operations
│ ├── GitManager # Git operations
│ ├── GitHubManager # GitHub API
│ └── GitHubAccountManager # Multi-account
│
├── terminal_ui.py # Terminal interface
│ ├── TerminalUI # Main UI
│ ├── ProgressTracker # Progress display
│ └── ResultsDisplay # Results display
│
├── user_interaction.py # User prompts
│ ├── UserInteractionManager # Interaction handling
│ └── AgentPrompts # Standard prompts
│
└── requirements.txt # Dependencies
User Input
│
▼
┌─────────────────────────────────────────────────────────┐
│ CLI (main.py) │
└─────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ Agent Engine │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │ Analyze │─▶│ Fix │─▶│ Test │─▶│Optimize │ │
│ └─────────┘ └─────────┘ └─────────┘ └─────────┘ │
└─────────────────────────────────────────────────────────┘
│ │
▼ ▼
┌─────────────┐ ┌─────────────┐
│ AI Providers│ │ Code Search │
│ - Claude │ │ - GitHub │
│ - ChatGPT │ │ - StackOF │
│ - Gemini │ │ - Web │
│ - Local AI │ └─────────────┘
└─────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ Journal │
└─────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ Git Integration │
│ (Commit & Push) │
└─────────────────────────────────────────────────────────┘
# Check if API keys are set
echo $ANTHROPIC_API_KEY
echo $OPENAI_API_KEY
# Or use standalone mode
python main.py --mode standalone run src/

# Install pytest
pip install pytest
# Or configure custom test command
python main.py config test_command "python -m unittest"

# For Ollama
ollama list # Check installed models
ollama pull codellama:7b-instruct
# For MLX/llama.cpp - download model to models/

# Login with gh CLI
gh auth login
# Check status
gh auth status

python main.py --verbose run src/

# View latest log
cat logs/agent_*.log | tail -100
# View journal
python main.py journal

- Fork the repository
- Create feature branch: git checkout -b feature-name
- Make changes and test
- Commit: git commit -m "Add feature"
- Push: git push origin feature-name
- Create Pull Request
- Python 3.10+ type hints
- Async/await for I/O operations
- Rich library for terminal output
- Dataclasses for data structures
MIT License
Copyright (c) 2024
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.