Multi-Agent Orchestration
Run Claude Code, OpenAI Codex, and Gemini CLI in parallel. Get diverse perspectives from multiple AI models and synthesize them into unified recommendations.
parallel-process:
  claude-analysis:
    input: STDIN
    model: claude-code
    action: "Analyze architecture and trade-offs"
    output: $CLAUDE_RESULT
  gemini-analysis:
    input: STDIN
    model: gemini-cli
    action: "Identify patterns and best practices"
    output: $GEMINI_RESULT
  codex-analysis:
    input: STDIN
    model: openai-codex
    action: "Focus on implementation structure"
    output: $CODEX_RESULT

synthesize:
  input: |
    Claude: $CLAUDE_RESULT
    Gemini: $GEMINI_RESULT
    Codex: $CODEX_RESULT
  model: claude-code
  action: "Combine into unified recommendation"
  output: STDOUT
Agentic Loops
Iterative refinement until the LLM decides work is complete. Perfect for code generation, document writing, and any task that benefits from self-improvement.
implement:
  agentic_loop:
    max_iterations: 5
    exit_condition: llm_decides
    allowed_paths: [./src, ./tests]
    tools: [Read, Write, Edit, Bash]
  input: STDIN
  model: claude-code
  action: |
    Iteration {{ loop.iteration }}.
    Previous: {{ loop.previous_output }}
    Implement, test, and refine. Say DONE when complete.
  output: STDOUT
Smart defaults: If you omit allowed_paths, comanda auto-infers them from the workflow directory and common project subdirectories (src, lib, test, docs, build). Simple workflows "just work" without explicit configuration.
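The omission case might look like this sketch (a hypothetical step; the inferred paths follow from the description above):

```yaml
implement:
  agentic_loop:
    max_iterations: 3
    exit_condition: llm_decides
    # allowed_paths omitted: inferred from the workflow
    # directory plus src, lib, test, docs, build
  input: STDIN
  model: claude-code
  action: "Refactor and test. Say DONE when complete."
  output: STDOUT
```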
Live TUI Dashboard
Watch your workflows run in real time with a rich terminal UI. See iteration progress, token usage estimation, elapsed time, and resource consumption, all in a clean dashboard.
╭──────────────────────────────────────────────────╮
│ ▶ analyze_codebase                               │
│ Model: claude-code-sonnet                        │
│ Iteration 3/10 | Time: 45s | Context (est.): 12% │
╰──────────────────────────────────────────────────╯
Status: Running...
CPU: 12% | Memory: 128MB
[Press 'q' to quit | 'd' toggle debug | ↑↓ scroll]
Debug panel: Press d to show a scrollable debug panel when running with --debug or --verbose. All debug output is captured cleanly without breaking the TUI layout.
Git Worktree Support
Run multiple Claude Code sessions in parallel on the same repo without conflicts. Comanda automatically manages Git worktrees so each agent gets an isolated working copy.
$ comanda process parallel-features.yaml --live
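A parallel-features.yaml for this scenario might look like the following sketch (step names and actions are hypothetical; the worktree isolation itself is automatic, per the description above):

```yaml
parallel-process:
  feature-auth:
    input: NA
    model: claude-code
    action: "Implement the authentication feature"
    output: $AUTH_RESULT
  feature-search:
    input: NA
    model: claude-code
    action: "Implement the search feature"
    output: $SEARCH_RESULT
```

Each step runs in its own worktree, so both Claude Code sessions can edit files concurrently without clobbering each other.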
Parallel Processing
Run independent steps concurrently for faster workflows. Automatically waits for all parallel steps before continuing.
parallel-process:
  gpt4:
    input: NA
    model: gpt-4o
    action: "Write a function to parse JSON"
    output: gpt4-solution.py
  claude:
    input: NA
    model: claude-3-5-sonnet-latest
    action: "Write a function to parse JSON"
    output: claude-solution.py

compare:
  input: [gpt4-solution.py, claude-solution.py]
  model: gpt-4o-mini
  action: "Compare these implementations"
  output: STDOUT
Tool Execution
Run shell commands, scripts, and CLIs within your workflows. Integrate with grep, jq, git, or any command-line tool.
get-diff:
  tool: bash
  input: "git diff HEAD~1"
  output: $DIFF

review:
  input: $DIFF
  model: claude-code
  action: "Review these changes for issues"
  output: STDOUT
Workflow Visualization
See the structure of any workflow at a glance. Understand parallel branches, sequential steps, and data flow.
┌────────────────────────────────────────────────┐
│ WORKFLOW: agentic-explore.yaml                 │
└────────────────────────────────────────────────┘
┌────────────────────────────────────────────────┐
│ INPUT: None required                           │
└────────────────────────────────────────────────┘
                        │
                        ▼
┌────────────────────────────────────────────────┐
│ ▶ explore_codebase                             │
├────────────────────────────────────────────────┤
│ Model: claude-code-sonnet                      │
│ Explore this codebase and provide              │
└────────────────────────────────────────────────┘
                        │
                        ▼
┌────────────────────────────────────────────────┐
│ OUTPUT: STDOUT                                 │
└────────────────────────────────────────────────┘
┌────────────────────────────────────────────────┐
│ STATISTICS                                     │
├────────────────────────────────────────────────┤
│ Steps: 1 total, 0 parallel                     │
│ Valid: 1/1                                     │
└────────────────────────────────────────────────┘
Multi-Provider Support
Connect to any LLM provider. Cloud APIs, local models via Ollama, or agentic coding tools, all in the same workflow.
Cloud APIs: OpenAI, Anthropic, Google, X.AI, DeepSeek, Moonshot
Local Models: Ollama, vLLM, any OpenAI-compatible endpoint
Agentic Tools: Claude Code, OpenAI Codex, Gemini CLI
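Mixing providers in one workflow might look like this sketch (model names are illustrative; `llama3.1` assumes a locally pulled Ollama model):

```yaml
summarize:
  input: report.txt
  model: gpt-4o        # cloud API
  action: "Summarize this report"
  output: $SUMMARY

refine:
  input: $SUMMARY
  model: llama3.1      # local model via Ollama (example name)
  action: "Tighten the summary into three bullet points"
  output: STDOUT
```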
Advanced I/O
Process files, URLs, databases, and images. Batch operations with wildcards. Chunking for large files.
File Processing: Wildcards (*.go), chunking, multi-file input
Web Scraping: Fetch URLs, take screenshots, extract content
Database: Read from and write to PostgreSQL
Vision: Analyze images and screenshots with vision models
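A wildcard input might be used like this sketch (step name and path are hypothetical; the glob form follows the `*.go` example above):

```yaml
review-go:
  input: src/*.go      # wildcard: all Go files under src/
  model: claude-code
  action: "Review these files for error-handling issues"
  output: review.md
```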
Codebase Indexing
Generate rich code context for AI workflows. Index codebases once, reuse across workflows. Compare multiple projects or aggregate context from several repos.
$ comanda index capture ~/my-project -n myproject
$ comanda index list
NAME        PATH           LAST INDEXED       FORMAT       FILES
myproject   ~/my-project   2024-02-25 15:00   structured   142
$ comanda index update myproject
Scanning for changes... 3 files modified
Updated in 0.3s (vs 2.1s full)
$ comanda index diff myproject
load_context:
  codebase_index:
    use: [project1, project2]
    aggregate: true

compare:
  input: |
    Compare these codebases:
    ${INDEX:project1}
    ${INDEX:project2}
  model: claude-code
  action: "Identify shared patterns and differences"
  output: comparison.md
TurboQuant compression: Indexes are automatically compressed using vector quantization and chunk deduplication, reducing size by up to 50% while preserving semantic quality. The comanda generate command also auto-detects available indexes in .comanda/ and includes them in the prompt context.
Server Mode
Turn any workflow into an HTTP API. Perfect for integrating comanda into your existing services and CI/CD pipelines.
$ comanda server
$ curl -X POST "http://localhost:8080/process?filename=review.yaml" \
-d '{"input": "code to review"}'