Multi-Agent Orchestration
Run Claude Code, OpenAI Codex, and Gemini CLI in parallel. Get diverse perspectives from multiple AI models and synthesize them into unified recommendations.
parallel-process:
  claude-analysis:
    input: STDIN
    model: claude-code
    action: "Analyze architecture and trade-offs"
    output: $CLAUDE_RESULT
  gemini-analysis:
    input: STDIN
    model: gemini-cli
    action: "Identify patterns and best practices"
    output: $GEMINI_RESULT
  codex-analysis:
    input: STDIN
    model: openai-codex
    action: "Focus on implementation structure"
    output: $CODEX_RESULT
synthesize:
  input: |
    Claude: $CLAUDE_RESULT
    Gemini: $GEMINI_RESULT
    Codex: $CODEX_RESULT
  model: claude-code
  action: "Combine into unified recommendation"
  output: STDOUT
Agentic Loops
Iterative refinement until the LLM decides work is complete. Perfect for code generation, document writing, and any task that benefits from self-improvement.
implement:
  agentic_loop:
    max_iterations: 5
    exit_condition: llm_decides
  allowed_paths: [./src, ./tests]
  tools: [Read, Write, Edit, Bash]
  input: STDIN
  model: claude-code
  action: |
    Iteration {{ loop.iteration }}.
    Previous: {{ loop.previous_output }}
    Implement, test, and refine. Say DONE when complete.
  output: STDOUT
Smart defaults: If you omit allowed_paths, comanda auto-infers them from the workflow directory and common project subdirectories (src, lib, test, docs, build). Simple workflows "just work" without explicit configuration.
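With those defaults, a minimal loop needs no path or tool configuration at all. A sketch (the step name and prompt are illustrative, not from the source):

```yaml
refactor:
  agentic_loop:
    max_iterations: 3
    exit_condition: llm_decides
  input: STDIN
  model: claude-code
  action: "Tidy up this module. Say DONE when finished."
  output: STDOUT
```

Here allowed_paths and tools are omitted, so comanda infers them from the workflow directory.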
📺 Live TUI Dashboard
Watch your workflows run in real-time with a rich terminal UI. See iteration progress, token usage estimation, elapsed time, and resource consumption, all in a clean dashboard.
╭──────────────────────────────────────────────────╮
│ ▶ analyze_codebase                               │
│ Model: claude-code-sonnet                        │
│ Iteration 3/10 | Time: 45s | Context (est.): 12% │
╰──────────────────────────────────────────────────╯
Status: Running...
CPU: 12% | Memory: 128MB
[Press 'q' to quit | 'd' toggle debug | ↑↓ scroll]
Debug panel: Press d to show a scrollable debug panel when running with --debug or --verbose. All debug output is captured cleanly without breaking the TUI layout.
🌿 Git Worktree Support
Run multiple Claude Code sessions in parallel on the same repo without conflicts. Comanda automatically manages Git worktrees so each agent gets an isolated working copy.
$ comanda process parallel-features.yaml --live
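A hypothetical parallel-features.yaml might look like the sketch below: each claude-code step runs in its own auto-managed worktree, so file edits never collide (step names and prompts are illustrative, not from the source):

```yaml
parallel-process:
  feature-auth:
    input: NA
    model: claude-code
    action: "Implement the auth feature. Say DONE when complete."
    output: $AUTH_RESULT
  feature-search:
    input: NA
    model: claude-code
    action: "Implement the search feature. Say DONE when complete."
    output: $SEARCH_RESULT
```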
⚡ Parallel Processing
Run independent steps concurrently for faster workflows. Automatically waits for all parallel steps before continuing.
parallel-process:
  gpt4:
    input: NA
    model: gpt-4o
    action: "Write a function to parse JSON"
    output: gpt4-solution.py
  claude:
    input: NA
    model: claude-3-5-sonnet-latest
    action: "Write a function to parse JSON"
    output: claude-solution.py
compare:
  input: [gpt4-solution.py, claude-solution.py]
  model: gpt-4o-mini
  action: "Compare these implementations"
  output: STDOUT
🎯 Intelligent Flow Control
Let the LLM decide when work is complete, route dynamically based on content, and handle failures gracefully.
LLM-Decides Exit
Agentic loops continue until the model says "DONE" – no fixed iteration counts.
Conditional Steps
Skip steps based on previous outputs or environment variables.
Quality Gates
Validate outputs before proceeding. Retry on failure with backoff.
Variables & State
Pass data between steps with $VARIABLES. Template interpolation.
classify:
  input: STDIN
  model: gpt-4o-mini
  action: "Classify as: bug, feature, or docs. Output only the category."
  output: $CATEGORY
handle-bug:
  condition: "$CATEGORY == 'bug'"
  input: STDIN
  model: claude-code
  agentic_loop:
    exit_condition: llm_decides
    max_iterations: 10
  action: "Fix this bug. Run tests. Say DONE when fixed."
  output: STDOUT
handle-docs:
  condition: "$CATEGORY == 'docs'"
  input: STDIN
  model: gemini-2.5-flash
  action: "Update documentation accordingly"
  output: docs-update.md
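Quality gates have no example in this section. One possible shape, assuming hypothetical validate and retry keys (these key names are assumptions, not confirmed comanda syntax):

```yaml
# Hypothetical: the validate/retry keys below are illustrative assumptions.
summarize-json:
  input: STDIN
  model: gpt-4o-mini
  action: "Summarize the input as JSON with keys: title, points"
  validate: "jq -e '.title and .points'"  # gate: output must pass this check
  retry:
    attempts: 3
    backoff: exponential
  output: summary.json
```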
🛠️ Tool Execution
Run shell commands, scripts, and CLIs within your workflows. Integrate with grep, jq, git, or any command-line tool.
get-diff:
  tool: bash
  input: "git diff HEAD~1"
  output: $DIFF
review:
  input: $DIFF
  model: claude-code
  action: "Review these changes for issues"
  output: STDOUT
Workflow Visualization
See the structure of any workflow at a glance. Understand parallel branches, sequential steps, and data flow.
┌──────────────────────────────────────────────────┐
│ WORKFLOW: agentic-explore.yaml                   │
└──────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────┐
│ INPUT: None required                             │
└──────────────────────────────────────────────────┘
                         │
                         ▼
┌──────────────────────────────────────────────────┐
│ ▶ explore_codebase                               │
├──────────────────────────────────────────────────┤
│ Model: claude-code-sonnet                        │
│ Explore this codebase and provide                │
└──────────────────────────────────────────────────┘
                         │
                         ▼
┌──────────────────────────────────────────────────┐
│ OUTPUT: STDOUT                                   │
└──────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────┐
│ STATISTICS                                       │
├──────────────────────────────────────────────────┤
│ Steps: 1 total, 0 parallel                       │
│ Valid: 1/1                                       │
└──────────────────────────────────────────────────┘
Multi-Provider Support
Connect to any LLM provider. Cloud APIs, local models via Ollama, enterprise deployments via AWS Bedrock, or agentic coding tools, all in the same workflow.
| Category | Provider | Models | Config |
|---|---|---|---|
| Cloud APIs | Anthropic | Claude 4.5 (Opus, Sonnet, Haiku), Claude 4, 3.7, 3.5 | ANTHROPIC_API_KEY |
| | OpenAI | GPT-5.1, GPT-5, GPT-4o, o3, o4-mini | OPENAI_API_KEY |
| | Google | Gemini 3, 2.5 (Pro, Flash), 1.5 | GOOGLE_API_KEY |
| | X.AI | Grok-4, Grok-4-Heavy, Grok-Vision | XAI_API_KEY |
| | DeepSeek | DeepSeek-Chat, Coder, Vision, Reasoner | DEEPSEEK_API_KEY |
| | Moonshot | Moonshot v1 (8k, 32k, 128k) | MOONSHOT_API_KEY |
| Enterprise | AWS Bedrock | Claude, Nova, Llama via Converse API | AWS credentials |
| Local / Self-hosted | Ollama | Any model (Llama, Mistral, Qwen, etc.) | Auto-detected |
| | vLLM | Any OpenAI-compatible endpoint | VLLM_ENDPOINT |
| Agentic Tools | Claude Code | claude-code, claude-code-opus | CLI installed |
| | OpenAI Codex | openai-codex, openai-codex-o3 | CLI installed |
| | Gemini CLI | gemini-cli, gemini-cli-pro | CLI installed |
analyze:
  model: bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0
  action: "Analyze this architecture"
  input: STDIN
  output: STDOUT
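A local model is referenced the same way. A sketch assuming an Ollama-served model named llama3.2 (the model name is illustrative):

```yaml
analyze-local:
  model: llama3.2  # assumed name of a model served by a local Ollama instance
  action: "Analyze this architecture"
  input: STDIN
  output: STDOUT
```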
Advanced I/O
Process files, URLs, databases, and images. Batch operations with wildcards. Automatic chunking for large files.
| Input Type | Formats | Features |
|---|---|---|
| Files | Any text file, wildcards (*.go, src/**/*.ts) | Multi-file input, auto-chunking for large files, file watching |
| Documents | PDF, Markdown, plain text | Page extraction, table parsing, inline images |
| Images | PNG, JPEG, GIF, WebP | Vision model analysis, screenshots, base64 encoding |
| URLs | HTTP/HTTPS web pages | Content extraction, screenshots, headless rendering |
| Databases | PostgreSQL | Query execution, result streaming, schema introspection |
| Streams | STDIN, pipes | Unix pipeline integration, streaming output |
review-code:
  input: ./src/**/*.go
  model: claude-3-5-sonnet-latest
  action: "Review each file for security issues"
  output: security-report.md
analyze-ui:
  input: url:https://example.com
  model: gpt-4o
  action: "Analyze this UI for accessibility issues"
  output: STDOUT
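The table lists PostgreSQL input, but this section shows no database example. A sketch assuming a db-prefixed input form, analogous to the url: prefix above (the exact input syntax here is an assumption, not confirmed):

```yaml
# Hypothetical: the db: input prefix and query form are illustrative only.
summarize-signups:
  input: "db:postgres SELECT plan, created_at FROM users LIMIT 100"
  model: gpt-4o-mini
  action: "Summarize signup trends in this result set"
  output: STDOUT
```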
Codebase Indexing
Generate rich code context for AI workflows. Index codebases once, reuse across workflows. Compare multiple projects or aggregate context from several repos.
$ comanda index capture ~/my-project -n myproject
$ comanda index list
NAME PATH LAST INDEXED FORMAT FILES
myproject ~/my-project 2024-02-25 15:00 structured 142
$ comanda index update myproject
Scanning for changes... 3 files modified
Updated in 0.3s (vs 2.1s full)
$ comanda index diff myproject
load_context:
  codebase_index:
    use: [project1, project2]
    aggregate: true
compare:
  input: |
    Compare these codebases:
    ${INDEX:project1}
    ${INDEX:project2}
  model: claude-code
  action: "Identify shared patterns and differences"
  output: comparison.md
TurboQuant compression: Indexes are automatically compressed using vector quantization and chunk deduplication, reducing size by up to 50% while preserving semantic quality. The comanda generate command also auto-detects available indexes in .comanda/ and includes them in the prompt context.
Building Rich Context with Agentic Loops
Combine indexing with agentic exploration to build deep, searchable knowledge bases. The agent explores the codebase iteratively, writing findings to a local search index (like qmd) for later retrieval.
index:
  tool: bash
  input: "comanda index capture ./src -n myproject"
  output: $INDEX_RESULT
explore:
  input: |
    Codebase index:
    ${INDEX:myproject}
    You have access to the full codebase via Read tool.
    Explore systematically: architecture, patterns, key modules.
    For each significant finding, write a markdown doc to ./docs/
    Say DONE when you've built comprehensive documentation.
  model: claude-code
  allowed_paths: [., ./src, ./docs]
  agentic_loop:
    exit_condition: llm_decides
    max_iterations: 50
  action: "Explore and document this codebase thoroughly"
  output: exploration-summary.md
build-search-index:
  tool: bash
  input: "qmd index ./docs --name project-knowledge"
  output: STDOUT
Now you can query your knowledge base with qmd search "how does auth work" and get semantically relevant results from the agent's exploration. Great for onboarding, code review prep, or building RAG context for future workflows.
🧩 Skills System
Define reusable, parameterized workflows as Markdown files with YAML frontmatter. Skills are Claude-compatible, discoverable, and can be invoked from the CLI or within other workflows.
---
description: "Summarize a document or code file"
arguments:
  file:
    description: "Path to file to summarize"
    required: true
  format:
    description: "Output format (bullets, prose, tldr)"
    default: "bullets"
allowed-tools: [Read]
---

Read ${file} and summarize it in ${format} format.
Focus on key points, decisions, and action items.
$ comanda skills list
SKILL DESCRIPTION SOURCE
summarize Summarize a document or code file ~/.comanda/skills/
code-review Review code for issues bundled
$ comanda skills run summarize --file README.md --format tldr
summarize-docs:
  skill: summarize
  skill_args:
    file: ./docs/API.md
    format: prose
  output: STDOUT
Skill locations: User skills in ~/.comanda/skills/, project skills in .comanda/skills/, plus bundled skills included with comanda. Skills support ${VAR} and ${VAR:-default} substitution.
🛡️ Security Scanning
Scan dependencies for known vulnerabilities using real-time data from OSV.dev. Works with npm, PyPI, Go, Cargo, and more.
parse-deps:
  input: STDIN
  model: grok-4-1-fast-non-reasoning
  action: "Extract dependencies as JSON array"
  output: ./deps.json
query-osv:
  input: ./deps.json
  model: NA
  tool_config:
    allowlist: [curl, jq]
  action: NA
  output: "tool: jq ... | curl api.osv.dev/v1/query ..."
generate-report:
  input: ./vulns.json
  model: grok-4-1-fast-non-reasoning
  action: "Generate security report with CVEs and remediation"
  output: STDOUT
🔴 CRITICAL minimist@1.2.5
   GHSA-xvch-5gv4-984h – Prototype Pollution
   Fixed: 1.2.6
   Run: npm install minimist@1.2.6
🟠 HIGH lodash@4.17.20
   GHSA-35jh-r3h4-6jhm – Command Injection
   Fixed: 4.17.21
   Run: npm install lodash@4.17.21
✔ 4 packages scanned, 2 vulnerabilities found
Real-time data: Queries the OSV.dev API directly, so there are no stale vulnerability databases. Uses tool_config.allowlist to enable curl for API access. Works in CI/CD pipelines.
Server Mode
Turn any workflow into an HTTP API. Perfect for integrating comanda into your existing services and CI/CD pipelines.
$ comanda server
$ curl -X POST "http://localhost:8080/process?filename=review.yaml" \
-d '{"input": "code to review"}'