Declarative AI Pipelines
for the Command Line
Define LLM workflows in YAML. Run Claude Code, Codex, and Gemini CLI in parallel.
Version control everything.
brew install kris-hansen/comanda/comanda
go install github.com/kris-hansen/comanda@latest
Agentic workflows should be predictable, declarative, and repeatable: like a Terraform plan, not a bag of scripts or a Python program drowning in opaque dependencies. Let SOTA models do the heavy lifting. Comanda is a thin orchestration layer that lets you define workflows in plain YAML, then execute them anywhere.
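A minimal pipeline might look like the sketch below. The step and field names (`input`, `model`, `action`, `output`) are illustrative assumptions, not confirmed schema; check the project README for the exact format.

```yaml
# summarize.yaml -- a hypothetical single-step pipeline sketch.
# Field names are illustrative; consult the comanda docs for the real schema.
summarize:
  input: notes.txt                # file to process
  model: claude-sonnet-4          # any model from a configured provider
  action: "Summarize the key decisions in this document"
  output: STDOUT                  # print the result instead of writing a file
```

Because the whole workflow lives in one YAML file, it can be committed, reviewed, and diffed like any other piece of infrastructure code.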
Run Claude Code, Codex, and Gemini CLI in parallel. Synthesize diverse AI perspectives into unified recommendations.
Define pipelines in version-controllable YAML. Share workflows with your team, run them in CI/CD.
Iterative refinement until the LLM decides work is complete. Quality gates, retries, and state management built in.
Works with pipes, redirects, and scripts. Process files, URLs, databases. Batch operations with wildcards.
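The pipe-friendly behavior above can be sketched as shell usage. The `comanda process` subcommand shown here is an assumption for illustration; verify the actual syntax with `comanda --help`.

```shell
# Illustrative usage only; subcommand names are assumptions.
# Feed piped stdin through a workflow and redirect the result:
cat release-notes.txt | comanda process summarize.yaml > summary.md

# Batch over many files with a shell wildcard:
for f in docs/*.md; do
  comanda process review.yaml < "$f" >> reviews.md
done
```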
Generate persistent code indexes. Multi-repo context for AI workflows. Compare and aggregate codebases.
Join developers using comanda to orchestrate AI pipelines.