# ark run

Execute an ArkTeam pipeline definition locally. No Kubernetes cluster, no Redis, no operator required.

```shell
ark run <team.yaml> [flags]
```
## Flags
| Flag | Default | Description |
|---|---|---|
| --provider | auto | Model provider: auto, anthropic, openai, mock |
| --watch | false | Stream step status and output as each step completes |
| --trace | false | Collect OTel spans in-memory and print a trace tree on completion |
| --mock | false | Shorthand for --provider mock |
| --dry-run | false | Validate the YAML and estimate token cost without making any model calls |
| --no-mcp | false | Skip MCP tool server connections |
| --input | — | Inject an input value: --input key=value. Repeatable. |
| --output | — | Write the team's final output to a file instead of stdout |
| --output json | — | Machine-readable JSON output including step statuses and token counts |
| --flow | — | Select a specific team by name when the YAML contains multiple teams |
| --max-tokens | (from spec) | Override the team-level token budget |
| --timeout | (from spec) | Override spec.timeoutSeconds |
| --log-level | info | Log verbosity: debug, info, warn, error |
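Repeatable --input flags accumulate into a key/value map passed to the team. A minimal sketch of that accumulation, for illustration only (the function name parse_inputs is invented here, not part of ark):

```python
# Illustrative: how repeated "--input key=value" flags fold into one dict.
def parse_inputs(args: list) -> dict:
    inputs = {}
    it = iter(args)
    for arg in it:
        if arg == "--input":
            # Split only on the first "=", so values may contain "=".
            key, _, value = next(it).partition("=")
            inputs[key] = value  # a later flag for the same key overrides an earlier one
    return inputs

print(parse_inputs(["--input", "topic=transformers", "--input", "depth=2"]))
# {'topic': 'transformers', 'depth': '2'}
```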
## Provider auto-detection
When --provider auto (the default), the provider is inferred from the model name in the team spec:
| Model name prefix | Provider | Required env var |
|---|---|---|
| gpt-*, o1-*, o3-* | OpenAI | OPENAI_API_KEY |
| claude-* | Anthropic | ANTHROPIC_API_KEY |
| mock | Built-in mock | None |
| anything else | OpenAI-compatible | OPENAI_API_KEY + OPENAI_BASE_URL |
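The detection amounts to a prefix match on the model name. A sketch of the rules in the table above, for illustration only (resolve_provider is a name invented here; it is not ark's actual implementation):

```python
# Illustrative sketch of the auto-detection table; not ark's real source.
def resolve_provider(model: str) -> str:
    """Map a team-spec model name to a provider."""
    if model == "mock":
        return "mock"  # built-in mock, no credentials needed
    if model.startswith(("gpt-", "o1-", "o3-")):
        return "openai"  # requires OPENAI_API_KEY
    if model.startswith("claude-"):
        return "anthropic"  # requires ANTHROPIC_API_KEY
    # Anything else: OpenAI-compatible, requires OPENAI_API_KEY + OPENAI_BASE_URL.
    return "openai-compatible"

print(resolve_provider("claude-3-opus"))  # anthropic
```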
For Ollama:

```shell
OPENAI_BASE_URL=http://localhost:11434/v1 OPENAI_API_KEY=ollama \
  ark run team.yaml --provider openai --watch
```
## Examples
```shell
# Mock provider — no credentials needed
ark run quickstart.yaml --provider mock --watch --input topic="transformers"

# Ollama
OPENAI_BASE_URL=http://localhost:11434/v1 OPENAI_API_KEY=ollama \
  ark run team.yaml --provider openai --watch

# OpenAI with input override
OPENAI_API_KEY=sk-... \
  ark run research-team.yaml --input topic="consensus algorithms" --watch

# Anthropic
ANTHROPIC_API_KEY=sk-ant-... \
  ark run team.yaml --watch

# Write output to file
ark run team.yaml --provider auto --output ./result.txt

# Machine-readable JSON output
ark run team.yaml --provider mock --output json

# Override token budget
ark run team.yaml --provider auto --max-tokens 4000

# Dry run (estimate only, no API calls)
ark run team.yaml --dry-run --input topic="test"

# Skip MCP tool connections
ark run team.yaml --provider mock --no-mcp

# Select a team by name in a multi-team YAML
ark run multi-team.yaml --flow research-pipeline
```
## --watch output format
```
researcher [running]
researcher [done] 1842 tokens 4.2s
└─ Key findings about transformer architecture...
writer [running]
writer [done] 624 tokens 2.1s
└─ Transformer models revolutionized NLP by...
Flow Succeeded in 6.3s — total: 2466 tokens
```
On failure:
```
researcher [failed]
└─ context deadline exceeded
Flow Failed in 30.0s — step "researcher" error: context deadline exceeded
```
## JSON output format

```json
{
  "status": "succeeded",
  "duration_ms": 6300,
  "total_tokens": 2466,
  "steps": [
    { "name": "researcher", "status": "succeeded", "tokens": 1842, "output": "..." },
    { "name": "writer", "status": "succeeded", "tokens": 624, "output": "..." }
  ]
}
```
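This shape is straightforward to consume from scripts. A sketch parsing the sample payload above with the Python standard library (field names are taken from the sample; anything beyond them would be an assumption):

```python
import json

# The sample JSON payload from the section above.
payload = """
{
  "status": "succeeded",
  "duration_ms": 6300,
  "total_tokens": 2466,
  "steps": [
    { "name": "researcher", "status": "succeeded", "tokens": 1842, "output": "..." },
    { "name": "writer", "status": "succeeded", "tokens": 624, "output": "..." }
  ]
}
"""

result = json.loads(payload)
# Per-step token counts sum to the reported total.
step_tokens = sum(step["tokens"] for step in result["steps"])
print(result["status"], step_tokens)  # succeeded 2466
```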
## Environment variables

| Variable | Description |
|---|---|
| ANTHROPIC_API_KEY | API key for the Anthropic provider |
| OPENAI_API_KEY | API key for the OpenAI provider (or any value for Ollama) |
| OPENAI_BASE_URL | Override the API endpoint (e.g. http://localhost:11434/v1 for Ollama) |
| ARK_PROVIDER | Default provider (overridden by --provider) |
| ARK_LOG_LEVEL | Default log level (overridden by --log-level) |
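Both ARK_* variables act as defaults that the matching flag overrides, so the precedence is flag, then environment, then built-in default. An illustrative sketch of that resolution order (resolve_setting is a name invented here):

```python
import os

# Illustrative precedence: explicit flag > environment variable > built-in default.
def resolve_setting(flag_value, env_var, default):
    if flag_value is not None:
        return flag_value
    return os.environ.get(env_var, default)

os.environ["ARK_PROVIDER"] = "anthropic"
print(resolve_setting(None, "ARK_PROVIDER", "auto"))    # anthropic (env default applies)
print(resolve_setting("mock", "ARK_PROVIDER", "auto"))  # mock (the flag wins)
```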
## Limitations

ark run is a local execution engine, not a full operator. Feature availability compared to a cluster deployment:
| Feature | Status |
|---|---|
| Pipeline mode | Available |
| Dynamic delegation mode | Cluster-only |
| Redis task queue | Not used (in-process queue) |
| Replica management | Not available |
| ArkEvent triggers | Not available |
| ArkMemory | Not available |
| Semantic health checks | Not available |
## See also
- Local Development guide — full iteration workflow
- ark validate — validate without executing
- Quickstart — first run in five minutes