# Providers

ark-operator supports multiple LLM providers through a pluggable `LLMProvider` interface. Built-in providers cover Anthropic and OpenAI (including any OpenAI-compatible endpoint). Custom providers can be registered without modifying operator code.
## Provider auto-detection

When `AGENT_PROVIDER` is not set (or set to `auto`), the agent runtime infers the provider from the model name:
| Model name prefix | Provider | Required env var |
|---|---|---|
| `gpt-*`, `o1-*`, `o3-*` | OpenAI | `OPENAI_API_KEY` |
| `claude-*` | Anthropic | `ANTHROPIC_API_KEY` |
| `mock` | Built-in mock | None |
| anything else | OpenAI-compatible | `OPENAI_API_KEY` + `OPENAI_BASE_URL` |
The *anything else* case covers local models like `llama3.2`, `mistral`, `phi3`, or any model served by an OpenAI-compatible endpoint.
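The detection rules above can be pictured as a simple prefix match. This is a hypothetical re-implementation for illustration only; the operator's actual detection logic lives in the agent runtime and may differ in detail:

```go
package main

import (
	"fmt"
	"strings"
)

// detectProvider mirrors the auto-detection table: known prefixes map to a
// named provider, and anything else falls through to the OpenAI-compatible
// path (which additionally requires OPENAI_BASE_URL).
func detectProvider(model string) string {
	switch {
	case strings.HasPrefix(model, "gpt-"),
		strings.HasPrefix(model, "o1-"),
		strings.HasPrefix(model, "o3-"):
		return "openai"
	case strings.HasPrefix(model, "claude-"):
		return "anthropic"
	case model == "mock":
		return "mock"
	default:
		return "openai-compatible"
	}
}

func main() {
	for _, m := range []string{"gpt-4o", "claude-3-5-sonnet", "mock", "llama3.2"} {
		fmt.Printf("%s -> %s\n", m, detectProvider(m))
	}
}
```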
## Ollama (recommended for local and private deployments)

Ollama serves models locally using an OpenAI-compatible API. No API key is required.
Local development with `ark run`:

```bash
ollama pull llama3.2

OPENAI_BASE_URL=http://localhost:11434/v1 \
OPENAI_API_KEY=ollama \
ark run team.yaml --provider openai --watch
```
In-cluster with Helm:

```bash
helm install ark-operator arkonis/ark-operator \
  --namespace ark-system \
  --create-namespace \
  --set taskQueueURL=redis.ark-system.svc.cluster.local:6379 \
  --set agentExtraEnv[0].name=AGENT_PROVIDER,agentExtraEnv[0].value=openai \
  --set agentExtraEnv[1].name=OPENAI_BASE_URL,agentExtraEnv[1].value=http://ollama.ollama.svc.cluster.local:11434/v1 \
  --set agentExtraEnv[2].name=OPENAI_API_KEY,agentExtraEnv[2].value=ollama
```
Use any model name supported by your Ollama installation in `ArkAgent.spec.model`.
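For reference, an ArkAgent pointing at an Ollama-served model might look like the sketch below. Only `spec.model` is documented above; the `apiVersion`, `kind` group, and other fields shown here are illustrative placeholders, so check the CRD reference for the exact schema:

```yaml
# Illustrative manifest; field names other than spec.model are placeholders.
apiVersion: ark.example.com/v1alpha1
kind: ArkAgent
metadata:
  name: researcher
spec:
  model: llama3.2   # any model name your Ollama installation serves
```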
## OpenAI

```bash
# ark run
OPENAI_API_KEY=sk-... ark run team.yaml --watch

# Helm
helm install ark-operator arkonis/ark-operator \
  --set apiKeys.openaiApiKey=sk-...
```
The provider is auto-detected from model names such as `gpt-4o`, `gpt-4-turbo`, `o1-*`, and `o3-*`.
## Anthropic

```bash
# ark run — change the model to claude-* in your YAML first
ANTHROPIC_API_KEY=sk-ant-... ark run team.yaml --watch

# Helm
helm install ark-operator arkonis/ark-operator \
  --set apiKeys.anthropicApiKey=sk-ant-...
```
The provider is auto-detected for any model name starting with `claude-`.
## Other OpenAI-compatible endpoints

Any server that implements the OpenAI Chat Completions API works. Examples include vLLM, LM Studio, Together AI, Groq, and Azure OpenAI.

```bash
# vLLM
OPENAI_BASE_URL=http://vllm.internal:8000/v1 \
OPENAI_API_KEY=your-key \
ark run team.yaml --provider openai

# Azure OpenAI
OPENAI_BASE_URL=https://my-instance.openai.azure.com/ \
OPENAI_API_KEY=your-azure-key \
ark run team.yaml --provider openai
```
In-cluster, set these via `agentExtraEnv` in the Helm values or inject them into the API keys Secret.
## Overriding provider selection

Set `AGENT_PROVIDER` explicitly to bypass auto-detection:
| Value | Effect |
|---|---|
| `auto` | Auto-detect from model name (default) |
| `anthropic` | Always use the Anthropic provider |
| `openai` | Always use the OpenAI-compatible provider |
| `mock` | Always use the mock provider (returns placeholder responses) |
In-cluster, set this via `agentExtraEnv`:

```bash
helm upgrade ark-operator arkonis/ark-operator \
  --set agentExtraEnv[0].name=AGENT_PROVIDER,agentExtraEnv[0].value=openai
```
## The `LLMProvider` interface

To add a custom provider (e.g. a proprietary API, a new model service, or a test double), implement the `LLMProvider` interface:
```go
type LLMProvider interface {
    RunTask(
        ctx context.Context,
        cfg *config.Config,
        task queue.Task,
        tools []mcp.Tool,
        callTool func(context.Context, string, json.RawMessage) (string, error),
        chunkFn func(string), // nil = no streaming
    ) (string, queue.TokenUsage, error)
}
```
Register it from an `init()` function in your provider package:

```go
func init() {
    providers.Register("myprovider", func() providers.LLMProvider {
        return &MyProvider{}
    })
}
```
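The `providers.Register` call follows the common factory-registry pattern: a package-level map from name to constructor, populated at init time. A minimal self-contained sketch of that pattern (the operator's actual registry is internal and may differ):

```go
package main

import "fmt"

// LLMProvider is elided to a stub here; any interface works with this pattern.
type LLMProvider interface{ Name() string }

// registry maps provider names to factories, populated from init() functions.
var registry = map[string]func() LLMProvider{}

// Register associates a provider name with its factory.
func Register(name string, factory func() LLMProvider) {
	registry[name] = factory
}

// Lookup builds a provider by name, e.g. the value of AGENT_PROVIDER.
func Lookup(name string) (LLMProvider, error) {
	f, ok := registry[name]
	if !ok {
		return nil, fmt.Errorf("unknown provider %q", name)
	}
	return f(), nil
}

type myProvider struct{}

func (m *myProvider) Name() string { return "myprovider" }

func init() {
	Register("myprovider", func() LLMProvider { return &myProvider{} })
}

func main() {
	p, err := Lookup("myprovider")
	if err != nil {
		panic(err)
	}
	fmt.Println(p.Name())
}
```

Because registration happens in `init()`, merely importing the package is enough to make the provider available, which is what makes the blank-import activation below work.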
Blank-import the package in `cmd/main.go` to activate it:

```go
import _ "github.com/my-org/ark-operator-myprovider"
```
Set `AGENT_PROVIDER=myprovider` in your agent pods.
## The mock provider

`--provider mock` (or `AGENT_PROVIDER=mock`) returns deterministic placeholder responses without making any API calls. Use it for:

- Testing DAG wiring and `dependsOn` chains
- CI pipelines that validate team structure without consuming tokens
- Local demos with no credentials

```bash
ark run team.yaml --provider mock --watch
```
Mock responses take the form:

```
[mock response for step "research": topic=LLMs]
```
## See also

- Environment Variables reference: `AGENT_PROVIDER`, `OPENAI_BASE_URL`, and all provider-related vars
- Helm Values reference: `agentExtraEnv`, `apiKeys.*`
- Multi-Model Teams guide: mixing providers in one team