Writing Prompts

System prompts are the primary way to configure agent behavior. ark-operator supports inline prompts, ConfigMap references, and shared fragments via ArkSettings.


Inline prompts

The simplest approach: write the prompt directly in the ArkAgent spec.

spec:
  model: llama3.2
  systemPrompt: |
    You are a data extraction assistant. Read the provided text and extract
    structured information.

    Return only a valid JSON object — no explanation, no markdown, no code fences.
    If a field cannot be determined, set it to null.

This works well for short prompts. For anything longer than a few lines, use a ConfigMap.


ConfigMap references

Keep long prompts in a dedicated file rather than inline YAML. The operator reads the ConfigMap at reconcile time and restarts agent pods automatically when it changes.

Create a prompt file prompt.txt:

You are a data extraction assistant. Your job is to read the provided
text carefully and extract structured information from it.

Return only a valid JSON object — no explanation, no markdown, no code
fences. The JSON must match the schema the caller expects exactly.

If a field cannot be determined from the text, set it to null.
Never guess or invent values.

Create the ConfigMap:

kubectl create configmap extractor-prompt \
  --from-file=system.txt=./prompt.txt \
  --namespace my-namespace

Reference it from the ArkAgent:

spec:
  model: llama3.2
  systemPromptRef:
    configMapKeyRef:
      name: extractor-prompt
      key: system.txt

To update the prompt, edit prompt.txt and reapply:

kubectl create configmap extractor-prompt \
  --from-file=system.txt=./prompt.txt \
  --namespace my-namespace \
  --dry-run=client -o yaml | kubectl apply -f -

The operator detects the ConfigMap change and restarts agent pods. No YAML changes, no kubectl rollout restart needed.


Shared fragments with ArkSettings

When multiple agents share the same persona, output format, or constraint, define them once in an ArkSettings resource:

apiVersion: arkonis.dev/v1alpha1
kind: ArkSettings
metadata:
  name: analyst-base
  namespace: my-namespace
spec:
  temperature: "0.3"
  outputFormat: structured-json
  promptFragments:
    persona: "You are an expert analyst with 20 years of experience."
    outputRules: |
      Always cite your sources.
      Flag uncertainty explicitly with "UNCERTAIN:".
      Never speculate beyond what the data supports.

Reference it from any ArkAgent in the same namespace:

spec:
  model: llama3.2
  systemPrompt: "Analyze the provided dataset and identify key trends."
  configRef: analyst-base

The operator merges fragments in order: persona is prepended, the agent’s own systemPrompt is in the middle, and outputRules is appended.
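Given the analyst-base settings above, the effective system prompt is the three pieces concatenated in that order. The exact separator is an operator implementation detail; blank lines are shown here as an assumption:

```
You are an expert analyst with 20 years of experience.

Analyze the provided dataset and identify key trends.

Always cite your sources.
Flag uncertainty explicitly with "UNCERTAIN:".
Never speculate beyond what the data supports.
```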


The ark deploy workflow

For managing prompts as part of a GitOps workflow, use ark deploy to apply your YAML + ConfigMaps in the correct dependency order:

# Deploy everything in a multi-doc YAML file
ark deploy team.yaml -n my-namespace

ark deploy applies ConfigMaps before ArkAgents, ArkAgents before ArkTeams, and ArkTeams before ArkEvents — so dependencies are always satisfied.
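A minimal multi-doc file for this workflow might look like the sketch below. Document order does not matter, since ark deploy orders by kind; the ArkAgent apiVersion is assumed to match ArkSettings (arkonis.dev/v1alpha1):

```yaml
# team.yaml — ark deploy applies the ConfigMap before the ArkAgent
apiVersion: v1
kind: ConfigMap
metadata:
  name: extractor-prompt
data:
  system.txt: |
    You are a data extraction assistant. Read the provided text and
    extract structured information. Return only a valid JSON object.
---
apiVersion: arkonis.dev/v1alpha1   # assumed; check your installed CRDs
kind: ArkAgent
metadata:
  name: extractor
spec:
  model: llama3.2
  systemPromptRef:
    configMapKeyRef:
      name: extractor-prompt
      key: system.txt
```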


Prompt engineering tips for agents

Be explicit about output format. Agents run in a tool-use loop; the model may take several rounds before producing its final output. If you need JSON, say so explicitly:

Return only a valid JSON object. Do not include any explanation, markdown, or code fences.
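Even with this instruction, it's worth validating the reply on the caller side before trusting it. A minimal sketch in Python (not part of ark-operator; stripping stray code fences is a defensive assumption for models that ignore the instruction):

```python
import json

def parse_agent_json(reply: str) -> dict:
    """Parse an agent reply that should be a bare JSON object.

    Strips accidental markdown code fences before parsing, then
    raises ValueError if the result is not a JSON object.
    """
    text = reply.strip()
    if text.startswith("```"):
        # Drop the opening fence line (e.g. ```json) and the closing fence.
        text = "\n".join(
            line for line in text.splitlines() if not line.startswith("```")
        )
    obj = json.loads(text)
    if not isinstance(obj, dict):
        raise ValueError("expected a JSON object")
    return obj
```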

Use persona framing. Agents respond better to role-based instructions:

You are a senior software engineer specializing in distributed systems.

Separate concerns across roles. In a pipeline, each role should have one job. A researcher that also tries to summarize and format will underperform compared to dedicated roles for each task.

Keep prompts short for fast tasks. Token budget is shared between the system prompt and the task itself. A 2000-token system prompt leaves less budget for the actual task output.
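A rough way to check how much budget a prompt consumes is the common ~4 characters per token heuristic. This is an approximation only; real tokenizers vary by model, so use the model's own tokenizer for anything budget-critical:

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

system_prompt = (
    "You are a data extraction assistant. Read the provided text "
    "and extract structured information."
)
print(estimate_tokens(system_prompt), "tokens (approx.)")
```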

Test with --mock first. Before burning tokens, validate your template wiring with ark run team.yaml --provider mock --watch. This checks that dependsOn chains are correct and template expressions resolve.



Apache 2.0 · ARKONIS