Complete reference for AWF workflow YAML syntax.

Basic Structure

name: my-workflow
version: "1.0.0"
description: Workflow description

inputs:
  - name: file_path
    type: string
    required: true

states:
  initial: step1

  step1:
    type: step
    command: echo "Hello"
    on_success: step2
    on_failure: error

  step2:
    type: step
    command: echo "World"
    on_success: done
    on_failure: error

  done:
    type: terminal
    status: success

  error:
    type: terminal
    status: failure

Workflow Fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| name | string | Yes | Workflow identifier |
| version | string | No | Semantic version |
| description | string | No | Human-readable description |
| inputs | array | No | Input parameter definitions |
| states | object | Yes | State definitions |
| states.initial | string | Yes | Name of the starting state |
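
Input definitions support an optional default alongside name, type, and required; a minimal sketch (the input names are illustrative, and default mirrors the child-workflow example later in this reference):

```yaml
inputs:
  - name: environment
    type: string
    required: false
    default: staging   # used when the caller omits this input
  - name: retries
    type: integer
    default: 3
```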

State Types

| Type | Description |
|------|-------------|
| step | Execute a command |
| agent | Invoke an AI agent (Claude, Codex, Gemini, etc.) |
| terminal | End state with success/failure status |
| parallel | Execute multiple steps concurrently |
| for_each | Iterate over a list of items |
| while | Repeat until condition is false |
| operation | Execute a declarative plugin operation (e.g., HTTP, GitHub, notifications) |
| call_workflow | Invoke another workflow as a sub-workflow |

Step State

Execute a shell command.

my_step:
  type: step
  command: |
    echo "Processing {{.inputs.file}}"
  dir: /tmp/workdir
  timeout: 30
  on_success: next_step
  on_failure: error
  continue_on_error: false
  retry:
    max_attempts: 3
    backoff: exponential

Step Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| command | string | - | Shell command to execute (mutually exclusive with script_file); supports local-before-global resolution for AWF path variables |
| script_file | string | - | Path to external shell script file (mutually exclusive with command); supports local-before-global resolution |
| dir | string | cwd | Working directory (supports interpolation and local-before-global resolution) |
| timeout | int or string | 0 | Execution timeout — integer seconds (30) or Go duration string ("1m30s", "500ms"). 0 = no timeout |
| on_success | string | - | Next state on success (exit code 0) |
| on_failure | string or object | - | Next state on failure — string (named terminal ref) or inline object (see Inline Error Shorthand) |
| continue_on_error | bool | false | Always follow on_success regardless of exit code |
| retry | object | - | Retry configuration |
| transitions | array | - | Conditional transitions |

Timeout syntax: The timeout field accepts either an integer (seconds) or a Go duration string. Examples: 30 (30 seconds), "1m30s" (90 seconds), "500ms" (half second), "2h" (2 hours). Duration strings support: ns, us/µs, ms, s, m, h.
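
Both accepted forms can be sketched side by side (step names and commands are illustrative):

```yaml
quick_check:
  type: step
  command: curl -s https://example.com/health
  timeout: 90          # integer: seconds
  on_success: done

slow_build:
  type: step
  command: make release
  timeout: "1m30s"     # Go duration string: also 90 seconds
  on_success: done
```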

Shell Execution

Commands are executed using the user’s preferred shell, determined at runtime as follows:

  1. Preferred Shell Detection — AWF reads the $SHELL environment variable to detect the user’s configured shell (e.g., /bin/bash, /bin/zsh)
  2. Fallback — If $SHELL is unset, relative, or points to a non-existent binary, AWF falls back to /bin/sh
  3. Command Invocation — The selected shell is invoked with the -c flag to execute the workflow command

This ensures that bash-dependent syntax (arrays, [[, process substitution, {a..z} brace expansion, etc.) works on all systems, including Debian/Ubuntu where /bin/sh is dash.

Example Behavior:

  • On macOS/Arch Linux (where /bin/sh is bash-like):

    $ echo $SHELL
    /bin/bash
    $ awf run my-workflow
    # Uses /bin/bash → bash-specific syntax works
  • On Debian/Ubuntu (where /bin/sh is dash):

    $ echo $SHELL
    /bin/bash  # User has bash installed
    $ awf run my-workflow
    # Uses /bin/bash (detected from $SHELL) → bash-specific syntax works

Compatibility Notes:

  • POSIX-only commands — Work in any shell and are the most portable (recommended for external scripts)
  • Bash-specific syntax — Requires /bin/bash or compatible shell; check your $SHELL setting
  • Shell-specific features — [[, arrays, process substitution, and ANSI-C quoting ($'...') require bash or zsh

To verify which shell AWF will use:

echo $SHELL  # Shows the detected shell
awf run my-workflow --dry-run  # Displays execution plan without running

External Script Files

Instead of inlining shell commands in YAML, you can load commands from external script files using the script_file field:

deploy:
  type: step
  script_file: scripts/deploy.sh
  timeout: 60
  on_success: verify
  on_failure: error

File: scripts/deploy.sh

#!/bin/sh
echo "Deploying version {{.inputs.version}} to {{.inputs.env}}"
kubectl apply -f manifests/
kubectl rollout status deployment/app

Mutual Exclusivity

You cannot specify both command and script_file on the same step:

# ❌ Invalid: both command and script_file
step:
  type: step
  command: echo "hello"
  script_file: scripts/hello.sh  # ERROR: only one allowed

# ✅ Valid: command only
step:
  type: step
  command: echo "hello"

# ✅ Valid: script_file only
step:
  type: step
  script_file: scripts/hello.sh

Path Resolution

Script file paths are resolved in this order:

  1. Absolute paths — used as-is:

    script_file: /opt/company/scripts/deploy.sh
  2. Home directory expansion — tilde is expanded:

    script_file: ~/scripts/build.sh
  3. Relative to workflow directory — resolved against the workflow file’s location:

    script_file: scripts/test.sh  # Resolves to <workflow_dir>/scripts/test.sh
  4. XDG scripts directory with local override — via template interpolation with local-before-global resolution:

    script_file: "{{.awf.scripts_dir}}/checks/lint.sh"
    # Checks in order:
    # 1. <workflow_dir>/scripts/checks/lint.sh (local override)
    # 2. ~/.config/awf/scripts/checks/lint.sh (global fallback)

Local-Before-Global Resolution

When using {{.awf.scripts_dir}} or {{.awf.prompts_dir}} in script_file, command, or dir fields, AWF prioritizes local project files over global ones:

  • If local file exists at <workflow_dir>/scripts/<suffix> → use it
  • If local file missing → fall back to global ~/.config/awf/scripts/<suffix>

This enables shared scripts at the global level while allowing projects to override them locally:

# Workflow at: ~/myproject/.awf/workflows/deploy.yaml
states:
  deploy:
    type: step
    script_file: "{{.awf.scripts_dir}}/deploy.sh"
    on_success: verify

Resolution order:

  1. Check ~/myproject/.awf/workflows/scripts/deploy.sh (local override)
  2. Check ~/.config/awf/scripts/deploy.sh (global shared)

Using AWF Paths in Commands

The same local-before-global resolution applies when using {{.awf.scripts_dir}} or {{.awf.prompts_dir}} inside command: fields. This enables reusing shared helper scripts or templates from both command invocations and script files:

# Both of these use the same local-before-global resolution:

deploy_via_script:
  type: step
  script_file: "{{.awf.scripts_dir}}/deploy.sh"
  on_success: done

deploy_via_command:
  type: step
  command: "source {{.awf.scripts_dir}}/deploy.sh && run_deploy"
  on_success: done

Both resolve to the local scripts/deploy.sh if it exists, otherwise fall back to the global ~/.config/awf/scripts/deploy.sh.

Template Interpolation

Script file paths support template interpolation before resolution. Loaded script contents also undergo template interpolation with full access to workflow context variables:

build:
  type: step
  script_file: scripts/build.sh
  on_success: done

File: scripts/build.sh

#!/bin/sh
echo "Building {{.inputs.target}}"
cd {{.inputs.project_dir}}
make build
echo "Build output: {{.states.prepare.Output}}"

Dry Run

In --dry-run mode, the resolved script file path is displayed along with the loaded and interpolated content:

awf run deploy --dry-run
# Shows resolved script path and interpolated content

Shebang Support

Script files with a shebang line (#!...) are executed directly via the kernel’s interpreter dispatch, rather than through $SHELL -c. This allows you to use non-shell scripts (Python, Ruby, Perl, etc.) and shell scripts in different variants (bash, zsh, etc.) within the same workflow.

How it works:

  • If a script file starts with #!, AWF writes it to a temporary file and executes it directly
  • The kernel reads the shebang and launches the appropriate interpreter
  • Scripts without a shebang fall back to the user’s shell ($SHELL -c) for backward compatibility

Shebang Examples:

#!/usr/bin/env python3
# scripts/analyze.py
import sys
data = "{{.inputs.data}}"
print(f"Analyzing: {data}")

Workflow:

analyze:
  type: step
  script_file: scripts/analyze.py
  timeout: 30
  on_success: done

AWF detects the #!/usr/bin/env python3 shebang and executes the script via Python, not the shell.

Bash-specific script:

#!/bin/bash
# scripts/deploy.sh
set -euo pipefail
echo "Deploying to {{.inputs.env}}"
[[ -f config.yaml ]] && echo "Config found"
kubectl apply -f manifests/

Even if your $SHELL is zsh, this bash script executes via bash (not zsh) because of the shebang.

Shell script without shebang (legacy):

# scripts/legacy.sh
# No shebang line
echo "Running in {{.inputs.shell}}"

This script executes via $SHELL -c (backward-compatible behavior).

Supported Shebang Formats:

All standard shebang formats are supported:

#!/bin/sh                           # Absolute path
#!/usr/bin/env python3              # env with single argument
#!/usr/bin/env -S python3 -u        # env with multiple arguments (env -S split-string; GNU/BSD extension)
#!/usr/bin/python3                  # Direct interpreter path

Temporary File Cleanup:

Temporary files are automatically cleaned up after execution, even if the script fails or execution is cancelled.

Error Handling

| Error | Cause | Exit Code |
|-------|-------|-----------|
| File not found | Script file path does not exist | 1 |
| Permission denied | Script file is not readable | 1 |
| File too large | Script file exceeds 1MB size limit | 1 |
| Interpreter not found | Shebang specifies non-existent interpreter | 127 |

Error messages include the resolved file path for easy debugging.


Agent State

Invoke an AI agent (Claude, Codex, Gemini, OpenCode) with a prompt template.

Basic Agent Step

analyze:
  type: agent
  provider: claude
  prompt: |
    Analyze this code for issues:
    {{.inputs.code}}
  options:
    model: claude-sonnet-4-20250514
  timeout: 120
  on_success: review
  on_failure: error

Conversation Mode

Enable multi-turn conversations with automatic context management:

refine_code:
  type: agent
  provider: claude
  mode: conversation
  system_prompt: |
    You are a code reviewer. Iterate until code is approved.
    Say "APPROVED" when done.
  initial_prompt: |
    Review this code:
    {{.inputs.code}}
  options:
    model: claude-sonnet-4-20250514
  conversation:
    max_turns: 10
    max_context_tokens: 100000
    strategy: sliding_window
    stop_condition: "response contains 'APPROVED'"
  on_success: deploy
  on_failure: error

Agent Options

| Option | Type | Required | Description |
|--------|------|----------|-------------|
| provider | string | Yes | Agent provider: claude, codex, gemini, opencode, openai_compatible |
| mode | string | No | Set to conversation for multi-turn mode |
| prompt | string | Yes* | Prompt template (supports {{.inputs.*}} and {{.states.*}} interpolation) |
| prompt_file | string | No* | Path to external prompt template file (mutually exclusive with prompt) |
| system_prompt | string | No | System message (for conversation mode, preserved across turns) |
| initial_prompt | string | No* | First user message (for conversation mode) |
| output_format | string | No | Post-processing format: json (strip fences + validate JSON) or text (strip fences only) |
| conversation | object | No | Conversation configuration (required if mode=conversation) |
| options | map | No | Provider-specific options (varies by provider — see Agent Steps for each provider’s supported options) |
| timeout | int or string | No | Execution timeout — integer seconds (30) or Go duration string ("1m30s", "500ms"). 0 = no timeout |
| on_success | string | No | Next state on success |
| on_failure | string or object | No | Next state on failure — string (named terminal ref) or inline object (see Inline Error Shorthand) |
| retry | object | No | Retry configuration (same as step retry) |

* Use prompt or prompt_file for single-turn mode (mutually exclusive), initial_prompt for conversation mode. See Agent Steps - External Prompt Files for prompt_file details.
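
The output_format option can be sketched as follows (step and input names are hypothetical); per the table above, json strips code fences and validates the result, while text only strips fences:

```yaml
extract_metadata:
  type: agent
  provider: claude
  prompt: |
    Return a JSON object with keys "name" and "score" for:
    {{.inputs.file}}
  output_format: json   # strip fences + validate JSON
  options:
    model: claude-sonnet-4-20250514
  on_success: use_result
  on_failure: error
```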

Conversation Configuration

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| max_turns | int | 10 | Maximum conversation turns |
| max_context_tokens | int | model limit | Token budget for conversation |
| strategy | string | - | Context window strategy: sliding_window, summarize (not yet implemented), truncate_middle (not yet implemented). Omitting means no context management is applied |
| stop_condition | string | - | Expression to exit early |
| continue_from | string | - | Step name to continue conversation from — resumes prior step’s session |
| inject_context | string | - | Additional context to inject into user prompts on turns 2+. Supports template variables ({{.states.*}}, {{.inputs.*}}, etc.). Re-interpolated per turn |
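
A hedged sketch combining continue_from and inject_context (the step names review and run_tests are hypothetical):

```yaml
follow_up:
  type: agent
  provider: claude
  mode: conversation
  initial_prompt: "Address the remaining review comments."
  conversation:
    continue_from: review   # resume the session started by the 'review' step
    max_turns: 5
    inject_context: |
      Latest test output: {{.states.run_tests.Output}}
  on_success: done
  on_failure: error
```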

Available Providers

| Provider | Binary/Endpoint | Conversation Support | Description |
|----------|-----------------|----------------------|-------------|
| claude | claude CLI | Multi-turn (session resume via -r) | Anthropic Claude CLI |
| codex | codex CLI | Multi-turn (session resume via resume) | OpenAI Codex CLI |
| gemini | gemini CLI | Multi-turn (session resume via --resume) | Google Gemini CLI |
| opencode | opencode CLI | Multi-turn (session resume via -s) | OpenCode CLI |
| openai_compatible | HTTP API | Full multi-turn (messages array) | Chat Completions API (OpenAI, Ollama, vLLM, Groq) |

Conversation mode and providers: All providers support multi-turn conversations. CLI-based providers (claude, codex, gemini, opencode) use native session resume flags to maintain context across turns — session IDs are extracted from CLI output after the first turn and passed on subsequent turns. If session ID extraction fails, the provider falls back to stateless mode gracefully. openai_compatible maintains full conversation history via the Chat Completions API messages array.

Agent Output

Agent responses are captured in the step state:

| Field | Type | Description |
|-------|------|-------------|
| {{.states.step_name.Output}} | string | Raw response text |
| {{.states.step_name.Response}} | object | Parsed JSON (if response is valid JSON) |
| {{.states.step_name.TokensUsed}} | int | Tokens consumed by this agent step |

Multi-Turn Conversations

Recommended: Use conversation mode for iterative workflows:

review:
  type: agent
  provider: claude
  mode: conversation
  system_prompt: "You are a code reviewer."
  initial_prompt: "Review: {{.inputs.code}}"
  conversation:
    max_turns: 10
    stop_condition: "response contains 'APPROVED'"
  on_success: done

Legacy: Chain multiple agent steps with state passing:

states:
  initial: ask_question

  ask_question:
    type: agent
    provider: claude
    prompt: "Initial question here"
    on_success: follow_up

  follow_up:
    type: agent
    provider: claude
    prompt: |
      Based on your previous response:
      {{.states.ask_question.Output}}

      Please elaborate on point 3.
    on_success: done

  done:
    type: terminal

See Also: Conversation Mode Guide for detailed examples and best practices.

OpenAI-Compatible Provider

For any backend implementing the Chat Completions API (OpenAI, Ollama, vLLM, Groq, LM Studio), use openai_compatible. Unlike CLI-based providers, this sends HTTP requests directly — no CLI tool installation required.

analyze:
  type: agent
  provider: openai_compatible
  prompt: "Analyze: {{.inputs.data}}"
  options:
    base_url: http://localhost:11434/v1
    model: llama3
  timeout: 60
  on_success: next

Required options: base_url (root API URL, /chat/completions appended automatically) and model. Optional: api_key (falls back to OPENAI_API_KEY env var), temperature (0-2), max_completion_tokens (preferred; max_tokens accepted as deprecated fallback), top_p, system_prompt.
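
A sketch using the optional settings alongside the required ones (the endpoint and model names are illustrative):

```yaml
summarize:
  type: agent
  provider: openai_compatible
  prompt: "Summarize: {{.inputs.report}}"
  options:
    base_url: https://api.groq.com/openai/v1
    model: llama-3.1-70b-versatile
    api_key: "{{.env.GROQ_API_KEY}}"   # omit to fall back to OPENAI_API_KEY
    temperature: 0.2
    max_completion_tokens: 1024
  timeout: 60
  on_success: next
  on_failure: error
```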

See Also: Agent Steps Guide for detailed examples and backend configurations.


Operation State

Execute a declarative plugin operation. Operations provide structured access to external services (e.g., GitHub) without shell scripting. Inputs are validated against the operation schema and outputs are accessible via {{.states.step_name.Response.field}}.

Basic Operation Step

get_issue:
  type: operation
  operation: github.get_issue
  inputs:
    number: "{{.inputs.issue_number}}"
  on_success: process
  on_failure: error

Operation Options

| Option | Type | Required | Description |
|--------|------|----------|-------------|
| operation | string | Yes | Operation name (e.g., github.get_issue) |
| inputs | map | Varies | Input parameters (validated against operation schema) |
| on_success | string | No | Next state on success |
| on_failure | string or object | No | Next state on failure — string (named terminal ref) or inline object |
| retry | object | No | Retry configuration (same as step retry) |

Operation Output

Operation results are captured as structured data:

| Field | Type | Description |
|-------|------|-------------|
| {{.states.step_name.Output}} | string | Raw JSON response |
| {{.states.step_name.Response.field}} | any | Parsed field from structured output |

Output Interpolation

Chain operations by referencing previous step outputs:

states:
  initial: get_issue

  get_issue:
    type: operation
    operation: github.get_issue
    inputs:
      number: "{{.inputs.issue_number}}"
    on_success: show_title
    on_failure: error

  show_title:
    type: step
    command: echo "Issue: {{.states.get_issue.Response.title}}"
    on_success: done
    on_failure: error

  done:
    type: terminal
    status: success

  error:
    type: terminal
    status: failure

GitHub Operations

AWF includes a built-in GitHub plugin with 8 declarative operations. Authentication is handled automatically via gh CLI or GITHUB_TOKEN environment variable. The repository is auto-detected from git remote when the repo input is omitted.

Issue & PR Operations

| Operation | Description | Required Inputs | Outputs |
|-----------|-------------|-----------------|---------|
| github.get_issue | Retrieve issue data | number | number, title, body, state, labels |
| github.get_pr | Retrieve pull request data | number | number, title, body, state, headRefName, baseRefName, mergeable, mergedAt, labels |
| github.create_issue | Create a new issue | title | number, url |
| github.create_pr | Create a new pull request | title, head, base | number, url, already_exists |
| github.add_labels | Add labels to issue or PR | number, labels | labels |
| github.add_comment | Add a comment | number, body | comment_id, url |
| github.list_comments | List comments | number | comments, total |

Common Optional Inputs

All GitHub operations accept these optional inputs:

| Input | Type | Description |
|-------|------|-------------|
| repo | string | Repository in owner/repo format (auto-detected from git remote if omitted) |
| fields | array | Fields to include in output (limits data returned, supported by get operations) |

Examples

Retrieve an issue:

get_issue:
  type: operation
  operation: github.get_issue
  inputs:
    number: 42
  on_success: next
  on_failure: error

Create a pull request:

create_pr:
  type: operation
  operation: github.create_pr
  inputs:
    title: "feat: add login page"
    head: feature/login
    base: main
    body: "Implements the login UI"
    draft: true
  on_success: next
  on_failure: error

Add labels to an issue:

label_issue:
  type: operation
  operation: github.add_labels
  inputs:
    number: "{{.inputs.issue_number}}"
    labels: ["bug", "priority-high"]
  on_success: done
  on_failure: error
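
Use the optional repo and fields inputs together (a sketch; the repository name is hypothetical):

```yaml
get_issue_slim:
  type: operation
  operation: github.get_issue
  inputs:
    repo: octocat/hello-world   # explicit repo instead of auto-detection
    number: 42
    fields: ["title", "state"]  # limit the data returned
  on_success: next
  on_failure: error
```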

Batch Operations

Execute multiple GitHub operations concurrently using github.batch. Batch operations support configurable concurrency and failure strategies.

label_multiple:
  type: operation
  operation: github.batch
  inputs:
    strategy: best_effort
    concurrency: 3
    operations:
      - name: github.add_labels
        number: 1
        labels: ["reviewed"]
      - name: github.add_labels
        number: 2
        labels: ["reviewed"]
      - name: github.add_labels
        number: 3
        labels: ["reviewed"]
  on_success: done
  on_failure: error

Batch Inputs

| Input | Type | Default | Description |
|-------|------|---------|-------------|
| operations | array | - | Array of operation definitions (each with name and operation-specific inputs) |
| strategy | string | best_effort | Execution strategy |
| concurrency | int | 3 | Maximum concurrent operations |

Batch Strategies

| Strategy | Description |
|----------|-------------|
| all_succeed | All operations must succeed; cancels remaining on first failure |
| any_succeed | Succeed if at least one operation succeeds |
| best_effort | Complete all operations, collect all results regardless of failures |

Batch Outputs

| Output | Type | Description |
|--------|------|-------------|
| total | int | Total operations attempted |
| succeeded | int | Successfully completed count |
| failed | int | Failed operation count |
| results | array | Individual operation results |
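
Batch outputs follow the standard operation output convention, so a later step can report them; a sketch assuming the label_multiple step from the example above:

```yaml
report:
  type: step
  command: |
    echo "Batch: {{.states.label_multiple.Response.succeeded}}/{{.states.label_multiple.Response.total}} succeeded"
  on_success: done
  on_failure: error
```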

Notification Operations

AWF includes a built-in notification provider with a single notify.send operation that dispatches to two backends. See Plugins - Built-in Notification Plugin for configuration details.

notify.send

| Input | Type | Required | Description |
|-------|------|----------|-------------|
| backend | string | Yes | Backend: desktop, webhook |
| message | string | Yes | Notification message body |
| title | string | No | Notification title (defaults to “AWF Workflow”) |
| priority | string | No | Priority: low, default, high |
| webhook_url | string | No | Webhook URL (required for webhook backend) |

Outputs: backend, status, response

Examples

Desktop notification after a build:

states:
  initial: build

  build:
    type: step
    command: make build
    on_success: notify
    on_failure: error

  notify:
    type: operation
    operation: notify.send
    inputs:
      backend: desktop
      title: "Build Complete"
      message: "{{.workflow.name}} finished in {{.workflow.duration}}"
    on_success: done
    on_failure: error

  done:
    type: terminal
    status: success

  error:
    type: terminal
    status: failure

Generic webhook (ntfy, Slack, Discord, Teams, PagerDuty, etc.):

notify_webhook:
  type: operation
  operation: notify.send
  inputs:
    backend: webhook
    webhook_url: "https://example.com/hooks/builds"
    message: "{{.workflow.name}} completed"
  on_success: done
  on_failure: error

HTTP Operations

AWF includes a built-in HTTP operation provider for declarative REST API calls. The http.request operation supports GET, POST, PUT, and DELETE with configurable timeout and response capture. See Plugins - Built-in HTTP Operation for configuration details.

http.request

| Input | Type | Required | Description |
|-------|------|----------|-------------|
| url | string | Yes | HTTP endpoint URL (must start with http:// or https://) |
| method | string | Yes | HTTP method: GET, POST, PUT, DELETE (case-insensitive) |
| headers | object | No | Custom headers as key-value pairs |
| body | string | No | Request body (for POST/PUT) |
| timeout | integer | No | Per-request timeout in seconds (default: 30) |
| retryable_status_codes | array | No | Status codes that signal retryable failures (e.g., [429, 502, 503]) |

Outputs: status_code, body, headers, body_truncated

Examples

Simple GET request:

fetch_data:
  type: operation
  operation: http.request
  inputs:
    method: GET
    url: "https://api.example.com/users/{{.inputs.user_id}}"
    headers:
      Authorization: "Bearer {{.inputs.api_token}}"
      Accept: "application/json"
    timeout: 10
  on_success: process
  on_failure: error

POST with JSON body and retry:

create_resource:
  type: operation
  operation: http.request
  inputs:
    method: POST
    url: "https://api.example.com/resources"
    headers:
      Content-Type: "application/json"
      Authorization: "Bearer {{.inputs.api_token}}"
    body: '{"name": "{{.inputs.resource_name}}"}'
    timeout: 15
    retryable_status_codes: [429, 502, 503]
  retry:
    max_attempts: 3
    backoff: exponential
    initial_delay: 2s
  on_success: done
  on_failure: error

Access response fields in subsequent steps:

states:
  initial: fetch_user

  fetch_user:
    type: operation
    operation: http.request
    inputs:
      method: GET
      url: "https://api.example.com/users/1"
    on_success: show_result
    on_failure: error

  show_result:
    type: step
    command: |
      echo "Status: {{.states.fetch_user.Response.status_code}}"
      echo "Body: {{.states.fetch_user.Response.body}}"
      echo "Content-Type: {{.states.fetch_user.Response.headers.Content-Type}}"
    on_success: done
    on_failure: error

  done:
    type: terminal
    status: success

  error:
    type: terminal
    status: failure

DELETE request (no body):

delete_resource:
  type: operation
  operation: http.request
  inputs:
    method: DELETE
    url: "https://api.example.com/resources/{{.inputs.resource_id}}"
    headers:
      Authorization: "Bearer {{.inputs.api_token}}"
  on_success: done
  on_failure: error

Terminal State

End the workflow execution.

done:
  type: terminal
  status: success

error:
  type: terminal
  status: failure

Terminal Options

| Option | Type | Values | Description |
|--------|------|--------|-------------|
| status | string | success, failure | Terminal status |
| message | string | - | Terminal message (displayed in output, supports template interpolation) |
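
The message option on a named terminal state can be sketched as (message wording is illustrative):

```yaml
done:
  type: terminal
  status: success
  message: "Deployed {{.inputs.version}} successfully"
```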

Inline Error Shorthand

Instead of defining separate named terminal states, you can specify an inline error object directly on on_failure:

build:
  type: step
  command: go build ./cmd/...
  on_success: test
  on_failure: {message: "Build failed"}

test:
  type: step
  command: go test ./...
  on_success: done
  on_failure: {message: "Tests failed", status: 2}

done:
  type: terminal
  status: success

When an inline error is triggered, AWF automatically synthesizes an anonymous terminal state with the specified message and status (default: 1).

Inline Error Syntax

The on_failure field accepts either:

  • String (named terminal reference, backward compatible): on_failure: error_terminal
  • Object (inline error, F066): on_failure: {message: "...", status: 3}

Inline Error Fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| message | string | Yes | Terminal message (can use template interpolation: {{.inputs.*}}, {{.states.*}}, {{.env.*}}) |
| status | int | No | Exit code (default: 1 for failure) |

Template Interpolation in Messages

Inline error messages support full template interpolation, including references to step outputs:

build:
  type: step
  command: make build
  on_failure: {message: "Build failed: {{.states.build.Output}}"}

test:
  type: step
  command: go test ./...
  on_failure: {message: "Tests failed in {{.inputs.environment}}", status: 5}

Backward Compatibility

Existing workflows using named terminal references continue to work unchanged:

# Still works — routes to named terminal state
on_failure: error_terminal

You can mix inline errors and named references in the same workflow:

build:
  type: step
  command: make build
  on_failure: {message: "Build failed"}  # Inline error

test:
  type: step
  command: npm test
  on_failure: error_terminal           # Named reference

Parallel State

Execute multiple steps concurrently. Branch children are defined as separate states and referenced by name in the parallel field.

parallel_build:
  type: parallel
  parallel:
    - lint
    - test
    - build
  strategy: all_succeed
  max_concurrent: 3
  on_success: deploy
  on_failure: error

lint:
  type: step
  command: golangci-lint run

test:
  type: step
  command: go test ./...

build:
  type: step
  command: go build ./cmd/...

Branch children (lint, test, build above) do not need on_success/on_failure or transitions — the parallel executor controls flow after each branch completes. If provided, transitions are accepted but ignored at runtime.

Parallel Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| parallel | array | - | List of step names to execute concurrently |
| strategy | string | all_succeed | Execution strategy |
| max_concurrent | int | unlimited | Maximum concurrent steps |
| on_success | string | - | Next state when all branches complete successfully |
| on_failure | string or object | - | Next state on branch failure — string (named terminal ref) or inline object |

Parallel Strategies

| Strategy | Description |
|----------|-------------|
| all_succeed | All steps must succeed, cancel remaining on first failure |
| any_succeed | Succeed if at least one step succeeds |
| best_effort | Collect all results, never cancel early |

Accessing Parallel Results

# Branch children are top-level states — access their output directly
command: echo "{{.states.lint.Output}}"

For-Each Loop

Iterate over a list of items.

process_files:
  type: for_each
  items: '["a.txt", "b.txt", "c.txt"]'
  max_iterations: 100
  break_when: "states.process_single.ExitCode != 0"
  body:
    - process_single
  on_complete: aggregate

process_single:
  type: step
  command: |
    echo "Processing {{.loop.Item}} ({{.loop.Index1}}/{{.loop.Length}})"
  on_success: process_files

For-Each Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| items | string | - | Template expression or literal JSON array |
| body | array | - | List of step names to execute each iteration |
| max_iterations | int/string | 100 | Safety limit (max: 10000). Supports template interpolation and arithmetic expressions (+, -, *, /, %) |
| break_when | string | - | Expression evaluated at runtime; loop exits when condition is true |
| on_complete | string | - | Next state after loop completes |

Loop Context Variables

| Variable | Description |
|----------|-------------|
| {{.loop.Item}} | Current item value |
| {{.loop.Index}} | 0-based iteration index |
| {{.loop.Index1}} | 1-based iteration index |
| {{.loop.First}} | True on first iteration |
| {{.loop.Last}} | True on last iteration |
| {{.loop.Length}} | Total items count |
| {{.loop.Parent}} | Parent loop context (nested loops) |

Dynamic Items

Items can come from a template expression:

items: "{{.inputs.files}}"

Dynamic Max Iterations

The max_iterations field supports interpolation and arithmetic expressions:

# From input parameter
max_iterations: "{{.inputs.retry_count}}"

# From environment variable
max_iterations: "{{.env.MAX_RETRIES}}"

# Pre-computed value from input
max_iterations: "{{.inputs.total_retries}}"

Supported arithmetic operators: +, -, *, /, %

Dynamic values are resolved at loop initialization time. Validation ensures:

  • Result is a positive integer
  • Value does not exceed 10000 (safety limit)

Use awf validate to detect undefined variables before runtime.


While Loop

Repeat until condition becomes false.

poll_status:
  type: while
  while: "states.check.Output != 'ready'"
  max_iterations: 60
  body:
    - check
    - wait
  on_complete: proceed

check:
  type: step
  command: curl -s https://api.example.com/status
  on_success: poll_status

wait:
  type: step
  command: sleep 5
  on_success: poll_status

While Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| while | string | - | Condition expression evaluated at each iteration; loop continues while true. Supports template interpolation and boolean expressions |
| body | array | - | List of step names to execute each iteration |
| max_iterations | int/string | 100 | Safety limit (max: 10000). Supports template interpolation and arithmetic expressions (+, -, *, /, %) |
| break_when | string | - | Expression evaluated at runtime; loop exits when condition is true |
| on_complete | string | - | Next state after loop completes |
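
break_when provides an early-exit guard alongside the main while condition; a hedged sketch reusing the polling example above (it assumes the ExitCode state field shown in the for_each section):

```yaml
poll_status:
  type: while
  while: "states.check.Output != 'ready'"
  break_when: "states.check.ExitCode != 0"   # bail out if the check itself fails
  max_iterations: 60
  body:
    - check
  on_complete: proceed
```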

While Loop Context

| Variable | Description |
|----------|-------------|
| {{.loop.Index}} | 0-based iteration index |
| {{.loop.Index1}} | 1-based iteration index |
| {{.loop.First}} | True on first iteration |
| {{.loop.Last}} | Always false (unknown for while loops) |
| {{.loop.Length}} | Always -1 (unknown for while loops) |
| {{.loop.Item}} | Always nil for while loops |
| {{.loop.Parent}} | Parent loop context (nested loops only) |

Nested Loops

Loops can contain other loops. Inner loops access outer loop context via {{.loop.Parent.*}}:

outer_loop:
  type: for_each
  items: '["A", "B"]'
  body:
    - inner_loop
  on_complete: done

inner_loop:
  type: for_each
  items: '["1", "2"]'
  body:
    - process
  on_complete: outer_loop

process:
  type: step
  command: 'echo "outer={{.loop.Parent.Item}} inner={{.loop.Item}}"'
  on_success: inner_loop

Parent chains support arbitrary depth: {{.loop.Parent.Parent.Item}} for 3-level nesting.


Call Workflow (Sub-Workflow)

Invoke another workflow as a sub-workflow, passing inputs and capturing outputs.

analyze_code:
  type: call_workflow
  call_workflow:
    workflow: analyze-single-file
    inputs:
      file_path: "{{.inputs.target_file}}"
      max_tokens: "{{.inputs.max_tokens}}"
    outputs:
      result: analysis_result
    timeout: 300
  on_success: aggregate_results
  on_failure: handle_error

Call Workflow Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| workflow | string | - | Name of the workflow to invoke |
| inputs | map | - | Input mappings (parent var → child input) |
| outputs | map | - | Output mappings (child output → parent var) |
| timeout | int | 0 | Sub-workflow timeout in seconds (0 = inherit) |

Child Workflow Definition

The child workflow must define its inputs and outputs:

# analyze-single-file.yaml
name: analyze-single-file
version: "1.0.0"

inputs:
  - name: file_path
    type: string
    required: true
  - name: max_tokens
    type: integer
    default: 2000

states:
  initial: read
  read:
    type: step
    command: cat "{{.inputs.file_path}}"
    on_success: analyze
    on_failure: error
  analyze:
    type: agent
    provider: claude
    prompt: "Analyze: {{.states.read.Output}}"
    timeout: 120
    on_success: done
    on_failure: error
  done:
    type: terminal
    status: success
  error:
    type: terminal
    status: failure

outputs:
  - name: analysis_result
    from: states.analyze.Output

Accessing Sub-Workflow Results

Outputs from the sub-workflow are accessible via the standard states interpolation:

# In parent workflow, after analyze_code step
aggregate_results:
  type: step
  command: echo "Analysis: {{.states.analyze_code.Output}}"
  on_success: done

Nested Sub-Workflows

Sub-workflows can call other sub-workflows. AWF tracks the call stack to detect circular references:

# workflow-a.yaml calls workflow-b
# workflow-b.yaml calls workflow-c
# Supported: A → B → C (3-level nesting)
# Blocked: A → B → A (circular reference)

Maximum nesting depth is 10 levels. Circular calls are detected at runtime with clear error messages showing the call stack.

Error Handling

Sub-workflow errors propagate to the parent:

  • If sub-workflow reaches a terminal state with status: failure, parent follows on_failure
  • If sub-workflow times out, parent receives timeout error and follows on_failure
  • If sub-workflow definition is not found, execution fails with undefined_subworkflow error
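The first two cases can be handled with an ordinary on_failure target. A minimal sketch (workflow and state names are illustrative):

```yaml
run_analysis:
  type: call_workflow
  call_workflow:
    workflow: analyze-single-file
    inputs:
      file_path: "{{.inputs.target_file}}"
    timeout: 60          # a sub-workflow timeout also routes to on_failure
  on_success: done
  on_failure: handle_error

handle_error:
  type: step
  command: echo "sub-workflow failed or timed out"
  on_success: done
```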

Using Call Workflow in Loops

Combine for_each with call_workflow to run a sub-workflow for each item in a list. Loop items (including complex objects) are automatically serialized to JSON:

# Example: Process multiple files across sub-workflows
prepare_items:
  type: step
  command: |
    echo '[
      {"file":"main.go","language":"Go"},
      {"file":"app.py","language":"Python"},
      {"file":"index.js","language":"JavaScript"}
    ]'
  capture:
    stdout: items_json
  on_success: process_files

process_files:
  type: for_each
  items: "{{.states.prepare_items.Output}}"
  body:
    - analyze_file
  on_complete: next

analyze_file:
  type: call_workflow
  call_workflow:
    workflow: analyze-source-file
    inputs:
      # {{.loop.Item}} is automatically JSON-serialized for complex types
      file_info: "{{.loop.Item}}"
    outputs:
      analysis: file_analysis
  on_success: process_files

next:
  type: terminal
  status: success

Child workflow receives properly formatted JSON input:

name: analyze-source-file

inputs:
  - name: file_info
    type: string  # Receives JSON string

states:
  initial: parse
  parse:
    type: step
    command: |
      # Extract the file name from the JSON input
      echo '{{.inputs.file_info}}' | jq -r '.file'
    on_success: done
  done:
    type: terminal
    status: success

outputs:
  - name: file_analysis
    from: states.parse.Output

Retry Configuration

Automatic retry for failed steps.

flaky_api_call:
  type: step
  command: curl -f https://api.example.com/data
  retry:
    max_attempts: 5
    initial_delay: 1s
    max_delay: 30s
    backoff: exponential
    multiplier: 2
    jitter: 0.1
    retryable_exit_codes: [1, 22]
  on_success: process_data
  on_failure: error

Retry Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `max_attempts` | int | 1 | Maximum attempts (must be >= 1) |
| `initial_delay` | duration | 0 | Delay before first retry |
| `max_delay` | duration | - | Maximum delay cap (omit for uncapped delays) |
| `backoff` | string | constant | Strategy: constant, linear, exponential |
| `multiplier` | float | 2 | Multiplier for exponential backoff (must be >= 0) |
| `jitter` | float | 0 | Random jitter factor (must be between 0.0 and 1.0) |
| `retryable_exit_codes` | array | all | Exit codes to retry (empty = all non-zero) |

Duration values accept Go duration strings (100ms, 1s, 2m30s) or plain integers (milliseconds). Invalid duration values produce a parse error at workflow load time.

Backoff Strategies

| Strategy | Formula |
|----------|---------|
| constant | Always `initial_delay` |
| linear | `initial_delay * attempt` |
| exponential | `initial_delay * multiplier^(attempt-1)` |
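As a worked example of the exponential formula (jitter and max_delay omitted; the delays in the comments follow directly from the table above):

```yaml
retry:
  max_attempts: 5
  initial_delay: 1s
  backoff: exponential
  multiplier: 2
# Delay before retry n = 1s * 2^(n-1):
#   retry 1: 1s, retry 2: 2s, retry 3: 4s, retry 4: 8s
# Adding max_delay: 5s would cap the final delay at 5s
```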

Conditional Transitions

Dynamic branching based on expressions.

process:
  type: step
  command: analyze.sh
  transitions:
    - when: "states.process.ExitCode == 0 and inputs.mode == 'full'"
      goto: full_report
    - when: "states.process.ExitCode == 0"
      goto: summary_report
    - goto: error  # default fallback (no when clause)

Transition Options

| Option | Type | Description |
|--------|------|-------------|
| `when` | string | Expression to evaluate (optional for default) |
| `goto` | string | Target state if condition matches |

Exit Code Routing Examples

Route based on command exit codes:

test_runner:
  type: step
  command: pytest
  transitions:
    - when: "states.test_runner.ExitCode == 0"
      goto: deploy
    - when: "states.test_runner.ExitCode > 1"
      goto: critical_failure
    - when: "states.test_runner.ExitCode != 0"  # Catch exit code 1
      goto: report_warnings
    - goto: unknown_error

Output-Based Routing Examples

Route based on command output:

check_config:
  type: step
  command: validate-config.sh
  transitions:
    - when: "states.check_config.Output contains 'READY'"
      goto: deploy
    - when: "states.check_config.Output contains 'WARNING'"
      goto: review_config
    - goto: abort

Combined Routing

Mix exit code and output conditions:

build:
  type: step
  command: make build
  transitions:
    - when: "states.build.ExitCode == 0 and states.build.Output contains 'OPTIMIZED'"
      goto: fast_deploy
    - when: "states.build.ExitCode == 0"
      goto: standard_deploy
    - goto: fix_errors

Supported Operators

| Type | Operators |
|------|-----------|
| Comparison | `==`, `!=`, `<`, `>`, `<=`, `>=` (numeric for ExitCode, string for Output) |
| Logical | `and`, `or`, `not` |
| String | `contains`, `startsWith`, `endsWith` |
| Grouping | `(expr)` |
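startsWith and endsWith behave like contains, anchored to the start and end of the output. A minimal sketch (state names and tag patterns are illustrative):

```yaml
version_check:
  type: step
  command: git describe --tags
  transitions:
    - when: "states.version_check.Output startsWith 'v2.'"
      goto: v2_pipeline
    - when: "states.version_check.Output endsWith '-rc1'"
      goto: release_candidate
    - goto: legacy_pipeline
```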

Available Variables

| Variable | Description |
|----------|-------------|
| `inputs.name` | Input values |
| `states.step_name.ExitCode` | Step exit code (integer, POSIX 0-255) |
| `states.step_name.Output` | Step output (string) |
| `env.VAR_NAME` | Environment variables |
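Environment variables can participate in conditions alongside exit codes. A minimal sketch (the CI variable and state names are illustrative):

```yaml
gate:
  type: step
  command: run-checks.sh
  transitions:
    - when: "states.gate.ExitCode == 0 and env.CI == 'true'"
      goto: ci_deploy
    - when: "states.gate.ExitCode == 0"
      goto: local_report
    - goto: error
```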

Transition Evaluation

  • Transitions are evaluated on both success and failure paths (non-zero exit codes included)
  • Transitions are evaluated in order; first matching condition wins
  • When a transition matches, it takes priority over on_success, on_failure, and continue_on_error
  • A transition without when acts as default fallback
  • If no transition matches and no default fallback exists, falls back to legacy on_success/on_failure behavior
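Because a matching transition takes priority over on_failure, the two mechanisms combine naturally. A minimal sketch (state names are illustrative):

```yaml
deploy_check:
  type: step
  command: smoke-test.sh
  on_success: done
  on_failure: alert      # reached only when no transition matches
  transitions:
    - when: "states.deploy_check.ExitCode == 2"
      goto: rollback
```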

Input Definitions

Define and validate workflow inputs.

inputs:
  - name: file_path
    type: string
    required: true
    validation:
      file_exists: true
      file_extension: [".go", ".py", ".js"]

  - name: max_tokens
    type: integer
    default: 2000
    validation:
      min: 1
      max: 10000

  - name: env
    type: string
    default: staging
    validation:
      enum: [dev, staging, prod]

  - name: debug
    type: boolean
    default: false

Input Options

| Option | Type | Description |
|--------|------|-------------|
| `name` | string | Input identifier |
| `type` | string | string, integer, boolean |
| `required` | bool | If true, must be provided |
| `default` | any | Default value if not provided |
| `validation` | object | Validation rules |

Validation Rules

| Rule | Type | Description |
|------|------|-------------|
| `pattern` | string | Regex pattern to match |
| `enum` | array | List of allowed values |
| `min` | int | Minimum value (integers only) |
| `max` | int | Maximum value (integers only) |
| `file_exists` | bool | File must exist on filesystem |
| `file_extension` | array | Allowed file extensions |
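The pattern rule is the only rule not shown in the example above. A minimal sketch (the regex is illustrative):

```yaml
inputs:
  - name: email
    type: string
    required: true
    validation:
      pattern: "^[^@]+@[^@]+\\.[^@]+$"
```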

Validation Errors

Errors are collected and reported together:

input validation failed: 2 errors:
  - inputs.email: does not match pattern
  - inputs.count: value 150 exceeds maximum 100

Interactive Input Collection

When a workflow with required inputs is run from a terminal without providing all inputs via --input flags, AWF automatically prompts you for missing required values. This makes it easier to run workflows interactively without remembering all parameters upfront.

Example:

awf run deploy
# Output:
# env (string, required):
# > prod
#
# version (string, required):
# > 1.2.3
#
# Workflow started...

If the input has enum constraints, AWF displays numbered options:

awf run deploy
# Output:
# env (string, required):
# Available options:
#   1) dev
#   2) staging
#   3) prod
# Select option (1-3):
# > 2

Optional inputs can be skipped by pressing Enter. Invalid values are rejected with error messages, allowing you to correct and retry.

See Interactive Input Collection for more details.


Variable Interpolation

AWF uses {{.var}} syntax (Go template style with dot prefix).

# Inputs
command: echo "{{.inputs.variable_name}}"

# Previous step outputs
command: echo "{{.states.step_name.Output}}"

# Workflow metadata
command: echo "Workflow ID: {{.workflow.id}}"

# Environment variables
command: echo "Home: {{.env.HOME}}"

See Variable Interpolation Reference for complete details.


Hooks

Execute commands before/after steps.

my_step:
  type: step
  command: main-command
  pre_hook:
    command: echo "Before step"
  post_hook:
    command: echo "After step"
  on_success: next

Hook Options

| Option | Type | Description |
|--------|------|-------------|
| `command` | string | Hook command to execute |
| `timeout` | int | Hook timeout in seconds |
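A sketch combining both hooks with per-hook timeouts (the commands are illustrative):

```yaml
test_step:
  type: step
  command: run-tests.sh
  pre_hook:
    command: ./setup-fixtures.sh
    timeout: 10      # seconds; each hook is bounded independently
  post_hook:
    command: ./collect-logs.sh
    timeout: 10
  on_success: next
```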

Working Directory

Steps can specify a working directory:

build:
  type: step
  command: make build
  dir: "{{.inputs.project_path}}"
  on_success: test

The dir field supports variable interpolation.


See Also