# Workflow Syntax Reference

Complete reference for AWF workflow YAML syntax.
## Basic Structure

```yaml
name: my-workflow
version: "1.0.0"
description: Workflow description

inputs:
  - name: file_path
    type: string
    required: true

states:
  initial: step1

  step1:
    type: step
    command: echo "Hello"
    on_success: step2
    on_failure: error

  step2:
    type: step
    command: echo "World"
    on_success: done
    on_failure: error

  done:
    type: terminal
    status: success

  error:
    type: terminal
    status: failure
```

## Workflow Fields
| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | Workflow identifier |
| `version` | string | No | Semantic version |
| `description` | string | No | Human-readable description |
| `inputs` | array | No | Input parameter definitions |
| `states` | object | Yes | State definitions |
| `states.initial` | string | Yes | Name of the starting state |
## State Types

| Type | Description |
|---|---|
| `step` | Execute a command |
| `agent` | Invoke an AI agent (Claude, Codex, Gemini, etc.) |
| `terminal` | End state with success/failure status |
| `parallel` | Execute multiple steps concurrently |
| `for_each` | Iterate over a list of items |
| `while` | Repeat until a condition becomes false |
| `operation` | Execute a declarative plugin operation (e.g., HTTP, GitHub, notifications) |
| `call_workflow` | Invoke another workflow as a sub-workflow |
## Step State

Execute a shell command.

```yaml
my_step:
  type: step
  command: |
    echo "Processing {{.inputs.file}}"
  dir: /tmp/workdir
  timeout: 30
  on_success: next_step
  on_failure: error
  continue_on_error: false
  retry:
    max_attempts: 3
    backoff: exponential
```

### Step Options
| Option | Type | Default | Description |
|---|---|---|---|
| `command` | string | - | Shell command to execute (mutually exclusive with `script_file`); supports local-before-global resolution for AWF path variables |
| `script_file` | string | - | Path to an external shell script file (mutually exclusive with `command`); supports local-before-global resolution |
| `dir` | string | cwd | Working directory (supports interpolation and local-before-global resolution) |
| `timeout` | int or string | 0 | Execution timeout — integer seconds (`30`) or Go duration string (`"1m30s"`, `"500ms"`). 0 = no timeout |
| `on_success` | string | - | Next state on success (exit code 0) |
| `on_failure` | string or object | - | Next state on failure — string (named terminal ref) or inline object (see Inline Error Shorthand) |
| `continue_on_error` | bool | false | Always follow `on_success` regardless of exit code |
| `retry` | object | - | Retry configuration |
| `transitions` | array | - | Conditional transitions |
> **Timeout syntax:** The `timeout` field accepts either an integer (seconds) or a Go duration string. Examples: `30` (30 seconds), `"1m30s"` (90 seconds), `"500ms"` (half a second), `"2h"` (2 hours). Duration strings support: `ns`, `us`/`µs`, `ms`, `s`, `m`, `h`.
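Two equivalent ways of expressing the same 90-second limit (a sketch; the step names are illustrative):

```yaml
build_a:
  type: step
  command: make build
  timeout: 90        # integer seconds

build_b:
  type: step
  command: make build
  timeout: "1m30s"   # Go duration string
```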
Shell Execution
Commands are executed using the user’s preferred shell, determined at runtime as follows:
- Preferred Shell Detection — AWF reads the
$SHELLenvironment variable to detect the user’s configured shell (e.g.,/bin/bash,/bin/zsh) - Fallback — If
$SHELLis unset, relative, or points to a non-existent binary, AWF falls back to/bin/sh - Command Invocation — The selected shell is invoked with the
-cflag to execute the workflow command
This ensures that bash-dependent syntax (arrays, [[, process substitution, {a..z} brace expansion, etc.) works on all systems, including Debian/Ubuntu where /bin/sh is dash.
**Example behavior:**

On macOS/Arch Linux (where `/bin/sh` is bash-like):

```sh
$ echo $SHELL
/bin/bash
$ awf run my-workflow
# Uses /bin/bash → bash-specific syntax works
```

On Debian/Ubuntu (where `/bin/sh` is dash):

```sh
$ echo $SHELL
/bin/bash    # User has bash installed
$ awf run my-workflow
# Uses /bin/bash (detected from $SHELL) → bash-specific syntax works
```
**Compatibility notes:**

- **POSIX-only commands** — work in any shell and are the most portable (recommended for external scripts)
- **Bash-specific syntax** — requires `/bin/bash` or a compatible shell; check your `$SHELL` setting
- **Shell-specific features** — `[[`, arrays, process substitution, and ANSI-C quoting (`$'...'`) require bash or zsh
To verify which shell AWF will use:

```sh
echo $SHELL                    # Shows the detected shell
awf run my-workflow --dry-run  # Displays the execution plan without running
```

### External Script Files
Instead of inlining shell commands in YAML, you can load them from external script files using the `script_file` field:

```yaml
deploy:
  type: step
  script_file: scripts/deploy.sh
  timeout: 60
  on_success: verify
  on_failure: error
```

File: `scripts/deploy.sh`

```sh
#!/bin/sh
echo "Deploying version {{.inputs.version}} to {{.inputs.env}}"
kubectl apply -f manifests/
kubectl rollout status deployment/app
```

#### Mutual Exclusivity
You cannot specify both `command` and `script_file` on the same step:

```yaml
# ❌ Invalid: both command and script_file
step:
  type: step
  command: echo "hello"
  script_file: scripts/hello.sh  # ERROR: only one allowed

# ✅ Valid: command only
step:
  type: step
  command: echo "hello"

# ✅ Valid: script_file only
step:
  type: step
  script_file: scripts/hello.sh
```

#### Path Resolution
Script file paths are resolved in this order:

1. **Absolute paths** — used as-is:
   `script_file: /opt/company/scripts/deploy.sh`
2. **Home directory expansion** — tilde is expanded:
   `script_file: ~/scripts/build.sh`
3. **Relative to the workflow directory** — resolved against the workflow file's location:
   `script_file: scripts/test.sh` resolves to `<workflow_dir>/scripts/test.sh`
4. **XDG scripts directory with local override** — via template interpolation with local-before-global resolution:

   ```yaml
   script_file: "{{.awf.scripts_dir}}/checks/lint.sh"
   # Checks in order:
   # 1. <workflow_dir>/scripts/checks/lint.sh  (local override)
   # 2. ~/.config/awf/scripts/checks/lint.sh   (global fallback)
   ```
#### Local-Before-Global Resolution

When using `{{.awf.scripts_dir}}` or `{{.awf.prompts_dir}}` in `script_file`, `command`, or `dir` fields, AWF prioritizes local project files over global ones:

- If a local file exists at `<workflow_dir>/scripts/<suffix>` → use it
- If the local file is missing → fall back to the global `~/.config/awf/scripts/<suffix>`

This enables shared scripts at the global level while allowing projects to override them locally:

```yaml
# Workflow at: ~/myproject/.awf/workflows/deploy.yaml
states:
  deploy:
    type: step
    script_file: "{{.awf.scripts_dir}}/deploy.sh"
    on_success: verify
```

Resolution order:

1. Check `~/myproject/.awf/workflows/scripts/deploy.sh` (local override)
2. Check `~/.config/awf/scripts/deploy.sh` (global shared)
#### Using AWF Paths in Commands

The same local-before-global resolution applies when using `{{.awf.scripts_dir}}` or `{{.awf.prompts_dir}}` inside `command:` fields. This enables reusing shared helper scripts or templates from both command invocations and script files:

```yaml
# Both of these use the same local-before-global resolution:
deploy_via_script:
  type: step
  script_file: "{{.awf.scripts_dir}}/deploy.sh"
  on_success: done

deploy_via_command:
  type: step
  command: "source {{.awf.scripts_dir}}/deploy.sh && run_deploy"
  on_success: done
```

Both resolve to the local `scripts/deploy.sh` if it exists, otherwise fall back to the global `~/.config/awf/scripts/deploy.sh`.
#### Template Interpolation

Script file paths support template interpolation before resolution. Loaded script contents also undergo template interpolation, with full access to workflow context variables:

```yaml
build:
  type: step
  script_file: scripts/build.sh
  on_success: done
```

File: `scripts/build.sh`

```sh
#!/bin/sh
echo "Building {{.inputs.target}}"
cd {{.inputs.project_dir}}
make build
echo "Build output: {{.states.prepare.Output}}"
```

#### Dry Run
In `--dry-run` mode, the resolved script file path is displayed along with the loaded and interpolated content:

```sh
awf run deploy --dry-run
# Shows the resolved script path and interpolated content
```

#### Shebang Support
Script files with a shebang line (`#!...`) are executed directly via the kernel's interpreter dispatch, rather than through `$SHELL -c`. This allows you to use non-shell scripts (Python, Ruby, Perl, etc.) and shell scripts in different variants (bash, zsh, etc.) within the same workflow.

How it works:

1. If a script file starts with `#!`, AWF writes it to a temporary file and executes it directly
2. The kernel reads the shebang and launches the appropriate interpreter
3. Scripts without a shebang fall back to the user's shell (`$SHELL -c`) for backward compatibility
**Shebang examples:**

Python script:

```python
#!/usr/bin/env python3
# scripts/analyze.py
import sys

data = "{{.inputs.data}}"
print(f"Analyzing: {data}")
```

Workflow:

```yaml
analyze:
  type: step
  script_file: scripts/analyze.py
  timeout: 30
  on_success: done
```

AWF detects the `#!/usr/bin/env python3` shebang and executes the script via Python, not the shell.
Bash-specific script:

```bash
#!/bin/bash
# scripts/deploy.sh
set -euo pipefail
echo "Deploying to {{.inputs.env}}"
[[ -f config.yaml ]] && echo "Config found"
kubectl apply -f manifests/
```

Even if your `$SHELL` is zsh, this bash script executes via bash (not zsh) because of the shebang.
Shell script without a shebang (legacy):

```sh
# scripts/legacy.sh
# No shebang line
echo "Running in {{.inputs.shell}}"
```

This script executes via `$SHELL -c` (backward-compatible behavior).
**Supported shebang formats:**

All standard shebang formats are supported:

```sh
#!/bin/sh                      # Absolute path
#!/usr/bin/env python3         # env with a single argument
#!/usr/bin/env -S python3 -u   # env with multiple arguments (-S split-string)
#!/usr/bin/python3             # Direct interpreter path
```

**Temporary file cleanup:** Temporary files are automatically cleaned up after execution, even if the script fails or execution is cancelled.
#### Error Handling
| Error | Cause | Exit Code |
|---|---|---|
| File not found | Script file path does not exist | 1 |
| Permission denied | Script file is not readable | 1 |
| File too large | Script file exceeds 1MB size limit | 1 |
| Interpreter not found | Shebang specifies non-existent interpreter | 127 |
Error messages include the resolved file path for easy debugging.
## Agent State

Invoke an AI agent (Claude, Codex, Gemini, OpenCode) with a prompt template.

### Basic Agent Step

```yaml
analyze:
  type: agent
  provider: claude
  prompt: |
    Analyze this code for issues:
    {{.inputs.code}}
  options:
    model: claude-sonnet-4-20250514
  timeout: 120
  on_success: review
  on_failure: error
```

### Conversation Mode
Enable multi-turn conversations with automatic context management:

```yaml
refine_code:
  type: agent
  provider: claude
  mode: conversation
  system_prompt: |
    You are a code reviewer. Iterate until the code is approved.
    Say "APPROVED" when done.
  initial_prompt: |
    Review this code:
    {{.inputs.code}}
  options:
    model: claude-sonnet-4-20250514
  conversation:
    max_turns: 10
    max_context_tokens: 100000
    strategy: sliding_window
    stop_condition: "response contains 'APPROVED'"
  on_success: deploy
  on_failure: error
```

### Agent Options
| Option | Type | Required | Description |
|---|---|---|---|
| `provider` | string | Yes | Agent provider: `claude`, `codex`, `gemini`, `opencode`, `openai_compatible` |
| `mode` | string | No | Set to `conversation` for multi-turn mode |
| `prompt` | string | Yes* | Prompt template (supports `{{.inputs.*}}` and `{{.states.*}}` interpolation) |
| `prompt_file` | string | No* | Path to an external prompt template file (mutually exclusive with `prompt`) |
| `system_prompt` | string | No | System message (in conversation mode, preserved across turns) |
| `initial_prompt` | string | No* | First user message (for conversation mode) |
| `output_format` | string | No | Post-processing format: `json` (strip fences + validate JSON) or `text` (strip fences only) |
| `conversation` | object | No | Conversation configuration (required if `mode: conversation`) |
| `options` | map | No | Provider-specific options (varies by provider — see Agent Steps for each provider's supported options) |
| `timeout` | int or string | No | Execution timeout — integer seconds (`30`) or Go duration string (`"1m30s"`, `"500ms"`). 0 = no timeout |
| `on_success` | string | No | Next state on success |
| `on_failure` | string or object | No | Next state on failure — string (named terminal ref) or inline object (see Inline Error Shorthand) |
| `retry` | object | No | Retry configuration (same as step retry) |
\* Use `prompt` or `prompt_file` for single-turn mode (mutually exclusive), and `initial_prompt` for conversation mode. See Agent Steps - External Prompt Files for `prompt_file` details.
### Conversation Configuration

| Option | Type | Default | Description |
|---|---|---|---|
| `max_turns` | int | 10 | Maximum conversation turns |
| `max_context_tokens` | int | model limit | Token budget for the conversation |
| `strategy` | string | - | Context window strategy: `sliding_window`, `summarize` (not yet implemented), `truncate_middle` (not yet implemented). If omitted, no context management is applied |
| `stop_condition` | string | - | Expression to exit early |
| `continue_from` | string | - | Step name to continue the conversation from — resumes the prior step's session |
| `inject_context` | string | - | Additional context injected into user prompts on turns 2+. Supports template variables (`{{.states.*}}`, `{{.inputs.*}}`, etc.). Re-interpolated per turn |
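As a sketch (the step names `refine_code` and `run_tests` are illustrative), a later agent step can resume the session of an earlier conversation step and inject fresh state on every turn:

```yaml
apply_feedback:
  type: agent
  provider: claude
  mode: conversation
  initial_prompt: "Apply the reviewer feedback and address the test failures."
  conversation:
    max_turns: 5
    continue_from: refine_code
    inject_context: "Latest test output: {{.states.run_tests.Output}}"
  on_success: done
  on_failure: error
```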
### Available Providers

| Provider | Binary/Endpoint | Conversation Support | Description |
|---|---|---|---|
| `claude` | `claude` CLI | Multi-turn (session resume via `-r`) | Anthropic Claude CLI |
| `codex` | `codex` CLI | Multi-turn (session resume via `resume`) | OpenAI Codex CLI |
| `gemini` | `gemini` CLI | Multi-turn (session resume via `--resume`) | Google Gemini CLI |
| `opencode` | `opencode` CLI | Multi-turn (session resume via `-s`) | OpenCode CLI |
| `openai_compatible` | HTTP API | Full multi-turn (messages array) | Chat Completions API (OpenAI, Ollama, vLLM, Groq) |
> **Conversation mode and providers:** All providers support multi-turn conversations. CLI-based providers (`claude`, `codex`, `gemini`, `opencode`) use native session-resume flags to maintain context across turns — session IDs are extracted from CLI output after the first turn and passed on subsequent turns. If session ID extraction fails, the provider gracefully falls back to stateless mode. `openai_compatible` maintains full conversation history via the Chat Completions API messages array.
### Agent Output

Agent responses are captured in the step state:

| Field | Type | Description |
|---|---|---|
| `{{.states.step_name.Output}}` | string | Raw response text |
| `{{.states.step_name.Response}}` | object | Parsed JSON (if the response is valid JSON) |
| `{{.states.step_name.TokensUsed}}` | int | Tokens consumed by this agent step |
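For instance, a follow-up shell step can report these fields (a sketch; the `analyze` step name and the `summary` JSON field are illustrative):

```yaml
report:
  type: step
  command: |
    echo "Tokens used: {{.states.analyze.TokensUsed}}"
    echo "Summary: {{.states.analyze.Response.summary}}"
  on_success: done
```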
### Multi-Turn Conversations

Recommended: use conversation mode for iterative workflows:

```yaml
review:
  type: agent
  provider: claude
  mode: conversation
  system_prompt: "You are a code reviewer."
  initial_prompt: "Review: {{.inputs.code}}"
  conversation:
    max_turns: 10
    stop_condition: "response contains 'APPROVED'"
  on_success: done
```

Legacy: chain multiple agent steps with state passing:
```yaml
states:
  initial: ask_question

  ask_question:
    type: agent
    provider: claude
    prompt: "Initial question here"
    on_success: follow_up

  follow_up:
    type: agent
    provider: claude
    prompt: |
      Based on your previous response:
      {{.states.ask_question.Output}}
      Please elaborate on point 3.
    on_success: done

  done:
    type: terminal
```

**See also:** Conversation Mode Guide for detailed examples and best practices.
### OpenAI-Compatible Provider

For any backend implementing the Chat Completions API (OpenAI, Ollama, vLLM, Groq, LM Studio), use `openai_compatible`. Unlike CLI-based providers, this sends HTTP requests directly — no CLI tool installation is required.
```yaml
analyze:
  type: agent
  provider: openai_compatible
  prompt: "Analyze: {{.inputs.data}}"
  options:
    base_url: http://localhost:11434/v1
    model: llama3
  timeout: 60
  on_success: next
```

Required options: `base_url` (root API URL; `/chat/completions` is appended automatically) and `model`. Optional: `api_key` (falls back to the `OPENAI_API_KEY` env var), `temperature` (0-2), `max_completion_tokens` (preferred; `max_tokens` accepted as a deprecated fallback), `top_p`, `system_prompt`.
**See also:** Agent Steps Guide for detailed examples and backend configurations.
## Operation State

Execute a declarative plugin operation. Operations provide structured access to external services (e.g., GitHub) without shell scripting. Inputs are validated against the operation schema, and outputs are accessible via `{{.states.step_name.Response.field}}`.

### Basic Operation Step

```yaml
get_issue:
  type: operation
  operation: github.get_issue
  inputs:
    number: "{{.inputs.issue_number}}"
  on_success: process
  on_failure: error
```

### Operation Options
| Option | Type | Required | Description |
|---|---|---|---|
| `operation` | string | Yes | Operation name (e.g., `github.get_issue`) |
| `inputs` | map | Varies | Input parameters (validated against the operation schema) |
| `on_success` | string | No | Next state on success |
| `on_failure` | string or object | No | Next state on failure — string (named terminal ref) or inline object |
| `retry` | object | No | Retry configuration (same as step retry) |
### Operation Output

Operation results are captured as structured data:

| Field | Type | Description |
|---|---|---|
| `{{.states.step_name.Output}}` | string | Raw JSON response |
| `{{.states.step_name.Response.field}}` | any | Parsed field from the structured output |
### Output Interpolation

Chain operations by referencing previous step outputs:

```yaml
states:
  initial: get_issue

  get_issue:
    type: operation
    operation: github.get_issue
    inputs:
      number: "{{.inputs.issue_number}}"
    on_success: show_title
    on_failure: error

  show_title:
    type: step
    command: echo "Issue: {{.states.get_issue.Response.title}}"
    on_success: done
    on_failure: error

  done:
    type: terminal
    status: success

  error:
    type: terminal
    status: failure
```

## GitHub Operations
AWF includes a built-in GitHub plugin with 8 declarative operations. Authentication is handled automatically via the `gh` CLI or the `GITHUB_TOKEN` environment variable. The repository is auto-detected from the git remote when the `repo` input is omitted.
### Issue & PR Operations

| Operation | Description | Required Inputs | Outputs |
|---|---|---|---|
| `github.get_issue` | Retrieve issue data | `number` | `number`, `title`, `body`, `state`, `labels` |
| `github.get_pr` | Retrieve pull request data | `number` | `number`, `title`, `body`, `state`, `headRefName`, `baseRefName`, `mergeable`, `mergedAt`, `labels` |
| `github.create_issue` | Create a new issue | `title` | `number`, `url` |
| `github.create_pr` | Create a new pull request | `title`, `head`, `base` | `number`, `url`, `already_exists` |
| `github.add_labels` | Add labels to an issue or PR | `number`, `labels` | `labels` |
| `github.add_comment` | Add a comment | `number`, `body` | `comment_id`, `url` |
| `github.list_comments` | List comments | `number` | `comments`, `total` |
### Common Optional Inputs

All GitHub operations accept these optional inputs:

| Input | Type | Description |
|---|---|---|
| `repo` | string | Repository in `owner/repo` format (auto-detected from the git remote if omitted) |
| `fields` | array | Fields to include in the output (limits the data returned; supported by get operations) |
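For example, a get operation can target a specific repository and trim its output (a sketch; the repository name is illustrative):

```yaml
get_upstream_issue:
  type: operation
  operation: github.get_issue
  inputs:
    repo: "acme/widgets"
    number: 42
    fields: ["title", "state"]
  on_success: next
  on_failure: error
```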
### Examples

Retrieve an issue:

```yaml
get_issue:
  type: operation
  operation: github.get_issue
  inputs:
    number: 42
  on_success: next
  on_failure: error
```

Create a pull request:
```yaml
create_pr:
  type: operation
  operation: github.create_pr
  inputs:
    title: "feat: add login page"
    head: feature/login
    base: main
    body: "Implements the login UI"
    draft: true
  on_success: next
  on_failure: error
```

Add labels to an issue:
```yaml
label_issue:
  type: operation
  operation: github.add_labels
  inputs:
    number: "{{.inputs.issue_number}}"
    labels: ["bug", "priority-high"]
  on_success: done
  on_failure: error
```

### Batch Operations
Execute multiple GitHub operations concurrently using `github.batch`. Batch operations support configurable concurrency and failure strategies.

```yaml
label_multiple:
  type: operation
  operation: github.batch
  inputs:
    strategy: best_effort
    concurrency: 3
    operations:
      - name: github.add_labels
        number: 1
        labels: ["reviewed"]
      - name: github.add_labels
        number: 2
        labels: ["reviewed"]
      - name: github.add_labels
        number: 3
        labels: ["reviewed"]
  on_success: done
  on_failure: error
```

#### Batch Inputs
| Input | Type | Default | Description |
|---|---|---|---|
| `operations` | array | - | Array of operation definitions (each with `name` plus operation-specific inputs) |
| `strategy` | string | best_effort | Execution strategy |
| `concurrency` | int | 3 | Maximum concurrent operations |
#### Batch Strategies

| Strategy | Description |
|---|---|
| `all_succeed` | All operations must succeed; cancels remaining operations on first failure |
| `any_succeed` | Succeed if at least one operation succeeds |
| `best_effort` | Complete all operations, collecting all results regardless of failures |
#### Batch Outputs

| Output | Type | Description |
|---|---|---|
| `total` | int | Total operations attempted |
| `succeeded` | int | Successfully completed count |
| `failed` | int | Failed operation count |
| `results` | array | Individual operation results |
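These counts can be read like any other structured output (a sketch, assuming a preceding batch step named `label_multiple`):

```yaml
report_batch:
  type: step
  command: |
    echo "Succeeded: {{.states.label_multiple.Response.succeeded}}/{{.states.label_multiple.Response.total}}"
  on_success: done
```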
## Notification Operations

AWF includes a built-in notification provider with a single `notify.send` operation that dispatches to two backends. See Plugins - Built-in Notification Plugin for configuration details.
### notify.send

| Input | Type | Required | Description |
|---|---|---|---|
| `backend` | string | Yes | Backend: `desktop`, `webhook` |
| `message` | string | Yes | Notification message body |
| `title` | string | No | Notification title (defaults to "AWF Workflow") |
| `priority` | string | No | Priority: `low`, `default`, `high` |
| `webhook_url` | string | No | Webhook URL (required for the `webhook` backend) |

Outputs: `backend`, `status`, `response`
### Examples

Desktop notification after a build:

```yaml
states:
  initial: build

  build:
    type: step
    command: make build
    on_success: notify
    on_failure: error

  notify:
    type: operation
    operation: notify.send
    inputs:
      backend: desktop
      title: "Build Complete"
      message: "{{.workflow.name}} finished in {{.workflow.duration}}"
    on_success: done
    on_failure: error

  done:
    type: terminal
    status: success

  error:
    type: terminal
    status: failure
```

Generic webhook (ntfy, Slack, Discord, Teams, PagerDuty, etc.):
```yaml
notify_webhook:
  type: operation
  operation: notify.send
  inputs:
    backend: webhook
    webhook_url: "https://example.com/hooks/builds"
    message: "{{.workflow.name}} completed"
  on_success: done
  on_failure: error
```

## HTTP Operations
AWF includes a built-in HTTP operation provider for declarative REST API calls. The `http.request` operation supports GET, POST, PUT, and DELETE with configurable timeout and response capture. See Plugins - Built-in HTTP Operation for configuration details.
### http.request

| Input | Type | Required | Description |
|---|---|---|---|
| `url` | string | Yes | HTTP endpoint URL (must start with `http://` or `https://`) |
| `method` | string | Yes | HTTP method: GET, POST, PUT, DELETE (case-insensitive) |
| `headers` | object | No | Custom headers as key-value pairs |
| `body` | string | No | Request body (for POST/PUT) |
| `timeout` | integer | No | Per-request timeout in seconds (default: 30) |
| `retryable_status_codes` | array | No | Status codes that signal retryable failures (e.g., `[429, 502, 503]`) |

Outputs: `status_code`, `body`, `headers`, `body_truncated`
### Examples

Simple GET request:

```yaml
fetch_data:
  type: operation
  operation: http.request
  inputs:
    method: GET
    url: "https://api.example.com/users/{{.inputs.user_id}}"
    headers:
      Authorization: "Bearer {{.inputs.api_token}}"
      Accept: "application/json"
    timeout: 10
  on_success: process
  on_failure: error
```

POST with JSON body and retry:
```yaml
create_resource:
  type: operation
  operation: http.request
  inputs:
    method: POST
    url: "https://api.example.com/resources"
    headers:
      Content-Type: "application/json"
      Authorization: "Bearer {{.inputs.api_token}}"
    body: '{"name": "{{.inputs.resource_name}}"}'
    timeout: 15
    retryable_status_codes: [429, 502, 503]
  retry:
    max_attempts: 3
    backoff: exponential
    initial_delay: 2s
  on_success: done
  on_failure: error
```

Access response fields in subsequent steps:
```yaml
states:
  initial: fetch_user

  fetch_user:
    type: operation
    operation: http.request
    inputs:
      method: GET
      url: "https://api.example.com/users/1"
    on_success: show_result
    on_failure: error

  show_result:
    type: step
    command: |
      echo "Status: {{.states.fetch_user.Response.status_code}}"
      echo "Body: {{.states.fetch_user.Response.body}}"
      echo "Content-Type: {{.states.fetch_user.Response.headers.Content-Type}}"
    on_success: done
    on_failure: error

  done:
    type: terminal
    status: success

  error:
    type: terminal
    status: failure
```

DELETE request (no body):
```yaml
delete_resource:
  type: operation
  operation: http.request
  inputs:
    method: DELETE
    url: "https://api.example.com/resources/{{.inputs.resource_id}}"
    headers:
      Authorization: "Bearer {{.inputs.api_token}}"
  on_success: done
  on_failure: error
```

## Terminal State
End the workflow execution.

```yaml
done:
  type: terminal
  status: success

error:
  type: terminal
  status: failure
```

### Terminal Options

| Option | Type | Values | Description |
|---|---|---|---|
| `status` | string | `success`, `failure` | Terminal status |
| `message` | string | - | Terminal message (displayed in output; supports template interpolation) |
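A sketch of a terminal state that reports an interpolated message:

```yaml
done:
  type: terminal
  status: success
  message: "Deployed {{.inputs.version}} successfully"
```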
## Inline Error Shorthand

Instead of defining separate named terminal states, you can specify an inline error object directly on `on_failure`:

```yaml
build:
  type: step
  command: go build ./cmd/...
  on_success: test
  on_failure: {message: "Build failed"}

test:
  type: step
  command: go test ./...
  on_success: done
  on_failure: {message: "Tests failed", status: 2}

done:
  type: terminal
  status: success
```

When an inline error is triggered, AWF automatically synthesizes an anonymous terminal state with the specified message and status (default: 1).
### Inline Error Syntax

The `on_failure` field accepts either:

- **String** (named terminal reference, backward compatible): `on_failure: error_terminal`
- **Object** (inline error, F066): `on_failure: {message: "...", status: 3}`
### Inline Error Fields

| Field | Type | Required | Description |
|---|---|---|---|
| `message` | string | Yes | Terminal message (supports template interpolation: `{{.inputs.*}}`, `{{.states.*}}`, `{{.env.*}}`) |
| `status` | int | No | Exit code (default: 1 for failure) |
### Template Interpolation in Messages

Inline error messages support full template interpolation, including references to step outputs:

```yaml
build:
  type: step
  command: make build
  on_failure: {message: "Build failed: {{.states.build.Output}}"}

test:
  type: step
  command: go test ./...
  on_failure: {message: "Tests failed in {{.inputs.environment}}", status: 5}
```

### Backward Compatibility
Existing workflows using named terminal references continue to work unchanged:

```yaml
# Still works — routes to the named terminal state
on_failure: error_terminal
```

You can mix inline errors and named references in the same workflow:

```yaml
build:
  type: step
  command: make build
  on_failure: {message: "Build failed"}  # Inline error

test:
  type: step
  command: npm test
  on_failure: error_terminal             # Named reference
```

## Parallel State
Execute multiple steps concurrently. Branch children are defined as separate states and referenced by name in the `parallel` field.

```yaml
parallel_build:
  type: parallel
  parallel:
    - lint
    - test
    - build
  strategy: all_succeed
  max_concurrent: 3
  on_success: deploy
  on_failure: error

lint:
  type: step
  command: golangci-lint run

test:
  type: step
  command: go test ./...

build:
  type: step
  command: go build ./cmd/...
```

Branch children (`lint`, `test`, `build` above) do not need `on_success`/`on_failure` or `transitions` — the parallel executor controls flow after each branch completes. If provided, `transitions` are accepted but ignored at runtime.
### Parallel Options

| Option | Type | Default | Description |
|---|---|---|---|
| `parallel` | array | - | List of step names to execute concurrently |
| `strategy` | string | all_succeed | Execution strategy |
| `max_concurrent` | int | unlimited | Maximum concurrent steps |
| `on_success` | string | - | Next state when all branches complete successfully |
| `on_failure` | string or object | - | Next state on branch failure — string (named terminal ref) or inline object |
### Parallel Strategies

| Strategy | Description |
|---|---|
| `all_succeed` | All steps must succeed; cancels remaining steps on first failure |
| `any_succeed` | Succeed if at least one step succeeds |
| `best_effort` | Collect all results; never cancel early |
### Accessing Parallel Results

```yaml
# Branch children are top-level states — access their output directly
command: echo "{{.states.lint.Output}}"
```

## For-Each Loop
Iterate over a list of items.

```yaml
process_files:
  type: for_each
  items: '["a.txt", "b.txt", "c.txt"]'
  max_iterations: 100
  break_when: "states.process_single.ExitCode != 0"
  body:
    - process_single
  on_complete: aggregate

process_single:
  type: step
  command: |
    echo "Processing {{.loop.Item}} ({{.loop.Index1}}/{{.loop.Length}})"
  on_success: process_files
```

### For-Each Options
| Option | Type | Default | Description |
|---|---|---|---|
| `items` | string | - | Template expression or literal JSON array |
| `body` | array | - | List of step names to execute each iteration |
| `max_iterations` | int/string | 100 | Safety limit (max: 10000). Supports template interpolation and arithmetic expressions (`+`, `-`, `*`, `/`, `%`) |
| `break_when` | string | - | Expression evaluated at runtime; the loop exits when the condition is true |
| `on_complete` | string | - | Next state after the loop completes |
### Loop Context Variables

| Variable | Description |
|---|---|
| `{{.loop.Item}}` | Current item value |
| `{{.loop.Index}}` | 0-based iteration index |
| `{{.loop.Index1}}` | 1-based iteration index |
| `{{.loop.First}}` | True on the first iteration |
| `{{.loop.Last}}` | True on the last iteration |
| `{{.loop.Length}}` | Total item count |
| `{{.loop.Parent}}` | Parent loop context (nested loops) |
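As a sketch (assuming booleans interpolate as the strings `true`/`false`), a loop body step can use these variables to frame its output:

```yaml
process_single:
  type: step
  command: |
    if [ "{{.loop.First}}" = "true" ]; then echo "--- start ---"; fi
    echo "{{.loop.Index1}}/{{.loop.Length}}: {{.loop.Item}}"
    if [ "{{.loop.Last}}" = "true" ]; then echo "--- end ---"; fi
  on_success: process_files
```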
### Dynamic Items

Items can come from a template expression:

```yaml
items: "{{.inputs.files}}"
```

### Dynamic Max Iterations
The `max_iterations` field supports interpolation and arithmetic expressions:

```yaml
# From an input parameter
max_iterations: "{{.inputs.retry_count}}"

# From an environment variable
max_iterations: "{{.env.MAX_RETRIES}}"

# Pre-computed value from an input
max_iterations: "{{.inputs.total_retries}}"
```

Supported arithmetic operators: `+`, `-`, `*`, `/`, `%`
Dynamic values are resolved at loop initialization time. Validation ensures:

- The result is a positive integer
- The value does not exceed 10000 (safety limit)

Use `awf validate` to detect undefined variables before runtime.
## While Loop

Repeat until the condition becomes false.

```yaml
poll_status:
  type: while
  while: "states.check.Output != 'ready'"
  max_iterations: 60
  body:
    - check
    - wait
  on_complete: proceed

check:
  type: step
  command: curl -s https://api.example.com/status
  on_success: poll_status

wait:
  type: step
  command: sleep 5
  on_success: poll_status
```

### While Options
| Option | Type | Default | Description |
|---|---|---|---|
| `while` | string | - | Condition expression evaluated at each iteration; the loop continues while true. Supports template interpolation and boolean expressions |
| `body` | array | - | List of step names to execute each iteration |
| `max_iterations` | int/string | 100 | Safety limit (max: 10000). Supports template interpolation and arithmetic expressions (`+`, `-`, `*`, `/`, `%`) |
| `break_when` | string | - | Expression evaluated at runtime; the loop exits when the condition is true |
| `on_complete` | string | - | Next state after the loop completes |
### While Loop Context

| Variable | Description |
|---|---|
| `{{.loop.Index}}` | 0-based iteration index |
| `{{.loop.Index1}}` | 1-based iteration index |
| `{{.loop.First}}` | True on the first iteration |
| `{{.loop.Last}}` | Always false (unknown for while loops) |
| `{{.loop.Length}}` | Always -1 (unknown for while loops) |
| `{{.loop.Item}}` | Always nil for while loops |
| `{{.loop.Parent}}` | Parent loop context (nested loops only) |
## Nested Loops

Loops can contain other loops. Inner loops access the outer loop's context via `{{.loop.Parent.*}}`:

```yaml
outer_loop:
  type: for_each
  items: '["A", "B"]'
  body:
    - inner_loop
  on_complete: done

inner_loop:
  type: for_each
  items: '["1", "2"]'
  body:
    - process
  on_complete: outer_loop

process:
  type: step
  command: 'echo "outer={{.loop.Parent.Item}} inner={{.loop.Item}}"'
  on_success: inner_loop
```

Parent chains support arbitrary depth: `{{.loop.Parent.Parent.Item}}` for 3-level nesting.
## Call Workflow (Sub-Workflow)

Invoke another workflow as a sub-workflow, passing inputs and capturing outputs.

```yaml
analyze_code:
  type: call_workflow
  call_workflow:
    workflow: analyze-single-file
    inputs:
      file_path: "{{.inputs.target_file}}"
      max_tokens: "{{.inputs.max_tokens}}"
    outputs:
      result: analysis_result
    timeout: 300
  on_success: aggregate_results
  on_failure: handle_error
```

### Call Workflow Options
| Option | Type | Default | Description |
|---|---|---|---|
| `workflow` | string | - | Name of the workflow to invoke |
| `inputs` | map | - | Input mappings (parent var → child input) |
| `outputs` | map | - | Output mappings (child output → parent var) |
| `timeout` | int | 0 | Sub-workflow timeout in seconds (0 = inherit) |
Child Workflow Definition
The child workflow must define its inputs and outputs:
# analyze-single-file.yaml
name: analyze-single-file
version: "1.0.0"
inputs:
- name: file_path
type: string
required: true
- name: max_tokens
type: integer
default: 2000
states:
initial: read
read:
type: step
command: cat "{{.inputs.file_path}}"
on_success: analyze
on_failure: error
analyze:
type: agent
provider: claude
prompt: "Analyze: {{.states.read.Output}}"
timeout: 120
on_success: done
on_failure: error
done:
type: terminal
status: success
error:
type: terminal
status: failure
outputs:
- name: analysis_result
from: states.analyze.Output
Accessing Sub-Workflow Results
Outputs from the sub-workflow are accessible via the standard states interpolation:
# In parent workflow, after analyze_code step
aggregate_results:
type: step
command: echo "Analysis: {{.states.analyze_code.Output}}"
on_success: done
Nested Sub-Workflows
Sub-workflows can call other sub-workflows. AWF tracks the call stack to detect circular references:
# workflow-a.yaml calls workflow-b
# workflow-b.yaml calls workflow-c
# Supported: A → B → C (3-level nesting)
# Blocked: A → B → A (circular reference)
Maximum nesting depth is 10 levels. Circular calls are detected at runtime with clear error messages showing the call stack.
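The call-stack check can be sketched roughly as follows (illustrative Python; check_call and its error messages are hypothetical — only the depth limit and the cycle rule come from the documentation above):

```python
# Illustrative cycle/depth check against a stack of workflow names.
def check_call(stack: list, workflow: str) -> None:
    if workflow in stack:
        chain = " -> ".join(stack + [workflow])
        raise RuntimeError(f"circular workflow call: {chain}")
    if len(stack) >= 10:  # documented maximum nesting depth
        raise RuntimeError(f"maximum nesting depth (10) exceeded at {workflow}")

check_call(["workflow-a", "workflow-b"], "workflow-c")  # A -> B -> C: allowed
try:
    check_call(["workflow-a", "workflow-b"], "workflow-a")  # A -> B -> A
except RuntimeError as err:
    print(err)  # circular workflow call: workflow-a -> workflow-b -> workflow-a
```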
Error Handling
Sub-workflow errors propagate to the parent:
- If the sub-workflow reaches a terminal state with status: failure, the parent follows on_failure
- If the sub-workflow times out, the parent receives a timeout error and follows on_failure
- If the sub-workflow definition is not found, execution fails with an undefined_subworkflow error
Using Call Workflow in Loops
Combine for_each with call_workflow to process each item in its own sub-workflow invocation. Loop items (especially complex objects) are automatically serialized to JSON:
# Example: Process multiple files across sub-workflows
prepare_items:
type: step
command: |
echo '[
{"file":"main.go","language":"Go"},
{"file":"app.py","language":"Python"},
{"file":"index.js","language":"JavaScript"}
]'
capture:
stdout: items_json
on_success: process_files
process_files:
type: for_each
items: "{{.states.prepare_items.Output}}"
body:
- analyze_file
analyze_file:
type: call_workflow
call_workflow:
workflow: analyze-source-file
inputs:
# {{.loop.Item}} is automatically JSON-serialized for complex types
file_info: "{{.loop.Item}}"
outputs:
analysis: file_analysis
on_success: next
next:
type: terminal
Child workflow receives properly formatted JSON input:
name: analyze-source-file
inputs:
- name: file_info
type: string # Receives JSON string
states:
initial: parse
parse:
type: step
command: |
# Parse JSON input safely
echo '{{.inputs.file_info}}' | jq -r '.file'
on_success: done
done:
type: terminal
outputs:
- name: file_analysis
from: states.parse.Output
Retry Configuration
Automatic retry for failed steps.
flaky_api_call:
type: step
command: curl -f https://api.example.com/data
retry:
max_attempts: 5
initial_delay: 1s
max_delay: 30s
backoff: exponential
multiplier: 2
jitter: 0.1
retryable_exit_codes: [1, 22]
on_success: process_data
on_failure: error
Retry Options
| Option | Type | Default | Description |
|---|---|---|---|
max_attempts | int | 1 | Maximum attempts (must be >= 1) |
initial_delay | duration | 0 | Delay before first retry |
max_delay | duration | - | Maximum delay cap (omit for uncapped delays) |
backoff | string | constant | Strategy: constant, linear, exponential |
multiplier | float | 2 | Multiplier for exponential backoff (must be >= 0) |
jitter | float | 0 | Random jitter factor (must be between 0.0 and 1.0) |
retryable_exit_codes | array | all | Exit codes to retry (empty = all non-zero) |
Duration values accept Go duration strings (100ms, 1s, 2m30s) or plain integers (milliseconds). Invalid duration values produce a parse error at workflow load time.
Backoff Strategies
| Strategy | Formula |
|---|---|
constant | Always initial_delay |
linear | initial_delay * attempt |
exponential | initial_delay * multiplier^(attempt-1) |
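These formulas, together with the max_delay cap and jitter factor, can be sketched as follows (illustrative Python; the order in which the cap and jitter are applied is an assumption, not documented):

```python
import random

# Illustrative delay computation for the three documented strategies.
def retry_delay(attempt, initial, backoff="constant",
                multiplier=2.0, max_delay=None, jitter=0.0):
    if backoff == "constant":
        delay = initial
    elif backoff == "linear":
        delay = initial * attempt
    elif backoff == "exponential":
        delay = initial * multiplier ** (attempt - 1)
    else:
        raise ValueError(f"unknown backoff: {backoff}")
    if max_delay is not None:
        delay = min(delay, max_delay)   # cap before jitter (assumption)
    # jitter=0.1 perturbs the delay by up to +/-10%
    return delay + delay * jitter * random.uniform(-1.0, 1.0)

assert retry_delay(3, 1.0, "exponential") == 4.0            # 1 * 2^(3-1)
assert retry_delay(4, 1.0, "linear", max_delay=3.0) == 3.0  # capped
```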
Conditional Transitions
Dynamic branching based on expressions.
process:
type: step
command: analyze.sh
transitions:
- when: "states.process.ExitCode == 0 and inputs.mode == 'full'"
goto: full_report
- when: "states.process.ExitCode == 0"
goto: summary_report
- goto: error # default fallback (no when clause)
Transition Options
| Option | Type | Description |
|---|---|---|
when | string | Expression to evaluate (optional for default) |
goto | string | Target state if condition matches |
Exit Code Routing Examples
Route based on command exit codes:
test_runner:
type: step
command: pytest
transitions:
- when: "states.test_runner.ExitCode == 0"
goto: deploy
- when: "states.test_runner.ExitCode > 1"
goto: critical_failure
- when: "states.test_runner.ExitCode != 0" # Catch exit code 1
goto: report_warnings
- goto: unknown_error
Output-Based Routing Examples
Route based on command output:
check_config:
type: step
command: validate-config.sh
transitions:
- when: "states.check_config.Output contains 'READY'"
goto: deploy
- when: "states.check_config.Output contains 'WARNING'"
goto: review_config
- goto: abort
Combined Routing
Mix exit code and output conditions:
build:
type: step
command: make build
transitions:
- when: "states.build.ExitCode == 0 and states.build.Output contains 'OPTIMIZED'"
goto: fast_deploy
- when: "states.build.ExitCode == 0"
goto: standard_deploy
- goto: fix_errors
Supported Operators
| Type | Operators |
|---|---|
| Comparison | ==, !=, <, >, <=, >= (numeric for ExitCode, string for Output) |
| Logical | and, or, not |
| String | contains, startsWith, endsWith |
| Grouping | (expr) |
Available Variables
| Variable | Description |
|---|---|
inputs.name | Input values |
states.step_name.ExitCode | Step exit code (integer, POSIX 0-255) |
states.step_name.Output | Step output (string) |
env.VAR_NAME | Environment variables |
Transition Evaluation
- Transitions are evaluated on both success and failure paths (non-zero exit codes included)
- Transitions are evaluated in order; first matching condition wins
- When a transition matches, it takes priority over on_success, on_failure, and continue_on_error
- A transition without when acts as a default fallback
- If no transition matches and no default fallback exists, AWF falls back to legacy on_success/on_failure behavior
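First-match evaluation can be sketched as follows (illustrative Python; plain predicates stand in for the expression language):

```python
# Illustrative first-match-wins transition selection.
def pick_transition(transitions, ctx):
    for t in transitions:
        when = t.get("when")
        if when is None or when(ctx):  # no `when` = default fallback
            return t["goto"]
    return None  # caller falls back to legacy on_success/on_failure

transitions = [
    {"when": lambda ctx: ctx["exit_code"] == 0, "goto": "deploy"},
    {"when": lambda ctx: ctx["exit_code"] > 1, "goto": "critical_failure"},
    {"goto": "report_warnings"},  # default fallback catches exit code 1
]
assert pick_transition(transitions, {"exit_code": 0}) == "deploy"
assert pick_transition(transitions, {"exit_code": 1}) == "report_warnings"
assert pick_transition(transitions, {"exit_code": 2}) == "critical_failure"
```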
Input Definitions
Define and validate workflow inputs.
inputs:
- name: file_path
type: string
required: true
validation:
file_exists: true
file_extension: [".go", ".py", ".js"]
- name: max_tokens
type: integer
default: 2000
validation:
min: 1
max: 10000
- name: env
type: string
default: staging
validation:
enum: [dev, staging, prod]
- name: debug
type: boolean
default: false
Input Options
| Option | Type | Description |
|---|---|---|
name | string | Input identifier |
type | string | string, integer, boolean |
required | bool | If true, must be provided |
default | any | Default value if not provided |
validation | object | Validation rules |
Validation Rules
| Rule | Type | Description |
|---|---|---|
pattern | string | Regex pattern to match |
enum | array | List of allowed values |
min | int | Minimum value (integers only) |
max | int | Maximum value (integers only) |
file_exists | bool | File must exist on filesystem |
file_extension | array | Allowed file extensions |
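Collect-then-report validation of these rules could be sketched like this (illustrative Python covering only the pattern and max rules; names and message wording are modeled on the error example in the next subsection):

```python
import re

# Illustrative validator: collects all errors, then reports them together.
def validate(inputs, specs):
    errors = []
    for name, rules in specs.items():
        value = inputs.get(name)
        if "pattern" in rules and not re.fullmatch(rules["pattern"], str(value)):
            errors.append(f"inputs.{name}: does not match pattern")
        if "max" in rules and isinstance(value, int) and value > rules["max"]:
            errors.append(f"inputs.{name}: value {value} exceeds maximum {rules['max']}")
    if errors:
        details = "\n  - ".join(errors)
        raise ValueError(
            f"input validation failed: {len(errors)} errors:\n  - {details}")

try:
    validate({"email": "not-an-email", "count": 150},
             {"email": {"pattern": r"[^@\s]+@[^@\s]+"}, "count": {"max": 100}})
except ValueError as err:
    print(err)  # reports both errors in one message
```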
Validation Errors
Errors are collected and reported together:
input validation failed: 2 errors:
- inputs.email: does not match pattern
- inputs.count: value 150 exceeds maximum 100
Interactive Input Collection
When a workflow with required inputs is run from a terminal without providing all inputs via --input flags, AWF automatically prompts you for missing required values. This makes it easier to run workflows interactively without remembering all parameters upfront.
Example:
awf run deploy
# Output:
# env (string, required):
# > prod
#
# version (string, required):
# > 1.2.3
#
# Workflow started...
If the input has enum constraints, AWF displays numbered options:
awf run deploy
# Output:
# env (string, required):
# Available options:
# 1) dev
# 2) staging
# 3) prod
# Select option (1-3):
# > 2
Optional inputs can be skipped by pressing Enter. Invalid values are rejected with error messages, allowing you to correct and retry.
See Interactive Input Collection for more details.
Variable Interpolation
AWF uses {{.var}} syntax (Go template style with dot prefix).
# Inputs
command: echo "{{.inputs.variable_name}}"
# Previous step outputs
command: echo "{{.states.step_name.Output}}"
# Workflow metadata
command: echo "Workflow ID: {{.workflow.id}}"
# Environment variables
command: echo "Home: {{.env.HOME}}"
See Variable Interpolation Reference for complete details.
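A rough sketch of how dot-path references resolve against a nested context (in AWF this is handled by Go's text/template engine; the Python below is purely illustrative):

```python
import re

# Illustrative resolver for {{.dot.path}} references.
def interpolate(template, ctx):
    def resolve(match):
        node = ctx
        for part in match.group(1).split("."):
            node = node[part]  # walk one level per path segment
        return str(node)
    return re.sub(r"\{\{\.([\w.]+)\}\}", resolve, template)

ctx = {"inputs": {"file": "main.go"}, "env": {"HOME": "/home/dev"}}
assert interpolate('echo "{{.inputs.file}} in {{.env.HOME}}"', ctx) \
    == 'echo "main.go in /home/dev"'
```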
Hooks
Execute commands before/after steps.
my_step:
type: step
command: main-command
pre_hook:
command: echo "Before step"
post_hook:
command: echo "After step"
on_success: next
Hook Options
| Option | Type | Description |
|---|---|---|
command | string | Hook command to execute |
timeout | int | Hook timeout in seconds |
Working Directory
Steps can specify a working directory:
build:
type: step
command: make build
dir: "{{.inputs.project_path}}"
on_success: test
The dir field supports variable interpolation.
See Also
- Commands - CLI command reference
- Templates - Reusable workflow templates
- Examples - Workflow examples
- Variable Interpolation - Template variables
- Input Validation - Validation rules