Runtimes

AI Runner provides a single interface across AI coding tools. Pick your runtime once, and the same flags, shebangs, and scripts work identically. AI Runner handles the mapping.

Supported Runtimes

| Runtime | Flag | Default Model | Ecosystem | Install |
|---|---|---|---|---|
| Claude Code | `--cc` | claude-opus-4-6 | Anthropic | `curl -fsSL https://claude.ai/install.sh \| bash` |
| Codex CLI | `--codex` | gpt-5.4 | OpenAI | `npm install -g @openai/codex` |
Claude Code is the default when both are installed. If only Codex is installed, it becomes the default automatically.
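
The detection order can be pictured as a simple `PATH` lookup. A minimal sketch, not AI Runner's actual code, assuming the installed binaries are named `claude` and `codex`:

```shell
# Sketch only: prefer Claude Code, fall back to Codex, else fail.
# Assumes the installed binaries are named `claude` and `codex`.
detect_runtime() {
  if command -v claude >/dev/null 2>&1; then
    echo "claude-code"
  elif command -v codex >/dev/null 2>&1; then
    echo "codex"
  else
    return 1
  fi
}

detect_runtime || echo "no runtime installed" >&2
```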

Selecting a Runtime

```shell
# Auto-detect (tries Claude Code first, falls back to Codex)
ai task.md

# Explicit selection
ai --cc task.md           # Claude Code
ai --codex task.md        # Codex CLI

# Save as default
ai --codex --set-default  # Always use Codex
ai --clear-default        # Reset to auto-detect
```
In shebangs:
```
#!/usr/bin/env -S ai --codex --high
Analyze this codebase with OpenAI's flagship model.
```
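
The `-S` option to `env` (GNU coreutils 8.30+, also available on the BSDs) is what makes multi-flag shebangs work: it splits the single argument string the kernel passes into separate words. You can observe the splitting directly:

```shell
# env -S splits one string into separate argv entries before exec'ing the command
env -S 'echo --codex --high'
# → --codex --high
```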

Universal Flags

Most flags work identically with both runtimes. AI Runner maps them to each tool’s native equivalents:
| Category | Flags | Behavior |
|---|---|---|
| Model tiers | `--opus`/`--high`, `--sonnet`/`--mid`, `--haiku`/`--low` | Mapped to each runtime's models |
| Custom model | `--model <id>` | Passed to runtime's model flag |
| Effort | `--effort <low\|medium\|high\|max>` | Controls reasoning depth |
| Local providers | `--ollama`, `--lmstudio` | Both runtimes support local models |
| Azure | `--azure` | Maps to each runtime's Azure service |
| Permissions | `--auto`, `--bypass`, `--skip` | Mapped to each runtime's equivalents |
| Resume | `--resume` | Resume previous conversation |
| Streaming | `--live`, `--quiet` | Real-time output with heartbeat |
| Defaults | `--set-default`, `--clear-default` | Persists tool + provider + model |
| Scripting | Shebangs, variables, pipes, stdin | Identical behavior |

Cross-Interpreter Model Tiers

The same tier flag selects each runtime’s equivalent model:
| Tier | Flag | Claude Code | Codex CLI |
|---|---|---|---|
| High | `--opus` / `--high` | claude-opus-4-6 | gpt-5.4 |
| Mid | `--sonnet` / `--mid` | claude-sonnet-4-6 | gpt-5.3-codex |
| Low | `--haiku` / `--low` | claude-haiku-4-5 | gpt-5.4-mini |
```shell
ai --high task.md           # Claude Code: Opus 4.6
ai --codex --high task.md   # Codex: gpt-5.4
```
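
The tier-to-model translation amounts to a two-key lookup. An illustrative sketch of the table above, not AI Runner's internals:

```shell
# Illustrative lookup of the tier table; model ids as documented above.
tier_model() {  # usage: tier_model <cc|codex> <high|mid|low>
  case "$1:$2" in
    cc:high)    echo "claude-opus-4-6" ;;
    cc:mid)     echo "claude-sonnet-4-6" ;;
    cc:low)     echo "claude-haiku-4-5" ;;
    codex:high) echo "gpt-5.4" ;;
    codex:mid)  echo "gpt-5.3-codex" ;;
    codex:low)  echo "gpt-5.4-mini" ;;
    *)          return 1 ;;
  esac
}
```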

Cross-Interpreter Effort Levels

Control how deeply the model reasons:
| AI Runner | Claude Code | Codex CLI |
|---|---|---|
| `--effort low` | `--effort low` | `-c model_reasoning_effort=low` |
| `--effort medium` | `--effort medium` | `-c model_reasoning_effort=medium` |
| `--effort high` | `--effort high` | `-c model_reasoning_effort=high` |
| `--effort max` | `--effort max` | `-c model_reasoning_effort=xhigh` |
```shell
ai --effort high task.md          # Claude Code: deeper reasoning
ai --codex --effort max task.md   # Codex: maximum reasoning (xhigh)
```
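
The only asymmetry in the table is that Codex spells the top level `xhigh` and receives it through `-c`. A hedged sketch of that translation (illustrative helper, not AI Runner's code):

```shell
# Sketch: translate an AI Runner effort level into each runtime's native argument.
effort_args() {  # usage: effort_args <cc|codex> <low|medium|high|max>
  if [ "$1" = "codex" ]; then
    level=$2
    if [ "$level" = "max" ]; then level=xhigh; fi
    echo "-c model_reasoning_effort=$level"
  else
    echo "--effort $2"
  fi
}
```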

Cross-Interpreter Providers

| Flag | Claude Code | Codex CLI |
|---|---|---|
| `--ollama` | Ollama (Anthropic-compat API) | Ollama (`--oss`) |
| `--lmstudio` | LM Studio (Anthropic-compat) | LM Studio (`--oss`) |
| `--azure` | Azure Foundry (Anthropic models) | Azure OpenAI (GPT models) |
| `--apikey` | Anthropic API key | Force OpenAI API key (vs browser auth) |
| `--aws` | AWS Bedrock | Not supported |
| `--vertex` | Google Vertex AI | Not supported |
| `--vercel` | Vercel AI Gateway | Not supported |
| `--pro` | Claude Pro subscription | Not supported |
| `--profile` | Not supported | `config.toml` profile (OpenRouter, Mistral, etc.) |
`--azure` maps to different services per runtime: Azure Foundry (Anthropic) for Claude Code, Azure OpenAI (GPT) for Codex. You write the same flag; AI Runner handles the translation.
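
That per-runtime resolution can be sketched as a lookup over the table above (illustrative only; the backend names are the ones documented here):

```shell
# Illustrative only: the same provider flag resolves to a different backend per runtime.
provider_backend() {  # usage: provider_backend <cc|codex> <flag>
  case "$1:$2" in
    cc:--azure)     echo "Azure Foundry (Anthropic models)" ;;
    codex:--azure)  echo "Azure OpenAI (GPT models)" ;;
    cc:--ollama)    echo "Ollama via Anthropic-compatible API" ;;
    codex:--ollama) echo "Ollama via --oss" ;;
    *)              return 1 ;;
  esac
}
```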

Cross-Interpreter Permissions

| AI Runner | Claude Code | Codex CLI | Safety Level |
|---|---|---|---|
| (default) | Ask for approval | Ask for approval | Manual |
| `--auto` | AI classifier decides | Sandboxed auto | Smart auto |
| `--bypass` | All auto, no classifier | Sandboxed auto | Permissive |
| `--skip` | Zero safety checks | Zero safety + no sandbox | Nuclear |
`--skip` bypasses ALL safety on both runtimes. Use it only with trusted scripts in trusted directories.
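
One way to enforce that rule in your own tooling is a small wrapper that refuses `--skip` outside an allow-listed directory. A sketch under stated assumptions: `TRUSTED_DIR` and `safe_run` are hypothetical helpers, not AI Runner features:

```shell
# Sketch: refuse --skip outside an allow-listed directory.
# TRUSTED_DIR and safe_run are hypothetical, not part of AI Runner.
TRUSTED_DIR="$HOME/projects/trusted"

safe_run() {
  if [ "${1:-}" = "--skip" ] && [ "$PWD" != "$TRUSTED_DIR" ]; then
    echo "refusing --skip outside $TRUSTED_DIR" >&2
    return 1
  fi
  echo "would run: ai $*"
}
```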

Runtime-Specific Features

Claude Code Only

These flags require Claude Code; Codex has no equivalent, so on a Codex run they are warned about and skipped (see Script Portability below):
- `--aws` — AWS Bedrock
- `--vertex` — Google Vertex AI
- `--vercel` — Vercel AI Gateway
- `--pro` — Claude Pro subscription
- `--team` — Agent Teams
- `--chrome` — Browser automation
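
A portable script can check for these up front. A sketch, with the flag list taken from the bullets above (the helper itself is hypothetical):

```shell
# Sketch: detect Claude Code-only flags before handing a command to Codex.
CC_ONLY="--aws --vertex --vercel --pro --team --chrome"

is_cc_only() {
  case " $CC_ONLY " in
    *" $1 "*) return 0 ;;
    *)        return 1 ;;
  esac
}
```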

Codex CLI Only

- `--profile <name>` — Select a named config profile from `~/.codex/config.toml`
```shell
ai --codex --profile openrouter task.md
ai --codex --profile azure task.md
```
Configure profiles in `~/.codex/config.toml`:

```toml
[model_providers.openrouter]
name = "OpenRouter"
base_url = "https://openrouter.ai/api/v1"
env_key = "OPENROUTER_API_KEY"

[model_providers.azure]
name = "Azure OpenAI"
base_url = "https://YOUR_RESOURCE.openai.azure.com/openai/v1"
env_key = "AZURE_OPENAI_API_KEY"
```
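
The keys those `env_key` entries point at can live in the environment or in `~/.ai-runner/secrets.sh` (see Authentication below). A sketch of that file with placeholder values, not real keys:

```shell
# ~/.ai-runner/secrets.sh — sketch with placeholder values, not real keys
export OPENROUTER_API_KEY="replace-me"
export AZURE_OPENAI_API_KEY="replace-me"
```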

Authentication

Claude Code

Uses Anthropic authentication:
- Claude Pro/Max subscription (browser login)
- API key (`ANTHROPIC_API_KEY`)
- Cloud provider credentials (AWS, GCP, Azure)

Codex CLI

Uses OpenAI authentication:
- Browser login (default): run `codex login` once, then Codex uses the stored tokens
- API key: set `OPENAI_API_KEY` or `CODEX_API_KEY` in the environment or in `~/.ai-runner/secrets.sh`
- Use `--apikey` to force API key mode (for CI/CD)
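
In CI it is worth failing fast when `--apikey` mode has no key to use. A sketch using the variable names listed above (the helper itself is hypothetical):

```shell
# Sketch: verify an OpenAI-compatible key is present before forcing --apikey mode.
have_codex_key() {
  [ -n "${OPENAI_API_KEY:-}" ] || [ -n "${CODEX_API_KEY:-}" ]
}

if ! have_codex_key; then
  echo "set OPENAI_API_KEY or CODEX_API_KEY before using --apikey" >&2
fi
```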
```shell
# Browser auth (default — no env var needed)
ai --codex task.md

# Force API key auth
ai --codex --apikey task.md
```

Same Script, Both Runtimes

A script written for one runtime works on the other:
```
#!/usr/bin/env -S ai --high
Analyze the architecture of this codebase and summarize the key patterns.
```

```shell
./analyze.md                  # Uses default runtime
ai --cc ./analyze.md          # Force Claude Code
ai --codex ./analyze.md       # Force Codex CLI
```
The runtime selection is orthogonal to the script content. Model tiers, effort levels, permissions, and streaming all adapt automatically.

Script Portability

Scripts are portable across runtimes. If a script includes interpreter-specific flags, running it on another runtime produces a warning but continues normally:
```
# Script with Claude Code-specific --chrome flag
#!/usr/bin/env -S ai --auto --chrome
Test the login flow in the browser.
```
Running this with Codex:
```shell
ai --codex ./test-login.md
# [AI Runner] Warning: --chrome is not supported by Codex CLI, ignoring
# (script runs normally without browser automation)
```
AI Runner uses a flag firewall: it discovers each tool's supported flags from its `--help` output and caches the result per version. Unsupported flags trigger a warning and are skipped, and flag+value pairs (like `--max-turns 5`) are skipped together. You can therefore write scripts that use the full power of one runtime and still run them on another: they degrade gracefully instead of failing.
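
The discovery step can be approximated with nothing but `--help` and `grep`. A toy sketch of the firewall idea, using `grep` itself as the stand-in tool since it also advertises long flags in its help text (no version caching, and flag+value pairs are not handled; the real firewall does both):

```shell
# Toy sketch of the flag firewall: scrape advertised long flags from --help,
# then warn on and drop anything the target tool does not advertise.
supported_flags() {
  "$1" --help 2>&1 | grep -oE -e '--[a-z][a-z-]*' | sort -u
}

filter_flags() {
  tool=$1; shift
  allowed=$(supported_flags "$tool")
  for flag in "$@"; do
    if printf '%s\n' "$allowed" | grep -qx -e "$flag"; then
      printf '%s\n' "$flag"      # advertised: pass it through
    else
      echo "warning: $flag not supported by $tool, ignoring" >&2
    fi
  done
}
```

For example, `filter_flags grep --count --chrome` passes `--count` through and warns that `--chrome` is unsupported.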