Provider flags let you switch between different AI providers and models on the fly. Use your Claude subscription, cloud platforms, or local models without changing configuration files.
Most provider flags work with both Claude Code and Codex CLI. Each flag entry notes which runtimes support it. See Runtimes for the full compatibility matrix.

Available Providers

Local Providers (Free)

--ollama
flag
Ollama - Run models locally or on Ollama’s cloud
ai --ollama
ai --ollama task.md
ai --ollama --model qwen3-coder
ai --ollama --model minimax-m2.5:cloud  # Cloud model, no GPU needed
Short form: --ol
Runtimes: Claude Code, Codex CLI
Setup:
brew install ollama              # macOS
ollama pull qwen3-coder          # Pull a model
ollama serve                     # Start server
Requirements:
  • Ollama installed and running at http://localhost:11434
  • 24GB+ VRAM for local coding models (cloud models work on any hardware)
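A quick way to confirm the first requirement is to probe the server directly. This is a minimal sketch assuming Ollama's default port and its standard `/api/tags` endpoint (which lists pulled models); the `ollama_up` helper name is just for illustration:

```shell
# Succeeds only if an Ollama server answers at the given base URL.
ollama_up() {
  curl -sf --max-time 2 "${1:-http://localhost:11434}/api/tags" > /dev/null
}

if ollama_up; then
  echo "Ollama is reachable"
else
  echo "Ollama not reachable; try 'ollama serve'"
fi
```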
See Local Providers for details.
--lmstudio
flag
LM Studio - Local models with MLX support (fast on Apple Silicon)
ai --lmstudio
ai --lm task.md
ai --lm --model mlx-community/Qwen2.5-Coder-32B-Instruct-4bit
Short form: --lm
Runtimes: Claude Code, Codex CLI
Setup:
  1. Download LM Studio from lmstudio.ai
  2. Load a model in the UI
  3. Start server: lms server start --port 1234
Requirements:
  • LM Studio running at http://localhost:1234
  • 24GB+ unified memory for coding models
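As with Ollama, you can verify the server requirement before the first run. A minimal sketch, assuming LM Studio's default port and the standard `/v1/models` route of its OpenAI-compatible API; the `lmstudio_up` helper is illustrative only:

```shell
# Succeeds only if LM Studio's OpenAI-compatible server answers.
lmstudio_up() {
  curl -sf --max-time 2 "${1:-http://localhost:1234}/v1/models" > /dev/null
}

lmstudio_up || echo "LM Studio not reachable; try 'lms server start --port 1234'"
```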
See Local Providers for details.

Cloud Providers

--aws
flag
AWS Bedrock - Claude models on Amazon Web Services
ai --aws
ai --aws --opus task.md
ai --aws --haiku --resume
Runtimes: Claude Code only
Setup in ~/.ai-runner/secrets.sh:
export AWS_PROFILE="your-profile-name"
export AWS_REGION="us-west-2"
Authentication:
  • AWS credentials file (~/.aws/credentials)
  • Or IAM role (for EC2/ECS)
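You can verify these credentials outside AI Runner before pointing it at Bedrock. `aws sts get-caller-identity` is the usual no-op authentication check; the `aws_ready` wrapper here is just a sketch, and the profile name is whatever you exported above:

```shell
# Succeeds only if the given AWS profile can authenticate.
aws_ready() {
  aws sts get-caller-identity --profile "${1:-${AWS_PROFILE:-default}}" > /dev/null 2>&1
}

aws_ready "$AWS_PROFILE" || echo "AWS auth failed for profile: ${AWS_PROFILE:-default}"
```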
See Cloud Providers for details.
--vertex
flag
Google Vertex AI - Claude models on Google Cloud Platform
ai --vertex
ai --vertex task.md
ai --vertex --sonnet --resume
Runtimes: Claude Code only
Setup in ~/.ai-runner/secrets.sh:
export ANTHROPIC_VERTEX_PROJECT_ID="your-gcp-project-id"
export CLOUD_ML_REGION="global"
Authentication:
  • Application default credentials (gcloud auth application-default login)
  • Or service account key file
See Cloud Providers for details.
--apikey
flag
Anthropic API - Direct access to Anthropic’s API
ai --apikey
ai --apikey --opus task.md
Runtimes: Claude Code (Anthropic API), Codex CLI (forces OpenAI API key auth)
Setup in ~/.ai-runner/secrets.sh:
export ANTHROPIC_API_KEY="sk-ant-..."
See Cloud Providers for details.
--azure
flag
Microsoft Azure - Claude Code uses Azure Foundry (Anthropic models); Codex CLI uses Azure OpenAI (GPT models)
Runtimes: Claude Code, Codex CLI (different services)
ai --azure
ai --azure task.md
Setup in ~/.ai-runner/secrets.sh:
export ANTHROPIC_FOUNDRY_API_KEY="your-azure-api-key"
export ANTHROPIC_FOUNDRY_RESOURCE="your-resource-name"
Authentication:
  • Azure Foundry API key
See Cloud Providers for details.
--vercel
flag
Vercel AI Gateway - Access 100+ models (OpenAI, xAI, Google, Meta, and more)
ai --vercel
ai --vercel --model openai/gpt-5.2-codex
ai --vercel --model anthropic/claude-opus-4.7
ai --vercel --model google/gemini-exp-2506
Runtimes: Claude Code only
Setup in ~/.ai-runner/secrets.sh:
export VERCEL_AI_GATEWAY_TOKEN="vck_..."
Supported models:
  • OpenAI (GPT-4, GPT-5, etc.)
  • xAI (Grok models)
  • Google (Gemini models)
  • Meta (Llama models)
  • Anthropic (Claude models)
  • And many more
See Cloud Providers for details.
--pro
flag
Claude Pro/Max - Your regular Claude subscription
ai --pro
ai --pro --resume
Runtimes: Claude Code only
Authentication:
  • Claude subscription (log in with claude)
Note: This is the default if you’re logged in and don’t specify another provider.

Provider Precedence

When no provider is specified, AI Runner auto-detects in this order:
  1. Saved default from --set-default
  2. Claude subscription (if logged in with claude)
  3. Ollama (if running locally)
  4. Anthropic API (if ANTHROPIC_API_KEY is configured)
  5. Cloud providers (AWS, Vertex, Vercel, Azure — if configured)
CLI flags always override saved defaults and auto-detection.
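The order above can be sketched as a small shell function. The environment-variable checks are illustrative stand-ins for AI Runner's real probes (saved config, login state, a running local server, an API key), not its actual implementation:

```shell
# Illustrative sketch of the documented precedence; each test stands in
# for a real probe, and the names here are hypothetical.
detect_provider() {
  if [ -n "$SAVED_DEFAULT" ];     then echo "$SAVED_DEFAULT"; return; fi  # 1. --set-default
  if [ -n "$CLAUDE_LOGGED_IN" ];  then echo "pro";    return; fi          # 2. Claude subscription
  if [ -n "$OLLAMA_RUNNING" ];    then echo "ollama"; return; fi          # 3. local Ollama
  if [ -n "$ANTHROPIC_API_KEY" ]; then echo "apikey"; return; fi          # 4. Anthropic API
  echo "cloud-or-unconfigured"                                            # 5. cloud providers
}

unset SAVED_DEFAULT CLAUDE_LOGGED_IN OLLAMA_RUNNING ANTHROPIC_API_KEY
CLAUDE_LOGGED_IN=1 OLLAMA_RUNNING=1
detect_provider   # prints "pro": the subscription outranks a running Ollama
```

A CLI flag simply bypasses this detection entirely, which is why `ai --ollama` always wins over a saved default.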

Examples

Basic Usage

# Use default provider (Claude subscription)
ai task.md

# Override to use AWS Bedrock
ai --aws task.md

# Use local Ollama (free)
ai --ollama task.md

Combining Providers and Models

# AWS with Opus
ai --aws --opus task.md

# Ollama with specific model
ai --ollama --model qwen3-coder task.md

# Vercel with OpenAI
ai --vercel --model openai/gpt-5.2-codex task.md

Interactive Sessions

# Start session with AWS
ai --aws

# Start with Ollama and bypass permissions
ai --ollama --bypass

# Start with Vertex and agent teams
ai --vertex --team

Resume Conversations

# Hit rate limit on Pro
ai --pro
# "Rate limit exceeded"

# Switch to AWS and continue
ai --aws --resume

# Or switch to free Ollama
ai --ollama --resume

Shebang Scripts

#!/usr/bin/env -S ai --aws --opus
Analyze this codebase with AWS Bedrock.

#!/usr/bin/env -S ai --ollama --haiku
Quick analysis with local Ollama.

Set Default Provider

# Save AWS as default
ai --aws --set-default

# Now 'ai' uses AWS by default
ai task.md

# Override still works
ai --ollama task.md

# Clear default
ai --clear-default

Provider Status

Check which providers are configured:
ai-status
See ai-status reference for details.

Multiple Providers Workflow

Switch between providers based on your needs:
# Quick tests: use free Ollama
ai --ollama --haiku quick-test.md

# Complex tasks: use AWS with Opus
ai --aws --opus complex-task.md

# Cost-sensitive: use Haiku on any provider
ai --apikey --haiku analyze.md

# Rate-limited: fall back to different provider
ai --aws --resume  # Continue after hitting Pro rate limit

Provider-Gated Flags

Some passthrough flags only work with specific providers:
--chrome requires a direct Anthropic plan (Pro, Max, Teams, Enterprise). It does not work through Bedrock, Vertex, Azure Foundry, Ollama, or LM Studio. AI Runner warns when --chrome is used with incompatible providers.

Model Flags

Select model tiers and specific model IDs

Provider Guides

Detailed setup for each provider