# Tier Flags
## High Tier (Opus)

`--opus` selects the highest-tier model: the most capable, best for complex reasoning. Equivalent: `--high`.

Default model: `claude-opus-4-6-v1:0`

Use for:
- Complex architectural decisions
- Difficult refactoring
- Security analysis
- High-stakes code generation

`--high` is an alias for `--opus`.

## Mid Tier (Sonnet)
`--sonnet` selects the mid-tier model: balanced capability and speed. This is the default tier. Equivalent: `--mid`.

Default model: `claude-sonnet-4-6-v1:0`

Use for:
- Most coding tasks
- General development
- Documentation
- Code review

`--mid` is an alias for `--sonnet`.

## Low Tier (Haiku)
`--haiku` selects the lowest-tier model: the fastest and most cost-effective. Equivalent: `--low`.

Default model: `claude-haiku-4-5-20251001-v1:0`

Use for:
- Quick tests
- Simple tasks
- Cost-sensitive workloads
- High-volume automation

`--low` is an alias for `--haiku`.

## Custom Model Selection
`--model <model-id>` specifies an exact model ID; use any model supported by your provider.

The format depends on the provider:
- AWS Bedrock: `us.anthropic.claude-opus-4-6-v1:0`
- Vertex AI: `claude-opus-4-6@20250514`
- Anthropic API: `claude-opus-4-6`
- Ollama: a model name from `ollama list`
- Vercel: `provider/model-name` (e.g., `openai/gpt-4`)

`--model` overrides the tier flags (`--opus`, `--sonnet`, `--haiku`).

## Model Defaults Per Provider
### Claude Subscription (Pro/Max)

### API Providers (AWS, Vertex, Anthropic, Azure)

| Tier | Default Model |
|---|---|
| Opus (`--opus`, `--high`) | `claude-opus-4-6-v1:0` |
| Sonnet (`--sonnet`, `--mid`) | `claude-sonnet-4-6-v1:0` |
| Haiku (`--haiku`, `--low`) | `claude-haiku-4-5-20251001-v1:0` |
### Local Providers (Ollama, LM Studio)

Tier flags have no effect with local providers. Pass `--model` or configure a default in `~/.ai-runner/secrets.sh`.
### Vercel AI Gateway

Specify models in the `provider/model-name` format. See the Vercel AI Gateway docs for available models.
## Configuration

### View Default Models
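One way to inspect the configured defaults is to read them straight from the files this page mentions (a sketch; it assumes the model variables are defined in `~/.ai-runner/models.sh` and `~/.ai-runner/secrets.sh`):

```shell
# Print model-related variables from the runner's config files (if present)
grep -h "MODEL" ~/.ai-runner/models.sh ~/.ai-runner/secrets.sh 2>/dev/null
```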
### Override Model Defaults

Edit `~/.ai-runner/secrets.sh`:
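For example, a minimal override sketch using the two variables described under Dual Model Configuration (the full set of override variable names lives in `~/.ai-runner/models.sh`):

```shell
# ~/.ai-runner/secrets.sh
# Pin the primary (interactive) model and the background small/fast model.
export ANTHROPIC_MODEL="claude-opus-4-6-v1:0"
export ANTHROPIC_SMALL_FAST_MODEL="claude-haiku-4-5-20251001-v1:0"
```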
See `~/.ai-runner/models.sh` for all available override variables.
## Dual Model Configuration

Claude Code uses two models:
- Primary model (`ANTHROPIC_MODEL`): interactive work, selected by tier flags
- Small/fast model (`ANTHROPIC_SMALL_FAST_MODEL`): background operations (defaults to Haiku)
With `--opus`, for example:
- Primary: Opus (`claude-opus-4-6-v1:0`)
- Background: Haiku (`claude-haiku-4-5-20251001-v1:0`)

Both active models are shown in the `ai-status` output.
## Examples
### Tier Selection
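A sketch of the three tiers in use. The CLI entry-point name (shown here as `ai`) is an assumption; only the flags come from this page:

```shell
ai --opus "Propose a migration plan for the auth service"   # high tier
ai --sonnet "Write unit tests for the parser module"        # mid tier (default)
ai --haiku "Fix the typo in README.md"                      # low tier
```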
### Custom Models
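Passing exact model IDs (a sketch; `ai` as the CLI name is an assumption, and the IDs follow the per-provider formats listed under Custom Model Selection):

```shell
ai --model us.anthropic.claude-opus-4-6-v1:0 "Review this PR"   # AWS Bedrock
ai --model claude-opus-4-6 "Review this PR"                     # Anthropic API
```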
### Shebang Scripts
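A hedged sketch of pinning a tier in a script's shebang. This assumes the runner accepts a prompt file as its argument; `env -S` (GNU coreutils) splits the flag from the command name:

```shell
#!/usr/bin/env -S ai --haiku
Summarize the changes in this repository since the last tag.
```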
### Resume with Different Model
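Assuming the runner forwards Claude Code's `--resume` flag (an assumption; check your install), a previous session can be continued on a stronger tier:

```shell
# Continue the last session, upgrading to Opus
ai --resume --opus
```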
### Cost Optimization
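Routing bulk, low-stakes work to the cheapest tier (a sketch; `ai` as the CLI name is an assumption):

```shell
# High-volume automation: one cheap Haiku call per file
for f in src/*.py; do
  ai --haiku "Add a one-line docstring to the top of $f"
done
```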
## Model Recommendations
| Task Type | Recommended Tier | Reason |
|---|---|---|
| Complex refactoring | Opus | Needs deep understanding |
| Architectural design | Opus | High-stakes decisions |
| Security analysis | Opus | Accuracy critical |
| General coding | Sonnet | Balanced performance |
| Documentation | Sonnet | Good quality/speed ratio |
| Code review | Sonnet | Sufficient capability |
| Quick tests | Haiku | Speed matters |
| Bulk automation | Haiku | Cost-effective |
| Simple tasks | Haiku | Overhead not needed |
## Troubleshooting
### Model not found error

Model IDs are provider-specific. Check your provider's available models, and see the provider documentation for the correct model ID format.
### Wrong model being used

Check for overrides in `~/.ai-runner/secrets.sh`, and verify the active model in your session:
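A quick check (a sketch; the variable name is the one described under Dual Model Configuration):

```shell
# Any override set in the config file?
grep -n "ANTHROPIC_MODEL" ~/.ai-runner/secrets.sh 2>/dev/null

# What does the current shell resolve to? (fallback text if no override is set)
echo "${ANTHROPIC_MODEL:-(no override; tier flag default applies)}"
```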
### Tier flags don't work with Ollama
Local providers (Ollama, LM Studio) don't have tier flags. Use `--model` instead:
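For example (a sketch; `ai` as the CLI name is an assumption, and the model name must come from your own `ollama list` output):

```shell
# Use an explicit local model instead of a tier flag
ai --model llama3 "Explain this function"
```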