Configuration Files
AI Runner uses a configuration directory at ~/.ai-runner/ (or legacy ~/.claude-switcher/):
- secrets.sh: User-edited file for API keys and model overrides (never overwritten)
- models.sh: System defaults (updated by setup.sh when defaults change)
- defaults.sh: Persistent provider/model preferences saved with --set-default
Model Overrides in secrets.sh
Override default model identifiers for any provider by adding exports to ~/.ai-runner/secrets.sh:
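For example, following the CLAUDE_MODEL_&lt;TIER&gt;_&lt;PROVIDER&gt; naming pattern documented in the environment variables reference below (the IDs here are placeholders, not real model identifiers):

```shell
# In ~/.ai-runner/secrets.sh — substitute real model IDs for your provider
export CLAUDE_MODEL_OPUS_AWS="your-opus-model-id"
export CLAUDE_MODEL_SONNET_AWS="your-sonnet-model-id"
export CLAUDE_MODEL_HAIKU_AWS="your-haiku-model-id"
```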
AWS Bedrock
Google Vertex AI
Anthropic API
Microsoft Azure
Model names are deployment names from your Azure portal.

Vercel AI Gateway
Use provider/model format (dots, not dashes, in version numbers):
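A sketch, assuming the same secrets.sh override pattern applies to the Vercel provider (the _VERCEL suffix and the model string are illustrative):

```shell
# provider/model format — note "4.6", not "4-6"
export CLAUDE_MODEL_OPUS_VERCEL="anthropic/claude-opus-4.6"
```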
Ollama (Local + Cloud)
By default, AI Runner auto-detects available models; override in ~/.ai-runner/secrets.sh if needed.

LM Studio (Local)
By default, AI Runner uses the first loaded model for all tiers; override in ~/.ai-runner/secrets.sh if needed.

Dual Model System
Claude Code uses two models for optimal performance and cost:

1. Primary Model (ANTHROPIC_MODEL)
The main model you interact with. Set by your tier flags:
- Main conversation
- Complex reasoning
- User-facing responses
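For instance (invocation style borrowed from the examples elsewhere in this doc):

```shell
ai --aws --opus task.md    # highest-capability tier as the primary model
ai --aws --haiku task.md   # cheapest/fastest tier as the primary model
```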
2. Small/Fast Model (ANTHROPIC_SMALL_FAST_MODEL)
The background/auxiliary model for lightweight operations. Automatically set based on provider:
- Sub-agents and teammates (with --team)
- File operations and analysis
- Quick auxiliary tasks
- Background work that doesn’t need the full model
How It’s Applied
When you run ai --aws --opus, the scripts set:
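A sketch of the resulting environment, using the runtime variables from the reference tables below (the model IDs are placeholders):

```shell
export CLAUDE_CODE_USE_BEDROCK=1                          # enable AWS Bedrock mode
export ANTHROPIC_MODEL="your-opus-model-id"               # from --opus
export ANTHROPIC_SMALL_FAST_MODEL="your-haiku-model-id"   # provider's background default
```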
Overriding the Small/Fast Model
Customize in ~/.ai-runner/secrets.sh:
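Using the CLAUDE_SMALL_FAST_MODEL_&lt;PROVIDER&gt; pattern from the reference table below (placeholder ID):

```shell
# e.g. point background work at a Sonnet-class model
export CLAUDE_SMALL_FAST_MODEL_AWS="your-sonnet-model-id"
```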
- Agent teams: Higher-quality teammates with --team
- Complex file operations: Better analysis with a Sonnet background model
- Consistency: Same model for all operations (disable two-model system)
Persistent Defaults
Save your preferred provider and model combination (stored in ~/.ai-runner/defaults.sh):
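For example (the exact flag combination is an assumption; --set-default is named above):

```shell
ai --vertex --opus --set-default
```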
Provider selection precedence (highest wins):

- CLI flags: ai --vertex task.md
- Shebang flags: #!/usr/bin/env -S ai --aws
- Saved defaults: --set-default
- Auto-detection: current Claude subscription
Model Configuration Files
config/models.sh (System Defaults)
Shipped with AI Runner, defines default model IDs for each provider:
~/.ai-runner/models.sh (User Copy)
Copied from config/models.sh by setup.sh, and updated when system defaults change:
- After git pull if model defaults changed
- To get new model versions (e.g., Opus 4.6 → Opus 4.7)
- If your models aren’t working (outdated IDs)
You can skip updating if:

- You have custom overrides in secrets.sh (they take precedence)
- You want to pin specific versions
Override Hierarchy
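The hierarchy can be sketched in plain shell terms: files sourced later win, and a secrets.sh override beats both models.sh copies. A minimal illustration (variable names and sourcing order are assumptions):

```shell
# 1. config/models.sh (shipped system default)
MODEL="system-default-id"
# 2. ~/.ai-runner/models.sh (user copy, sourced later, replaces the above)
MODEL="user-copy-id"
# 3. ~/.ai-runner/secrets.sh override wins if set
CLAUDE_MODEL_OPUS_AWS="pinned-id"
MODEL="${CLAUDE_MODEL_OPUS_AWS:-$MODEL}"
echo "$MODEL"   # prints pinned-id
```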
Version File and Update Checking
AI Runner includes automatic update checking with smart caching.

Version File (VERSION)
Shipped with AI Runner, defines the current version:
- Installed to: /usr/local/share/ai-runner/VERSION
- Used by: ai --version, ai-status, update checker
Update Checking
AI Runner checks for updates once every 24 hours (non-blocking):

- Cache-only check (runs in background on startup):
  - Queries GitHub API for latest release
  - Compares with installed version
  - Caches result for 24 hours
  - Never blocks startup
- Notice display (if update available)
- Manual update
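The manual path is presumably the ai update command referenced later in this section (its exact behavior is an assumption):

```shell
ai --version   # show the installed version
ai update      # fetch and install the latest release
```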
Disabling Update Checks
Set AI_NO_UPDATE_CHECK=1 in your shell profile.

Update Cache Location
Update check results are cached locally.

Manual Update (Without ai update)
If you prefer manual updates:
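A plausible manual flow for a git-based checkout (the path is a placeholder; git pull and setup.sh are both referenced in this doc):

```shell
cd /path/to/ai-runner
git pull
sudo ./setup.sh
```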
Re-running setup.sh preserves secrets.sh and prompts about updating models.sh.
Environment Variables Reference
Runtime Variables (Set by AI Runner)
| Variable | Purpose | Set By |
|---|---|---|
| ANTHROPIC_MODEL | Primary model for main work | Provider scripts based on --opus/--sonnet/--haiku/--model |
| ANTHROPIC_SMALL_FAST_MODEL | Background/auxiliary model | Provider scripts from CLAUDE_SMALL_FAST_MODEL_&lt;PROVIDER&gt; |
| ANTHROPIC_BASE_URL | API endpoint | Provider scripts (Ollama, LM Studio, Vercel) |
| ANTHROPIC_AUTH_TOKEN | Bearer token auth | Provider scripts (Vercel) |
| CLAUDE_CODE_USE_BEDROCK | Enable AWS Bedrock mode | AWS provider |
| CLAUDE_CODE_USE_VERTEX | Enable Vertex AI mode | Vertex provider |
| CLAUDE_CODE_USE_FOUNDRY | Enable Azure Foundry mode | Azure provider |
| CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS | Enable agent teams | --team flag |
| AI_LIVE_OUTPUT | Enable live streaming | --live flag |
| AI_QUIET | Suppress narration | --quiet flag |
| AI_SESSION_ID | Unique session identifier | Generated per session |
User Configuration Variables (Set in secrets.sh)
| Variable | Purpose | Example |
|---|---|---|
| ANTHROPIC_API_KEY | Anthropic API key | sk-ant-... |
| AWS_PROFILE | AWS credentials profile | my-profile |
| AWS_REGION | AWS region | us-west-2 |
| ANTHROPIC_VERTEX_PROJECT_ID | GCP project ID | my-project-123 |
| CLOUD_ML_REGION | Vertex AI region | global |
| VERCEL_AI_GATEWAY_TOKEN | Vercel gateway token | vck_... |
| ANTHROPIC_FOUNDRY_API_KEY | Azure API key | your-key |
| ANTHROPIC_FOUNDRY_RESOURCE | Azure resource name | your-resource |
| CLAUDE_MODEL_&lt;TIER&gt;_&lt;PROVIDER&gt; | Model override | CLAUDE_MODEL_OPUS_AWS="..." |
| CLAUDE_SMALL_FAST_MODEL_&lt;PROVIDER&gt; | Background model override | CLAUDE_SMALL_FAST_MODEL_AWS="..." |
| AI_NO_UPDATE_CHECK | Disable update checking | 1 |
Troubleshooting Configuration
Check Current Configuration
- Active provider
- Authentication method
- Primary model
- Small/fast model
- Agent teams status
- Update availability
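All of the above comes from the status command named earlier in this doc:

```shell
ai-status
```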
View Effective Model IDs
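One way to inspect the effective IDs is to check the runtime variables from the reference table inside a session (a sketch; output will vary by provider):

```shell
env | grep -E 'ANTHROPIC_MODEL|ANTHROPIC_SMALL_FAST_MODEL'
```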
Reset to System Defaults
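Assuming the system copy lives under the install prefix shown above (the exact source path is an assumption), one way to reset is to re-copy the shipped defaults over the user copy:

```shell
cp /usr/local/share/ai-runner/config/models.sh ~/.ai-runner/models.sh
```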
Verify Override Hierarchy
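A quick way to see which secrets.sh overrides are active, using the variable patterns from the reference tables:

```shell
grep -E 'CLAUDE_MODEL_|CLAUDE_SMALL_FAST_MODEL_' ~/.ai-runner/secrets.sh
```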
Related Documentation
- Provider Setup — Authentication and credentials
- Agent Teams — Multi-agent collaboration
- Scripting Guide — Executable markdown and automation