This file is created automatically by ./setup.sh from the secrets.example.sh template:
```shell
nano ~/.ai-runner/secrets.sh
```
Andi AIRun loads this file at startup. You don't need to set environment variables in your shell profile or `.bashrc`; just add them to `secrets.sh`, then switch providers freely with `ai --aws`, `ai --vertex`, etc.
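As a minimal sketch, a `secrets.sh` might look like the following. The Ollama variable name matches the ones documented on this page; the AWS and Google Cloud variables are the standard SDK environment variables and are shown here as an assumption about what your cloud providers expect. All values are placeholders.

```shell
# ~/.ai-runner/secrets.sh, sourced by Andi AIRun at startup.
# Only fill in the providers you actually use.

# AWS (standard AWS SDK variables; placeholder values)
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-east-1"

# Google Vertex (standard Google Cloud credentials variable; placeholder path)
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/.config/gcloud/service-account.json"

# Ollama model tiers (variable name from this page)
export OLLAMA_MODEL_LOW="qwen2.5-coder:7b"
```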
You only need to configure the providers you want to use. Configuring multiple providers lets you switch between them when you hit rate limits or want to use different models.
```shell
# Use high tier for complex reasoning
ai --aws --opus task.md

# Use mid tier (default)
ai --aws task.md

# Use low tier for speed and cost savings
ai --vertex --haiku simple-fix.md
```
Andi AIRun uses a "small/fast" model for background operations, such as file searches and quick checks. By default, this is the Low tier model (Haiku). For local providers (Ollama, LM Studio), the background model defaults to the same model as the main tier to avoid costly model swapping.
```shell
# Ollama
export OLLAMA_MODEL_HIGH="qwen3-coder"
export OLLAMA_MODEL_MID="qwen3-coder"
export OLLAMA_MODEL_LOW="qwen2.5-coder:7b"
export OLLAMA_SMALL_FAST_MODEL="qwen3-coder"  # Same model to avoid swapping

# LM Studio
export LMSTUDIO_MODEL_HIGH="openai/gpt-oss-20b"
export LMSTUDIO_MODEL_MID="openai/gpt-oss-20b"
export LMSTUDIO_MODEL_LOW="ibm/granite-4-micro"
export LMSTUDIO_SMALL_FAST_MODEL="openai/gpt-oss-20b"  # Same model
```
```shell
# Use a specific model
ai --vercel --model xai/grok-code-fast-1

# Use a specific Ollama model
ai --ollama --model glm-5:cloud

# Use a specific AWS model
ai --aws --model global.anthropic.claude-opus-4-6-v1
```