# Available Providers

## Local Providers (Free)

### Ollama

Run models locally or on Ollama’s cloud.

Short form: `--ol`

Requirements:

- Ollama installed and running at `http://localhost:11434`
- 24GB+ VRAM for local coding models (cloud models work on any hardware)
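A quick way to confirm the requirement above is to query the local server; the model name in the pull command is only an example, not a recommendation:

```shell
# Verify the Ollama server is reachable; /api/tags lists the models
# already installed locally.
curl -s http://localhost:11434/api/tags

# Download a local coding model. "qwen2.5-coder" is just an example --
# pick a model that fits your available VRAM.
ollama pull qwen2.5-coder
```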
### LM Studio

Local models with MLX support (fast on Apple Silicon).

Short form: `--lm`

Setup:

1. Download LM Studio from lmstudio.ai
2. Load a model in the UI
3. Start the server: `lms server start --port 1234`

Requirements:

- LM Studio running at `http://localhost:1234`
- 24GB+ unified memory for coding models
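The LM Studio server speaks an OpenAI-compatible API, so once it is started you can sanity-check it from the command line:

```shell
# Start the LM Studio server on the port AI Runner expects.
lms server start --port 1234

# The server exposes an OpenAI-compatible API; listing /v1/models
# should show the model you loaded in the UI.
curl -s http://localhost:1234/v1/models
```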
## Cloud Providers

### AWS Bedrock

Claude models on Amazon Web Services.

Setup in `~/.ai-runner/secrets.sh`.

Authentication:

- AWS credentials file (`~/.aws/credentials`)
- Or IAM role (for EC2/ECS)
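If you export credentials from the secrets file rather than relying on `~/.aws/credentials` or an IAM role, a sketch could look like the following. This assumes AI Runner honors the standard AWS SDK credential chain (environment variables included); verify that against your setup. All values are placeholders:

```shell
# ~/.ai-runner/secrets.sh -- sketch only. Assumes the standard AWS SDK
# credential lookup (env vars) is honored; values are placeholders.
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-east-1"   # example region with Bedrock access
```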
### Google Vertex AI

Claude models on Google Cloud Platform.

Setup in `~/.ai-runner/secrets.sh`.

Authentication:

- Application default credentials (`gcloud auth application-default login`)
- Or service account key file
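The application-default-credentials route from the list above, as gcloud commands (the project ID is a placeholder):

```shell
# Log in and store application default credentials locally.
gcloud auth application-default login

# Point the SDK at the GCP project that has Vertex AI enabled;
# "my-project" is a placeholder for your own project ID.
gcloud config set project my-project
```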
### Anthropic API

Direct access to Anthropic’s API.

Setup in `~/.ai-runner/secrets.sh`.

Authentication:

- API key from console.anthropic.com
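`ANTHROPIC_API_KEY` is the environment variable Anthropic’s own SDKs read by default; assuming AI Runner does the same, the secrets file would contain something like:

```shell
# ~/.ai-runner/secrets.sh -- sketch; the key value is a placeholder.
# ANTHROPIC_API_KEY is the variable Anthropic's SDKs read by default;
# confirm AI Runner uses the same name.
export ANTHROPIC_API_KEY="sk-ant-..."
```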
### Microsoft Azure

Claude models on Azure Foundry.

Setup in `~/.ai-runner/secrets.sh`.

Authentication:

- Azure Foundry API key
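A sketch of the secrets file for Azure; the variable name here is hypothetical, since the exact name AI Runner expects is not documented in this section:

```shell
# ~/.ai-runner/secrets.sh -- sketch only. The variable name below is
# hypothetical; check AI Runner's docs for the exact name it reads.
export AZURE_FOUNDRY_API_KEY="..."
```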
### Vercel AI Gateway

Access 100+ models (OpenAI, xAI, Google, Meta, and more).

Setup in `~/.ai-runner/secrets.sh`.

Authentication:

- Vercel AI Gateway token from vercel.com/ai-gateway

Available models include:

- OpenAI (GPT-4, GPT-5, etc.)
- xAI (Grok models)
- Google (Gemini models)
- Meta (Llama models)
- Anthropic (Claude models)
- And many more
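A sketch of the secrets file for the gateway; the variable name is hypothetical (the token itself comes from vercel.com/ai-gateway as noted above):

```shell
# ~/.ai-runner/secrets.sh -- sketch only. The variable name below is
# hypothetical; the token value comes from vercel.com/ai-gateway.
export VERCEL_AI_GATEWAY_TOKEN="..."
```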
### Claude Pro/Max

Your regular Claude subscription.

Authentication:

- Claude subscription (log in with `claude`)
## Provider Precedence

When no provider is specified, AI Runner auto-detects in this order:

1. Saved default from `--set-default`
2. Claude subscription (if logged in)
3. Error (no provider available)
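The precedence above can be sketched as a small shell function. The function name, the `default` file location, and the subscription check are all illustrative assumptions, not AI Runner’s actual implementation:

```shell
# Illustrative sketch of the auto-detect order; not AI Runner's real
# code. Takes the config directory as an optional argument.
pick_provider() {
  dir="${1:-$HOME/.ai-runner}"
  # 1. Saved default from --set-default (file location is assumed).
  if [ -f "$dir/default" ]; then
    cat "$dir/default"
    return 0
  fi
  # 2. Claude subscription -- here approximated by checking whether
  # the claude CLI is installed; the real check would be a login check.
  if command -v claude >/dev/null 2>&1; then
    echo "claude-subscription"
    return 0
  fi
  # 3. No provider available.
  echo "error: no provider available" >&2
  return 1
}
```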