
Installation

This guide covers installation, configuration, and verification of Andi AIRun on all supported platforms.

Supported Platforms

  • macOS 13.0+
  • Linux (Ubuntu 20.04+, Debian 10+)
  • Windows 10+ via WSL

Prerequisites

Before installing AIRun, you need Claude Code installed on your system.
1. Check if Claude Code is installed

claude --version
If this command works, you already have Claude Code installed and can skip to the next section.
2. Install Claude Code

If Claude Code is not installed, run:
curl -fsSL https://claude.ai/install.sh | bash
You’ll need an active Claude subscription (Pro or Max) or API credentials to use Claude Code.

Install AIRun

1. Clone the repository

git clone https://github.com/andisearch/airun.git
cd airun
2. Run the setup script

./setup.sh
The setup script will:
  • Install commands to /usr/local/bin (may require sudo)
  • Create ~/.ai-runner/ configuration directory
  • Copy secrets.example.sh to ~/.ai-runner/secrets.sh
  • Install library scripts to /usr/local/share/ai-runner
  • Migrate existing ~/.claude-switcher/ configuration if present
The setup is non-destructive: your plain claude command continues to work exactly as before. All AIRun operations are session-scoped and automatically restore your original configuration on exit.
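The session-scoped behavior can be pictured with a plain shell sketch (illustrative only; AIRun's actual mechanism may differ): a provider variable exported inside a subshell vanishes once that subshell, the "session", exits.

```shell
# Simulate session-scoped settings: a variable exported inside a subshell
# disappears when the subshell (the "session") exits.
unset CLAUDE_CODE_USE_BEDROCK       # start clean for the demonstration
(
  export CLAUDE_CODE_USE_BEDROCK=1  # set only for this session
  echo "in session:    CLAUDE_CODE_USE_BEDROCK=${CLAUDE_CODE_USE_BEDROCK:-unset}"
)
echo "after session: CLAUDE_CODE_USE_BEDROCK=${CLAUDE_CODE_USE_BEDROCK:-unset}"
```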
3. Verify installation

ai --version
You should see the AIRun version number (e.g., ai-runner v1.2.0).

Configure Providers

AIRun supports multiple AI providers. You only need to configure the providers you want to use.
1. Open the secrets file

nano ~/.ai-runner/secrets.sh
This file contains templates for all supported providers. Uncomment and fill in the credentials for the providers you want to use.
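The file's layout looks roughly like this (an illustrative excerpt; the template shipped with AIRun may name its sections differently):

```shell
# ~/.ai-runner/secrets.sh — illustrative excerpt
# Uncomment and fill in only the providers you use.

# --- Anthropic API ---
# export ANTHROPIC_API_KEY="sk-ant-..."

# --- AWS Bedrock ---
# export AWS_PROFILE="your-profile-name"
# export AWS_REGION="us-west-2"
```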
2. Configure your providers

Choose the providers you want to configure:
Ollama - Runs models locally or on Ollama’s cloud:
# Install Ollama
brew install ollama                   # macOS
curl -fsSL https://ollama.com/install.sh | sh  # Linux / WSL

# Quick setup (Ollama 0.15+)
ollama launch claude                  # Auto-configure and launch Claude Code

# Or manual setup
ollama pull qwen3-coder               # Pull a model (needs 24GB+ VRAM)
ai --ollama                           # Run with Ollama

# Cloud models — no GPU required, runs on Ollama's servers
ollama pull minimax-m2.5:cloud        # Best coding (80% SWE-bench, MIT)
ollama pull glm-5:cloud               # Best reasoning (78% SWE-bench, MIT)
ai --ollama --model minimax-m2.5:cloud
Hardware note: Coding models need 24GB+ VRAM (or unified memory on Apple Silicon). Ollama’s cloud models work on any hardware.
LM Studio - Local models with MLX support (fast on Apple Silicon):
# 1. Download from lmstudio.ai and load a model
# 2. Start the server: lms server start --port 1234
ai --lm                               # Run with LM Studio
AWS Bedrock - Add to ~/.ai-runner/secrets.sh:
# Option 1: AWS Bedrock API Key (recommended for simplicity)
export AWS_BEARER_TOKEN_BEDROCK="your-bedrock-api-key"

# Option 2: AWS Access Keys
export AWS_ACCESS_KEY_ID="your_access_key"
export AWS_SECRET_ACCESS_KEY="your_secret_key"
export AWS_SESSION_TOKEN="your_session_token"  # Optional, for temporary credentials

# Option 3: AWS Profile
export AWS_PROFILE="your-profile-name"

# Required for all AWS auth methods:
export AWS_REGION="us-west-2"
See Claude Code AWS Bedrock docs for more details.
Google Vertex AI - Add to ~/.ai-runner/secrets.sh:
# Required
export ANTHROPIC_VERTEX_PROJECT_ID="your_gcp_project_id"
export CLOUD_ML_REGION="global"  # or "us-east5", "us-central1", etc.
Google Cloud Authentication (in precedence order):
  1. Service Account Key File (highest precedence, recommended for production/CI):
    export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"
    
  2. Application Default Credentials (recommended for local development):
    gcloud auth application-default login
    # No additional environment variable needed
    
  3. gcloud User Credentials (fallback):
    gcloud auth login
    # No additional environment variable needed
    
AIRun automatically detects and uses the appropriate method. See Claude Code Vertex AI docs for more details.
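The precedence order above can be sketched as a small shell function (illustrative only, not AIRun's actual detection code; the ADC path shown is the standard gcloud location on macOS/Linux):

```shell
# Sketch of the credential precedence: 1) service-account key file,
# 2) application-default credentials, 3) gcloud user login.
detect_gcp_auth() {
  if [ -n "${GOOGLE_APPLICATION_CREDENTIALS:-}" ] && [ -f "$GOOGLE_APPLICATION_CREDENTIALS" ]; then
    echo "service-account-key"
  elif [ -f "${HOME}/.config/gcloud/application_default_credentials.json" ]; then
    echo "application-default-credentials"
  elif command -v gcloud >/dev/null 2>&1; then
    echo "gcloud-user-credentials"
  else
    echo "none"
  fi
}
detect_gcp_auth
```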
Anthropic API - Add to ~/.ai-runner/secrets.sh:
export ANTHROPIC_API_KEY="sk-ant-..."
Get your API key from console.anthropic.com.
Azure AI Foundry - Add to ~/.ai-runner/secrets.sh:
# Option 1: API Key authentication
export ANTHROPIC_FOUNDRY_API_KEY="your-azure-api-key"

# Option 2: Use Azure default credential chain (az login)
# If ANTHROPIC_FOUNDRY_API_KEY is not set, Azure default credentials will be used

# Required: Azure resource name or full base URL
export ANTHROPIC_FOUNDRY_RESOURCE="your-resource-name"
# Or provide the full URL:
# export ANTHROPIC_FOUNDRY_BASE_URL="https://your-resource-name.services.ai.azure.com"
See Claude Code Azure docs for more details.
Vercel AI Gateway - Add to ~/.ai-runner/secrets.sh:
export VERCEL_AI_GATEWAY_TOKEN="vck_..."
export VERCEL_AI_GATEWAY_URL="https://ai-gateway.vercel.sh"  # Default, can be customized
Get your token from the Vercel dashboard. Vercel AI Gateway supports 100+ models from OpenAI, xAI, Google, Meta, and more. See vercel.com/ai-gateway.
3. Save the file

After adding your credentials, save and close the file.
The secrets.sh file contains sensitive credentials. Keep it secure and never commit it to version control.
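One simple safeguard is to restrict the file's permissions so only your user can read it (a suggested precaution, not an AIRun requirement):

```shell
# Lock down the secrets file so only the owner can read or write it
# (no-op if the file does not exist yet).
f="$HOME/.ai-runner/secrets.sh"
if [ -f "$f" ]; then
  chmod 600 "$f"
  ls -l "$f"   # permissions column should read -rw-------
fi
```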

Verify Configuration

1. Check overall status

ai-status
This command shows:
  • Current tool and provider
  • Authentication method
  • Configured models
  • Available providers
Look for green checkmarks next to providers you’ve configured.
2. Test with your default provider

ai
This launches an interactive Claude Code session. You should see a message indicating which provider and model are active. Type /status in Claude to verify the authentication method.
3. Test provider switching

Try switching to different providers:
# Test Ollama (if configured)
ai --ollama

# Test AWS Bedrock (if configured)
ai --aws

# Test with specific model tier
ai --vertex --opus
Each command should launch Claude Code with the specified provider.

Model Configuration (Optional)

AIRun uses sensible default models for each provider, but you can override them.

Default Model Tiers

AIRun provides three model tiers:
  • --opus / --high - Highest-tier model (Opus 4.6)
  • --sonnet / --mid - Mid-tier model (Sonnet 4.6, default for cloud providers)
  • --haiku / --low - Lowest-tier model (Haiku 4.5, fastest)

Override Default Models

To use different model versions, add overrides to ~/.ai-runner/secrets.sh:
# AWS Bedrock Models
export CLAUDE_MODEL_SONNET_AWS="global.anthropic.claude-sonnet-4-6"
export CLAUDE_MODEL_OPUS_AWS="global.anthropic.claude-opus-4-6-v1"
export CLAUDE_MODEL_HAIKU_AWS="us.anthropic.claude-haiku-4-5-20251001-v1:0"

# Google Vertex Models
export CLAUDE_MODEL_SONNET_VERTEX="claude-sonnet-4-6"
export CLAUDE_MODEL_OPUS_VERTEX="claude-opus-4-6"
export CLAUDE_MODEL_HAIKU_VERTEX="claude-haiku-4-5@20251001"

# Anthropic API Models
export CLAUDE_MODEL_SONNET_ANTHROPIC="claude-sonnet-4-6"
export CLAUDE_MODEL_OPUS_ANTHROPIC="claude-opus-4-6"
export CLAUDE_MODEL_HAIKU_ANTHROPIC="claude-haiku-4-5"

Dual Model Configuration

Claude Code uses two models:
  1. ANTHROPIC_MODEL - Main model for interactive work
  2. ANTHROPIC_SMALL_FAST_MODEL - Background operations (defaults to Haiku)
You can override the small/fast model:
# AWS Bedrock Small/Fast Model
export CLAUDE_SMALL_FAST_MODEL_AWS="us.anthropic.claude-haiku-4-5-20251001-v1:0"
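Concretely, the chosen tier ends up in Claude Code's two standard model variables. A session started with ai --aws --opus might effectively export an environment like this (illustrative, reusing the Bedrock model IDs shown above; AIRun's exact export logic may differ):

```shell
# Illustrative: env an 'ai --aws --opus' session might set (model IDs from above).
export ANTHROPIC_MODEL="global.anthropic.claude-opus-4-6-v1"
export ANTHROPIC_SMALL_FAST_MODEL="us.anthropic.claude-haiku-4-5-20251001-v1:0"
echo "main model:       $ANTHROPIC_MODEL"
echo "small/fast model: $ANTHROPIC_SMALL_FAST_MODEL"
```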

Set Default Provider

Save your preferred provider and model as the default:
# Set default
ai --aws --opus --set-default

# Now 'ai' uses AWS + Opus by default
ai task.md

# Clear default
ai --clear-default

Updating

1. Update AIRun

ai update
Or manually:
cd airun && git pull && ./setup.sh
AIRun checks for updates once every 24 hours (non-blocking) and shows a notice when a new version is available. Your API keys in ~/.ai-runner/secrets.sh are preserved during updates.
Disable update checks by adding export AI_NO_UPDATE_CHECK=1 to your ~/.ai-runner/secrets.sh.

Troubleshooting

Common Issues

Provider credentials not being used:
  1. Verify API key: grep ANTHROPIC_API_KEY ~/.ai-runner/secrets.sh
  2. Confirm you’re using ai (not plain claude)
  3. Run ai-status during the session
  4. In Claude, run /status to see the authentication method
Switching back to your Claude subscription:
  1. Use ai --pro or plain claude
  2. Run /status in Claude to verify authentication
Local model server not responding:
  1. Verify the server is running:
    # For Ollama
    ollama list

    # For LM Studio
    curl http://localhost:1234/v1/models

  2. Check the host configuration in ~/.ai-runner/secrets.sh
  3. Ensure you have a model loaded
The setup script needs write access to /usr/local/bin. If you see permission errors:
sudo ./setup.sh
The script will automatically request sudo access only if needed.

Session-Scoped Behavior

ai with no flags uses your regular Claude subscription, identical to running claude directly. Provider flags (--aws, --ollama, etc.) only affect the current session:
  • On exit, your original Claude settings are automatically restored
  • Plain claude in another terminal is completely unaffected
  • No global configuration is changed

Uninstallation

To remove AIRun:
cd airun
./uninstall.sh
This removes all installed commands and optionally removes the configuration directory.

Next Steps