# Provider Setup
aidaemon supports three provider kinds, all configured in the `[provider]` section.
## Provider Kinds
### google_genai (recommended)
Native Google Generative AI API. This is the recommended provider: Gemini models offer excellent tool-use capabilities, fast response times, and generous free-tier API access via Google AI Studio.
```toml
[provider]
kind = "google_genai"
api_key = "AIza..."

[provider.models]
primary = "gemini-3-flash-preview"
fast = "gemini-2.5-flash-lite"
smart = "gemini-3-pro-preview"
```

#### Gemini Web Grounding
When using `google_genai`, aidaemon automatically enables Google Search grounding, which lets Gemini models search the web as part of generating their responses. Models that don't support grounding together with function calling are detected automatically and fall back gracefully.
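aidaemon's own request code isn't shown here, but in the public Gemini REST API (v1beta), enabling Search grounding amounts to attaching the `google_search` tool to a `generateContent` call. A minimal sketch; the helper name is illustrative:

```python
BASE = "https://generativelanguage.googleapis.com/v1beta"

def grounded_request(model: str, prompt: str) -> tuple[str, dict]:
    """Build the URL and body for a generateContent call with Search grounding."""
    url = f"{BASE}/models/{model}:generateContent"
    body = {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        # Attaching the google_search tool is what enables web grounding.
        "tools": [{"google_search": {}}],
    }
    return url, body
```

The request is POSTed with the API key in an `x-goog-api-key` header; providers that reject `tools` combined with function declarations are the ones aidaemon falls back from.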
### openai_compatible
Works with any API that implements the OpenAI chat completions format. This includes OpenAI, OpenRouter, Moonshot, MiniMax, Cloudflare AI Gateway, Ollama, and many others.
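The shared wire format is small, which is why so many providers implement it. A sketch of the request body (the helper name and model ID are illustrative, not aidaemon internals):

```python
def chat_request(model: str, messages: list[dict]) -> dict:
    # Minimal OpenAI chat-completions request body. "model" and "messages"
    # are the only required fields in the standard format.
    return {"model": model, "messages": messages}

body = chat_request("gpt-5-mini", [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
# The JSON body is POSTed to {base_url}/chat/completions with an
# "Authorization: Bearer {api_key}" header.
```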
```toml
[provider]
kind = "openai_compatible"
api_key = "sk-..."
base_url = "https://api.openai.com/v1"

[provider.models]
primary = "gpt-5-mini"
fast = "gpt-5-nano"
smart = "gpt-5.1"
```

### anthropic
Native Anthropic API (Messages API format). Use this for direct Anthropic access without going through an OpenAI-compatible proxy.
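The Messages API differs from the chat-completions shape in ways worth knowing when debugging. A sketch based on the public Anthropic API (the helper name is illustrative):

```python
def messages_request(model: str, system: str, user_text: str) -> dict:
    # Unlike chat completions, the system prompt is a top-level field
    # (not a message role) and max_tokens is required.
    return {
        "model": model,
        "max_tokens": 1024,
        "system": system,
        "messages": [{"role": "user", "content": user_text}],
    }

req = messages_request("claude-haiku-4-5", "Be brief.", "Hello!")
# POSTed to https://api.anthropic.com/v1/messages with "x-api-key" and
# "anthropic-version" headers rather than a Bearer token.
```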
```toml
[provider]
kind = "anthropic"
api_key = "sk-ant-..."

[provider.models]
primary = "claude-sonnet-4-5"
fast = "claude-haiku-4-5"
smart = "claude-opus-4-6"
```

### OpenRouter
OpenRouter provides access to models from multiple providers through a single API key and the OpenAI-compatible format.
```toml
[provider]
kind = "openai_compatible"
api_key = "sk-or-..."
base_url = "https://openrouter.ai/api/v1"

[provider.models]
primary = "openai/gpt-5-mini"
fast = "mistralai/mistral-small-3.1-24b-instruct"
smart = "openai/gpt-5.1"
```

Model IDs use OpenRouter's `provider/model` format; other examples include `mistralai/mistral-small-3.1-24b-instruct`, `mistralai/mistral-nemo`, `google/gemma-3-12b-it`, and `openai/gpt-5-nano`.

### Moonshot AI (Kimi)
Moonshot provides Kimi models over an OpenAI-compatible API.
```toml
[provider]
kind = "openai_compatible"
api_key = "YOUR_MOONSHOT_API_KEY"
base_url = "https://api.moonshot.ai/v1"

[provider.models]
primary = "kimi-k2.5"
fast = "kimi-k2.5"
smart = "kimi-k2.5"
```

### MiniMax
MiniMax provides an OpenAI-compatible API endpoint at `https://api.minimax.io/v1`.
```toml
[provider]
kind = "openai_compatible"
api_key = "YOUR_MINIMAX_API_KEY"
base_url = "https://api.minimax.io/v1"

[provider.models]
primary = "MiniMax-M2.5"
fast = "MiniMax-M2.5-highspeed"
smart = "MiniMax-M2.5"
```

### Cloudflare AI Gateway
Cloudflare AI Gateway sits in front of upstream providers and exposes an OpenAI-compatible endpoint. Use this when you want centralized logging, caching, controls, or rate limiting across providers.
```toml
[provider]
kind = "openai_compatible"
api_key = "sk-..."               # Upstream provider key
gateway_token = "cf-gw-..."      # Optional: Authenticated Gateway mode
base_url = "https://gateway.ai.cloudflare.com/v1/<ACCOUNT_ID>/<GATEWAY_ID>/compat"

[provider.models]
primary = "gpt-5-mini"
fast = "gpt-5-nano"
smart = "gpt-5.1"
```

Put the upstream provider's key in `api_key` (basic mode), or add `gateway_token` to send `cf-aig-authorization` for Authenticated Gateway mode.

### Ollama (Local)
Run models locally with Ollama. No API key required.
```toml
[provider]
kind = "openai_compatible"
api_key = "ollama"
base_url = "http://localhost:11434/v1"

[provider.models]
primary = "llama3.1"
fast = "llama3.1"
smart = "llama3.1"
```

You can confirm the server is running and list installed models at `http://localhost:11434/api/tags`.

### llama.cpp (Local)
You can also run aidaemon with llama.cpp via `llama-server` in OpenAI-compatible mode.
```toml
[provider]
kind = "openai_compatible"
api_key = "llama"                # Any value if your local server does not enforce auth
base_url = "http://127.0.0.1:8080/v1"

[provider.models]
primary = "your-model-id"
fast = "your-model-id"
smart = "your-model-id"
```

Only `/v1/chat/completions` is required. `/v1/models` is strongly recommended so the `/models` command works. For security, aidaemon only allows plain HTTP to localhost addresses.
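The `/models` command can only show what the server reports, so `/v1/models` must answer in the standard OpenAI list shape. A sketch of that shape and how a client might read it (sample data, not real server output):

```python
import json

# Example /v1/models response body in the OpenAI list format.
raw = '{"object": "list", "data": [{"id": "your-model-id", "object": "model"}]}'

# A client extracts the model IDs from the "data" array.
ids = [m["id"] for m in json.loads(raw)["data"]]
```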